forum_id (string, 9-20 chars) | forum_title (string, 3-179 chars) | forum_authors (sequence, 0-82 items) | forum_abstract (string, 1-3.52k chars) | forum_keywords (sequence, 1-29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39-50 chars) | forum_url (string, 41-52 chars) | venue (string, 46 classes) | year (date, 2013-2025) | reviews (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
SylurJHFPS | The Detection of Distributional Discrepancy for Text Generation | [
"Xingyuan Chen",
"Ping Cai",
"Peng Jin",
"Haokun Du",
"Hongjun Wang",
"Xinyu Dai",
"Jiajun Chen"
] | The text generated by neural language models is not as good as real text, which means that their distributions are different. Generative Adversarial Nets (GANs) have been used to alleviate this. However, some researchers argue that GAN variants do not work at all: when both sample quality (such as BLEU) and sample diversity (such as Self-BLEU) are taken into account, the GAN variants are even worse than a well-adjusted language model. But BLEU and Self-BLEU cannot precisely measure this distributional discrepancy. In fact, how to measure the distributional discrepancy between real text and generated text is still an open problem. In this paper, we theoretically propose two metric functions to measure the distributional difference between real text and generated text. In addition, we put forward a method to estimate them. First, we evaluate a language model with these two functions and find the difference is huge. Then, we try several methods that use the detected discrepancy signal to improve the generator; however, the difference becomes even bigger than before. Experimenting on two existing language GANs, we find that the distributional discrepancy between real text and generated text increases with more adversarial learning rounds. This demonstrates that both of these language GANs fail. | [
"distributional discrepancy",
"real text",
"text",
"detection",
"text generation",
"gan variants",
"bleu",
"language model",
"difference",
"language gans"
] | Reject | https://openreview.net/pdf?id=SylurJHFPS | https://openreview.net/forum?id=SylurJHFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PV64LzvDS",
"H1e5DF0aKH",
"HyxQWDWtYH",
"SJlBb0_QYB"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798730159,
1571838305936,
1571522298882,
1571159549291
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1698/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1698/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1698/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose a novel metric to detect distributional discrepancy for text generation models and argue that these can be used to explain the failure of GANs for language generation tasks. The reviewers found significant deficiencies with the paper, including:\\n\\n1) Numerous grammatical errors and typos, that make it difficult to read the paper.\\n\\n2) Mischarcterization of prior work on neural language models, and failure to compare with standard distributional discrepancy measures studied in prior work (KL, total variation, Wasserstein etc.). Further, the necessity of the complicated procedure derived by the authors is not well-justified.\\n\\n3) Failure to run experiments on standard banchmarks for image generation (which are much better studied applications of GANs) and confirm the superiority of the proposed metrics relative to standard baselines. \\n\\nThe reviewers were agreed on the rejection decision and the authors did not participate in the rebuttal phase.\\n\\nI therefore recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper argues that text generated by existing neural language models are not as good as real text and proposes two metric functions to measure the distributional difference between real text and generated text. The proposed metrics are tried on language GANs but fail to produce any improvement.\", \"major_issues\": \"This manuscript is poorly organized and the introduction is not well-written. It\\u2019s true that generating text from random noise vector remains a challenging problem, but sequence-to-sequence models for machine translation and question answering have achieved tremendous successes. The description in the first paragraph about neural language models is not accurate. \\n\\nThere are numerous grammar issues and mis-spellings. For e.g., pp. 1: \\u201cRelGAN which needs not...\\u201d, pp. 2: \\u201cWe analysis\\u2026\\u201d, \\u201ccould be find\\u2026\\u201d, pp 3: \\u201cequation 8\\u201d should be \\u201cequation 9\\u201d...\\n\\nThe proposed metrics are also questionable. Eq. 3 on page 2 holds for any x sampled from the distribution, not just for a single data point. To test the effectiveness of a good metric, extensive experiments on toy datasets such as MNIST, CIFAR10, and synthetic datasets should be conducted. This paper mixes text generation and proposed metrics together. The claimed failure experiments make the proposed metrics even more questionable.\\n\\nIn summary, the presentation and the organization of this paper should be significantly improved for submission. The proposed metrics are questionable and should be thoroughly tested on synthetic and toy datasets before deploying it for text generation.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes two metrics to measure the discrepancy between generated text and real text, based on the discriminator score in GANs. Empirically, it shows that text generated by current text generation methods is still far from human-generated text, as measured by the proposed metric. The writing is a bit rough so sometimes it's hard to figure out what has been done. It's also unclear how the proposed metrics compare to simply using the discriminator for evaluation. Therefore, I'm inclined to reject the current submission.\", \"approach\": [\"The proposed metric essentially relies on the learned discriminator to measure the closeness of generated text vs real text, based on the strong assumption that the learned discriminator is near-optimal. It has been previously shown that learning a classifier from generated and real text does not generalize well (Lowe et al, 2017, Chaganty et al, 2018).\", \"What's the advantage of the proposed metric, compared to existing ones, e.g. KL divergence, total variation etc.?\"], \"experiments\": [\"What's the accuracy of the learned discriminators? The discrepancy could be due to both data difference and classification error.\"], \"minor\": \"Bleu -> BLEU\", \"reference\": \"\", \"towards_an_automatic_turing_test\": \"Learning to evaluate dialogue responses. R. Lowe, M. Noseworthy, I. V. Serban, N. Angelard- Gontier, Y. Bengio, and J. Pineau. 2017.\\nThe price of debiasing automatic metrics in natural language evaluation. A. Chaganty, S. Mussmann, and P. Liang. 2018.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an estimator to quantify the difference in distributions between real and generated text based on a classifier that discriminates between real vs generated text. The methodology is however not particularly well motivated and the experiments do not convince me that this proposed measure is superior to other reasonable choices. Overall, the writing also contains many grammatical errors and confusing at places.\", \"major_comments\": \"- There are tons of other existing measures of distributional discrepancy that could be applied to this same problem. Some would be classical approaches (eg. Kullback-Leibler or other f-divergence based on estimated densities, Maximum Mean Discrepancy based on a specific text kernel, etc) while others would be highly related to this work through their use of a classifier. Here's just a few examples: \\n\\ni) Lopez-Paz & Oquab (2018). \\\"Revisiting Classifier Two-Sample Tests\\n\\\": https://arxiv.org/abs/1610.06545 \\nii) the Wasserstein critic in Wasserstein-GAN\\niii) Sugiyama et al (2012). \\\"Density Ratio Estimation in Machine Learning\\\"\\n\\nGiven all these existing methods (I am sure there are many more), it is unclear to me why the estimator proposed in this paper should be better. The authors need to clarify this both intuitively and empirically via comparison experiments (theoretical comparisons would be nice to see as well).\\n\\n- The authors are proposing a measure of discrepancy, which is essentially useful as a two-sample statistical test. As such, the authors should demonstrate a power analysis of their test to detect differences between real vs generated text and show this new test is better than tests based on existing discrepancy measures.\\n\\n- The authors claim training a generator to minimize their proposed divergence is superior to a standard language GAN. However, the method to achieve this is quite convoluted, and straightforward generator training to minimize D_phi does not appear to work (the authors do not say why either).\", \"minor_comments\": [\"x needs to be defined before equation (1).\", \"It is mathematically incorrect to talk about probability density functions when dealing with discrete text. Rather these should be referred to as probability mass functions, likelihoods, or distributions (not \\\"distributional function\\\" either).\"]}"
]
} |
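The reviews in this record repeatedly contrast the paper's discriminator-based estimator with classifier two-sample tests (Lopez-Paz & Oquab, 2018) and with total variation. For reference, below is a minimal sketch of that baseline idea, not the paper's exact metric: train a classifier to separate real from generated text and read a total-variation lower bound off its held-out accuracy. All texts and sizes are toy placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-ins for real sentences and for samples from a language model.
real_texts = [
    "the cat sat on the mat",
    "a dog barked at the mailman",
    "she poured coffee into the cup",
    "rain fell softly on the roof",
]
generated_texts = [
    "the the cat mat sat on",
    "dog a at barked mailman the",
    "coffee she cup the into poured",
    "roof the on softly fell rain",
]

texts = real_texts + generated_texts
labels = [1] * len(real_texts) + [0] * len(generated_texts)

# Bag-of-words features; a neural encoder could be substituted here.
X = CountVectorizer().fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0
)

clf = LogisticRegression().fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# For a balanced held-out set, classifier accuracy lower-bounds total
# variation: TV(p_real, p_gen) >= 2 * acc - 1. Accuracy near 0.5 means the
# two distributions are hard to tell apart; accuracy near 1.0 means a large gap.
tv_lower_bound = max(0.0, 2 * acc - 1)
print(f"held-out accuracy: {acc:.2f}  TV lower bound: {tv_lower_bound:.2f}")
```

With a Bayes-optimal classifier the bound is tight, since optimal accuracy equals (1 + TV)/2; any weaker classifier only loosens the bound, which is why Review #2's question about the learned discriminators' accuracy matters.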
SyedHyBFwS | Relative Pixel Prediction For Autoregressive Image Generation | [
"Wang Ling",
"Chris Dyer",
"Lei Yu",
"Lingpeng Kong",
"Dani Yogatama",
"Susannah Young"
] | In natural images, transitions between adjacent pixels tend to be smooth and gradual, a fact that has long been exploited in image compression models based on predictive coding. In contrast, existing neural autoregressive image generation models predict the absolute pixel intensities at each position, which is a more challenging problem. In this paper, we propose to predict pixels relatively, by predicting new pixels relative to previously generated pixels (or pixels from the conditioning context, when available). We show that this form of prediction fares favorably against its absolute counterpart when used independently, and that coordinating the two under a unified probabilistic model yields the best performance, as the model learns to predict sharp transitions using the absolute predictor, while generating smooth transitions using the relative predictor.
Experiments on multiple benchmarks for unconditional image generation, image colorization, and super-resolution indicate that the proposed mechanism leads to improvements in likelihood compared to its absolute-prediction counterparts. | [
"Image Generation",
"Autoregressive"
] | Reject | https://openreview.net/pdf?id=SyedHyBFwS | https://openreview.net/forum?id=SyedHyBFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"sT8BSRAqgw",
"rJgWhRKyjH",
"SyxKniRnYS",
"H1xVFeinYr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798730127,
1572998824898,
1571773361037,
1571758204468
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1697/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1697/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1697/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All reviewers rated this submission as a weak reject and there was no author response.\\nThe AC recommends rejection.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper bases its methodology on well known developments in image analysis/synthesis about similarity of pixel values in adjacent locations. Many techniques have been used for modelling this similarity, including predictive models, cliques and graphs. The paper uses a simple autoregressive model for generating pixel values based on the values of previously processed pixels, estimating the differences between these neighboring pixel values.\\n\\nThe method is implemented through copying the pixel values and adjusting the differences. Three types of prediction, based on absolute, or relative values are examined, for image generation, colorization, super-resolution. The problems are significant, but the approach rather superficial. A small experimental study is presented, based on CIFAR-10 and downsampled ImageNet datsaets. Much more experiments, including quantitative and qualitative results are reuired, to validate the prospects of the method in different types of (complex) problems and contexts. Marginal improvements are observed in the presented results. Since image generation and image to image translation are targeted, comparison and/or combined use with Sota methods, i.e., GANs should be examined. \\n\\nMoreover, the paper presentation needs improvement; for example, symbols are undefined when used for the first time in the text (see eq. 3), etc.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper the authors present a new way to use autoregressive modeling to generate images pixel by pixel where each pixel is generated by modeling the difference between the current pixel value and the preexistent ones. In order to achieve that, the authors propose a copy and adjustment mechanism that select an existing pixel, and then adjust its sub-pixel (channel values) to generate the new pixel. The proposed model is demonstrated with a suite of experiments in classic image generation benchmark. The authors also demonstrate the use of their technique in Image to Image translation.\\nOverall, although the paper explain clearly the intuition and the motivation of the proposed technique, I think that the paper in its present state have low novelty, weak related work analysis review and insufficient experiments to support a publication at ICLR. \\n\\n\\n\\n**Novelty, contribution and related work**\\nThe authors should highlight better their main contribution novelty of the proposed method compared to their baseline.\\n\\n\\n**Result and conducted experiments**\", \"the_correctness_of_the_proposed_approach_is_not_proved_by_the_conducted_experiment__in_fact\": \"The experiments do not provide the details of the used architecture compared to your baseline. \\nIn Table 1 you report the results using your technique on several computer vision tasks (generation, colorization and super-resolution) but you're not comparing with the SoA of each of these tasks.\\nThe results reported in Tables 1 and 2 are not convincing when compared to existing approaches (using only CIFAR10 in Table2). \\nThere are so many missing details specially to validate Image-To-image translation \\nFigure 3 is confusing and not clear \\n\\n**Minor comments**\", \"in__references_section\": \"(Kingma & Dhariwal, 2018) is not in a proper format (nips 2018)\\nBad quality of illustrations and images \\nBe coherent with the position of captions (figure 3)\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes an approach for image generation that relies on an autoregressive model for the image pixels. These models are popularly used in image coding and compression settings, and have been used in generative models like PixelCNN. In contrast to this prior work, the proposed model is based on the selection of a previously available pixel and the modeling of the differences between the old pixel and the new one. The copy and adjustment models, i.e., eqs (3) and (5-6), are straightforward. Applications to image-to-image translation are also presented.\\n\\nI am rating the paper \\\"weak reject\\\" mostly due to the limited set of comparisons in experimental results. There is no qualitative comparison to other algorithms for two of the problems considered (colorization, super-resolution) and the comparison with other algorithms for unconditional image generation is limited to CIFAR-10; thus, the impact of this contribution is not clear. Furthermore there is no discussion of these comparison results - i.e., what the proposed algorithm contributes given that it's outperformed by the sparse transformer.\\n\\nIt is not clear at first what the authors mean by \\\"sub-pixel\\\", which appears to be one of the color/spectrum channels of a pixel of the image? Also not clear what \\\"outcome masking\\\" refers to. The explanation of the hidden states (g,h) used for each mechanism are not always clear or explicit. For example, can you write an equation for h_{i,c-1} which is more explicit than \\\"composing the history of generated sub-pixels\\\"? Can you define Ui when it is first used in (6)? What is the difference between the pixel state h_{r,C} and its values x_r?\\n\\nMinor comments\\nThe second equation in Section 2.3 is missing =\\nIn Section 5.1, it is not clear what is meant by \\\"discrediting\\\" the image.\\nThe table in Fig. 3 could use full names for the problems instead of initials.\"}"
]
} |
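The abstract's predictive-coding premise, that transitions between adjacent pixels are smooth and therefore differences are easier to model than absolute intensities, can be illustrated numerically. Below is a minimal NumPy sketch; the gradient image and noise level are made-up stand-ins for natural-image statistics, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "natural" image: a smooth horizontal gradient plus mild noise,
# a stand-in for the smooth local transitions the abstract describes.
x = np.linspace(0.0, 255.0, 64)[None, :] + rng.normal(0.0, 3.0, (64, 64))
x = np.clip(x, 0.0, 255.0)

absolute_targets = x[:, 1:]              # what an absolute predictor must model
relative_targets = x[:, 1:] - x[:, :-1]  # residuals w.r.t. the left neighbor

# The relative targets are far more concentrated, so they are easier to
# model with a peaked predictive distribution.
print("std of absolute targets:", absolute_targets.std())  # roughly 70+
print("std of relative targets:", relative_targets.std())  # roughly 4-5
```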
BylPSkHKvB | Natural- to formal-language generation using Tensor Product Representations | [
"Kezhen Chen",
"Qiuyuan Huang",
"Hamid Palangi",
"Paul Smolensky",
"Kenneth D. Forbus",
"Jianfeng Gao"
] | Generating formal language represented by relational tuples, such as Lisp programs or mathematical expressions, from natural-language input is an extremely challenging task because it requires explicitly capturing discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structural information and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space, and the decoder uses TPR 'unbinding' to generate, in symbolic space, a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments. TP-N2F considerably outperforms LSTM-based Seq2Seq models, setting new state-of-the-art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoList dataset for program synthesis. Ablation studies show that the improvements are mainly attributable to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning. | [
"Neural Symbolic Reasoning",
"Deep Learning",
"Natural Language Processing",
"Structural Representation",
"Interpretation of Learned Representations"
] | Reject | https://openreview.net/pdf?id=BylPSkHKvB | https://openreview.net/forum?id=BylPSkHKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SXGYD_tkBP",
"HylvvoJ2oB",
"rJlElS6ioH",
"rJxLze2soB",
"BkxiUOcisB",
"SkeBMX0dsB",
"rkgaigA_sH",
"SygeZ0TuiH",
"HJl6Riadir",
"SyecPABa9S",
"SygoAfzI9r",
"rkxqciuCtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798730097,
1573808991087,
1573799148428,
1573793805734,
1573787731025,
1573606156669,
1573605540764,
1573604855906,
1573604308826,
1572851298433,
1572377298902,
1571879826103
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1695/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1695/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1695/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1695/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1695/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1695/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1695/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1695/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1695/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1695/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1695/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposed a new seq2seq method to implement natural language to formal language translation. Fixed length Tensor Product Representations are used as the intermediate representation between encoder and decoder. Experiments are conducted on MathQA and AlgoList datasets and show the effectiveness of the methods. Intensive discussions happened between the authors and reviewers. Despite of the various concerns raised by the reviewers, a main problem pointed by both reviewer#3 and reviewer#4 is that there is a gap between the theory and the implementation in this paper. The other reviewer (#2) likes the paper but is less confident and tend to agree with the other two reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to the questions\", \"comment\": \"Thank you for your helpful suggestions, and we hope you find our modifications sufficient for the revised version of this paper to make it more clear.\\n\\nRegarding to the careful questions, the $r'$ in decoder is not the dual\\u00a0of $r$ in the encoder. In encoder, $r$ is the ${\\\\bf role}$ vector of each word in natural-language, which represents more general information such as structural information of the word. In decoder, the $r'$ is the dual vector of ${\\\\bf relation}$ vector in the tuple TPR for formal language (For order-3 TPR $a_i \\\\otimes r_i \\\\otimes p_i$ ,\\u00a0 $a_i$ represents the arguments of a tuple, $r_i$ for relation of a tuple, and $p_i$ for positions of each argument in the tuple).\\u00a0\\n\\nThe encoder uses \\\"binding\\\" operation on natural-language TPR (order-2 Tensor); the decoder uses \\\"unbinding\\\" operation on formal-language TPR (order-3 Tensor). During training part, encoder learns to select ${\\\\bf role}$ vectors and ${\\\\bf filler}$ vectors for encoding natural-language via binding; and the decoder learns to use the correct unbinding vectors to unbind the formal-language tuples via unbinding. At each time step $t$, as the decoder needs to decode a order-3 tensor, to decode a binary ${\\\\bf relational}$ tuple, the unbinding module decodes it using the $two$ $steps$ of TPR unbinding ( details are described in Sec. 3.2.2 ). Although the order-3 TPR is called \\\"assumed\\\", the order-3 TPR exists when decoder successes in producing the targeted formal-language tuples (we clarified this at the 3rd point in our original response). We also added a section (Appendix A.3)\\u00a0 in the modified paper for the detailed mathematical explanation about this.\\u00a0\\n\\nDuring encoding, based on natural-language model of TPR, natural-language is modeled as the order-2 tensor. Each word is represented as the tensor product representation (2-order tensor) of a ${\\\\bf role}$ vector, which represents more general information such as structural information; and another ${\\\\bf filler}$ vector, which represents more specific information such as semantic information. We use one LSTM for the ${\\\\bf role}$ vector and another LSTM for the ${\\\\bf filler}$ vector (two LSTM). Based on the theory, two vectors are sufficient to represent the natural-language structure. Furthermore, this order-2 TPR has been shown to be an effective natural-language\\u00a0input encoding for QA in Palangi et al. (2018). In the future, higher order tensors like you mentioned (three LSTM or four LSTM) could be explored to represent different models of natural-language.\\n\\nIn this paper, one of our contributions is the learning scheme for learning structure conversion between different TPR (natural-language order-2 tensor to formal-language order-3 tensor). This learning scheme can also be used for either for same structure conversion or different structure conversion. Conversion between same structures can use the first point you mentioned (the corresponding pairs of dual vectors for both binding and unbinding).\\n\\nThanks again for your interests and questions, and we are happy to answer any further questions you might have.\"}",
"{\"title\": \"question about the encoder\", \"comment\": \"Thanks. Then I have a better understanding about your model. And I also notice you have modified your paper to make it more clear. I have another question. Are $r$ in the encoder and $r'$ in the decoder dual of each other? According to my understanding from your theory, they should be dual of each other. But it seems $r'$ is dual to some implicit vectors encoded in the assumed $H$. If so, the construction of $H_s$ using two LSTM becomes meanlingless. For example, perhaps you may use one LSTMs, product of three LSTMs or product of four LSTMs but get a better result, which, though, are not interpretable by the tensor product representation.\"}",
"{\"title\": \"Clarification for the concern\", \"comment\": \"Thank you for your continued interest in our model. Your comment starts by saying that the output of the encoder ($H_s$ in your notation) has the form $\\\\sum_i a_i \\\\otimes r_i \\\\otimes p_i$ , an order-3 tensor. But actually the output of the encoder is an order-2 tensor with the form $\\\\sum_k f_k \\\\otimes r_k$ (Sec. 3.1.1). In order to produce the order-3 tensor that you describe (Sec. 3.1.2), the MLP is necessary: it converts the order-2 tensor coming out of the encoder into the order-3 tensor that goes into the decoder (Sec. 3.1.3). We called it the \\\"reasoning MLP\\\" because the MLP is supposed to \\\"reason\\\" about the natural-language question and map it to the formal-language program to solve the question.\\n\\nIf the encoder produced the kind of order-3 tensor you call $H_s$, you are correct that we would not need an MLP between the encoder and the decoder. However the encoder\\u2019s job is not to produce the (order-3) encodings of the output relational triples; its job is to produce an encoding of the NL problem statement, which we encode as an order-2 tensor; this was shown to be an effective NL input encoding for QA in Palangi et al. (2018). So the job of the MLP \\u2013 roughly \\u2013 is to convert an encoding of the problem (order-2 tensor) into an encoding of the solution (order-3 tensor). It is indispensable for the model. Through this NLP in our model, the natural-language (order-2 tensor) is converted to formal-language (order-3 tensor) in our paper.\"}",
"{\"title\": \"Concerns about the 3rd point\", \"comment\": \"Thanks for your clarifications. But I still have somes concerns about the 3rd point, i.e. the assumed TPR.\\n\\nWhy don't you just remove the reasoning MLP layer? If the MLP layer is removed, your theory (Equations 1-5) is perfect. I agree that the learned TPR could be interpreted as having the form of Equation 3. However, I suppose the MLP layer harms your theory in this model. The reason is follows (I may use some different symbols as in your paper): \\n\\nLet $H_s$ denote the output of the encoder. Then $H_s$ is constructed as $H_s = a_1^s \\\\otimes r^s \\\\otimes p_1^s + a_2^s \\\\otimes r^s \\\\otimes p_2^s$\\nLet $H_r$ denote the output of the MLP reasoning layer, i.e. assumed TPR. Then $H_r$ is learned as $H_r = a_1^r \\\\otimes r^r \\\\otimes p_1^r + a_2^r \\\\otimes r^r \\\\otimes p_2^r$\\n\\nLet $p'$ and $r'$ denote the parameters in the decoder. Then your theory about the binding-unbinding (Equation 4 and 5) takes effect for the pairs of ($p'$, $p^r$) and ($r'$,$r^r$), rather than ($p'$, $p^s$) and ($r'$,$r^s$). So because of the assumed TPR $H_r$, it becomes meanless how $H_s$ is constructed as your theory does not apply to $r^s$ and $p^s$\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you very much for your strong recommendation for accepting the paper. We share your excitement about the novelty and impact of the proposed methods, and we would like to provide additional technical details below to address some of your questions.\\n\\n1. \\\"length of the sequence\\\":\\nThe length of the input or output sequence does not affect the order of the corresponding TPR. In the decoder, the order-3 TPR is always of the same \\\"size\\\" (rank): it represents just a single relational tuple, since these are generated one at a time (Sec. 3.1.2). In the encoder, the order-2 TPR of a NL word sequence is the sum of each word's single TPR (Sec. 3.1.1). For problems consisting of a longer sequence of words, the TPR produced by the encoder is intuitively more 'densely-packed' (literally, higher rank, as you say), but it can apparently still adequately represent all the information in the problem needed to correctly generate even quite lengthy output sequences. \\n\\nFor example, the following is generated correctly by our model (55 tuples) but wrong by the baseline (LSTM).\", \"question\": \"given numbers a , b , c and e , let d be c , reverse digits in d , let a and the number in the range from 1 to b inclusive that has the maximum value when its digits are reversed be the coordinates of one end and d and e be the coordinates of another end of segment f , find the length of segment f squared.\\n\\nTP-N2F generated tuples (correct):\\n\\n( digits c ) ( reverse 0 ) ( * arg1 10 ) ( + 2 arg2 ) ( lambda2 3 ) ( reduce 1 0 4 ) ( - a 5 ) ( digits c ) ( reverse 7 ) ( * arg1 10 ) ( + 9 arg2 ) ( lambda2 10 ) ( reduce 8 0 11 ) ( - a 12 ) ( * 6 13 ) ( + b 1 ) ( range 0 15 ) ( digits arg1 ) ( reverse 17 ) ( * arg1 10 ) ( + 19 arg2 ) ( lambda2 20 ) ( reduce 18 0 21 ) ( digits arg2 ) ( reverse 23 ) ( * arg1 10 ) ( + 25 arg2 ) ( lambda2 26 ) ( reduce 24 0 27 ) ( > 22 28 ) ( if 29 arg1 arg2 ) ( lambda2 30 ) ( reduce 16 0 31 ) ( - 32 e ) ( + b 1 ) ( range 0 34 ) ( digits arg1 ) ( reverse 36 ) ( * arg1 10 ) ( + 38 arg2 ) ( lambda2 39 ) ( reduce 37 0 40 ) ( digits arg2 ) ( reverse 42 ) ( * arg1 10 ) ( + 44 arg2 ) ( lambda2 45 ) ( reduce 43 0 46 ) ( > 41 47 ) ( if 48 arg1 arg2 ) ( lambda2 49 ) ( reduce 35 0 50 ) ( - 51 e ) ( * 33 52 ) ( + 14 53 )\\n\\nLSTM generated Lisp-code (incorrect):\\n\\n( - a d ) ( - a d ) ( * 0 1 ) ( digits c ) ( reverse 3 ) ( * arg1 10 ) ( + 5 arg2 ) ( lambda2 6 ) ( reduce 4 0 7 ) ( - 8 e ) ( + b 1 ) ( range 0 10 ) ( digits arg1 ) ( reverse 12 ) ( * arg1 10 ) ( + 14 arg2 ) ( lambda2 15 ) ( reduce 13 0 16 ) ( digits arg2 ) ( reverse 18 ) ( * arg1 10 ) ( + 20 arg2 ) ( lambda2 21 ) ( reduce 19 0 22 ) ( > 17 23 ) ( if 24 arg1 arg2 ) ( lambda2 25 ) ( reduce 11 0 26 ) ( - 27 e ) ( * 9 28 ) ( + 2 29 )\\n\\n2. \\\"convex combination\\\": \\nYes, the main reason for using a weighted combinations of fillers and roles in the encoder is that argmax is not differentiable. Additionally, in other work, we have seen that networks can effectively use the blending of role vectors, performing less well when the blend is replaced by the single argmax role vector. \\n\\n3. \\\"Seq2Tree+search results\\\":\\nThanks for the suggestion. The Seq2Tree + Search model is proposed in the original dataset paper (Polosukhin and Skidanov, 2018). For fair comparison, Table 2 shows results without beam search across the board. The accuracy for Seq2Tree without beam search is $61\\\\%$ on the full testing dataset, while ours is $84.02\\\\%$. 
We did not implement beam search in all the models compared in Table 2, and we have only the authors' reported value for Seq2Tree + Search, which is $85.8\\\\%$. We do not know the potential benefit of beam search for TP-N2F because the output is not a sequence of tokens, but a sequence of full relational tuples, and it is not yet clear how to implement beam search effectively at that level. This is interesting future work. \\n\\n4. \\\"ablation study\\\": \\nThanks for the suggestions on this. We actually did many different ablation studies. We tried to use LSTM to decode an entire relational tuple one at a time, i.e., use three different MLPs on each hidden state of the LSTM to predict one relation and two arguments. However, the accuracy is about $15\\\\%$ lower than TP-N2F on the MathQA dataset. We assume that the TPR structure is important for decoding relational tuples, especially relational tuples for reasoning which contain rich structural information.\\n\\n5. \\\"references\\\":\\nThank you for the suggestion of discussing the relation between our approach to program synthesis and others. A key difference is the complete absence of any use of symbolic computation in our approach. This makes our approach and others less readily comparable. Were space limits not so severe, we would have liked to follow your suggestion and attempted comparisons. \\n\\n6. \\\"typo\\\":\\nYes, thank you: you correctly spotted a typo in (5), which we ourselves only caught after the paper was submitted.\"}",
"{\"title\": \"Response to the minor comments of Reviewer #3\", \"comment\": \"We really appreciate your careful suggestions and valuable comments very much! Your 27 minor comments greatly aided us in improving the exposition; we hope that you would find the thoroughly revised paper (uploaded to OpenReview) much clearer. Because of the limitation of space, we apologize for some unclear points and formatting issues. We will address the main questions below.\\n\\nFor minor comments 1, 2, 3, 4, 7, 9, 13, 15, 17, 20, 22, 23, 24, 26, 27 we updated the paper based on these suggestions about clarification and formatting. The mathematics has been completely re-set, expanded to completely define all quantities in the model, and augmented with English explanation.\\n\\n5. \\\"details of the MLP\\\": \\nThe details of the MLP are now given in Sec. A.2.2 [p. 12]. Each layer is a linear layer followed by a tanh activation function. We tested the number of layers from 1 to 5. With 1 and 2 layers of the MLP, the performance is roughly the same and with more layers, the performance drops. We use the best result, from one linear layer with tanh.\\n\\n6. \\\"bidirectional LSTM for the encoder\\\": \\nWe tested bidirectional LSTMs, but did not get significant improvement and the best models are trained with unidirectional LSTMs. \\n\\n8. \\\"using the output of reasoning MLP in tuple-LSTM\\\": \\nAlthough the reported model passes the output of the reasoning MLP only to the first time-step of the tuple-LSTM, the decoder correctly produces lengthy sequences of tuples: see the last example now given in Sec. A.5 (and in the reply to Review #2), a correct sequence of 55 tuples. However, the performance may well be improved further by making the output of the reasoning MLP available to the decoding LSTM at every time step; thank you for suggesting this variation, which we will test in future work.\\n\\n10. \\\"classifier\\\": \\nSorry for the missing details. As now spelled out in Sec. A.2.3, Eqs. 46-51 [p. 13], we use a linear layer followed by a softmax function separately on the relation vector and the 2 argument vectors to compute the probability distribution over all possible symbolic relations and arguments. Generation is done greedily. \\n\\n11. \\\"attention\\\": \\nThe attention version we used is from \\\"Effective Approaches to Attention-based Neural Machine Translation\\\", Luong, et al. (2015), which is now described in detail in Sec. A.2.3 (Eqs. 38-40, p. 13).\\n\\n12. \\\"$f_{linear}$ in (11)\\\": \\nAs now spelled out in Eq. 43 of Sec. A.2.3, the $f_{linear}$ in (11) is a simple linear function to generate the unbinding relation vector. The linear function operates on the sum of $\\\\textbf{B}_1$ and $\\\\textbf{B}_2$ (the tensor product between the relation vector and each argument vector) to produce an unbinding vector of a relation that unbinds both arguments (Eqs. 44-45). \\n\\n14. \\\"the input of the decoder\\\": \\nThe input at each timestep of the decoder Tuple-LSTM is the concatenation of the relation and argument embedding vectors of the tuple generated at the previous timestep. \\n\\n16. \\\"predicted tokens in (9), (10) or (11)\\\" and 18. \\\"decoding at inference time\\\":\", \"the_text_now_states\": \"\\\"the softmax probability distribution over symbolic outputs is computed for relations and arguments separately. In generation, the most probable symbol is selected.\\\" [p. 6] and spelled out in Eqs. 49-51 of Sec. A.2.3 [p. 13].\\n\\n19. 
\\\"details of the experimental architecture setting\\\": \\nThese are described in the results and discussion section, and in more detail in the Appendix A.1. We could add details of all ablation studies, and we will publish our code on github if the paper is accepted.\\n\\n21. \\\"the noisy examples\\\": \\nThe revised version defines an example as noisy if \\\"the execution script returns the wrong answer when given the ground-truth pro-gram\\\" [p. 7]. \\n\\n25. \\\"a temperature parameter be helpful\\\": \\nAs you say, the temperature is a factor scaling the weight matrix. As now stated in Sec. A.2.1, p. 11, in the model, it is fixed (to 0.1 in the experiment). The model trained faster with this factor.\"}",
"{\"title\": \"Response to the major comments of Reviewer #3\", \"comment\": \"Your very comprehensive comments on our work and the excellent advice on the presentation are greatly appreciated. We believe we have addressed all of your comments in the new version of the paper that we have uploaded to OpenReview, which is heavily revised and much improved, thanks to your suggestions and those of the other reviewers. We have also added an important new mathematical Section A.3 to the Appendix, showing that a successfully trained model will have learned to produce, for the decoder, order-3 TPRs that have the form assumed in the network design (Eq. 3, p. 4).\\n\\nIn this comment, we address your two major points; the 27 minor points are taken up in a follow-on comment.\\n\\n1. \\\"high-level structural description of the model\\\":\", \"the_revised_version_includes\": \"\\\"TP-N2F encodes the natural-language symbolic structure of the problem in an input vector space, maps this to a vector in an intermediate space, and uses that vector to produce a sequence of output vectors that are decoded as relational structures. Both input and output structures are modeled as Tensor Product Representations (TPRs) (Smolensky, 1990). During encoding, NL-input symbolic structures are encoded as vector space embeddings using TPR ` binding' (following \\\"Palangi, et al., AAAI 2018\\\"); during decoding, symbolic constituents are extracted from structure-embedding output vectors using TPR ` unbinding' (following \\\"Huang, et al., NAACL 2018; AAAI 2019\\\").\\\" [p. 1] We have also added a more specific but still high-level description in the opening 2 paragraphs of Sec. 3, p. 3.\\n\\n2. \\\"symmetry between roles and fillers in encoder\\\":\\nOn the important issue you raise here, the revised paper includes footnote 3, p. 3: \\\"Although the TPR formalism treats fillers and roles symmetrically, in use, hyperparameters are selected so that the number of available fillers is greater than that of roles. Thus, on average, each role is assigned to more words, encouraging it to take on a more general function, such as a grammatical role.\\\" Our model uses 150 fillers and 50 roles. The text on p. 3 also now states: \\\"the mechanism closely follows that of \\\"Palangi, et al., AAAI 2018\\\", and we hypothesize similar results: the role and filler approximately encode the grammatical role of the token and its lexical semantics, respectively.\\\" Exploring inductive biases that explicitly encourage the roles to be used grammatically is a direction of future work.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"We would like to thank you for the constructive comments and helpful suggestions. We have done our best to address each of your comments in the heavily revised version of the paper that we have uploaded to OpenReview. Regarding your specific points:\\n\\n1.\\\"non-square matrix\\\": This has been corrected in the revised paper, which now states \\\"if the embeddings of the roles are linearly independent ... the role matrix $\\\\mathbf{R}$ has a left inverse\\\" [p.2]. The case of non-linear-independence is discussed in the new Appendix Sec. A.3, which states, citing the new reference Anonymous (in prep.), that \\\"even when the number of unbinding vectors exceeds the dimension of their space by a factor of 2 or 3 (which applies to the TP-N2F models presented here), there is a set of role vectors $\\\\{ \\\\mathbf{r}_k \\\\}_{k \\\\in K}$ approximately dual to $\\\\{ \\\\mathbf{r}'_k \\\\}_{k \\\\in K}$, such that $\\\\mathbf{r}_l^\\\\top \\\\mathbf{r}_j' = \\\\delta_{lj} \\\\: \\\\forall l, j \\\\in K$ holds to a good approximation\\\" [p.14] \\n\\n2.\\\"binding-unbinding mechanism properly\\\": We clarify this potentially confusing issue in the revised paper. Regarding the relation between the role and unbinding vectors for the encoder and decoder: \\\"we will make use of both TPR binding using the tensor product with role vectors $\\\\mathbf{r}_i$ and TPR unbinding using the tensor inner product with unbinding vectors $\\\\mathbf{u}_j$. Binding will be used to produce the order-2 tensor $\\\\mathbf{T}_S$ embedding of the NL problem statement. Unbinding will be used to generate output relational tuples from an order-3 tensor $\\\\mathbf{H}$. Because they pertain to different representations (of different orders in fact), the binding and unbinding vectors we will use are not related to one another.\\\" [p.2] (Reviewer 3's comment gives a good description for our model.) The job of the MLP between the encoder and the decoder is to map the order-2 natural-language-structure TPR to the order-3 formal-language-structure (relational-tuple) TPR.\\n\\n3.\\\"the input to the decoder is an 'assumed' TPR\\\": Regarding the status of the \\\"assumed \\\"TPR form of the input to the decoder, $\\\\mathbf{H}$, the revised paper states:\\n\\\"In the model, the order-3 tensor $\\\\mathbf{H}$ of Eq. 3 has a different status than the order-2 tensor $\\\\mathbf{T}_S$ of Sec. 3.1.1. $\\\\mathbf{T}_S$ is a TPR by construction, whereas $\\\\mathbf{H}$ is a TPR as a result of successful learning. To generate the output relational tuples, the decoder assumes each tuple has the form of Eq. 3, and performs the unbinding operations which that structure calls for. In Appendix Sec. A.3, it is shown that, if unbinding each of a set of roles from some unknown tensor $\\\\mathbf{T}$ gives a target set of fillers, then $\\\\mathbf{T}$ must equal the TPR generated by those role/filler pairs, plus some tensor that is irrelevant because unbinding from it produces the zero vector. In other words, if the decoder succeeds in producing filler vectors that correspond to output relational tuples that match the target, then, as far as what the decoder can see, the tensor that it operates on is the TPR of Eq. 
3.\\\"[p.4] \\n\\n4.\\\"the encoder cannot learn role and filler properly\\\": The fillers and roles in the encoder are learned through end2end supervised training on natural-language-input/formal-language output pairs, following the successful use of this technique for question-answering in \\\"Palangi, et al. (AAAI 2018)\\\".\\n\\n5.\\\"include some references in semantic parsing\\\": This is an excellent suggestion, which we have followed in the Related Work Sec. 5 [p. 8], although not to the extent we would have liked due to the length limit. Our work is not literally semantic parsing in the narrow sense, since the output is not the meaning of the input, but rather a reasoning process for solving the problem expressed by the input. But we agree there is an important connection, and semantic parsing would be an excellent application of the model.\\n\\nAgain, thank you for your comments and we welcome further helpful suggestions and questions to improve the paper.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The authors propose a binding-unbinding mechanism for translating natural language to formal language. The idea is good and novel. As far as I know, this is indeed the first work for handling this task using binding-unbinding mechanism. The experimental results also look promising in compared with the exsiting models. However, the designed specific neural network does not support the claimed binding-unbinding theory very well. Moerover, there seem to be some errors about the correctness of the theory (See the first point below).\\n\\nFirstly, in the last paragraph of Section 2, the authors claim that the role matrix $R$ would be invertible such that there exists a matrix $U = R^{-1}$ such that the fillers would be recovered. However, $R$ is defined as a non-square matrix in the previous paragraph. How can a non-square matrix be invertible? \\n\\nSecondly, the design of the specific neural network cannot describe the theory behind proposed binding-unbinding mechanism properly. The authors try to interpret the design of the neural networks using the concepts in the proposed binding-unbinding theorybut are not convincible. In Section 2, the basics of binding-unbinding are introduced and many mathematical properties are required to make the binding-unbinding work. However, all the parameters/variables in the neural networks are freely designated and are not correlated to each other, thus they cannot work together to meet the requirements in the binding-unbinding mechanism. According to my understanding, at least there should be some direct connections between the parameters in the encoder and decoder. For example, is there any restriction on the parameters in encoder and decoder respectively to reflect the property $UR=I$ as in Section 2.\\n\\nLastly, in the encoder part, the role and filler are learned in an unsupervised without any evidence. The input for the decoder is an \\\"assumed\\\" TPR, thus the only evidence from the objective function are cut-off by the assumed TPR. Given that there are no other connections between encoder and decoder, the design of the encoder cannot learn role and filler properly.\", \"other_suggestions\": \"The natural language to formal language problem is named semantic parsing in natural language processing field. In semantic parsing problem, langugae to programatic language is a typical task. I would recommend include some references in semantic parsing.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a sequence-to-sequence model for mapping word sequences to relation-argument-tuple sequences. The intermediate representation (output of the encoder) is a fixed-dimensional vector. Both encoder and decoder internally use a tensor product representation. The experimental results suggest that the tensor product representation is helpful for both the encoder and the decoder. The paper is interesting and the experimental results are positive, but in my opinion the exposition could use some substantial work. Fixing the most substantial flaws in the exposition would be sufficient to warrant an accept in my view.\", \"major_comments\": \"I found the mix of levels of detail in the model specification in section 3 confusing. It would be extremely helpful to have a straightforward high-level mathematical description of the key parts of the encoder, mapping (which could be considered part of the encoder), and decoder in standard matrix-vector notation. While equations (7), (8), (9), (10), (11) and appendix A.2 go some way toward this, key high-level details seem to be missing, and I feel like the exposition would benefit from simply stating the matrix-vector operations that are performed in addition to describing their interpretation in terms of the semantics of the tensor product representation. Specific examples are noted below.\\n\\nIt would be helpful to be explicit about the very highest-level structure of the proposed model. If I understand correctly, it is a probabilistic sequence-to-sequence model mapping a word sequence to a probability distribution over relation-argument-tuple sequences. It uses an encoder-decoder architecture with a fixed-dimensional intermediate representation, and an autoregressive decoder using attention. Both the encoder and decoder are based on the tensor product representation described in section 2. Stating these simple facts explicitly would be extremely helpful.\\n\\nEspecially for the encoder, the learned representation is so general that there seems to be no guarantee that the learned roles and fillers are in any way related to the syntactical / semantic structure that motivates it in section 2. There doesn't seem to be any experimental investigation of the learned TPR in the encoder. If I understand correctly, the way encoder roles and fillers are computed and used is symmetric, meaning that the roles and fillers could be swapped while leaving the overall mapping from word sequences to relation-argument-tuple sequences unchanged. 
This suggests it is not possible to interpret the role and filler vectors in the encoder in an intuitive way.\", \"minor_comments\": \"In section 2, \\\"R is invertible\\\" should strictly be \\\"R has a left inverse\\\".\\n\\nIn section 3.1.1, the claim that \\\"we can hypothesize to approximately encode the grammatical role of the token and its lexical semantics\\\" is pretty tenuous, especially given the apparent symmetry between learned roles and fillers in the encoder and given the lack of experimental investigation of the meaning of the learned encoder roles and fillers.\\n\\nIn section 3.1.2, my understanding is that the relation-argument tuple (R, A_1, A_2, A_3), say, is treated as a sequence of 3-tuples: (A_1, R, 1), (A_2, R, 2), (A_3, R, 3). Each of these 3-tuples is then embedded using learned embeddings (separate embeddings for argument, relation and position). If correct, it would be helpful to state this explicitly.\\n\\nIn section 3.1.2, it is stated that contracting a rank-3 tensor with a vector is equivalent to matrix-vector product, which is not the case.\\n\\nIn section 3.1.3, both high-level and low-level details of the MLP module are omitted. High-level, I presume that the matrix output by the encoder is reshaped to a large vector, the MLP is applied to this vector to produce another vector, then this is reshaped to a rank-3 tensor to input to the decoder. It would be helpful to state this. Low-level, the number of layers, depth and activation function of the MLP should be specified somewhere, at least in the appendix.\\n\\nDid the authors consider using a bidirectional LSTM for the encoder? This might improve performance.\\n\\nIn section 3.1.2 and appendix A.2, why use the LSTM hidden state for subsequent processing rather than the LSTM output (which would be more conventional). The LSTM output is defined in appendix A.2 but appears not to be used for anything. Please clarify in the paper.\\n\\nDid the authors consider passing the output of the reasoning MLP into every step of the tuple LSTM instead of just using it to initialize the hidden state?\\n\\nIt would be helpful to state the rank of the tensors H, B, etc in section 3.2.2.\\n\\nIn section 3.2.2, what does \\\"are predicted by classifiers over the vectors...\\\" mean? This seems quite imprecise. What is the form of the classifier? My best guess is that the vector a_i^t is passed through a small MLP with a final softmax layer which outputs a probability distribution over the 1-of-K representation of the argument. The main text says \\\"more details are given in the appendix\\\", but appendix A.2 just has \\\"Classifier(a_1^t)\\\". Please clarify in the paper.\\n\\nWhat is the attention over in equation (9)? Attention needs at least two arguments, the query and the sequence being attended to. It seems that (9) only specifies one of these. It would also be helpful to be explicit about the form of attention used.\\n\\nWhat is f_linear in (11)?\\n\\nIt seems unnecessarily confusing to switch notation for the arguments from A_1 in section 3.1.2 to a r g_1 in section 3.2.2, and similarly for the relations.\\n\\nFor the decoder tuple LSTM, how exactly is the previous relation-argument tuple (R, A_1, A_2, A_3), say, summarized? Are each of R, A_1, A_2 mapped to a vector, these vectors concatenated, then passed into the LSTM? Or is the positional decomposition into (A_1, R, 1), ... used? 
Please clarify in the paper.\\n\\nBased on section 3.3, it seems that the model assumes that, in the decomposition of (R, A_1, A_2, A_3) into a sequence (A_1, R, 1), (A_2, R, 2), (A_3, R, 3) of 3-tuples at each decoder output step, the three 3-tuples are conditionally independent of each other and the three entries of each 3-tuple are conditionally independent of each other. Is this indeed assumed? If so, it would be helpful to state this explicitly. It seems like this is likely not true in practice.\\n\\nSection 3.3 refers to \\\"predicted tokens\\\". Where are these predicted tokens in (9), (10) or (11)?\\n\\nIn section 3.3, it seems the loss at each decoder step is the log probability of the relation-argument tuple at that step. Thus, by the autoregressive property, the overall loss is the log probability of the sequence of relation-argument tuples. If so, it would be helpful to state both these facts explicitly.\\n\\nSection 3 seems to be missing a section, which is how decoding is performed at inference time. For the output of the decoder at each step, is random sampling used, if so with a temperature, or is greedy decoding (selecting the most likely class, equivalent to a temperature of 0) used? Also, what is done if decoding outputs different R's for (A_1, R, 1), (A_2, R, 2), (A_3, R, 3)? The three R values here should be equal in order for this to represent a relation-argument tuple (R, A_1, A_2, A_3), but there is no guarantee the model will respect this constraint.\\n\\nUnless I missed it (apologies if so), many experimental architecture details were omitted. For example, how many hidden cells were used for the LSTMs, etc, etc? These should at least be stated in the appendix.\\n\\nIt would be interesting to investigate how long input / output sequences need to be before the fixed-dimensional internal representation breaks down.\\n\\nIn section 4.1.1, it was not clear to me what \\\"noisy examples\\\" means. Does this mean that the dataset itself is flawed, meaning that the reference sequence of operations does not yield the reference answer? Please clarify in the paper.\\n\\nIn table 1, please state the total size of the fixed-dimensional intermediate representation for all systems. This seems crucial to ensure the systems can be meaningfully compared.\\n\\nIn figure 4, left figure, the semantic classes don't appear to be very convincingly clustered. (And it seems like K-means clustering could easily have selected a different clustering given a different random initialization.)\\n\\nIn appendix A.2, mathematical symbols are essentially meaningless without describing what they mean in words. Please explain the meaning of all the symbols that are not defined in terms of other symbols, e.g. w^t, T_{t-1}, ..., f_s m (is this softmax???), f_l i n e a r (what does this mean?), C o n t e x t, C l a s s i f i e r, etc, etc. C o n t e x t in particular doesn't even have a hint of a definition.\\n\\nIn (19) and (27), why would a temperature parameter be helpful? This can be absorbed as an overall multiplicative factor in the weight matrix of the previous linear layer. Is this temperature parameter learned during training (I presume so)? Please clarify in the paper.\\n\\nUsually * is used for convolution, not simple multiplication (e.g. equation (17)).\\n\\nThroughout the main body and appendix, there are lots of instances of poor spacing.
For example, $f_{linear}$ should be written as something like $f_\\\\text{linear}$ in latex to avoid it coming out as l i n e a r (which literally interpreted means l times i times n times e, etc). Please fix throughout.\"}",
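For context on the binding and unbinding algebra these reviews discuss, a minimal numpy sketch may help; the dimensions, the orthonormal-role assumption, and all variable names are our illustration, not the paper's:

```python
import numpy as np

# Tensor product representation: bind each filler f_i to a role r_i via
# an outer product and sum, T = sum_i f_i (x) r_i. With orthonormal
# roles, unbinding with the role vector recovers the filler exactly.
d, n = 8, 3
fillers = np.random.randn(n, d)
roles = np.linalg.qr(np.random.randn(d, d))[0][:n]   # orthonormal rows

T = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# Unbinding: T @ r_i = sum_j f_j (r_j . r_i) = f_i for orthonormal roles.
assert np.allclose(T @ roles[1], fillers[1])
```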
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper considers the challenging problem of learning to generate programs from natural language descriptions: the inputs are sequences of words describing a task and the outputs are corresponding programs solving the task. The proposed approach elegantly relies on tensor product representations. Inference with the proposed model is done in 3 steps: (1) encode the symbolic information present in the text data as a TPR, (ii) maps the input TPR to an output TPR encoding the symbolic relations of the output programs (here the authors use a simple MLP), and (iii) decode the output TPR into an actual program. The parameters of the models used in the 3 steps are learned jointly. For step (iii), the authors proposes a novel way of encoding an n-ary relation into a TPR which facilitates the recovery of the relation's arguments using unbinding operations: this is a neat trick (though I think it increases the number of parameters and may limit the expressiveness of the TPR, since reaching \\\"full-rank\\\" of the TPR will occur faster than with the encoding used in [Smolensky et al., 2016]). Experiments on two datasets demonstrate the validity of the approach.\", \"The paper is very well written and easy to follow. The idea seems original and well executed but I think the experimental section could be improved. In particular, adding/reporting stronger baselines to the comparison would straighten the paper. I also feel some relevant literature may be missing from the related work. Nonetheless, I think it is a good paper which will be relevant to the community, I thus recommend acceptance.\", \"Comments / Questions *\", \"Section 3.1.1: if I understand correctly, the length of the sequence affects the rank of the TPR. Could that be a problem in practice? E.g., the capacity of the TPR could likely be saturated quickly for long sequences?\", \"Section 3.2.1: the filler vector f_t = Fu is computed as a convex combination of the learned filler vectors. Is it a design choice to choose a convex combination rather than taking the column corresponding to the argmax of the vector u? Or is it because otherwise the model can not be trained using the classical backprop approach?\", \"The results of the Seq2Tree+Search model from (Bednarek et al. (2019)) is not reported in Table 2. Why? I believe it should be included (it is ok that it outperforms the proposed method. In addition you can maybe identify clear advantages of your method illustrating a trade-off, e.g., running time, end-to-end, scalability ...).\", \"A more thorough ablation study could also improve the strength of the experiments. For example, do you know to which extent the attention model in the decoder is necessary to achieve good performances?\", \"I am not very familiar with the literature but it seems some relevant work may be missing from the review. In particular, I believe there are many papers tackling the problem of learning programs from input output examples or execution traces, e.g. 
\\\"DeepCoder: Learning to Write Programs\\\", \\\"Neural Turing machines\\\", \\\"Inferring algorithmic patterns with stack-augmented recurrent nets\\\", \\\"Inferring and Executing Programs for Visual Reasoning\\\", \\\"Learning to infer graphics programs from hand-drawn images\\\"... This list is by no means meant to be exhaustive in any way, just to illustrate a large body of work that seems relevant to the present paper (even though I understand that those papers do not consider natural language description as inputs).\", \"Typos *\", \"Eq. (5) Should be r' instead of r_i' (?)\"]}"
]
} |
BJxvH1BtDS | Three-Head Neural Network Architecture for AlphaZero Learning | [
"Chao Gao",
"Martin Mueller",
"Ryan Hayward",
"Hengshuai Yao",
"Shangling Jui"
] | The search-based reinforcement learning algorithm AlphaZero has been used as a general method for
mastering the two-player games Go, chess and Shogi. One crucial ingredient in AlphaZero (and its predecessor AlphaGo Zero) is the two-head network architecture that outputs two estimates --- policy and value --- for one input game state. The merit of such an architecture is that letting policy and value learning share the same representation substantially improves the generalization of the neural net.
A three-head network architecture has recently been proposed that additionally learns an action-value head on the same fixed dataset used for the two-head net. Moreover, using the action-value head in Monte Carlo tree search (MCTS) improved the search efficiency.
However, the effectiveness of the three-head network has not been investigated in an AlphaZero-style learning paradigm.
In this paper, using the game of Hex as a test domain, we conduct an empirical study of the three-head network architecture in AlphaZero learning. We show that the architecture is also advantageous in zero-style iterative learning. Specifically, we find that the three-head network can induce the following benefits: (1) learning can become faster as search takes advantage of the additional action-value head; (2) better prediction results than with the two-head architecture can be achieved when using the additional action-value learning as an auxiliary task. | [
"alphazero",
"reinforcement learning",
"two-player games",
"heuristic search",
"deep neural networks"
] | Reject | https://openreview.net/pdf?id=BJxvH1BtDS | https://openreview.net/forum?id=BJxvH1BtDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7RlwxfdCQu",
"S1lIoZdsoB",
"S1xCxJdoiS",
"rkeEfpDoiH",
"r1xuzKwGqB",
"HJgHu7bCKH",
"S1eKL42R_H"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798730066,
1573777821805,
1573777142403,
1573776651782,
1572137231789,
1571849068951,
1570845777130
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1694/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1694/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1694/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1694/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1694/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1694/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors provide an empirical study of the recent 3-head architecture applied to AlphaZero style learning. They thoroughly evaluate this approach using the game Hex as a test domain.\\n\\nInitially, reviewers were concerned about how well the hyper parameters for tuned for different methods. The authors did a commendable job addressing the reviewers concerns in their revision. However, the reviewers agreed that with the additional results showing the gap between the 2 headed architecture and the three-headed architecture narrowed, the focus of the paper has changed substantially from the initial version. They suggest that a substantial rewrite of the paper would make the most sense before publication.\\n\\nAs a result, at this time, I'm going to recommend rejection, but I encourage the authors to incorporate the reviewers feedback. I believe this paper has the potential to be a strong submission in the future.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the useful comments, we have addressed the referee's concerns in the revision\", \"comment\": \"Thank you for the referee's useful comments.\\n\\nWe think the revised version has fixed the referee's concerns. Missing evaluations in the initial submission have now been added, and now we only focus on two cases for 3HNN, 1) expand threshold 0, 2) default expand threshold of 10. \\n\\n\\nAnswers to the major concerns. \\n1. Parameters well-tuned for 2HNN and 3HNN? \\n It is difficult to say which parameter choice is the \\\"well-tuned\\\" and which is not. \\nWe agree that from Figure~2, it seems for both 2HNN and 3HNN, the hyperparameter choices were not well-set. We include this Figure mainly because this parameter setting was a natural choice by the supervised learning results in (Gao et al. 2017, Gao et al. 2018b). \\n\\nHowever, we believe this is not be the case for Figures 4 and 5, as clearly shown in the picture. \\n\\nAnother fact is that, in our evaluation, our 2HNN and 3HNN models in Figures 5 finally achieved 90% win-rate (with 95% confidence); see Table 2. By comparing to another AlphaZero implementation for 9x9 Hex (see (Thomas et al. 2019)), we believe that our AlphaZero for both 3HNN and 2HNN have yielded high quality neural nets, which is not possible if the hyperparameter choice is poor.\\n\\n2. Whether with R2 or without R2? \\nTo be clear, in the revised version, we keep only two variants for 3HNN. \\n (a) always with R2; and (b) always no R2. \\n\\nWe added a direct head-to-head playing, which shows that in both cases, 3HNN in MCTS yielded stronger playing than 2HNN. \\n\\n3. How to merge q(s,a) and v(s) when s is expanded? \\n A few possibilities have been suggested in (Gao et al. 2018b). Here, to keep it simple, \\nwe only back up v(s) if expanding s. This is also the default choice in (Gao et al. 2018b). \\n\\n4. Architecture of NN? \\nYes, we are using exactly the same architecture as in (Gao et al. 2018b). This has been clarified in the revision. \\n\\n5. Why one iteration of 2HNN is slower than 3HNN in Figure 2? \\nThis is because for the same 800 simulation search, MCTS-3HNN used default expansion threshold of 10, while MCTS-2HNN used 0. This makes them producing a game with different speed. MCTS-3HNN took about 1s per move, while MCTS-2HNN took up to 10s per move. \\n\\n6 & 7. Figures 3 and 4? \\nWe have revised these figures. \\n\\n8. Error bound in Table 2?\\n We have revised Table 2 and included error bounds. It shows that with 95% confidence, our players achieved ~90% win-rates.\\n\\n--------------------------------------------------------------------\\nAnswers to \\\"suggestions to improve the paper\\\": \\n1 & 2. There are many can be investigated. We aim to keep it simple in this paper. \\n\\n3. No, it not related to the theorem of Nash. The technique itself does not prevent its usage in games where are draws. In Figure 4, we see 3HNN worked well even without it. \\n\\n4. The citation was there. This was from (Tian et al. 2019). \\nTo be clear, we have rephrased it to \\\"selfplay Elo\\\". The argument is, for example, if A player is beaten by B, B beaten by C. If A has Elo 100, B may get 150, then C 200, but in real-playing C may not be really stronger than A, or very likely, not 100 Elo stronger than A. \\n\\n--------------------------------------------------------------------\", \"answer_to_minor_comments\": \"Thanks for the careful reading, we have fixed these writing errors.\"}",
"{\"title\": \"Thank you for the valuable comments, we have updated with a revised version\", \"comment\": \"Thank you for the valuable comments. We have revised the paper for better exposition. We summarize and answer the referee's major questions below.\\n\\n1. What are the challenges in applying AlphaZero with 3HNN?\\n 1) The first challenge is computation resource, which made Gao et al.2018b only evaluate 3HNN on fixed data with supervised learning. \\n 2) The second challenge is that it is not clear if the R2 term the loss would work on AlphaZero style training, because this data-augmentation assumes the move selected by search is the \\\"best\\\" among all candidates. AlphaZero sometimes samples a move not with the highest visit count for exploration. \\n\\nTo investigate \\u201cthe value of the R2 term in the loss\\u201d, we present results of two versions, (v1) always with R2; (v2) always without R2. \\n\\nWe have now included direct game between 2HNN and 3HNN in Figure 5 (in the revised paper uploaded to openreview), which shows that when used in MCTS, 3HNN models from (v1) and (v2) both obtained stronger playing than 2HNN. \\n\\nThe prediction of value functions are mixed though. (i) Without R2 it led to better state-value-learning, though its action-head did not generalize well on random game-states, (presumably because of the lack-of-data). (ii) With R2 the action-value head generalized much better than without R2 on random game-states, though the state-value head worse than (v2). \\n\\n\\n[We note that there is a third scheme: remove R2 only when it sees in training that the selected move is not with highest probability. This one was not investigated. Previously, we investigated a scheme by removing all R2 before $\\\\eta=30$. But this scheme looks naive. Considering the page limit, the results were removed from the paper. ]\\n\\n2. What is the relative difficulty of Hex? \\n In AI, the difficulty of Hex is often compared to Go. In late 1990s, in parallel to Go, researchers began to realize that the advancements developed for chess/checkers (and so on) do not work well on Hex. Van Rijswijck 2002a made a metaphor that Hex should be regarded as a \\\"bee\\\" if calling chess a \\\"fruitfly\\\" for AI. For state-space sizes, 9x9 Hex is 10^{38} while chess is 10^{47}, Shogi is 10^{71}. Commonly, 11x11 and 13x13 Hex are used by humans for playing, whose space sizes are 10^{56} and 10^{79}. Hex is sometimes played at 19x19 board, whose space size is close to 19x19 Go. \\n\\n The major difference between Hex and Go is that, many graph-theoretical properties can be identified in Hex; these could be used as an exact knowledge for pruning the state-space size without removing optimal strategy. In Go, such exact knowledge is more difficult to formalize, in part because of the KO rule (there is why there are Japanese, Chinese and other rules). Hex has simpler rules than Go, but as Go, well-playing strategy is hard to describe. \\n For scientifically and empirically studying the behaviour of AlphaZero, we believe that Hex is no less significant than Go. We chose 9x9 Hex because of (1) it is the largest board size where existing solver can be used to label a relatively large set of random game-states (2) it has been used in previous Zero studies (eg. Thomas et al.2017, 2019). \\n\\n3. Why expansion threshold 0 or 10? \\nThe non-zero expansion threshold has been a popular scheme in MCTS for playing games before deep neural nets came to stage (e.g. 
all MoHex versions):\\n-- 10 is the default one used in MoHex. \\n-- AlphaGo-Fan also used non-zero expansion threshold, although, with 2HNN, later developed AlphaGo-Zero/AlphaZero changed to 0. \\n\\n\\n4. Appendix result not well explained?\\nTo have coherent flow, we have moved appendix result on 9x9 Hex to main text, and removed 8x8 result.\"}",
"{\"title\": \"we have updated the paper, sincerely wishing the referee to reconsider the value of the updated version\", \"comment\": \"We thank the referee for the constructive comments. Yes, this work is mainly an empirically study of 3HNN in AlphaZero style training.\\nThis study is difficult because of (1) the computation demand, (2) the large number of hyperparameter choices. Due to time constraint, some evaluations were not added in our initial submission. We have now added these results to showcase 3HNN. \\nWe hope this revision could make the referee to reconsider the value of this work. We wish that this work could inspire a broader discussion of 3HNN and eventually lead to a larger scale study. \\n\\nAnswers to the major concerns\\n1. Why non-zero expansion threshold?\\nThis has been explored in (Gao et al 2018b). The role of delayed node expansion is to accumulate more simulations given the same computation resource. In our MCTS selfplay, \\nwith default expansion threshold of 10, it generally took over 1000 simulations per move. \\nWith expansion threshold of 0, it took ~200 simulations per move. (one simulation is one iteration of MCTS; in some places, it is also called rollout/playout). \\n\\nIn (Silver et al. 2017b), it is argued the merit of MCTS, in comparison to Alpha-Beta, is that it uses average to achieve stable evaluation. Clearly, delayed node expansion leads to more averaging than instant expansion. \\n\\nIn the revised paper, we only investigate two possibilities, either 0 or the default expand threshold of 10. See also our response to Rev1. \\n\\n2. Unfavourable parameter choice for 2HNN? \\n After reducing n_{mcts}=160, and revise \\\\eta, dirichlet noise, 2HNN obtained excellent result; see Figures 3 and 4. \\nA cross reference is to the Zero implementation in (Thomas et al. 2019; cited). Both against 10000-simulation-MoHex2.0, \\nour implemented MCTS-2HNN achieved over 90% win with a 95% confidence; see Table 2;\\nwhile PGS-EXIT (Thomas et al. 2019) achieved 58%; \\nboth using 800-simulation for search. \\n\\n3. Is 3HNN's performance indeed better than 2HNN?\\n As the MSEs and move prediction accuracy are only surrogate measurements on the strength of the neural network. To see if really 3HNN is stronger than 2HNN, we now present direct head-to-head match result. \\nSee Figure~5, which shows MCTS-3HNN mostly achieved over 50% win-rate. Each match consists of 162 games. Each curve in Figure~5 was produced by 162*80 games. They were not included in the initial version because they were not finished (taking a week). \\n\\n4. Significance of evaluation results with MoHex 2.0? \\n\\n We now performed three sets of match, each with 162 games. With 95% confidence, our final MCTS-3HNN achieved $91.57\\\\% \\\\pm 1.8$ while MCTS-2HNN achieved $89.7\\\\% \\\\pm 3.32$, against MoHex2.0. See Table~2 in the revised paper. \\n\\n5. Is the result comparable to MoHex3HNN (Gao et al. 2018b) and MoHex-CNN (Gao et al. 2017)? \\n These programs were only run on 13x13 Hex, while in this paper, we consider only 9x9 Hex.\", \"minor_issues\": \"1) To distinguish, we have now replaced \\\"iteration\\\" with \\\"simulation\\\" in describing MCTS. \\n2) Yes. you\\u2019re correct. In our log, it shows that for Figures 3 and 4, MCTS-2HNN and MCTS-3HNN took around 1s per move. The neural network training time is negligible in comparison to game-generation. We run 60 workers on a 56-cpu computer, the relative speed of each self-play work is rather similar. 
We did not observe serious synchronization overhead.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper applies the three-head neural network architecture as well as the corresponding training loss proposed in (Gao et al., 2018b) to alphazero style learning of the Hex game. The paper is mainly an empirical study, and shows that the architecture leads to faster and better learning results for Hex. The evaluation is done on two datasets, one with examples from near-optimal players produced by MoHex 2.0, and the other from randomly sampled but perfectly labelled examples generated by benzene. Performance improvement is evaluated from several different perspectives, including state-value errors, action-value errors and policy prediction accuracies. Finally, the match performance is also reported for competing with MoHex 2.0, one of the state-of-the-art agent for Hex.\\n\\nGenerally speaking, the paper does a good job in introducing and analyzing the structure of the alphazero learning scheme and the related alphago and alphago zero schemes, and the experiments within the scope of Hex is relatively thorough and the performance improvement is consistent and convincing. \\n\\nHowever, the description of the three-head neural network in Section 3 is too brief, and without looking at the original paper (Gao et al., 2018b), it is quite hard to understand the motivation of the objectives (especially the definitions and explanations of R1, R2 and R3). \\n\\nAdditionally, the challenge of applying three-head neural network architecture in the alphazero learning setting is almost not mentioned. In particular, what are the modifications needed compared to the original work (Gao et al., 2018b)? The authors may want to explain clearly how the training scheme is different, and clearly state what the detailed neural network architecture (at least in the appendix) used is, and how they are different from the original alphazero paper and (Gao et al., 2018b). Without these explanations, the significance of the paper would be largely limited to coding and engineering efforts (which are also valuable but not that much in the research sense).\\n\\nAnother related issue of this paper is that it is not clear (at least to me, who know little about the Hex game) how difficult it is to tackle Hex (compared to Go, Shogi and chess, etc.). The authors may want to elaborate more on this as well to further showcase the significance of the work.\\n\\nFinally, there are also some inconsistency in the hyper-parameter choices and architecture design. In particular, it is not clear why the authors choose the expansion threshold to 0 in the match performance part, whereas the authors use threshold 10 elsewhere. The turning on and off of the data augmentation in 3HNN in different experiments mentioned in the appendix are also not well explained. 
\\n\\nNevertheless, I still value the paper's effort and success in applying a newly proposed approach for a relatively challenging real-world game problem, despite the issues about experimental design and writing mentioned above.\", \"some_minor_suggestions\": \"the title of the rightmost plots should better be \\\"perfectly labelled examples\\\" instead of \\\"perfect examples\\\", and the authors may want to make it clearer which plot corresponds to dataset T1 and which corresponds to T2.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposed to use three-head network for AlphaZero-like training. The three-head network is used to predict policy, value and q function after an action is taken. While three-head network is presented by a prior work [1] and is learned via supervised learning on a fixed dataset, this paper mainly applies it to AlphaZero training for the game of Hex 9x9 and shows preliminary results.\\n\\nWhile the idea is interesting, there are many issues in the experiments and the conclusion is quite indecisive. So I feel that the paper is not ready for publication yet and thus vote for rejection. \\n\\nWhy we need expansion threshold n_th to be 10? If you keep visiting the same node without expansion, won\\u2019t the same node back-propagate the same value (or q) 10 times before expansion? If that\\u2019s the case, what\\u2019s the difference if we just back-propagate once? Note that if n_th = 0 then prediction of q(s, a) is no-longer necessary (except that predicts q(s, a) becomes an aux task during training, as mentioned in the caption of Fig. 3). \\n\\nFig. 2 shows that 3HNN trains faster than 2HNN. However, it looks like 2HNN and 3HNN show drastically different training curves, and are probably operating at different regions. In the text, the authors also acknowledge that one iteration of 2HNN is 5-6 times slower than 3HNN, since 2HNN builds a much deeper search tree. This bring about a question: is the performance difference due to unfavorable hyper-parameters on 2HNN (or other factors)? The paper doesn\\u2019t answer that. \\n\\nThe text claims that when n_th = 0, 3HNN performs better than 2HNN, however, the figure shows that 2HNN has lower or comparable MSE than 3HNN. The prediction accuracy is better, though. When n_th = 1, Fig. 4 shows that the 2HNN is doing comparable or better in terms of MSE and Prediction Accuracy than 3HNN (compared to perfect play). This somehow defeats the purpose of using the third head of q(s, a) that only helps when n_th > 0. \\n\\nIn Table 2, do you have standard derivation? Note that AlphaZero training is not that stable and the performance (in particular the initial performance since the performance might take off earlier or later) against a known bot can vary a lot, the difference between 56% and 63% can be purely due to noise. Also, how is the resulting model compared against MoHex-3HNN [1] and MoHex-CNN [2]? Note that MoHex-3HNN [1] shows 82.4% over MoHex 2.0 on 13x13, but is trained supervisedly, and Table 2 shows slightly better performance. So I am curious their performance comparison.\", \"minor\": \"The term \\u201citeration\\u201d seems to be defined twice with different meanings. It is defined as one MCTS rollout (see Appendix A) and also defined (in Fig 1) as one full synchronization of self-play and training (AlphaGo Zero setting). This causes a lot of confusions. I believe each dot in Fig. 2 is \\u201citeration\\u201d in the AlphaGo Zero sense. \\n\\nFinally, although many hardware information is revealed in the appendix, maybe it would be better if the authors could reveal more details about their AlphaZero-style training, e.g., how long does it take for each move and for each self-play game? 
How long does it take to wait until all self-play agent returns all games? Is there any synchronization overhead? This could give the audience some idea about the computational bottleneck. \\n\\nFrom the current number, it seems that 60 self-play processes are run on 56 cores, and each AlphaGo iteration takes approximate 5 hours (read from Fig. 2) with 200 games per self-play process. Assuming there is no synchronization overhead and 1 core per self-play process, this yields 200 games/5 hours per core, which is 1.5 min (or 90s) per game. Since each game has 9x9 = 81 moves, this means that it costs ~1.1 sec per move. Is that correct? \\n\\n[1] Chao Gao, Martin Muller, and Ryan Hayward. Three-head neural network architecture for Monte Carlo tree search. In IJCAI, pp. 3762\\u20133768, 2018.\\n\\n[2] Chao Gao, Ryan B Hayward, and Martin Muller. Move prediction using deep convolutional neural networks in Hex. IEEE Transactions on Games, 2017.\\n\\n=====Post Rebuttal=====\\nI really appreciated that the authors have made substantial efforts in improving the paper and adding more experiments. However, the additional change makes the story a bit more convoluted. After substantial parameter tuning on the 2HNN side, It seems that 3HNN is only slightly better than 2HNN (Fig. 5 in the revision, > 50% winrate, but it is not clear how much ELO it is better). Unfortunately, after tuning, 2HNN actually shows comparable performance in terms of speed (updated Fig. 3 and 4, middle columns), which somehow tarnishes the claims of the paper that 3HNN is better than 2HNN. \\n\\nThe final performance against 10000-rollouts MoHex2.0 is 89.7% (2HNN) versus 91.6% (3HNN), so the performance is slightly better with 3HNN. This number is much better than previous works e.g., PGS-EXIT (Thomas et al. 2019). This indeed shows that the paper does a good job in terms of engineering and performance push (agreed with R1). In my opinion, the paper can be better rewritten as a paper that shows strong performance in Hex, compared to previous works, plus many ablation analysis. \\n\\nI keep the score.\"}",
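Reviewer 2's throughput estimate above can be checked directly, under the same assumptions of one core per self-play process and no synchronization overhead:

$$
\frac{5\,\mathrm{h} \times 3600\,\mathrm{s/h}}{200\ \mathrm{games}} = 90\ \mathrm{s/game},
\qquad
\frac{90\ \mathrm{s/game}}{81\ \mathrm{moves/game}} \approx 1.1\ \mathrm{s/move}.
$$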
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper applies three-head neural network (3HNN) architecture in AlphaZero learning paradigm. This architecture was proposed in [1] and the paper builds upon their work. In AlphaGo and AlphaZero 2HNN is used, which predicts policy and value for a given state. 3HNN also predicts action-value Q function. In [1], the three new terms are added to the loss to train such a network, and the network is trained on a fixed dataset. The paper utilizes the same 3HNN idea with the same loss, and the contribution is that 3HNN is trained synchronously with MCTS iterations on an updating dataset (\\u201cAplhaZero training style\\u201d). Learning speed of 3HNN is shown to be higher than that of 2HNN. The special attention is drawn to varying the threshold expansion parameter, as the 3HNN architecture allows to set it above zero, while 2HNN does not. The approach is demonstrated on the game of Hex. Results are presented on two test datasets: positions drawn from a strong agent\\u2019s games and random positions. Labels in both datasets are perfect, obtained by a special solver.\\n\\nI tend to reject the paper, because the demonstrated results suggest that the models were not tuned well enough. Indeed, the paper claims that the parameters were not tuned. The paper claims using threshold expansion > 0 to be one of the main advantages of 3HNN. However, the best model in the experiment is the one with the parameter equals zero. Overall, for a purely experimental paper, the experiments are too crude.\\n\\nMain argument\\n1.\\tIt seems that some of NN models didn\\u2019t learn at all:\\n \\u2022 \\tFigure 2, 2HNN model. MSE on both the left and right plots is not improving. Moreover, on the right plot it \\n fluctuates around 1, which is the performance of a random guess.\\n \\u2022\\tFigure 3, right. MSE of the 3HNN model is not improving. Probably, random positions are too unnatural and \\n nothing similar is presented in the dataset drawn from MCTS iterations.\\n2.\\tSupmat reveals, that some of the models are in fact learned using the different loss than it is said in the paper. In particular, data augmentation term is sometimes on and sometimes off. A disabling scheme is suggested, depending on the dithering threshold and the number of the moves played before the state s. Some models use the scheme, for others the term is always on. This should be clearly stated in the main text, not in the supmat. \\nAlso, there is an experiment in the supmat, when the data augmentation term is always off. The influence of this term is itself interesting, as it is one of the reasons 3HNN is learning q-function at all. However, introduction of this term itself is the contribution in [1]. I suggest to add to the main part of the paper the experiment, comparing three regimes: 1) with the scheme, 2) term always on and 3) term always off. In fact, it is almost done, as all three regimes are used in different figures, but somewhy the final comparison (with other parameters fixed) is not shown and partly concealed in the supmat. It could become methodological improvement of the paper over [1]. \\n3.\\tWhen the leaf node s is expanded, the v function of the new node s\\u2019 = s \\u222a a is initialized to predicted q(s, a). It is one of the advantages of 3HNN and allows node expansion threshold. 
When s\\u2019 itself is expanded, how do you merge q(s,a) with backup values during mcts iterations?\\n4.\\tNothing is said about the architecture of NNs. If the representation of the state is the same as in [1], it should at least be mentioned.\\n5.\\tHow exactly does figure 2 shows that one iteration AlphaZero-2HNN is 5-6 times slower, than AlphaZero-3HNN. Probably the definition of data point in figure 2 is missing (e.g. one data point corresponds to one MCTS iteration).\\n6.\\tFigures 3 and 4 basically shows the same experiment, but with different models: AlphaZero-2HNN versus AlphaZero-3HNN with threshold 0 on Figure 3 and AlphaZero-2HNN versus AlphaZero-3HNN with threshold 1 on Figure 4. They could be united in one figure.\\n7.\\tResults from Figures 3 and 4 suggest, that 3HNN with threshold = 0 is better, than with threshold = 1. However, the paper claims setting threshold > 0 as an advantage. If it allows to save time (as each mcts iteration is faster), maybe they should be compared plotting time on x-axis (like in Figure 2)?\\n8.\\tPlease provide error bounds in table 2.\\n\\nAdditional arguments\\nArgumentation presented in this section didn\\u2019t affect the score. However, it might improve the paper.\\n1.\\t3HNN predicts q(s,a), which should be equal to v(s\\u2019), where s\\u2019 = s \\u222a a (state after action a is taken in state s). It would be interesting to see how condition q(s,a) == v(s \\u222a a) holds during 3HNN models learning. It can be checked on a first dataset (drawn from games), or a special random dataset, containing consecutive positions, could be generated. Probably, this condition could potentially be an additional loss term.\\n2.\\tAlso, it would be interesting to see how the condition between p and q holds. The higher q(s, a), the lower should be p(a). There is an interesting illustrating figure 7 in supmat, it may be presented in the main text. Also, it could be interesting not only for the first move. \\n3.\\tThe supmat claims that the motivation for turning off data augmentation term is that it assumes that in the selfplay game both players are selecting the best actions produced by search. Is it connected with the fact, that in the game of Hex all states are either winning or losing, according to theorem proved by Nash? How does this term would work for games with a draw trend, for example chess? In chess, for a lot of states the \\u201cground truth\\u201d v(s) would be close to zero and there is no action guaranteeing the win.\\n4.\\tThe paper claims \\u201cElo (Elo, 1978) improvement does not always correspond to real playing improvement\\u2014the monotonic increase of Elo score did not capture the fluctuation in strength\\u201d. Citation needed, what fluctuation in strength is not captured? Is it specific to game of Hex? For example, Elo is used as the main measure of total agent strength in AplhaZero papers, as well as by chess community (both chess programs and human players).\\n\\nMinor comments\\n1.\\tPage 4, section 2.2: Even though ... . Our summarization -> Even though ... , our summarization.\\n2.\\tPage 4, table 1: mvoe -> move.\\n3.\\tPage 7, bullet point above section 4.4: perfect -> perfectly.\\n4.\\tPage 7, the lowest paragraph. 
\\u201cFor the value learning, however, due to fast search, the AlphaZero-3HNN learned much faster than AlphaZero-2HNN both on games states produced by strong player (T1) as well as examples produced by random play (T2).\\u201d \\nIt is confusing, it seems that 3HNN on datasets T1 and T2, however, it was only tested on these datasets.\\n5.\\tPage 8: imposing an an auxiliary task -> imposing an auxiliary task.\\n6.\\tPage 9: \\u201cproduce playing strength significantly stronger than\\u201d \\u2013 reformulate.\\n\\n[1] Chao Gao, Martin M\\u00fcller, and Ryan Hayward. Three-head neural network architecture for monte carlo tree search. In IJCAI, pp. 3762\\u20133768, 2018b.\\n\\n=====Post Rebuttal=====\\nScore updated from 3 to 6.\"}"
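Review #3's suggested consistency condition q(s,a) == v(s ∪ a) could be probed with a simple monitor such as the following sketch. This is entirely our illustration; the paper itself contains no such term, and the toy network stands in for the real three-head heads:

```python
import numpy as np

def qv_consistency_gap(net, states, actions, next_states):
    """Mean absolute gap between q(s, a) and v(s'), where s' = s with
    action a played. A possible auxiliary loss, as the review suggests,
    would penalize this gap during training."""
    q = np.array([net.q(s, a) for s, a in zip(states, actions)])
    v = np.array([net.v(sp) for sp in next_states])
    return float(np.mean(np.abs(q - v)))

class ToyNet:                                   # stand-in for the 3HNN heads
    def q(self, s, a): return 0.5 * (s + a)
    def v(self, s): return 0.5 * s

print(qv_consistency_gap(ToyNet(), [0.2], [0.1], [0.3]))  # 0.0 if consistent
```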
]
} |
HJl8SkBYPr | Consistency-Based Semi-Supervised Active Learning: Towards Minimizing Labeling Budget | [
"Mingfei Gao",
"Zizhao Zhang",
"Guo Yu",
"Sercan O. Arik",
"Larry S. Davis",
"Tomas Pfister"
] | Active learning (AL) aims to integrate data labeling and model training in a unified way, and to minimize the labeling budget by prioritizing the selection of high-value data that can best improve model performance. Readily-available unlabeled data are used to evaluate selection mechanisms, but are not used for model training in conventional pool-based AL. To minimize the labeling budget, we unify unlabeled sample selection and model training based on two principles. First, we exploit both labeled and unlabeled data using semi-supervised learning (SSL) to distill information from unlabeled data that improves representation learning and sample selection. Second, we propose a simple yet effective selection metric that is coherent with the training objective such that the selected samples are effective at improving model performance. Our experimental results demonstrate superior performance with our proposed principles for limited labeled data compared to alternative AL and SSL combinations. In addition, we study the AL phenomenon of `cold start', which is becoming an increasingly important factor in enabling optimal unification of data labeling, model training and labeling budget minimization. We propose a measure that is found to be empirically correlated with the AL target loss. This measure can be used to assist in determining the proper start size. | [
"Active learning",
"semi-supervised learning"
] | Reject | https://openreview.net/pdf?id=HJl8SkBYPr | https://openreview.net/forum?id=HJl8SkBYPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"LUTeBSo6ik",
"SkxcrNYsiS",
"HJl9sGNHjr",
"ByeD3J4SoB",
"SygZ4p7rsS",
"Syl2VCwWiH",
"BklkAy1pqH",
"rylu0rBd5S",
"HkgdYboDqS",
"SygeC5GPcB",
"HyeG10GntB",
"rkghWIQitS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1576798730035,
1573782593520,
1573368482441,
1573367727255,
1573367080721,
1573121587583,
1572822983300,
1572521423665,
1572479359561,
1572444871845,
1571724762404,
1571661316295
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1693/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1693/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1693/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1693/Authors"
],
[
"~Christoph_Mayer1"
],
[
"ICLR.cc/2020/Conference/Paper1693/Authors"
],
[
"~Hao_Zhongkai1"
],
[
"ICLR.cc/2020/Conference/Paper1693/Authors"
],
[
"~Hao_Zhongkai1"
],
[
"ICLR.cc/2020/Conference/Paper1693/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1693/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors leverage advances in semi-supervised learning and data augmentation to propose a method for active learning. The AL method is based on the principle that a model should consistently label across perturbation/augmentations of examples, and thus propose to choose samples for active learning based on how much the estimated label distribution changes based on different perturbations of a given example. The method is intuitive and the experiments provide some evidence of efficacy. However, during discussion there was a lingering question of novelty that eventually swayed the group to reject this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updates of the manuscript\", \"comment\": \"Thanks again for all the valuable comments. We have updated our manuscript.\\nThe current version includes the following changes.\\n\\n1. We improved our writing (fixed typos and grammar issues etc.). \\n2. We revised some confusing statements.\\n3. We revised the manuscript according to Q2, Q3 and Q5 of the Reviewer 1.\\n4. We revised the manuscript according to Q1 of the Reviewer 2.\"}",
"{\"title\": \"Thanks for your interest.\", \"comment\": \"Thanks for your interest in several parts of our work and your kind suggestions to the possible future extensions of this work. Please see our responses as follows.\\n\\n1. I am wondering whether the idea of combining SSL and AL was not already introduced by others (Sener et al, CEAL from Wang et al.)? Nonetheless your performance is much better I guess mainly because of Mixmatch? How about other SSL techniques (mean teacher, VAT, SNTG or GAN based methods) does your approach also work there?\", \"our_response\": \"This is an interesting future work. A potential idea can be switching from AL methods to random whenever it is needed. Intuitively, random selection will be chosen more at the beginning and less afterwards. Another potential idea is AL with varying AL batch sizes (e.g. start with large batch size and then decrease for steep performance increase and then decrease again etc.).\"}",
"{\"title\": \"(Updated) Thanks for your thoughtful suggestions.\", \"comment\": \"Thanks for your thoughtful comments. Please see our responses as follows.\\n\\nQ1. The consistency of a sample is measured based on the perturbed samples. How to generate these perturbed samples may have a great influence on the query results. In the paper, it said that these samples are generated by standard augmentation operations (e.g. random crops and horizontal flips for image data). This representation is hard to follow in the experiments. If possible, it is better to show in details.\", \"our_response\": \"There is always a trade-off between a large AL batch size and a small AL batch size. Ideally, we would like to use AL as much as possible. Selecting a very large batch of samples will lead to insufficient usage of active learning given a limited budget. However, a very small AL batch size would lead to much more AL cycles which is computationally expensive.\\n\\nOur method is effective using reasonable AL batch sizes. We conducted more experiments on CIFAR-10 under the setting of Figure 1 (starting from 100 labeled samples), using different AL batch sizes. Our experiments show that when labeling 200 samples in total, we obtain accuracy of 89.5%, 89.2% and 89.3% when AL batch size is set to be 25, 50 and 100, respectively. These results suggest that, the performances are comparable using reasonable AL batch sizes.\\n\\n---------------------------------------------------------------------------------------------------------------- \\n| AL batch size | 25 | 50 | 100 |\\n----------------------------------------------------------------------------------------------------------------\\n|# of AL cycles to reach 200 labeled data | 4 | 2 | 1 | \\n----------------------------------------------------------------------------------------------------------------\\n| Accuracy | 89.5% | 89.2% | 89.3% |\\n----------------------------------------------------------------------------------------------------------------\"}",
"{\"title\": \"(Updated) Thanks for your valuable suggestions.\", \"comment\": \"Thanks for your valuable comments. Please see our responses as follows.\\n\\nQ1. This paper compares in Table 1 the difference between just active learning vs active learning + SSL. I'm not sure this is a fair comparison. I think the better comparison is shown in Table 2.\", \"our_response\": \"Thanks for your suggestion! We have changed this claim as suggested to \\u201cour method with 4K examples has 30% more error compared to the fully supervised method\\u201d.\"}",
"{\"title\": \"Some thoughts and questions\", \"comment\": \"I got some questions about the novelty of the paper and about related work.\\n\\nI am wondering whether the idea of combining SSL and AL was not already introduced by others (Sener et al, CEAL from Wang et al.)? Nonetheless your performance is much better I guess mainly because of Mixmatch? How about other SSL techniques (mean teacher, VAT, SNTG or GAN based methods) does your approach also work there?\\n\\nDoes you k-center AL (Sener et al.) include the MIP (robust k-center) or is it just plain k-center? \\n\\nThe consistency based AL criterion is indeed interesting but I think besides validating on SSL it should also be tested on standard supervised learning. I noticed such experiments in the supplementary is performs as good as entropy sampling. How does it compare to more recent approaches such as Learning Loss for Active Learning (CVPR19) or Bayesian Generative Active Deep Learning (ICML19). I think these experiments are very important for the proposed AL criterion to know in which setting it should be used. Why are they not part of the main paper?\\n\\nAnother thing that I am wondering about is did you do experiments for Learning Loss for Active Learning or Bayesian Generative Active Deep Learning using a SSL approach to train the classifier? How do they perform in SSL scenario?\\n\\nI think the section 4 is interesting. However, I believe that it would be optimal to do active learning as early as possible optimally we should never use random sampling but select already samples at the beginning. The plots in Fig. 3. show that it the performance it not very good if we start too early with AL with too few labeled samples for the three strategies. So it means that if we are forced to start with 50 labeled samples it would be a good idea to select the next 50 samples randomly right? So I get the feeling that the studied three criteria are just bad in this situation but in general we should start as early as possible but we need other criteria. What are your thoughts about this? Maybe it would also be a good idea to vary the number of labeled samples that we add in each AL cycle.\\n\\nThanks for your answer.\"}",
"{\"title\": \"Reply to \\\"Does finetuning epochs affect the accuracy?\\\"\", \"comment\": \"Standard supervised training is easily overfitted on small labeled training data. Finetuning on an overfitted model can hurt the performance based on our empirical knowledge. Based on our observation, the advanced semi-supervised method is stable and robust to overfitting. That is also our motivation, which is taking advantage of semi-supervised training to significantly improve active learning at early AL cycles.\"}",
"{\"title\": \"Does finetuning epochs affect the accuracy?\", \"comment\": \"Many thanks for your response. I noticed that when the dataset is small, which is exactly the situation when active learning begins, training and finetuning on the small dataset might be easy to overfit. I'm confused that does the selection of the finetuning epochs number affect the accuracy on the test dataset?\"}",
"{\"title\": \"Reply to \\\"Confused about the training strategy\\\"\", \"comment\": \"Thanks for your interest.\\n\\nWe finetune the model on the whole dataset. Our model is trained in the semi-supervised mode, so in each cycle both labeled and unlabeled data is used for training.\"}",
"{\"title\": \"Confused about the training strategy\", \"comment\": \"I'm confused that when you select a new batch from the unlabeled data pool, you need to minimize the loss on the labeled dataset. Do you only finetune your neural network on the new batch or finetune on the whole labeled dataset, or completely retrain a model from random initialization? I notice that many works on active learning did not mention the training or finetuning strategy. When the dataset is large, completely retraining a deep neural network is time-consuming.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new combination method for active learning and semi-supervised learning, where the objective is to make predictions that are robust to perturbations (for SSL) and select points for labeling with labels that differ under perturbations. This technique achieves 2x label efficiency over SSL with uniform-random sampling. Additionally, the authors assess (at least for CIFAR-10 with batch size 50) the best starting random seed set as 100 labels, known as K_0 in this work. This work yields pretty good empirical results and has a conceptually unified approach to SSL and active learning building off of recent works.\", \"comments\": [\"This paper compares in Table 1 the difference between just active learning vs active learning + SSL. I'm not sure this is a fair comparison. I think the better comparison is shown in Table 2.\", \"The authors write that \\\"when only 100 samples are labeled, our method outperforms kcenter by 39.24% accuracy\\\". Do the authors mean after 100 additional labels are acquired (so 200 labels) or is this number off?\", \"Can the authors clarify what is meant by \\\"or some labels correspond to rare cases, as in self-driving cars\\\"? Why are such datasets more costly to acquire? Is it because of the size of the self-driving car datasets?\", \"Although the method is more unified than some other AL + SSL approaches, I wonder if the L_u(x,M) can be made to look more like the C(B,M) = \\\\sum E(x,M). In particular, L_u(x,M) uses just a single perturbation and a different \\\"distance\\\" function than E(x,M) which uses N perturbations.\", \"The authors state that they lose 1.26% accuracy to the fully supervised model. However, this is very much not within the margin of measurement error and 1.26% accuracy is rather significant for accuracies around 95%. Another way of putting it is that the method in the paper with 4K examples has 30% more error compared to the fully supervised method. Can the authors either change this claim or provide a number of labels where their method achieves the fully-supervised accuracy?\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a semi-supervised active learning method to reduce the labeling cost. In the proposed method, a selection criterion to better integrate AL selection mechanism in SSL training framework is designed. The simple metric that aims to measure the inconsistency across a certain number of meaningful perturbations. It considers N perturbed samples of the original input data x, which can be obtained by standard augmentation operations (e.g. random crops and horizontal flips for image data). Then the variance is adopted to quantify consistency. In this way, the proposed method prefers data samples with high values, which may possess varying level of difficulty for the model to classify. To verify the effectiveness of the proposed method, several baseline methods are compared on several benchmark data sets, and the proposed method has achieved better performance. Meanwhile, to deal with the \\u201ccold start\\u201d problem, a measure that is found to be empirically correlated with the AL target loss is proposed, and this measure can be used to assist in determining the proper start size. However, there are some minor concerns:\\n[1] The consistency of a sample is measured based on the perturbed samples. How to generate these perturbed samples may have a great influence on the query results. In the paper, it said that these samples are generated by standard augmentation operations (e.g. random crops and horizontal flips for image data). This representation is hard to follow in the experiments. If possible, it is better to show in details.\\n[2] In the uncertainty of active learning, the samples are selected from different distributions in the unlabeled data, for example, the marginal sampling selects the samples around the classification hyperlanes (Settles, Burr. \\\"Active learning.\\\" Synthesis Lectures on Artificial Intelligence and Machine Learning 6.1 (2012): 1-114.). Can you show which samples may be selected in the unlabeled data. In this way, the proposed criterion can be followed more easily.\\n[3] In the experiments, whether the proposed method can select a batch of samples at each iteration. How about the influence of the batch size.\"}"
]
} |
BkgUB1SYPS | Interpretable Network Structure for Modeling Contextual Dependency | [
"Xindian Ma",
"Peng Zhang",
"Xiaoliu Mao",
"Yehua Zhang",
"Nan Duan",
"Yuexian Hou",
"Ming Zhou."
] | Neural language models have achieved great success in many NLP tasks, to a large extent due to their ability to capture contextual dependencies among terms in a text. While many efforts have been devoted to empirically explaining the connection between the network hyperparameters and the ability to represent contextual dependency, the theoretical analysis is relatively insufficient. Inspired by the recent research on the use of tensor space to explain the neural network architecture, we explore the interpretable mechanism for neural language models. Specifically, we define the concept of separation rank in the language modeling process, in order to theoretically measure the degree of contextual dependencies in a sentence. Then, we show that the lower bound of such a separation rank can reveal the quantitative relation between the network structure (e.g. depth/width) and the ability to model contextual dependency. In particular, increasing the depth of the neural network can be more effective in improving the ability to model contextual dependency. Therefore, it is important to design an adaptive network to compute the adaptive depth in a task. Inspired by Adaptive Computation Time (ACT), we design an adaptive recurrent network based on the separation rank to model contextual dependency. Experiments on various NLP tasks have verified the proposed theoretical analysis. We also test our adaptive recurrent neural network on the sentence classification task, and the experiments show that it can achieve better results than the traditional bidirectional LSTM. | [
"Language Model",
"Recurrent Neural Network",
"Separation Rank"
] | Reject | https://openreview.net/pdf?id=BkgUB1SYPS | https://openreview.net/forum?id=BkgUB1SYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"axb_l2zDv6",
"SyxGrIxnoS",
"rJeDeLg2or",
"BkxKaEe2iB",
"BJg5uXxkjB",
"Bygw1CsiFB",
"Hylc0Nt8Yr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798730004,
1573811769908,
1573811694595,
1573811392736,
1572959089806,
1571696094618,
1571357906063
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1692/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1692/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1692/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1692/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1692/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1692/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper a theoretical interpretation of separation rank as a measure of a recurrent network's ability to capture contextual dependencies in text, and introduces a novel bidirectional NLP variant and tests it on several NLP tasks to verify their analysis.\\n\\nReviewer 3 found that the paper does not provide a clear description of the method and that a focus on single message would have worked better. Reviewer 2 made a claim of several shortcomings in the paper relating to lack of clarity, limited details on method, reliance on a 'false dichotomy', and failure to report performance. Reviewer 1 found the goals of the work to be interesting but that the paper was not clear, that the proofs were not rigorous enough, and clarity of experiments. The authors responded to the all the comments. The reviewers felt that their comments were still valid and did not adjust their ratings.\\n\\nOverall, the paper is not yet ready in its current form. We hope that the authors will find valuable feedback for their ongoing research.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Authors' response to reviewer 1\", \"comment\": \"Thanks for your helpful advices. We have provided our responses below.\\n1. This work builds upon the previous work (namely tensor space language model, TSLM) which has been published and the detailed introduction of the TSLM will be added in Supplementary Appendices in the revised version.\\n2. In this work, we consider in Eq (4), the order of words cannot be exchanged. In Eq. (4), $k$ cannot be estimated. Therefore, we choose to estimate $k$ in the TSLM.\\n3. At present, the proof of Claim 1 only considers the one-hot coding. The case of a dense coding will be considered in the future work. Besides, in the experiments, we do not use the one-hot embedding and use the pre-trained word embeddings.\\n4. For the issue of writing, we will check and correct it.\"}",
"{\"title\": \"Authors' response to reviewer 2\", \"comment\": \"Thanks very much for your detailed comments. Our responses are as follows.\\n1. For clarity issues and grammatical errors, we will check and correct them in this paper.\\n2. Regarding the relative less explanations of adaptive LSTM, we think that the main contributions are the theoretical analysis about the modeling ability of the contextual dependency, as well as the corresponding interpretable mechanism. Nevertheless, we will add more descriptions about the adaptive LSTM in the revised version of our paper. \\n3. For the problem of a dichotomy of tasks, this strategy is inspired by the recent work [2]. This work describes that for those tasks (e.g., NER, POS tagging, and constituency parsing), one needs the hidden vectors of the lower layer, to better capture short-range dependencies than those of the deeper layer.\\n4. For this work of the Universal Transformer [3], we will cite it in our paper.\\n[1] Zhang L, Zhang P, Ma X, et al. A Generalized Language Model in Tensor Space[J]. arXiv preprint arXiv:1901.11167, 2019.\\n[2] Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. Dissecting contextual word\", \"embeddings\": \"Architecture and representation. In Proceedings of the 2018 Conference on Empirical\\nMethods in Natural Language Processing, pp. 1499\\u20131509, 2018a.\\n[3] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. \\\"Universal Transformers\\\". In Proc. of ICLR 2019.\"}",
"{\"title\": \"Authors' response to reviewer 3\", \"comment\": \"Thanks for your helpful comments. We have provided our responses below.\\n1. We actually provide some connections between the theoretical results and the bidirectional network. The theoretical results in Eq. (4) and Eq. (6) can guide the design of the bidirectional network. In Eq. (4), there is an important condition $\\\\sum_{j=1 }^{K}p(y_j)=1$, which can help us design the dynamic halting. According to Eq. (6), we design the dynamic halting as $N(t)=min\\\\{l:\\\\sum_{i}^{l}p_{t}^{i} \\\\leq {1-\\\\epsilon} \\\\} $ in Section 5.2. \\n2. (a) About the TSLM, this work has been published [1]. Inspired by this work, the separation rank in TSLM is proposed to measure the contextual dependency in our work. We have explained the background of TSLM in Section 2.2. We will add the detailed introduction of the TSLM in Supplementary Appendices.\\n(b) For the title, \\u201cinterpretable mechanism\\u2019\\u2019 means that we need to select the adaptive number of layers or the hidden units for the modeling of different sentences.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper derives lower bounds on the separation rank of a wider class of recurrent NLP models in terms of its depth and number of hidden layers, demonstrating that both the number of hidden units as well as the number of layers improves the ability of NLP networks to model context dependency. It then introduces a novel bidirectional NLP variant that is supposed to capture a good trade-off between computational cost and performance.\\n\\nThe manuscript is very dense and does not follow a straight and easy-to-follow story line. In particular, the introduction of the bidirectional variant seems to substantially distract from the main story line of the paper (there is also no connection between the theoretical results to the bidirectional network). The improvements of the bidirectional models also seem to be minor, but no standard deviations for the performance results are reported.\\n\\nA clear description as to which language models are captured by the TSLM model is missing. Also, it is unclear how tight the bounds actually are given that no value for m (the word length) is given. Finally, the title does not reflect the content of the paper (there is nothing interpretable about the network structure).\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper first provides a theoretical interpretation of separation rank as a measure of a recurrent network's ability to capture contextual dependencies in natural language text. The analysis is primarily done in the context of tensor space language models (TSLM) that was demonstrated to be a generalisation of n-gram language models in earlier work. The theoretical derivations suggest that increasing a network's depth increases its separation rank exponentially, while increasing its width only increases its separation rank linearly. Based on this finding, the paper proposes a bidirectional adaptive recurrent neural network that adapts the network depth for each task using dynamic halting. Experiments on six NLP tasks demonstrate that increasing network depth helps more for tasks that generally require many long-range dependencies, while increasing network width is generally sufficient for tasks that require mostly short-term dependencies, although I disagree with the paper's dichotomy of tasks with long-term and short-term dependencies (see point 3 below).\\n\\nOverall, this paper suffers from several serious issues, as listed below. I am therefore recommending a rating of \\\"Reject\\\" ahead of the authors' response.\\n\\n1. This paper suffers from substantial clarity issues beyond simple grammatical errors. Some examples of the most serious clarity issues are as follows: (i) Section 6.1.1 lists \\\"sentiment analysis\\\" as one of the tasks, but Figure 2 has no entry for \\\"sentiment analysis\\\", yet instead features experimental results for \\\"semantic classifier\\\" which was never mentioned or introduced before; and (ii) Figure 2 shows \\\"1st layer LSTM\\\", \\\"2nd layer LSTM\\\", and \\\"3rd layer LSTM\\\", while (based on my reading) what the paper means are \\\"1-layer LSTM\\\", \\\"2-layer LSTM\\\", and \\\"3-layer LSTM\\\", hence highly confusing for the reader.\\n\\n2. The proposed explanation about the proposed bidirectional adaptive RNN is really sparse, despite being a central part of the paper. The explanation of the proposed model is only contained in two paragraphs (Sections 5.1 and 5.2), and are not sufficient for the readers to understand the model. A more extensive explanation and intuition about what the model is like, and how it is similar or different to TSLM and standard RNNs, is required to improve this.\\n\\n3. In the experiments, the paper assumes a false dichotomy of tasks that only require short-range dependencies (NER, POS tagging, and constituency parsing), and tasks that require long-range dependencies (WSD, sentiment analysis, and coreference resolution). This dichotomy is overly simplistic and ultimately false. For instance, constituency parsers often need to identify spans that are very long-distance in nature (for a recent investigation of this, see the work by Fried et al. (2019)), while sentiment analysis often only requires the model to identify a few salient words that are indicative of the sentiment, e.g. \\\"excellent\\\" or \\\"terrible\\\", hence not requiring much contextual dependencies. \\n\\n4. 
Related to point 3, a better evaluation is to examine the cases that require long-range dependencies within each task, rather than assuming which tasks require long-range dependencies and which ones do not. An example of this is reporting e.g. constituency parsing performance for long-range spans and coreference resolution performance for long-distance entity chains.\\n\\n5. The paper mostly fails to report performance comparison with existing numbers from prior work. For instance, the coreference performance (Fig. 2) are far below the result from Lee et al. (2017) that this paper is based on (Section 6.1.1), while the constituency parsing numbers (also Fig. 2) are also far below the reconciled span parser (Joshi et al., 2018) that this paper is also based on. This discrepancy calls into question the strength of the model implementation used in this paper.\\n\\n6. Some missing citations, e.g. the use of adaptive computation time and dynamic halting in Universal Transformers (Dehghani et al., 2019).\\n\\n7. This paper can benefit from careful copy-editing. Some examples of grammatical errors: (i) \\\"Eq. 16\\\" in Section 5.2 should be \\\"Eq. 6\\\", (ii) \\\"to verify our theoretical, respectively\\\" in Section 6 should be \\\"to verify our theoretical [findings/derivations]\\\", (iii) \\\"ourself\\\" on the caption of Table 1 should be \\\"ourselves\\\", (iv) \\\"The model Ling et al. (2015) is used, which using bidirectional LSTMs\\\" should be \\\"The model [of] Ling et al. (2015) is used, which [used] bidirectional LSTMs\\\", etc.\\n\\nReferences\\nDaniel Fried, Nikita Kitaev, and Dan Klein. \\\"Cross-domain generalization of neural constituency parsers\\\". In Proc. of ACL 2019.\\n\\nKenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. \\\"End-to-end neural coreference resolution\\\". In Proc. of EMNLP 2017.\\n\\nVidur Joshi, Matthew E. Peters, and Mark Hopkins. \\\"Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples\\\". In Proc. of ACL 2018.\\n\\nMostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. \\\"Universal Transformers\\\". In Proc. of ICLR 2019.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The goal of the work is to quantify the dependency between contents in NLP. The method relies on parametrization of the joint probability of words in (2), and discussed some connections between the rank of the unfolded tensor T and the dependency level between two sets of words in a sentence.\\n\\nThe goal of this work is quite interesting, but the reviewer feel a bit challenging to follow the writing. The work seems to build upon a previous work, namely, tensor space language model (TSLM). But the paper does not introduce TSLM in detail, making the part relevant to TSLM quite inaccessible.\\n\\nThe analysis of the work is based on the model in (2), which is merely an approximation for the joint probability. This is fine, but maybe this point should be more spelled out in the paper.\\n\\nThe work also has an assumption that a naive Bayes model in (4) always holds for a set of w_1 ... w_n. This may need some more discussion and maybe a reference. Since the authors are considering a sequence, this may be related to de finetti's theorem and its extensions. But in that theorem many more assumptions are needed, eg., the RVs are exchangeable. Also, it is unknown if a finite K exists.\\n\\nThe proof of Claim 1 is a bit trivial, if one only considers one-hot encoding. In fact, the statement and proof of Claim 1 might be a bit loose. If one wishes that the SVD reveals the rank K, K has to be smaller than the outer dimensions of the tensor T. This was not specified in the statement.\", \"the_above_also_brings_up_another_question\": \"are the proofs all based on one-hot encoding of words? We know in NLP pre-trained word embeddings may be more useful. Do the proofs also apply to those cases, e.g., GloVe or Word2Vec?\\n\\nDid all the experiments use one-hot embedding?\\n\\nOverall, the reviewer feels that the work has an interesting motivation, and the goal is meaningful. The writing is a bit hard to access and the proofs might be a bit loose (did not check all of them. But Claim 1 is already a bit loose).\"}"
]
} |
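The TSLM thread above repeatedly touches on Claim 1: under the naive Bayes model of Eq. (4), the unfolded tensor T has rank at most K, which an SVD can only reveal when K is smaller than the outer dimensions. As a small illustration (our own numerical sketch, not code from the paper; all names are made up), the following builds a two-word joint distribution from a K-component mixture and checks its matrix rank:

```python
# Numerical illustration of the rank intuition behind Claim 1 (ours, not the
# paper's code): a joint distribution p(w1, w2) = sum_k p(k) p(w1|k) p(w2|k)
# unfolds into a V x V matrix of rank at most K, which SVD reveals only when
# K is smaller than the vocabulary size V (the outer dimension of T).
import numpy as np

rng = np.random.default_rng(0)
V, K = 50, 5                              # vocabulary size, mixture components

p_k = rng.dirichlet(np.ones(K))           # p(k)
p_w1 = rng.dirichlet(np.ones(V), size=K)  # row k holds p(w1 | k)
p_w2 = rng.dirichlet(np.ones(V), size=K)  # row k holds p(w2 | k)

# T = sum_k p(k) * outer(p(w1|k), p(w2|k)) is a valid joint distribution.
T = sum(p_k[k] * np.outer(p_w1[k], p_w2[k]) for k in range(K))

print(np.linalg.matrix_rank(T))           # prints 5, i.e. rank K < V
```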
HJlrS1rYwH | Policy Tree Network | [
"Zac Wellmer",
"Sepanta Zeighami",
"James Kwok"
] | Decision-time planning policies with implicit dynamics models have been shown to work in discrete action spaces with Q learning. However, decision-time planning with implicit dynamics models in continuous action spaces has proven to be a difficult problem. Recent work in Reinforcement Learning has allowed implicit model-based approaches to be extended to Policy Gradient methods. In this work we propose Policy Tree Network (PTN). Policy Tree Network lies at the intersection of Model-Based Reinforcement Learning and Model-Free Reinforcement Learning. Policy Tree Network is a novel approach which, for the first time, demonstrates how to leverage an implicit model to perform decision-time planning with Policy Gradient methods in continuous action spaces. This work is empirically justified on 8 standard MuJoCo environments so that it can easily be compared with similar work done in this area. Additionally, we offer a lower bound on the worst-case change in the mean of the policy when tree planning is used and theoretically justify our design choices. | [
"Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=HJlrS1rYwH | https://openreview.net/forum?id=HJlrS1rYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9vzZVL0Y5",
"SJgqbzEnor",
"BJez1bJniS",
"HkeAHNrisH",
"H1g3FeccoB",
"H1l2rC3KoH",
"BkeuMY2KjH",
"HyxdZrntjH",
"rke9o42Ysr",
"rkehqpXRtS",
"Hyg-1926Yr",
"S1e8BP_aYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729976,
1573827073700,
1573806298156,
1573766213736,
1573720195986,
1573666371600,
1573665040495,
1573663999872,
1573663905630,
1571859859771,
1571830232584,
1571813182191
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1691/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1691/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1691/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1691/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1691/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1691/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1691/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1691/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1691/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1691/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1691/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The consensus amongst the reviewers is that the paper discusses an interesting idea and shows significant promise, but that the presentation of the initial submission was not of a publishable standard. While some of the issues were clarified during discussion, the reviewers agree that the paper lacks polish and is therefore not ready. While I think Reviewer #3 is overly strict in sticking to a 1, as it is the nature of ICLR to allow papers to be improved through the discussion, in the absence of any of the reviewers being ready to champion the paper, I cannot recommend acceptance. I however have no doubt that with further work on the presentation of what sounds like a potentially fascinating contribution to the field, the paper will stand a chance at acceptance at a future conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"thanks for the clarifications i will have a read of the new draft\", \"comment\": \"Thank you for the clarifications. I will have a look at the new draft and update my evaluation.\\n\\nI am reluctant, a priori, to change my opinion upon large rewrites because it promotes poorly written initial submissions and I think, as a community, we have quite enough of those. But I will give a fair assessment of the new draft.\"}",
"{\"title\": \"Revisions\", \"comment\": \"As per your and other reviewers suggestions/questions we have updated the draft. We stress that this is not final draft, but hope that it will clear up some questions.\", \"a_brief_summary_of_the_updates\": \"1. drop PPO and give more details to PPN in related work\\n2. section 3.1 explicitly explain where targets come from\\n3. section 3.2 explain that decision-time planning has no impact on training\\n4. appendix comparison of PTN(no decision-time planning) to PPN\\n5. add algorithm describing PPN to the appendix\\n6. footnote to explain how to handle negatives and zeros in the geometric mean calculation\\n7. missing PPN figure citation\\n8. appendix learning modifications\\n9. appendix numerical evaluation of theorem 1 bound\\n10. fix bracket in Algorithm 1\\n11. reduce ambiguity in Algorithm 1 by introducing a Q_tmp as a temporary variable\\n12. remove ambiguity on \\\\pi_F integration bounds when showing the CDF\\n13. make clear that Algorithm 1 returns a vector\"}",
"{\"title\": \"handling negatives\", \"comment\": \"Thanks again for taking the time to share your thoughts. I've replied inline below\\n\\n> Suppose there is an MDP (with a fixed horizon or infinite horizon), with rewards in [-1, 1]. Now I subtract 100 from every reward, so the rewards are in the range of [-101, -99]. Note that the original optimal policy is still the optimal policy. However, all Q values are now negative. It might be possible that you didn't encounter negative Q values in your experiments, but I do think it's a very common case. \\n\\nYou bring up valuable concerns. As we previously mentioned, \\u201cWhen a negative or zero valued Q exists in the tree, we modify the geometric mean calculation by adding |min(Q) + \\\\delta| to the Q estimates. Where \\\\delta is a positive small non-zero term which could be optimized (https://www.arpapress.com/volumes/vol11issue3/ijrras_11_3_08.pdf) to more closely recover the geometric mean.\\u201d Though we note that there is a typo, we actually add |min(Q)| + \\\\delta when a negative or zero valued Q exists. \\n\\nShifting the Q function results by a constant doesn\\u2019t change the argmax if both Q and \\\\pi are optimal. Thus the optimal policy would not change.\\n\\n> Moreover, the given formula (sqrt(value * probability)) is not linear to the reward function, making it hard to interpret what this equation computes. It can't be any Q function, as Q function is linear to reward.\\n\\nPerhaps we are misleading with notation. We are not trying to say that the backup resembles the true Q function. The purpose of the backup is to choose an action that will lead to max expected returns. If the backup was exclusively using the Q approximations from the reward and value networks then it could be fair to say the backup resembles the Q function. However, we are arguing that both \\\\pi and the Q composition(from the value and reward networks) are both signals of how \\u201cgood\\u201d an action is to take. Thus the augmented geometric mean takes both quality scores into account. The argmax of \\\\pi_F leads to larger expected returns than \\\\pi or Q does individually and our experiments justify this. \\n\\nFor the camera ready version we will change the notation in Section 3.2.2 to avoid misleading readers that the \\\\pi-Q backup resembles the Q function.\"}",
"{\"title\": \"Initial response\", \"comment\": \"I didn't go through all details in your response but I still have questions on my main concern, that is, I don't understand why sqrt(value * probability) makes sense.\\n\\nSuppose there is an MDP (with a fixed horizon or infinite horizon), with rewards in [-1, 1]. Now I subtract 100 from every reward, so the rewards are in the range of [-101, -99]. Note that the original optimal policy is still the optimal policy. However, all Q values are now negative. It might be possible that you didn't encounter negative Q values in your experiments, but I do think it's a very common case. \\n\\nMoreover, the given formula (sqrt(value * probability)) is not linear to the reward function, making it hard to interpret what this equation computes. It can't be any Q function, as Q function is linear to reward.\"}",
"{\"title\": \"Initial quick response\", \"comment\": \"Thank you for your response!\\nI haven't had the time to look at it in detail. However, upon a very first quick look you mention several updates you are planning on making to the camera ready version. \\n\\nIn case you were not aware, I just wanted to mention that OpenReview allows updating the PDF already now, I believe until Friday (though I'm not sure). \\n\\nIf you already have made changes to your manuscript, I'd encourage you to upload it already now.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thanks for taking the time to review and give feedback. We\\u2019ve reviewed your points and will respond in line below.\\n\\n> It seems to me that pi(a|s) is the density of action a, so what does sqrt(value * density) mean? \\n\\nConsider value and density functions as two different quality scores for an action. We use the geometric mean to provide an average quality score for the action.The geometric mean is useful for when quality scores don\\u2019t share the scale. \\n\\n> What if the value is negative?\\n\\nThis was rare in our environments(seen early in swimmer). When a negative or zero valued Q exists in the tree, we modify the geometric mean calculation by adding |min(Q) + \\\\delta| to the Q estimates. Where \\\\delta is a positive small non-zero term which could be optimized (https://www.arpapress.com/volumes/vol11issue3/ijrras_11_3_08.pdf) to more closely recover the geometric mean.\\n\\n> Figure 1 looks the same as Figure 1 in https://arxiv.org/pdf/1909.07373.pdf, but I don't find any reference in the paper. \\n\\nGood catch, we will add the proper reference to fix this.\\n\\n > How is PTN compared to model-based RL algorithms?\\n\\nWe do, see Section 4.4. We use PPN(background planning) as a baseline and this is considered an implicit model-based method. To the best of our knowledge, PTN is the first work to perform decision-time planning with an implicit dynamics model. We didn\\u2019t compare to an explicit model-based(ex: observation prediction network) as this was shown to perform poorly in VPN.\\n\\nIn the introduction we talk about two categories of model-based reinforcement learning, \\u201cExplicit dynamics models are when the observations are directly being reconstructed. The second is implicit dynamics models. Implicit Dynamics models are when the dynamics model is learned indirectly.\\u201d This is further expanded on in related works(ex: PPN)\\n\\n> More importantly, note that the policy in PPO is stochastic, so how is PTN compared to the deterministic policy? \\n\\nPTN is also stochastic (when b<\\\\infty) so we don\\u2019t see a reason to compare with deterministic variants. It\\u2019s common(ex: ATreeC experiments) to compare with the stochastic variant(as opposed to a deterministic version). \\n\\n> How is the proposed pi-Q-backup method compared to classical control method, e.g. MPC? \\n\\nSimply put, MPC is when you replan at every step. PTN at evaluation time performs MPC.\\n\\n> Does the proposed planning algorithm work for model-free algorithms? \\n\\nThe proposed planning algorithm(w/ depth=1) could work for model-free algorithms if you have access to a Q-function and a policy.\\n\\n> As this paper talks about planning with implicit dynamics models, how is the proposed method compared with explicit dynamics models? \\n\\nWe did not test this as Observation Prediction Network(explicit model-based method) was shown to perform poorly in VPN.\", \"minor_comments\": \"> In Algorithm 1 Line 11, could you please check the brackets? \\n\\nThanks, we will fix that for the camera ready version\\n\\n> Page 4, \\\"Thus, cumulative density function (cdf) of pi_F is given by ...\\\": Could you please check the correctness of the equation? \\n\\nIn the camera ready version we can update the range of the integral to be from -\\\\infty to z\\n\\n> What does \\\"worst-case\\\" in Theorem 1 mean? 
\\n\\nGiven any policy pi (learned by PTN), worst-case is the maximum possible difference between the mean of pi_F and pi over all possible choices of Q-functions.\\n\\n> How is correlation in Table 2 calculated?\\n\\nWe calculate correlation from sample covariance divided by the product of sample standard deviations. In Section 4.2 we say, \\u201cTo measure correlation we first train a PPN policy for 200 thousand time-steps on the MuJoCo environments. Next we collect trajectories over 10 thousand timesteps. Then at each observation we uniformly sample 100 actions and compute the corresponding Q-values and corresponding PDF points from $\\\\pi$. This gives us 10 thousand estimates of correlation. From here we fit a normal distribution to the correlation results and report them in Table 2.\\u201d\\n\\n> In Algorithm 1, is the return value a scalar or a vector?\\n\\nA vector of Q/pi-Q values dependent on the branching factor. We will make this more clear in the camera ready version.\\n\\n> One can have infinite b but sample a uniformly to optimize Q (and then pi_F becomes maxQ policy), so I don't think b can be simply characterized as the confidence.\\n\\nThe statement is for when sampling is done based on pi and b is finite. The second paragraph on page 4 states that \\u201cwe sample b actions based on pi\\u201d. If sampling is not based on pi (e.g., uniform sampling), then pi_F does not depend on pi. Furthermore, if infinite b is used, sampling based on any distribution with non-zero pdf for all the actions will result in pi_F becoming maxQ policy, as mentioned in the second bullet point on page 3 of the paper.\"}",
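The answers above describe how the final policy pi_F arises from sampling. A minimal sketch of that selection loop (ours, with made-up names; the paper's actual backup recurses through the latent transition model) is:

```python
import numpy as np

def select_action(sample_from_pi, backup_score, b):
    """Sketch of pi_F: sample b candidate actions from the learned policy pi
    and act with the one maximizing a backed-up score (the plain Q backup or
    the pi-Q backup sketched earlier). With b = 1 this reduces to following
    pi; as b grows the choice is increasingly dominated by the score, and
    with uniform sampling and b -> infinity it becomes the max-Q policy."""
    candidates = [sample_from_pi() for _ in range(b)]
    scores = np.array([backup_score(a) for a in candidates])
    return candidates[int(scores.argmax())]

# Toy usage with a 1-D Gaussian policy and a made-up quadratic score:
rng = np.random.default_rng(0)
act = select_action(lambda: rng.normal(0.0, 1.0), lambda a: -(a - 0.7) ** 2, b=8)
```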
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks for taking the time to review and give feedback. We\\u2019ve reviewed your points and will respond in line below.\\n\\n> why do we care in this work about PPO and not say Q(\\\\lambda)\\n\\nPTN focuses on leveraging an implicit dynamics model to perform decision-time planning in continuous action spaces. We specifically went over works related to policy gradient methods and continuous action space(PPO) and implicit dynamics models(ATreeC/TreeQN/VPN/PPN). While Watkins Q(\\\\lambda) is interesting, it neither relates to operating in continuous action space nor building an implicit dynamics model. However, you bring up a fair point that PPO is not entirely necessary. In the camera ready version we will drop the PPO subsection and use the extra space to provide more details about PPN.\\n\\n> If PPN is so central it has to be presented before PTN and notation should be introduced there\\n\\nYou bring up a good point, some explanations are currently in the appendix(ex: clipping and importance sampling ratios) but we can improve. In the camera ready version we will include a more in depth explanation of PPN in related works and additionally will provide extra details in the appendix(ex: PPN\\u2019s training algorithm). \\n\\n> Please separately present how inference works and present the learning all in one place instead of losses in 3.1 and how to construct targets spread out until 3.2.2 \\n\\nTargets for 3.1(eq 2-4) include observed rewards, bootstrapped n-step returns, and generalized advantage estimates. Q and \\\\pi-Q backup found in 3.2 are not used as targets in eq(2-4), they are only used for decision-time planning at evaluation time.\", \"in_the_first_paragraph_section_3_we_say\": \"\\u201cActions during behavior time are chosen by a model-free policy. Learning is done with a model-based approach that follows the behavior policy\\u2019s rollout trajectory.\\u201d \\n\\u201ca latent space transition model is embedded into the architecture. The embedded latent space transition model allows us to backpropagate from multiple simulation steps into the future back to a grounded observation. As a consequence, a dynamics model is learned\\u201d\\n\\nThis means that the behaviour policy does not use the model. During training the objectives force us to make use of dynamics model at any point i>0(equation 1). In the camera ready version we can expand on the implications of the above quotes to explicitly say how inference is done.\", \"in_short\": \"\", \"behaviour_time\": \"no transition model and no decision-time planning\", \"training_time\": \"transition model but no decision-time planning(because we follow behaviour trajectory)\", \"evaluation_time\": \"transition model and decision-time planning planning\\n\\n> The algorithm was the most useful thing in the paper but even there it should be much clearer e.g. what is Q, how come we can write Q[j] = in consecutive lines. The second one should probably be Q[j] +=.\\n\\nIt seems there is a misunderstanding, most likely stemming from a poor choice of notation on our part. The value set recursively at the first mention of Q[j] is used in the following line when applying td-\\\\lambda. To be more clear we will replace the first mention with a temporary variable. \\n\\n> To me if you have a network that takes in previous step and action and produces a latent next step that is an explicit transition model. 
How is it not?\\n\\nWe talk about this in the introduction, \\u201cExplicit dynamics models are when the observations are directly being reconstructed. The second is implicit dynamics models. Implicit Dynamics models are when the dynamics model is learned indirectly.\\u201d This is further expanded on in related works(ex: PPN)\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thanks for taking the time to review and give feedback. We\\u2019ve reviewed your points and will respond in line below.\\n\\n> Including a description of PPN, including its main features, would greatly help the paper.\\n\\nGood point. In the camera ready version we will include a more in depth explanation of PPN in related works. There currently exists details in the appendix regarding PPN\\u2019s approach to clipping but we will expand on this(ex: PPN\\u2019s training algorithm). \\n\\n > the main usage of the model during training time is to compute the Advantages in equation (2)?\\n\\nGood question, this is our fault for not being explicitly clear. The model is not used to compute the advantages. In the camera ready version we will fix this. At training time the model helps with feature learning(via background planning). The main purpose of the implicit dynamics model is to be used at evaluation time to perform decision-time planning. \\n\\n> Or are those computed based on rollouts? \\n\\nYes, when we expand the related work section on PPN we will make this more clear and reference that PTN follows the same approach to generating advantage estimates. \\n\\n> If so, where is the model actually being used during training?\", \"in_the_first_paragraph_of_section_3_we_say\": \"\\u201cActions during behavior time are chosen by a model-free policy. Learning is done with a model-based approach that follows the behavior policy\\u2019s rollout trajectory.\\u201d \\n\\u201ca latent space transition model is embedded into the architecture. The embedded latent space transition model allows us to backpropagate from multiple simulation steps into the future back to a grounded observation. As a consequence, a dynamics model is learned\\u201d\\n\\nThis means that the behaviour policy does not use the model. During training the objectives force us to make use of dynamics model at any point i>0(equation 1). We will make this more clear in the camera ready version.\\n\\n> Figure 1 is taken directly from the PPN paper without any reference or citation (as far as I can tell).\\n\\nGood catch, we will add the proper reference to fix this.\\n\\n > For the comparison in Figure 5, it would be great if PPN could also be tested with the newly introduced parameter \\\\beta_i. At the moment, it is hard to tell whether the performance gains are due to \\\\beta or due to the proposed planning scheme. \\n\\nFair point, we will include this comparison between PTN(no decision-time planning) and PPN in the appendix. An image of a preliminary table is linked(https://imgur.com/kIIrDNW), notice that these returns are worse than PTN(with decision-time planning) shown in Figure 5. This shows performance benefits from the modifications shown in Section 3.1(ex: \\\\beta). The point of \\\\beta is to stabilize returns over different values of training depth. In PPN the authors showed that optimal depth is highly dependent on the environment and returns can drastically differ. While the modifications certainly do not entirely fix this, we find that it mitigates large differences in returns from different training depths.\\n\\n> I'm confused about Theorem 1: Wouldn't we want an upper bound on the difference of means? \\n\\nAs mentioned in the second paragraph of page 5, the goal of the theorem is to show that \\u201cThe final policy pi_F can become significantly different from pi \\u2026. even when b = 2\\u201d (emphasis added). 
That is, the difference can be at least sigma/sqrt(pi), which is significantly large and not desirable. However, an upper bound would aim at showing the opposite. Another way to state the theorem is that there exist Q functions for which the difference between mean of pi_F and pi is at least sigma/sqrt(pi). Linked figure(https://imgur.com/FF4o2IS) shows that, as b increases, there exist Q functions for which pi_F and pi become increasingly different. We will add this to the appendix in the camera ready version.\\n\\nIn the paper, we mention that \\u201cFor smaller values of b, pi_F is more similar to pi and as b increases pi_F becomes similar to Q0.\\u201d That is, generally, because pi and Q are correlated (as shown in Table 2), we expect that on average, pi_F to be similar to pi (it may be interesting to study this theoretically, but we have not shown it). However, the theorem shows that there exists Q functions for which pi_F and pi are very different.\\n\\n> Also, what does 'worst-case' mean? \\n\\nGiven any policy pi (learned by PTN), worst-case is the maximum possible difference between the mean of pi_F and pi over all possible choices of Q-functions.\\n\\n> Is that for the 'worst' Q-function we could choose?\\n\\nYes, the worst-case is over all the possible Q-function.\"}",
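The sigma/sqrt(pi) figure in this exchange matches a standard Gaussian fact: if the Q-function is monotone in the action, then with b = 2 the policy pi_F picks the larger of two i.i.d. N(mu, sigma^2) samples, and E[max] - mu = sigma/sqrt(pi). A quick Monte Carlo sanity check (ours, not the authors' appendix script):

```python
import numpy as np

# For two i.i.d. draws from N(mu, sigma^2), E[max] - mu = sigma / sqrt(pi),
# which is exactly the b = 2 worst-case gap cited in the exchange above.
rng = np.random.default_rng(0)
mu, sigma, n = 0.0, 1.0, 1_000_000

samples = rng.normal(mu, sigma, size=(n, 2))
gap = samples.max(axis=1).mean() - mu

print(gap, sigma / np.sqrt(np.pi))  # both ~0.5642
```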
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"The paper proposes a modification to Policy Prediction Networks (PPN) in which the learned transition-, reward- and value function models are used at test-time in a planning procedure.\", \"A second contribution is the \\\"pi-Q-backup\\\" which uses the geometric mean of both the policy and the value function as maximisation target of the planning step.\", \"Overall, I find the idea interesting and the experimental evaluations promising. However, I am voting for \\\"weak reject\\\" for the reasons outlined below. If some (or all) of them are address, I'd be happy to raise my score.\", \"I found the paper hard to understand. In particular, the algorithm PPN on which this work is build is not explained at all, requiring the reader to read the original PPN paper. Including a description of PPN, including it's main features, would greatly help the paper. Second, I am still not sure I correctly undestand when each component is used. As I currently understand it, the main usage of the model during training time is to compute the Advantages in equation (2)? Or are those computed based on rollouts? If so, where is the model actually being used during training?\", \"Figure 1 is taken directly from the PPN paper without any reference or citation (as far as I can tell).\", \"For the comparison in Figure 5, it would be great if PPN could also be tested with the newly introduced parameter \\\\beta_i. At the moment, it is hard to tell whether the performance gains are due to \\\\beta or due to the proposed planning scheme.\", \"I'm confused about Theorem 1: Wouldn't we want an upper bound on the difference of means? Also, what does 'worst-case' mean? Is that for the 'worst' Q-function we could choose?\"], \"edit\": \"Thank you for your comments and updated manuscript.\\n\\nI think the writing has improved significantly, but could still be further improved and clarified. In particular, the question of how the model and other components at various points in time could be made more obvious. I found the authors' response to R3 here helpful as well. \\nAt least for me some of the confusion arises not due to the complexity of the proposed approach, but just because combining real and 'simulated' transitions can be used and mixed in so many different ways that it's important to be clear about it. Also, at least personally, I found the explanation \\\"Learning is done with a model-based approach that follows the behaviour policy\\u2019s rollout trajectory. However, the test policy follows a model-based approach\\\" still not very helpful. \\nOverall, I think the presentation is on a good way but needs some more work.\\n\\nWith that being said and now having a better understanding of the algorithm I think this is very interesting work. However, I share R1's concerns about the computation of the \\\\pi-Q backup, in particular that it seems arbitrary and doesn't handle negative values. I'm also not convinced that adding |min(Q)| is a good solutions as a) we don't always have access to that value and b) If I'm not wrong, than \\\\pi_F is not invariant under a shift of Q. 
\\nI'm wondering why the authors decided to take the geometric mean instead of following the more typically used approach of using exp(Q/Temperature)*pi to combine Q-function and a policy distribution (see e.g. the \\\"Control as Inference\\\" literature, the \\\"Maximum a Posteriori Policy Optimisation\\\" or \\\"Soft Actor Critic\\\" algorithms, in particular the \\\"Soft Value functions\\\". I think this should at the very least be an ablation study, and could be even performing better and at the very least be robust to negative values.\\n\\nOverall, I think this is very interesting work and could become a very strong paper, but I will remain to recommend a \\\"weak reject\\\" because I think it needs some more work to get there.\"}",
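The alternative the reviewer proposes can be sketched in a few lines (our illustration; the temperature and inputs are made up). Unlike the geometric mean, the exp(Q/T)*pi weighting is defined for negative Q, and a constant shift of Q cancels after normalization:

```python
import numpy as np

def soft_scores(q, pi, temperature=1.0):
    """Reviewer-suggested combination: weight the policy density by a
    Boltzmann factor in Q. Subtracting q.max() is only for numerical
    stability; any constant shift of q leaves the normalized result
    unchanged, unlike the geometric-mean backup."""
    q = np.asarray(q, dtype=float)
    pi = np.asarray(pi, dtype=float)
    w = np.exp((q - q.max()) / temperature) * pi
    return w / w.sum()

print(soft_scores([-101.0, -99.5], [0.6, 0.4]))  # equals soft_scores([-1.0, 0.5], [0.6, 0.4])
```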
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper claims to present a method for combining model based and model free approaches. The paper I find very poorly written hence my certainty about understanding the method cannot be very high. In training the method seems to build up a backup tree using transition operators and a policy and using them as targets for learning. In training it is not quite as clear what they are doing. The paper seems novel and sensible and has some experimental results that are not trivial but the writing is so difficult to follow that it makes it impossible for me to assess the contributions and even check correctness. I also think that readers would find it too difficult to understand as well. This is making a complete rewrite mandatory. I have added some initial pointers that would help making this more readable but implementing these would only allow us to assess what is being done rather than guarantee acceptance.\\n\\nSince the rebuttal is not intended as a deadline extension I recommend rejecting this paper!\", \"major_points\": [\"I find the related work quite badly written. There is content but what the reader cares about it situating the paper in the landscape of existing methods. There is none of that here: why do we care in this work about PPO and not say Q(\\\\lambda). It should build up the components that were existing in the literature not just present some other methods. It needs to tell us roughly what is similar in this work to what was previously existing (roughly at least).\", \"If PPN is so central it has to be presented before PTN and notation should be introduced there. Introducing formulas without explaining notation like eq (1-4) serves only to alienate the reader and the (well-intentioned) reviewer.\", \"Please separately present how inference works and present the learning all in one place instead of losses in 3.1 and how to construct targets spread out until 3.2.2\", \"The fact that you need so many \\\"note that \\\" should be a red flag that the writing is not right (12 times).\", \"The algorithm was the most useful thing in the paper but even there it should be much clearer e.g. what is Q, how come we can write Q[j] = in consecutive lines. The second one should probably be Q[j] +=. I can't be sure because everything else is so hard to track.\", \"To me if you have a network that takes in previous step and action and produces a latent next step that is an explicit transition model. How is it not ?\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents Policy Tree Network, a novel approach to use an implicit dynamics model to perform decision-time planning in continuous action spaces. The experiments show that the proposed method performs better than the underlying model-free RL algorithm in standard MuJoCo environments.\\n\\nThe writing quality is low and I don't understand the proposed method, especially the backed-up Q-value. The notation is also confusing. It seems to me that pi(a|s) is the density of action a, so what does sqrt(value * density) mean? What if the value is negative? \\n\\nBesides, I have some questions.\\n1. Figure 1 looks the same as Figure 1 in https://arxiv.org/pdf/1909.07373.pdf, but I don't find any reference in the paper. Could you please state the difference between two figures, or explain why you want to put this figure here without any description or reference? \\n2. How is PTN compared to model-based RL algorithms? The only baseline here is PPO, which is model-free. More importantly, note that the policy in PPO is stochastic, so how is PTN compared to the deterministic policy? \\n3. How is the proposed pi-Q-backup method compared to classical control method, e.g. MPC? Does the proposed planning algorithm work for model-free algorithms? \\n4. As this paper talks about planning with implicit dynamics models, how is the proposed method compared with explicit dynamics models?\", \"minor_comments\": \"1. In Algorithm 1 Line 11, could you please check the brackets? \\n2. Page 4, \\\"Thus, cumulative density function (cdf) of pi_F is given by ...\\\": Could you please check the correctness of the equation? \\n3. What does \\\"worst-case\\\" in Theorem 1 mean? \\n4. How is correlation in Table 2 calculated? \\n5. In Algorithm 1, is the return value a scalar or a vector? \\n6. The paper states that \\\"Intuitively, the branching factor b can be thought of as interpolating how much confidence we have in pi and Q0\\\". One can have infinite b but sample a uniformly to optimize Q (and then pi_F becomes maxQ policy), so I don't think b can be simply characterized as the confidence.\"}"
]
} |
BJlBSkHtDS | Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks | [
"Alejandro Molina",
"Patrick Schramowski",
"Kristian Kersting"
] | The performance of deep network learning strongly depends on the choice of the non-linear activation function associated with each neuron. However, deciding on the best activation is non-trivial, and the choice depends on the architecture, hyper-parameters, and even on the dataset. Typically these activations are fixed by hand before training. Here, we demonstrate how to eliminate the reliance on first picking fixed activation functions by using flexible parametric rational functions instead. The resulting Padé Activation Units (PAUs) can both approximate common activation functions and learn new ones while providing compact representations. Our empirical evidence shows that end-to-end learning of deep networks with PAUs can increase the predictive performance. Moreover, PAUs pave the way to approximations with provable robustness. | [
"paus",
"padé activation units",
"learning",
"flexible activation functions",
"deep networks",
"choice",
"performance",
"deep network",
"activation function"
] | Accept (Poster) | https://openreview.net/pdf?id=BJlBSkHtDS | https://openreview.net/forum?id=BJlBSkHtDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"WDBg901Ci5",
"HkgclbXsjH",
"rklNJvljoS",
"BJemW-L5ir",
"BJeUaB5YiH",
"rkesLmKBoH",
"rkg9m7YHiS",
"BkxfeQYBor",
"SkxMQumTKB",
"SkgoV1AnFH",
"HJgsBTOLKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729944,
1573757169627,
1573746396153,
1573703931332,
1573655997627,
1573389138739,
1573389090079,
1573389034372,
1571792922386,
1571770162764,
1571355971310
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1690/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1690/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1690/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1690/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1690/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1690/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1690/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1690/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1690/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1690/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposed a new learnable activation function called Pad\\u00e9 Activation Unit (PAU) based on parameterization of rational function. All the reviewers agree that the method is soundly motivated, the empirical results are strong to suggest that this would be a good addition to the literature.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updates\", \"comment\": \"Thanks for your comments.\\nFor (iii), we have introduced the Safe PAUs to avoid poles, which was an initial difficulty we faced when training even after few epochs on very simple networks. To prove some guarantees regarding exploding values, we could introduce a form of Lipschitz regularization that combined with BatchNorm could give us some initial assumptions. \\nRegarding a proof against vanishing gradients (ii), this could be connected with a relaxed version of the Safe PAUs where the denominator is allowed to be < 1, as mentioned in the paper, this could potentially allow for gradient amplification. \\nFor the moment, we show empirically that Safe PAU is stable and doesn\\u2019t suffer from vanishing or exploding gradients more than any of the other activation functions that we compared to.\\n\\nMore importantly, we have updated the paper and included the proof for (i), we also added the calculation for the number of parameters (iv) as $\\\\phi=L*(m+n)$, which for our experiments is $\\\\phi=10*L$ where $L$ is the number of activation layers. The exact number of parameters for the experiments are in the Appendix, and they are orders of magnitude less than the remaining parameters of the network.\\n\\nWe thank you for the discussion and motivation to make the paper stronger.\"}",
"{\"title\": \"Further Experiments\", \"comment\": \"We have a small update: Here are the remaining experiments which will finish before the Deadline for Author Comments and Responses.\\n\\n \\t CIFAR10_Densenet\\nAPL* $94.45\\\\pm0.23$\\nSReLU\\t\\t $94.77\\\\pm0.24$ \\nRPAU $95.27\\\\pm0.10$\\n\\n\\n Imagenet_MobileNetV2\\n\\t\\t\\tAcc@1\\t Acc@5\\nSReLU\\t $70.62$ \\t $89.59$\\nSwish $71.24$ $89.95$ \\nPAU\\t $71.35$\\t $89.85$\\n\\nSReLU outperforms most of the activation functions. However, both Swish and PAU outperform SReLU.\", \"the_following_table_shows_the_summary_of_our_experiments\": \"| ReLU | ReLU6 | LReLU | RReLU | ELU | CELU | PReLU | Swish | Maxout |Mixture| APL | SReLU |\\n\\nPAU/RPAU >= Baseline | 33 | 34 | 33 | 32 \\t| 39 | 39 \\t | 38 \\t| 41 | 9 \\t | 20 \\t | 32 | 33\\t |\\nPAU/RPAU < Baseline | 8 | 7 | 8 | 9 | 2 | 2 | 3 \\t| 1 | 6 \\t | 0 \\t | 7 | 8 \\t |\\n \\nAgain, we believe the experiments show that PAUs are indeed competitive and have a place among the learnable activation functions.\"}",
"{\"title\": \"Thanks for the reply.\", \"comment\": \"Thanks for the reply. I think if the universal approximation property of PAU can be rigorously shown, this paper can be much stronger. I do not mind you using Kidger et al 2019 to show it. Showing that Kidger et al covers (i)-(iv) may be straightforward, but the readers may hope to see it in a clear way.\\n\\nIt is still unclear to me if the \\\"safe PAU\\\" can retain the universal approximation property of PAU, since this is almost equally important. Without discussion on this point, the theoretical side still feels not significant enough.\"}",
"{\"title\": \"I think it's a good paper.\", \"comment\": \"Thanks for the response. I am not convinced by your argument about safe PAUs above, but I like the additional experiments performed in response to R3. I'm keeping my scores and am recommending acceptance.\"}",
"{\"title\": \"Safe PAU Universal Approximator\", \"comment\": \"We thank you for taking the time to read our paper, and the comments.\\n\\nIt is true that we do not provide a formal proof for the safe PAU, as also mentioned by reviewer #1. However, PAU matches the assumption of Kidger et al. [1] proof. More precisely. Kidger et al. show that under certain size constraints, networks using non-affine continuous functions, with a continuous nonzero derivative at some point are also universal approximators. We will include this in the camera-ready version.\\n\\nMoreover, (i)-(v) are covered by PAU. For (i) we refer to the universal approximator discussion above as well as in the paper. Since PAU is seemingless integrated with the differentiable learning stack, standard methods for avoiding vanishing gradients can be used, hence, (ii) is covered. To cover (iii), we introduce safe PAUs and refer also to [1]. (iv) is covered since we only have a small overhead of parameters. Moreover, due to Telgarsky (2017) and other recent results on ResNets one can actually expect to require less parameters, but this is future work. Finally (v) is covered as demonstrated by our empirical results. We will put these arguments into the camera-ready version. Thanks for pointing us to this. \\n\\n[1] Kidger, Patrick, and Terry Lyons. \\\"Universal Approximation with Deep Narrow Networks.\\\" arXiv preprint arXiv:1905.08539 (2019).\"}",
"{\"title\": \"Safe PAUs\", \"comment\": \"We thank you for the time and comments.\\n\\nIndeed, the evaluation of the experiments is quite expensive and this is one of the challenges we are facing. Although we intend to test PAUs on other tasks/architectures, we consider the comparisons to the baselines we have (including the experiments proposed by reviewer #3) as a good introduction for PAUs into the community.\\n\\nYou are right in that we do not have a proof for the safe version of PAUs presented in the paper, but we are equally interested in this topic, too. Consequently, we now tested another \\u201csafe\\u201d version of the form P(X)/(eps + |Q(X)|). This version can be proven to be a universal approximator via similar arguments as the general PAU. However, empirically this version turned out to be very unstable. Unfortunately, the existence and form of safe PAUs, does not necessarily tell us about the stability and optimization characteristics. Fortunately, there is an indirect way around this and we redirect you to our reply to reviewer #2 for a further discussion. \\n\\nThank you once again for your review.\"}",
"{\"title\": \"Further Experiments\", \"comment\": \"We thank you for the time and comments.\\n\\nWe have a different, rather very positive perspective. We are proposing an activation function that helps practitioners to avoid the search for activation functions as done in [1], and replaces this by learning. PAU can match the performance of or sometimes outperform even the best baselines, in some cases up to 2% better than common activation functions. For instance we are boosting the performance of MobileNetV2, a CVPR 2018 state-of-the-art approach. Moreover, in contrast to previous work, PAU directly paves the way to provably robust deep learning (Croce et al., 2019). Nevertheless, we fully agree with the reviewer that the paper would be even stronger by comparing to more learnable activation functions such as SReLUs and APLs, so we implemented and ran some more experiments: \\n\\n\\n MNIST_VGG MNIST_LeNet\\nSReLUs $99.15 \\\\pm0.03$ $99.13 \\\\pm0.14$\\nAPLs $99.18 \\\\pm0.10$ $99.35 \\\\pm0.11$ \\nPAU $99.30 \\\\pm0.05$ $99.21 \\\\pm0.04$ \\n\\n FMNIST_VGG FMNIST_LeNet\\nSReLUs $89.65 \\\\pm0.42$ $89.83 \\\\pm0.30$\\nAPLs $91.41 \\\\pm0.48$ $89.72 \\\\pm0.30$ \\nPAU $91.25 \\\\pm0.18$ $90.30 \\\\pm0.15$ \\n\\n\\n CIFAR10_VGG CIFAR10_MVNet CIFAR10_RNet\\nSReLUs $92.66 \\\\pm0.27$ $94.03 \\\\pm0.11$ $95.24 \\\\pm0.13$\\nAPLs $91.63 \\\\pm0.13$ $93.62 \\\\pm0.64$ $94.12 \\\\pm0.36$\\nRPAU $92.50 \\\\pm0.09$ $94.82 \\\\pm0.21$ $95.34 \\\\pm0.13$ \\n \\n\\nAgain, of the 7 new experiments, PAU is better than APL in 5 of them, and better than SReLU in 6 of them. More experiments on DenseNet and ImageNet, are running, and we expect to have them before the rebuttal deadline is over. Hence, PAU\\u2019s perspective on robust deep learning via rationalization gets even more interesting. Thanks for pushing us to run more experiments. We believe the experiments show that PAUs are indeed competitive and have a place among the learnable activation functions.\\n\\nWe will keep you posted.\\n\\n[1] P. Ramachandran, B. Zoph, and Q. V. Le. Searching for activation functions. In Proceedings of the Workshop Track of the 6th International Conference on Learning Representations (ICLR), 2018.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors introduce an activation function based on learnable Pad\\u00e9 approximations. The numerator and denominator of the learnable activation function are polynomials of m and n, respectively. The authors name them Pad\\u00e9 activation units (PAUs). The authors also propose a randomized a version of these functions that add noise to the coefficients of the polynomials in order to regularize the network. The authors show, at best, marginal improvements over a variety of baselines including MNIST, fashion MNIST, CIFAR10, and Imagenet. The authors also show that pruning neurons with PAU units results in slightly better accuracy that pruning neurons with ReLU units.\\n\\nThe improvements over baselines shown were marginal and I do not think they warrant publication at this conference. The accuracy improvements were no more impressive than other learned activation functions which the authors perhaps did not see, such as SReLUs (Deep Learning with S-Shaped Rectified Linear Activation Units) and APLs (Learning Activation Functions to Improve Deep Neural Networks).\\n\\n** After author response **\\nChanging from reject to weak accept\\nThe authors have included new experiments that compare to a wider range of learned activation functions. While not ground breaking, it shows that it is competitive with state-of-the-art learned activation functions and could have something to offer.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a novel parametric activation function, called the Pade Activation Unit (PAU), for use in general deep neural networks. Pade is a rational function, which is a ratio of two polynomials, and which can very well approximate any of the usually used activation functions while having only a few parameters that can be learned from data. Moreover, the authors identify five properties that an activation function should have, and either prove or empirically show that PAUs satisfy all of them, unlike some of the baselines. Additionally, since Pade approximation can have poles and be unstable, this work introduces safe PAUs, where the polynomial in the denominator is constrained to attain values greater than or equal to one. Since one of the suggested properties is that a function using a given activation function be a universal function approximator, the authors provide a sketch of a proof that PAUs do allow that. This proof applies only to the unsafe version of the PAU, and it is unclear whether it extends to the safe PAU---an issue that is not mentioned by the authors.\\nFurthermore, the authors propose a stochastic version of PAU with noise injected into parameters, which allows regularization. The empirical evaluation is quite extensive, and the PAU is compared against nine baselines on five different architectures (LeNet, VGG, DenseNet, ResNet, MobileNet) on four different datasets (MNIST, Fashion MNIST, CIfar10, ImageNet) for the classification task. The evaluation confirms that PAUs can match the performance of or sometimes outperform even the best baselines while the attained learning curves show that PAUs also lead to faster convergence of trained models. Finally, the authors demonstrate that (and provide intuition why) using PAUs allow for high-performing pruned models.\\n\\nI recommend ACCEPTing this paper as it is well written, extensively evaluated, and provides performance improvements or at least matches the performance of the best baseline across several datasets and model architectures.\\n\\nMy only two suggestions for improvement are a) make the universal approximation proof tighter by making sure that it extends to the safe PAU version, and b) evaluate the proposed activation function on tasks other than just classification.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This work proposes an activation function that contain parameters to be learned through training. The idea is to give the learning algorithm more \\\"freedom\\\" to choose a good activation function, and hopefully better performance can be achieved.\\n\\nThe paper is well written, and the experiment results look reasonable. However, there are several key issues.\\n\\n1) as the authors stated, a \\\"good\\\" activation function should maintain the universal approximation property of the neural network. This seems not discussed for the PADE activation function. Does (1) satisfy the conditions (i)-(v) listed in table I? Is there a rigorous proof? Table I seems to claim that the PADE based neural network satisfies (i), but there is no formal proof.\\n\\n2) In order to avoid poles, the activation function used in this work is (2). How well can (2) approximate (1)? What is the potential loss? Perhaps there should be more discussion on this - preferably some theoretical supports.\\n\\nOverall, the reviewer feels that this paper starts with an interesting idea, but the developments on the theoretical side is a bit thin.\"}"
]
} |
SkeBBJrFPH | Characterize and Transfer Attention in Graph Neural Networks | [
"Mufei Li",
"Hao Zhang",
"Xingjian Shi",
"Minjie Wang",
"Yixing Guan",
"Zheng Zhang"
] | Does attention matter and, if so, when and how? Our study on both inductive and transductive learning suggests that datasets have a strong influence on the effects of attention in graph neural networks. Independent of the learning setting, task, and attention variant, attention mostly degenerates to simple averaging on all three citation networks, whereas it behaves strikingly differently in the protein-protein interaction networks and molecular graphs: nodes attend to different neighbors per head and get more focused in deeper layers. Consequently, attention distributions become telltale features of the datasets themselves. We further explore the possibility of transferring attention for graph sparsification and show that, when applicable, attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs. Finally, we point out several possible directions for further study and transfer of attention. | [
"Graph Neural Networks",
"Graph Attention Networks",
"Attention",
"Transfer Learning",
"Empirical Study"
] | Reject | https://openreview.net/pdf?id=SkeBBJrFPH | https://openreview.net/forum?id=SkeBBJrFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"iLf3K8pue",
"HkllKbpYiH",
"rygxXNFFsB",
"SJxqeVFuoH",
"HJgwHMK_iS",
"Byei-cu_sr",
"HkeK27OOiS",
"SJli7sDOjB",
"HkeCr9okcS",
"rklO62vCYH",
"HJlML8uhtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729912,
1573667191651,
1573651480461,
1573585906040,
1573585471370,
1573583363070,
1573581744861,
1573579555409,
1571957318133,
1571876031960,
1571747402390
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1689/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1689/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1689/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1689/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1689/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1689/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1689/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1689/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1689/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1689/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper suggests that datasets have a strong influence on the effects of attention in graph neural networks and explores the possibility of transferring attention for graph sparsification, suggesting that attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs.\\n\\nUnfortunately I cannot recommend acceptance for this paper in its present form. Some concerns raised by the reviewers are: the analysis lacks theoretical insights and does not seem to be very useful in practice; the proposed method for graph sparsification lacks novelty; the experiments are not thorough to validate its usefulness. I encourage the authors to address these concerns in an eventual resubmission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Re: Re: Re: Re: Official Blind Review #1\", \"comment\": \"Thank you for the suggestion and I've made an update.\"}",
"{\"title\": \"Re: Re: Re: Re: Official Blind Review #1\", \"comment\": \"Thanks for the clarification. This is now clear. For a reader that is more used to the $\\\\sum_{i\\\\in\\\\mathcal{V}}\\\\sum_{j\\\\in\\\\mathcal{N}(i)}$ notation being a sum over all edges this is a bit unintuitive. It would be good to add some explanation of this in the paper.\"}",
"{\"title\": \"Re: Official Blind Review #3\", \"comment\": \"We thank the reviewer for his encouragement and suggestions on future work.\"}",
"{\"title\": \"Re: Re: Re: Official Blind Review #1\", \"comment\": \"See if the elaboration below makes sense.\\n\\nGiven a node $i$, the attention value of $i$ over its one-hop neighbors $\\\\{\\\\alpha_{i,j}\\\\}_{j\\\\in\\\\mathcal{N}(i)}$ forms a probabilitty distribution over $\\\\mathcal{N}(i)$.\\n\\nWith static attention and learned attention, we have two distributions $\\\\{\\\\alpha_{i,j}^{learned}\\\\}_{j\\\\in\\\\mathcal{N}(i)}$ and $\\\\{\\\\alpha_{i,j}^{static}\\\\}_{j\\\\in\\\\mathcal{N}(i)}$. The L1 distance between them is then $\\\\sum_{j\\\\in\\\\mathcal{N}(i)}|\\\\alpha_{i,j}^{learned}-\\\\alpha_{i,j}^{static}|$, which is in the range of $[0, 2]$. For example, 2 is achieved if the two types of attention place $1$ on two different neighbors ($|1-0| + \\\\cdots + |0-1| + \\\\cdots$). By dividing it by $2$, we get a range of $[0, 1]$.\\n\\nFinally, we go through all nodes, and take an average, which gives us $\\\\frac{1}{|\\\\mathcal{V}|}\\\\sum_{i\\\\in \\\\mathcal{V}}$.\\n\\nThe approach you suggested is also very interesting and seems equivalent. I think your approach places an emphasis over full graphs while my motivations here root in one-hop neighborhoods.\"}",
"{\"title\": \"Re: Official Blind Review #2\", \"comment\": \"We thank the reviewer for pointing out multiple directions for future work. Below we make some clarifications:\\n\\n1. We choose L1 distance (also known as total variation when considering probability distributions) over KL divergence because it has a bounded range of $[0, 1]$ while KL divergence can be unbounded. To have an upper bound can make it easier for users to understand how numbers get associated with discrepancy. We've also considered entropy, but node degree can have a strong effect on the entropy of attention distribution, which is not easy to decouple.\\n\\n2. \\\"in Table 2, which head/layer is used for computing the attention\\\" -- we've updated the title of the table to make it more clear.\\n\\n3. \\\"GCN vs learned\\\" means the discrepancy between the static GCN attention (defined in 3.1) and the learned attention.\\n\\n4. \\\"The maximum pairwise difference is not clearly defined.\\\" -- We've added an equation to make the definition more clear.\\n\\n5. Graph sparsification is not very meaningful when the attention is almost uniform. Molecules are already very sparse. We agree it will be better to have more applicable datasets for verification.\"}",
"{\"title\": \"Re: Re: Official Blind Review #1\", \"comment\": \"Thanks for taking the time to address my questions.\\n\\nRegarding your response number 2, either I misinterpreted your terminology, or I think your explanation about normalizing by 2|V| is still incorrect, in a few ways:\\n\\n(1) The L1 distance between two probability values lies in the range of [0, 1], not [0, 2]. If both a and b are in the range of [0, 1], then their difference |a-b| cannot be greater than 1.\\n\\n(2) You have a double sum, sum_{i in V} sum_{j in N(i)} |alpha_{i,j}^learned - alpha_{i,j}^static|, which is firstly a sum over nodes, and then for each node a sum over its neighbors, effectively the number of terms in this double sum is the number of edges in the graph, multiplied by 2. So if you normalize by 2|V| this quantity is not guaranteed to be in [0, 1], but if your normalize by 2|E| it would.\"}",
"{\"title\": \"Re: Official Blind Review #1\", \"comment\": \"We thank the reviewer for the positive feedback and constructive suggestions on presentation improvement.\\n\\n1. You are right that the proposed metric can measure the discrepancy between any static attention (GCN or GraphSAGE) and the learned attention. We've added a note to make it more clear.\\n\\n2. Given a pair of probability distributions, the L1 distance between probabilities is in the range of [0, 2]. For example, you may consider two Bernoulli distributions where they place probability 1 on two different values. Dividing this distance by 2 gives us a range of [0, 1]. Now with |V| nodes, we have |V| pairs of distributions in total and dividing the total sum by |V| gives back the range [0, 1].\\n\\n3. You are right that the expression is a bit ambiguous. The motivation is that we want to see if attention gets more concentrated on self loops while getting sharper, suggesting that GNNs are degenerating to MLPs. We've modified the expression and it now says \\\"Besides, the attention does not get increasingly more concentrated on self loops while getting sharper over layers.\\\"\\n\\n4. For the left subfig of Figure 5, we guess you are probably referring to blue curves while results of top-k sparsification based on learned attention are plotted with orange curves. The reason for the big gap between blue curves and the original result is that the PPI graphs have some hub nodes with extremely large degree. In such cases, uniform neighbor sampling is not very effective after a certain degree.\\n\\n5. We've tried to make the description more clear and made an update.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper carries out several kinds of analysis on the GAT networks of Velickovic (2018), which augment GNN updates with multihead self attention. Three standard attention types are compared, on several different datasets, and differences between uniform attention and learned attention are reported. An experiment is carried out where low-attention edges are pruned.\\n\\nWhile understanding the value of attention is important, this paper leaves many questions open. First, since the graphs studied in this paper are, if not generally sparse to begin with at least they only include connections that are meaningful, the sparsification experiment is a bit hard to understand. One particular extension would improve things: adding random edges (can the model learn to prune them out?), but learning sparse attention (see e.g., Maruf et al., 2019) rather than thresholding seems to be a reasonable point of comparison.\\n\\nOverall this paper would be more valuable if a clear and concise recommendation could be given regarding how to use or understand attention; but the lack of a consistent pattern of results makes any obvious narrative hard to support. I would encourage the authors to continue this line of work so that it can be used to provided guidance to those who would like to make more effective use of GNNs.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper analyzes attention in graph neural networks. It makes two major claims:\\n(1) Datasets have a strong influence on the effects of attention. The attention in citation networks are more uniform, but they behave differently in protein or molecule graphs. \\n(2) With attention-based graph sparsification, it is possible to remove a large portion of edges while maintaining the performance.\", \"i_have_some_concerns_about_this_paper\": \"(1) the analysis lacks theoretical insights and does not seem to be very useful in practice; (2) the proposed method for graph sparsification lacks novelty and the experiments are not thorough to validate its usefulness; (3) the writing of this paper is messy, missing many details.\\n\\nIn the analysis part (section 5), the choices of probing metrics seem arbitrary and lack theoretical insights. The authors used the L1 norm, but it seems not appropriate for the tasks here, e.g. KL divergence is preferred to measure the distributional discrepancy, entropy for concentration etc. Many important details are missing or not clear, for example, in Table 2, which head/layer is used for computing the attention, and what does \\u201cGCN vs learned\\u201d mean? The maximum pairwise difference is not clearly defined. The meta graph classification (section 5.2) only considers a synthetic dataset. Overall, I feel the analysis didn\\u2019t present too many interesting observations, and I cannot see too much potential value in applications (even for the graph sparsification task in this paper, its correlation with the analysis is quite weak).\\n\\nIn section 6, it explores whether it is possible to remove part of the edges from the graph while maintaining the classification performance. It is an interesting task, but the method proposed in this paper is not realistic and lacks novelty. In 6.1 and 6.2, it needs to train a GAT first to get the attention scores, then remove edges according to attention scores and train another GAT. In this way, it doesn\\u2019t reduce the computational requirement, as it still trains a full model to get the attention. Only in 6.3 it presents a realistic setting, where the attention scores are derived from a small GAT, and train another GNN on the sparsified graph. But the paper didn\\u2019t explain why it is possible to get reliable attention scores with a small GAT, and the experiment is only on one dataset. Does it apply to other datasets (citation network, molecules) and settings (transductive, inductive)? So far the experiments are not enough to be considered as a valid contribution.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents an empirical study of the attention mechanism in the graph attention networks (GAT). The study reveals that the attention patterns largely depend on the dataset, on some datasets they are sharp, but on others the attention patterns are almost uniform and not so different from the uniform aggregation weights in GNNs that does not have attention. The authors further tried to utilize these findings and attempted to do attention-based graph sparsification, and showed that they can get a similar level of performance with only a fraction of the edges in the original graph if they do the sparsification based on the attention weights.\\n\\nGiven the popularity of the GAT model in the graph neural networks literature and the effectiveness of the attention mechanism in many deep learning architectures, the empirical study presented in this paper focused on attention is valuable. The experiments are clearly motivated and executed, which I appreciate.\\n\\nAs this is an empirical paper, one (maybe) problem with it is that the findings presented aren\\u2019t that surprising in hindsight - of course the attention patterns should be data dependent, and doing attention-based graph sparsification seems like an obvious thing that should work. The results on dataset-dependent attention patterns may have told us more about the datasets rather than the GAT model.\", \"there_are_a_few_presentation_issues_that_need_clarification\": [\"sec 5.1: it is not clear from the text what \\\\alpha^{static} is. As mentioned earlier there are multiple possible static attention weights (GCN vs GraphSAGE).\", \"sec 5.1: I found it strange to normalize the discrepancy score by 2|V|. The number of terms in the sum should be 2|E| where |E| is the number of edges as each edge is counted twice. Normalizing by 2|V| does not guarantee the score is in [0, 1] as claimed in the paper.\", \"sec 5.1: \\u201cBesides, these attention do not get concentrated on self loops based on relatively stable values.\\u201d -- I don\\u2019t see why having stable attention values can show there\\u2019s no concentration on self-loops.\", \"Figure 5 left: looks like the curves can\\u2019t reach the right end, this means 1 <= k <= 8 is probably not a good range.\", \"Table 5 is a bit confusing. From the text my understanding is that a GAT is trained first, and then do sparsification, and then train another GraphSAGE on the sparsified graph to do prediction. It\\u2019s not immediately clear why the GAT performance in Table 5 is so bad, while the GraphSAGE performance is just way better. After reading this a few times I realized a much smaller GAT is used (with much smaller hidden size) while the GraphSAGE model is always using a large hidden size. I think this part needs some improvement.\", \"Overall I liked the empirical study and think the community can benefit from this paper.\"]}"
]
} |
SJgVHkrYDH | Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering | [
"Akari Asai",
"Kazuma Hashimoto",
"Hannaneh Hajishirzi",
"Richard Socher",
"Caiming Xiong"
] | Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question. This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions. Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents.
Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path.
Experimental results show state-of-the-art performance on three open-domain QA datasets, showcasing the effectiveness and robustness of our method. Notably, our method achieves significant improvement on HotpotQA, outperforming the previous best model by more than 14 points. | [
"Multi-hop Open-domain Question Answering",
"Graph-based Retrieval",
"Multi-step Retrieval"
] | Accept (Poster) | https://openreview.net/pdf?id=SJgVHkrYDH | https://openreview.net/forum?id=SJgVHkrYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"r1Ppis-H78h",
"cothqJEIE6",
"xVMmLJ323",
"qu0UalVQqe",
"BJegSqBYsH",
"rkeehtADiH",
"SklkStAPjH",
"HylTvDCvsr",
"HJxL380PiS",
"HJlHgIAPiB",
"rJgyjVRPir",
"Bkl-WmRviB",
"ryg3CRRRYS",
"S1gd_SqAKr",
"HkglHuj3FS",
"HJgZPKrFOS",
"r1g-3abtdr",
"BkxGvEnBuS"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1589924046909,
1581665139311,
1576909921449,
1576798729884,
1573636664200,
1573542311668,
1573542199126,
1573541733245,
1573541549751,
1573541357419,
1573541014885,
1573540601437,
1571905236058,
1571886448099,
1571760183715,
1570490713239,
1570475432804,
1570255961847
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1687/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1687/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1687/AnonReviewer3"
],
[
"~Siqi_Sun2"
],
[
"ICLR.cc/2020/Conference/Paper1687/Authors"
],
[
"~Siqi_Sun2"
]
],
"structured_content_str": [
"{\"title\": \"update (Revised on May 19th, 2020)\", \"comment\": \"We fixed a typo in Figure 2.\"}",
"{\"title\": \"Summary of updates (Revised on February 13th, 2020)\", \"comment\": \"We add a few updates for our camera-ready version.\\n\\n[Update 1] Add a link to our official implementation. \\nWe open-source our PyTorch code with all of the train datasets and processed Wikipedia databases (https://github.com/AkariAsai/learning_to_retrieve_reasoning_paths ). We add the link to this Github repository on the first page.\\n\\n[Update 2] Add discussions on some recent related work \\nWe incorporate some additional discussions on some recent work (e.g., PullNet by Sun et al., 2019).\"}",
"{\"title\": \"Summary of updates (Revised on December 20th, 2019)\", \"comment\": \"We thank the reviewers and the PCs for their insightful and helpful feedback. We have incorporated some experimental results and analyses that we show in the response to the reviewers as well as updated the figures presented in our paper.\\n\\n[Update 1] Add performance comparison of query-dependent and query-Independent encoding\\nWe add the performance comparison of query-dependent and query-independent encoding mentioned in our response to review#3 (https://openreview.net/forum?id=SJgVHkrYDH¬eId=BJegSqBYsH) to Appendix C.4 and also add the detailed modeling design of query-independent encoding in Appendix A.2.\\n\\n[Update 2] Add detailed results on SQuAD Open and Natural Questions Open\\nWe further analyze our experimental results on SQuAD Open and Natural Questions Open (Appendix D).\\n\\n[Update 3] Update figures\\nWe have updated Figures 1, 2, 3, and 4.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposed a multi-hop machine reading method for hotpotqa and squad-open datasets. The reviewers agreed that it is very interesting to learn to retrieve, and the paper presents an interesting solution. Some additional experiments as suggested by the reviewers will help improve the paper further.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updates on empirical results on query independent encoding\", \"comment\": \"[For the response to individual weaknesses, please read our response to Official Blind Review #3 (part 1 and part 2).]\\n\\nTo show the importance of the query-dependent encodings in terms of accuracy, we conducted an experiment again with a query-independent variant of our approach on the HotpotQA development set. More specifically, our retriever model encodes paragraphs independently from their corresponding queries and sequentially retrieves paragraphs in the same manner as our proposed approach. For a fair comparison, we use the same reader model (BERT wwm) with the path re-ranking. We train the alternative model without using the data augmentation technique for quick experiments. For evaluation, we provide results on both the distractor and full wiki settings.\", \"the_results_are_summarized_in_the_table_below\": \"encoding method | full wiki QA F1 | full wiki QA EM | distractor QA F1 | distractor QA EM \\n-------------------------- |----------------------|-----------------------|------------------------|-----------------------\\n query-dependent | 64.1 | 52.6 | 81.2 | 68.0 \\n query-independent | 47.3 | 37.8 | 80.0 | 66.4 \\n\\nFor our query-dependent approach, the full wiki results correspond to \\u201cretriever, no link-based negatives\\u201d in Table 6, and the distractor results correspond to \\u201cOurs (Reader: BERT wwm)\\u201d Table 1. As seen in this table, the QA F1 and EM performance significantly deteriorate on the full wiki setting, which demonstrates the importance of the query-dependent encoding for complex and entity-centric open-domain question answering. This is the reason why we employ our query-dependent approach, and we achieve competitive results on the three datasets.\\n\\nWe also found that the performance drop on the distractor setting is much smaller than that on the full wiki setting. This is due to its closed nature. In the distractor setting, we are given ten paragraphs and the two gold paragraphs are always included, which makes the retrieval task much easier than that in the full wiki setting. We have only 10 paragraphs for each question, and thus the number of the possible reasoning paths is quite limited. Therefore, our recurrent retriever model is likely to discover the gold reasoning paths by the beam search, and our reader model can select the gold paths by the robust re-ranking approach. To verify this assumption, we checked the P EM score as a retrieval accuracy in the distractor setting. If we only consider the top-1 path from the beam search, the P EM score of the query-independent model is 12% lower than that of our query-dependent model. However, if we consider all the reasoning paths produced by the beam search, the coverage of the gold paths is almost the same. As a result, our reader model can perform similarly with both the query-dependent/independent approaches. This additionally shows the robustness of our re-ranking approach.\\n\\nWe are planning to add these experimental results in our next revision.\"}",
"{\"title\": \"Response to Official Blind Review #3 (part 2)\", \"comment\": \"# Clarification on Figure 2 (re: Weakness 5, Update 6)\\nSorry for the confusion caused by our current figures. We first would like to clarify that our reader reads all of the eight reasoning paths in parallel and jointly predicts P_i^{start}, P_i^{end} and P(E|q) for each of the reasoning path. To determine the final answer, the reader selects a span (i,j) from E, whose P(E|q) is the highest (See Equation 7). In our previous version, we call the reasoning path re-ranking (\\u201csecond stage of re-ranking\\u201d in your words) as \\u201canswer re-ranking\\u201d, which might cause confusion. In our updated version, we re-name this module as \\u201creasoning path reranking\\u201d for clarification and update all of the figures and relevant section titles. \\n\\n\\n# Performance and running-time comparison with query-independent encoding (re: Weaknesses 6)\\nAppendix A.2 discussed the motivation of using our query-dependent paragraph encodings. In our preliminary experiments, we started from a query-independent model with the RNN, but we found that the retrieval accuracy was very low even on the HotpotQA distractor setting. This motivated us to use the query-dependent representations, with the help of the initial TF-IDF retrieval. Lee et al. (2019) also showed that their query-independent retrieval model performs poorly on datasets requiring entity-centric retrieval such as SQuAD Open. The query-independent encodings save computational costs on the BERT encodings, while introducing other engineering efforts like how to store and retrieve pre-computed representations (Seo et al., 2019). By contrast, we primarily put more weight on improving the accuracy, keeping our model scalable by introducing our efficient inference strategies such as the beam search.\\nFor an additional experiment about this, please also refer to another thread \\\"Updates on empirical results on query independent encoding.\\\"\\n\\n# Typo liked -> linked (Sec 4.3, line 5)\\nThank you for pointing it out. We have fixed the typo.\"}",
"{\"title\": \"Response to Official Blind Review #3 (part 1)\", \"comment\": \"We really appreciate your supportive comments on our paper and your detailed feedback. Below, we address the weakness.\\n\\n# On the training of reader and retriever (re: Weaknesses 1)\\nOur reader and retriever are separately trained, and this paper does not explore joint learning. We used the term \\u201cinterplay\\u201d to represent our reasoning path re-ranking framework where our reader verifies the retrieved reasoning paths produced by the beam search, instead of finalizing the path selection only by the retriever. Our training strategy for our reader uses not only ground-truth paragraphs but also negative examples to simulate irrelevant paths produced by our retriever. Joint learning is interesting future work; nevertheless, such a two-stage training strategy is worth investigating, considering our strong empirical results. In practice, another advantage is that the framework is flexible; for example, when better reader models are made available, we can easily leverage the advance, without re-training the retriever model. For this revision, we re-train our reader models for SQuAD Open and HotpotQA and leveraging these new models further advances the state-of-the-art results on the two datasets.\\n\\n\\n# On the inductive bias and the differences in supervision (re: Weaknesses 1, 2) \\nThere are practical differences in training our retriever and reader models. \\n\\nThe first difference is in paragraph interactions. Our retriever learns to capture the paragraph interactions through the BERT\\u2019s [CLS] representations, after independently encoding the paragraphs along with the question; this makes our sequential retrieval scalable to the open-domain scenario. By contrast, our reader model fully leverages the self-attention mechanism across the concatenated paragraphs in the retrieved reasoning paths; this is especially crucial for multi-hop reasoning as discussed in recent work (Wang et al., 2019a).\\n\\nThe second difference is in supervision signals. Our retriever is trained to predict plausibility of the reasoning paths, without learning to answer the question. Our reader model also learns to predict the plausibility with the stronger paragraph interactions, and jointly learns to answer the question. \\n\\nIn summary, our retriever is scalable, but the top-1 prediction is not always enough to fully capture multi-hop reasoning to answer the question. Therefore, we use our reader model for the additional re-ranking process to mitigate the uncertainty and make our framework robust. Table 9 shows the statistics of the re-ranking results, and one interesting observation is that our reader model prefers longer paths. We added an example in Figure 4 (and also in Table 12 in Appendix) where the re-ranking finds more convincing reasoning paths.\\n\\n\\n# The token length limitations by BERT (re: Weaknesses 3)\\nWe investigated the statistics of the token length of the concatenated paragraphs in the selected reasoning path for HotpotQA full wiki. In summary, only 0.2% of the examples exceed 512 tokens (based on the BERT tokenization), and thus we expect that the influence of BERT\\u2019s maximum length limitation is marginal. We believe this is another benefit of our framework. 
By selecting the reasoning path, our model can effectively avoid handling many paragraphs in the encoding steps.\\n\\n# Reliance on hyperlink information and experiments with off-the-shelf entity linking components (re: Weaknesses 4, Update 3)\\nOur updated manuscript presents a comparison of our framework with and without the given hyperlinks. In place of the hyperlinks, we used an off-the-shelf entity linking system. Please refer to the details of the experiments in Section 4.4 \\u201cThe performance with an off-the-shelf entity linking.\\u201d Table 7 shows a marginal performance drop even without the hyperlinks, still achieving the state of the art on HotpotQA full wiki. We would also like to mention that the existence of hyperlinks is common, especially when using documents on the Web, and our results suggest that using hyperlinks as well as entity links is promising. The most recent work (Anonymous, 2019; Nie et al., 2019) also relies on the hyperlinks, while our method performs better.\\n\\n[...continued in next post]\"}",
"{\"title\": \"Response to Official Blind Review #2 (part 3)\", \"comment\": \"# On the effectiveness of TF-IDF-based initial paragraph candidates (re: Detailed Comments 6)\\nWe initialize the candidate paragraph set C_1 with TF-IDF-based top F paragraphs, and start using our RNN retriever from the candidate set. There are several reasons for this strategy. \\n\\nFirst, processing millions of paragraphs with neural networks is computationally infeasible, especially for non-industry scale computational resources, as discussed in previous work (Lee et al., 2019; Seo et al., 2019).\\n\\nSecond, the previous work shows that fully trainable retrieval without using any term-based features performs poorly for entity-centric questions (e.g., SQuAD), as compressing specific entity information into a fixed dimensional vector is challenging (Seo et al., 2019, Lee et al., 2019). In particular, a recently proposed end-to-end retriever, ORQA (Lee et al., 2019) shows significantly lower performance than a TF-IDF retriever (DrQA proposed by Chen et al., 2017) on SQuAD Open (See Related Work and Table 3). \\n\\nFor the reasons we listed above, we bootstrap the retrieval with the TF-IDF retriever. We clarified these points in Section 3.1.1 in our updated manuscripts. \\n\\n\\n# The definition of q (re: Detailed Comments 4)\\nWe have updated the manuscript to clear define q as a question.\\n\\n\\n# The initialization of h_0 (re: Detailed Comments 6)\\nOur RNN\\u2019s initial state is h_1, which is used to predict the first paragraph in each reasoning path. h_1 is based on an independent parameterized vector, and please also refer to Appendix A.1 for more details.\\n\\n\\n# The definition of \\u201cLoss function\\u201d, the term g_{r} (re: Detailed Comments 8)\\nWe have found that ``_{r}\\u2019\\u2019 was missing from the definition of ``g_{r}\\u2019\\u2019 in the sentence: ``In particular, we add a new training path g = [pr, p1, . . . , p|g| ] by \\u2026\\u2019\\u2019 in Section 3.1.2. We have revised this part to precisely define the term.\\n\\n\\n# The performance with different path lengths (re: Detailed Comments 9)\\nWe have added the performance comparison with the settings where we use the same model but set the length of the reasoning paths to a fixed number (i.e., 1, 2, 3, 4). We can see that our adaptive approach performs the best, even though the HotpotQA\\u2019s gold reasoning path length is always 2 (See Table 8).\\n\\nWe further present the QA performance of the model with different lengths (averaged QA F1 and EM scores on the questions whose retrieved reasoning path length is {1,2,3}) on HotpotQA (See Table 9). \\n\\n\\n# A typo in Section 4.4 (re: Detailed Comments 10)\\nThank you for pointing it out. We fixed this typo in our updated submission.\"}",
"{\"title\": \"Response to Official Blind Review #2 (part 2)\", \"comment\": \"Clarity.\\n\\nThank you for pointing out several unclear descriptions in our method, and in this revision, we did our utmost best to clarify some details and also add additional experimental results to make our results more convincing (See the details from the response on \\u201cThe clarity on the definition\\u201d below). \\n\\n================\\n\\nWe list our response to your Detailed Comments below.\\n\\n# On the reliance on hyperlinks and performance evaluation with an off-the-shelf entity linking system (re: Detailed Comments 1)\\nOur updated manuscript presents a comparison of our framework with and without the given hyperlinks. In place of the hyperlinks, we used an off-the-shelf entity linking system. Please refer to the details of the experiments in Section 4.4 \\u201cThe performance with an off-the-shelf entity linking.\\u201d Table 7 shows marginal performance drop even without the hyperlinks, still achieving the state of the art on HotpotQA full wiki. We would also like to mention that the existence of hyperlinks is common, especially when using documents on the Web, and our results suggest that using hyperlinks as well as entity links is promising. The most recent work (Anonymous, 2019; Nie et al., 2019) also relies on the hyperlinks, while our results perform better.\\n\\n\\n# The clarity on the definition of the Wikipedia graph, reasoning paths, paragraph candidates C_{t}, and search spaces (re: Detailed Comments 2, 3, 5, 7)\\nWe have updated Section 3 (Overview) and Section 3.1 and 3.2 to clarify the definitions.\\n\\n- The Wikipedia Graph \\\\mathcal{G}: each node of \\\\mathcal{G} is \\u201ca paragraph\\u201d. A paragraph is represented as p_i in our paper. Thus, an edge connects two paragraphs, and it can be either a hyperlink or a with-in article link (See Section 3.1). By default we consider all of the paragraphs in English Wikipedia, but for HotpotQA, we only consider introductory paragraphs, following all the previous work using the dataset. This is described in Section 4.1 \\u201cEvidence Corpus and the Wikipedia graph\\u201d. The whole graph is constructed in advance, and we reuse the same graph for training and inference, instead of dynamically building entity graph everytime from the TF-IDF retrieval results as in Ding et al. (2019) or Godbole et al. (2019). \\n\\n- A reasoning path E = [p_i, \\\\ldots p_k]: a reasoning path contains one or more paragraphs that are together used by out reader model to answer a given question. Our framework learns to retrieve a reasoning path for a given question from the entire Wikipedia. \\n\\n- Top B reasoning paths \\\\mathbf{E}=\\\\{E_1, \\\\ldots, E_B\\\\}: our retriever employs a beam search with the beam size of B for decoding, and thus, the top B distinct reasoning paths \\\\{E_1, \\\\ldots, E_B\\\\} are returned. Our reader further re-ranks these reasoning paths to determine an answer. Our reader jointly encodes all of the paragraphs in each reasoning path and then re-ranks the retrieved reasoning paths, fully capturing the paragraph interactions in E. \\n\\n- The candidate paragraph set C_{t}: we construct C_{t} as a set of the paragraphs to be considered for retrieval at each time step t. At t=1, the candidates are initialized with the F paragraphs with the highest TF-IDF scores with respect to the question. 
Then, C_{t+1} includes (i) paragraphs linked from the previously selected paragraph p_{t} or (ii) a few of the top-ranked paragraphs from the previous step. C_{t} is not limited to the initially retrieved F paragraphs, and thus, our retriever dynamically expands the paragraph candidates over the graph. \\n\\n\\n# On the upper-bound performance of the proposed system (re: Detailed Comments 2)\\nWe expect that your suggestion about \\u201cupper-bound performance\\u201d is to calculate how many of the ground-truth reasoning paths can be found in our initial TF-IDF retrieval and the graph. As shown in our experimental results on HotpotQA, a retrieved reasoning path consists of up to three paragraphs. Thus, we consider whether a ground-truth reasoning path can be found in the subgraph within three steps from the initial TF-IDF retrieval. In particular, we calculate the upper-bound paragraph EM as follows:\\n(the number of questions whose ground-truth reasoning paths can be found in the sub-graph) / (the total number of questions).\\n\\nWe have checked the upper bound in our preliminary experiments on the development set of HotpotQA. The coverage of the gold reasoning paths is 75.4% when the initial TF-IDF retrieval size is 20. The coverage is considered as an upper bound of the P EM score of our method in Table 5. For reference, the coverage is 84.1% and 89.2% with the initial TF-IDF size of 100 and 500, respectively. This analysis is now reported in Appendix C.1. [...continued in next post]\"}",
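Putting these definitions together, the candidate expansion and beam search can be sketched as follows. The `retriever` scoring interface, and the simplification that C_{t+1} contains only linked paragraphs plus the EOE symbol (the response above notes that a few top-ranked candidates are also carried over), are assumptions for illustration.

```python
def beam_search_paths(question, tfidf_topF, links, retriever, B=8, max_steps=3):
    """Recurrent path retrieval sketch: start from the TF-IDF candidates C_1,
    expand candidates via graph links at each step, and keep the B
    highest-scoring (possibly EOE-terminated) reasoning paths.

    links:     dict paragraph -> paragraphs reachable via a hyperlink or
               within-article link
    retriever: returns log P(p | question, path so far) for a candidate p
    """
    EOE = "[EOE]"
    beams = [([], 0.0)]  # (path so far, accumulated log-score)
    finished = []
    for _ in range(max_steps):
        expanded = []
        for path, score in beams:
            # C_1 is the TF-IDF set; later steps add paragraphs linked from p_t
            cands = tfidf_topF if not path else links.get(path[-1], []) + [EOE]
            for p in cands:
                s = score + retriever(question, path, p)
                if p == EOE:
                    finished.append((path, s))  # variable-length termination
                else:
                    expanded.append((path + [p], s))
        beams = sorted(expanded, key=lambda e: e[1], reverse=True)[:B]
    return sorted(finished + beams, key=lambda e: e[1], reverse=True)[:B]
```

The EOE symbol is what lets the retriever emit paths of different lengths for single-hop and multi-hop questions without changing the architecture.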
"{\"title\": \"Response to Official Blind Review #2 (part 1)\", \"comment\": \"We thank you for your helpful feedback. We have substantially updated our manuscript to address all the concerns you kindly raised as much as we can.\\n\\nFirst of all, we would like to address the two weaknesses you mention in your overall comments. \\n\\nOriginality.\\n\\n# On the difference with other graph-based approaches\\nOur work has several significant originalities in its system design, training and inference time strategies from Ding et al. (2019) and Godbole et al., (2019), which leads to more than 20 point improvements over these previous approaches on HotpotQA full wiki.\\n\\n1) System design: We formulate the retrieval as reasoning path search over the Wikipedia graph, instead of dynamically constructing an entity-graph for each question based on compiled document lists as in the previous work; the recurrent module dynamically updates and expands candidate paragraphs from the initial TF-IDF-based candidates at each time step. In addition, our work also studies the interplay between our retrieval model and the reader model (See the response of \\u201cReasoning path retrieval and the interplay between our reader and retriever\\u201d below for details).\\n\\n2) Training strategy: To train our recurrent module to learn to retrieve the path, leveraging the graph structure, we train our model with negative sampling and multiple reference paths (See Section 3.1.2 and the summary by review#1). \\n\\n3) Inference strategy: We introduce beam-search based decoding to make the framework more scalable (See Section 3.1.1; also summarized by review#1 and #2), and the beam search with our reasoning path re-ranking is more effective than a greedy search. As in Table 6, replacing beam search with greedy search deteriorates F1 by 3.7. Also, our method does not need to encode all possible nodes like the previous studies, and instead each path only encodes its corresponding paragraphs.\\n\\nThe HotpotQA dataset used in the previous work is based on introductory paragraphs only. By contrast, our method is applied not only to HotpotQA but also to the Natural Questions (See Update 1) and SQuAD Open datasets. These two datasets are not restricted to the introductory paragraphs, and our method achieves state-of-the-art results. This is made possible by our search-based decoding strategy. One interesting observation on our Natural Questions experiments is that our model learns to retrieve multi-hop reasoning paths with our training strategy, even without multi-hop gold path annotations as in HotpotQA (See Appendix C.5 and Table 13). This demonstrates the robustness and scalability of our approach. \\n\\n\\n# On the difference with other multi-step retrieval approaches\\nPrevious multi-step approaches such as Das et al. (2019), Qi et al. (2019), Godbole et al. (2019) and Feldman and El-Yaniv (2019) do not accommodate arbitrary steps of reasoning. As review#1 and review#3 summarize, our RNN approach uses the EOE symbol to produce reasoning paths with different lengths. This allows our model to be easily applicable to both multi-hop and single-hop questions without specifically changing the model architecture. Table 8 demonstrates the effectiveness of this adaptive retrieval process. 
In practice, it is not obvious whether a question requires single-hop or multi-hop retrieval (e.g., some of the Natural Questions Open are clearly answerable based on a single paragraph, while for some questions multi-hop reasoning helps), and thus this flexibility is another significant advantage. \\n\\n\\n# Reasoning path retrieval and the interplay between our reader and retriever \\nOur framework benefits from the interplay between our retriever and reader. Our retriever encodes the candidate paragraphs independently for scalability, and iteratively selects a paragraph at each time step conditioned on the prediction history. Each of the resulting K reasoning paths (K=beam size) includes one or more paragraphs. Our reader encodes the paragraphs in the paths jointly and predicts the probability of each reasoning path E containing an answer span. By encoding the paragraphs jointly, our reader model fully leverages the self-attention mechanism across the concatenated paragraphs in the retrieved reasoning paths; this is especially crucial for multi-hop reasoning as discussed in recent work (Wang et al., 2019a). The additional reasoning path re-ranking makes our overall framework robust, leading to large performance improvements (See Section 4.4 and Table 8,9). This reasoning path re-ranking is one of the novel points in our work.\\n\\nTogether, these significant differences lead to state-of-the-art performance on the four experimental settings in the three datasets: HotpotQA (full wiki, distractor), SQuAD Open and Natural Questions Open (Update 1). Notably, our method outperforms all the previous graph-based or multi-step retrieval methods by more than 20 points on HotpotQA full wiki and 15 points on SQuAD Open (See Table 1,2,3). \\n\\nWe added discussions in Section 2 (Related Work) to clarify these points.\"}",
"{\"title\": \"Response to Official Blind Review#1\", \"comment\": \"Thank you for reading our paper thoroughly and providing encouraging feedback.\\n\\n# The results on HotpotQA distractor setting\\nRegarding the Hotpot distractor setting, we have evaluated our method on the settings, and the scores on the development set are reported in the first version of our manuscript (See Table 1, columns 6-9); due to the time constraints, we did not submit our model to the leaderboard. The results show that our method achieves state-of-the-art scores on the distractor setting, outperforming the previous best-published model by more than 10 points. Our work is the first to demonstrate the state-of-the-art performance on both the distractor and full wiki settings of HotpotQA. We revised our manuscript to make the distractor evaluation clear (See Update 7 and Section 4.1 \\u201cHotpotQA\\u201d and Section 4.2 in our updated manuscript). We have a qualitative example in Appendix C.6 and Table 14, which shows how our sequential reasoning path process also helps in distractor setting. \\n\\nWe added new experimental results on Natural Questions Open (Table 4) and additional analysis on selected reasoning paths (Section 4.4) in our updated version. We hope it will be helpful in evaluating the effectiveness of our method.\"}",
"{\"title\": \"Summary of general updates\", \"comment\": \"We thank all of the reviewers for providing such insightful and valuable feedback. Based on the feedback, we made substantial updates on our paper.\", \"our_updates_are_summarized_below\": \"[Update 1] New experimental results on Natural Questions Open:\\nWe added new experimental results on Natural Questions Open (Lee et al., 2019) to show our method\\u2019s robustness and scalability. Natural Questions Open has three unique features as an open-domain QA benchmark. The questions are written by actual users independently from existing corpora. Some questions in this dataset require multi-hop reasoning (e.g., how tall is the actor who plays hagrid in harry potter), but the multi-hop reasoning annotations are not provided. This dataset requires a system to search *all* paragraphs in Wikipedia articles, and thus a system needs to be truly scalable. Our results are competitive with the state of the art (See Section 4.2 and Table 4), which demonstrates the scalability and robustness of our framework. We also provide a qualitative example which shows that our system learns to conduct multi-hop retrieval on Natural Questions Open without original multi-hop reasoning path annotations (See Appendix C.5 and Table 13). We do not add any architectural design changes for this experiment. \\n\\n[Update 2] Updated results on HotpotQA (distractor, full wiki) and SQuAD Open:\\nWe found that our reader model was under-tuned on HotpotQA and SQuAD Open during our experiments on Natural Questions Open. In particular, it is effective to use larger mini batches for training our reader model with BERT and distant supervised examples extracted from our training data for the retriever model (See Section 3.2 and Appendix B.3). Consequently, we advance our state-of-the-art results from our initial submission on HotpotQA (both fullwiki and distractor) by around 4 points and also outperforms the state of the art (multi-passage BERT) on SQuAD Open by 3 points (See Section 4.2 and Table 1,3). We re-submitted to the HotpotQA full wiki leaderboard on November 6th ( https://hotpotqa.github.io/ ), and our model ranks first, outperforming all of the published and up-to-date unpublished work (See Table 2). \\n\\n[Update 3] Performance using an off-the-shelf entity linking system vs. hyperlinks (review#2, review#3):\\nWe added an experiment by replacing the Wikipedia hyperlinks with entity links given by an off-the-shelf system, and observed a slight performance drop (2.3 F1 and 2.2 EM), still achieving the state of the art (See Section 4.4 \\\"The performance with an off-the-shelf entity linking,\\\" Table 7, and Appendix B.6 for details). This also suggests that using hyperlinks is promising, considering that hyperlinks are commonly used on the Web.\\n\\n[Update 4] Clarification of the method (review#2):\\nWe added clarification about the definition of the Wikipedia graph, reasoning path and search space by our retriever in Section 3 Overview and Section 3.1.1. \\n\\n[Update 5] Additional analysis of experimental results (review#2, #3):\\nWe added analysis on (1) the performance comparison with different reasoning path length (See Section 4.4 \\u201cThe performance of different path length\\u201d and Table 8) and (2) qualitative and quantitative analysis to understand the importance of the reader-side reasoning path re-ranking (See Section 4.4 \\u201cThe effectiveness of the interplay between retriever and reader\\u201d, Table 9, Figure 4). 
\\n\\n[Update 6] Rename \\u201canswer re-ranking\\u201d to \\u201creasoning path re-ranking\\u201d (review#3): \\nWe rename \\u201canswer re-ranking\\u201d to \\u201creasoning path re-ranking\\u201d to reflect our framework\\u2019s actual behavior and to avoid confusion. \\n\\n[Update 7] Clarify the experimental settings and results on HotpotQA distractor (review#1): \\nIn our first version, we briefly described the experiments on HotpotQA distractor. We added descriptions of the experimental settings and results for this setting in Sections 4.1 and 4.2. \\n\\nIt should also be noted that Godbole et al. (2019) presented their work at the Machine Reading for Question Answering workshop at EMNLP 2019 on November 4th, and the original manuscript was submitted to arXiv on September 17th (one week before the ICLR 2020 deadline). Nevertheless, we made our utmost effort to provide a careful comparison, and empirically our approach yields more than 20 F1 and 30 P EM improvements over this work. In this revision, we also added a comparison with the most up-to-date work on HotpotQA posted after the ICLR submission deadline (Qi et al., 2019; Anonymous, 2020; Nie et al., 2019).\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper is proposing a multi-hop machine reading method tested on hotpotqa in the Full Wikipedia setting and squad-open datasets.\\nFor hotpotqa, It could also have been interesting to evaluate the method of the distractor ones.\\nFirst, the proposed method constructs a graph over the Wikipedia pages represented by their respective summary paragraphs.\\nIn this representation, the hyperlinks among pages represent the edges.\\nThen, the authors trained a normalized RNN model to retrieve the candidate reasoning paths from the question.\\nThe model is bootstrap using TF-IDF page retrieval techniques.\\nThen, a Beam-search decoding strategy is used to retrieve \\\"reasoning path\\\" which is then pass through a BertQA model using a simple question-reasoning-path concatenation technique.\\nOne originality of the method is the negative sampling strategy that includes negative TF-IDF retrieval as starting points to robustify the sequential extraction process.\\nThe detailed experiments and ablation tests give to illustrate the experimental relevance of the proposed method.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n========\\nThis paper introduces a graph-based recurrent retrieval model for retrieving evidence documents in a multi-hop reasoning question answering task. The main idea is that (1) the graph formed by Wikipedia links between passages can be used as constraint for constructing reasoning chains, and (2) the joint encoding of the question and current passage can be used to retrieve a subsequent passage in the reasoning chain. The paper describes a model for implementing the above retrieval system, and how they jointly train with a reading comprehension model. They demonstrate the effectiveness of the system on HotPotQA, showing improvements over previously published models, and SQuaD-Open, showing competitive results.\\n\\nOverall Comments\\n===============\\nThe paper is an interesting, but incremental, improvement to the area of question answering. Overall, there are two main concerns about this work. First, while the results are somewhat strong, the ideas presented are small variations on existing systems. For example, Godbole et al 2019 and Ding et al. 2019 both explore using graphical structural to constraint iterative, multi-hop, retrieval. Also, Feldman et al 2019, describe an encoder based approach to encode question and paragraph context for iterative retrieval. Asides from smaller modeling differences (choice of RNN, training regime, BERT reader, etc.) to account for the difference in results, the main difference seems to be the joint training of the retrieval system with the reader. Secondly, the paper lacks clarity on some formal definitions and definition of the graph, making it hard to understand the content precisely.\\n\\nDetailed Comments\\n================\\nBelow are some detailed comments about specific parts of the paper, in order of importance:\\n\\n1. One important limitation of this technique is the reliance on a linked documents for constructing the retrieval system. It is not clear from the paper how much of the results are obtained from constraining the set of retrieved passages (after the initial retrieval) to Wikipedia links. And whether, for example, substituting Wikipedia links with links derived from an off-the-shelf entity linking system would suffice.\\n\\n2. Given that the retrieval model is restricted to link structure in Wikipedia that induces the proposed retrieval graph, I assume that there are \\u201creasoning paths\\u201d that do not exist in the graph, given Wikipedia\\u2019s policy of avoiding adding redundant links within a Wikipedia page. It would have been informative to conduct an \\u201cOracle\\u201d experiment: that is, given the initial set of retrieved nodes and the graph structure, are there *any* paths that provide the correct answer and reasoning chain? That is to say, what is the upper-bound performance on the proposed system given the currently induced Wikipedia graph?\\n\\n3. In Section 3, and even later on in the paper, it was not clear what \\u201cE\\u201d denotes. It never seems to be defined, and is used interchangeably with \\u201cgraph node\\u201d, \\u201cwikipedia page\\u201d, \\u201cwikipedia paragraph\\u201d and \\u201creasoning path\\u201d. Are these the same thing? 
It would be much clearer to define what E means, and perhaps separate the different concepts (node, passage, reasoning path) properly.\\n\\n4. In Section 3, it seems that \\u2018q\\u2019 is not defined. Is it the question?\\n\\n5. In Section 3.1, it is not clear what the graph actually contains. Does it contain all the paragraphs from Wikipedia? Just the paragraphs with links? The first paragraph of every Wikipedia page? What granularity of the wikipedia page becomes an individual node in the graph?\\n\\n6. In Section 3.1.1, the representation of the starting retrieval (i.e., time-step = 0), h_0, is not defined. Later in the section, the paper mentions the use of TF-IDF for the initial set of nodes, instead of the learned retrieval model. This seems an unusual design decision without further explanation. Particularly given the results in Table 4, showing that TF-IDF based retrieval performs worse than the learned retrieval system from the proposed model.\\n\\n7. In Section 3, C_{t} (the candidate set of paragraphs) is not defined. This is an important set to define. Is it the set of paragraphs derived from Wikipedia links, starting from the current node?\\n\\n8. In Section 3.1.2, \\u201cLoss function\\u201d, the term g_{r} is not defined.\\n\\n9. In Section 4.4 \\u201cAnalysis on reasoning path length\\u201d, it would have been useful to see the performance of the model with different path lengths. This analysis is somewhat common on multi-hop reasoning tasks, and should be included.\\n\\n10. Typo in Section 4.4: \\u201c..., and out model is likely too terminate \\u2026\\u201d should be \\u201c likely to terminate \\u201c\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method to find a sequence of reasoning paragraphs in Wikipedia to answer queries requiring multi-hop reasoning. They make the key observation that answering multi-hop queries might require retrieving evidence that have very less lexical overlap with the question. Given a query, the proposed method starts from a set of initial paragraphs retrieved by a tf-idf retriever and uses the outgoing Wikipedia anchor link to hop to the next evidence. They propose a simple recurrent neural network that takes in the current paragraph (and the hidden state) and decide which paragraph to hop to in the next step. Because of the available supervision for the paragraphs (in HotpotQA), they can train a supervised path selector. They also add a special EoE token that denotes the end of the reasoning path, thereby having the ability to produce reasoning paths of different lengths. After training the retriever a beam of reasoning paths is sent to the reader module. The reader module re-ranks the reasoning paths again and then use a standard BERTQA model and the top re-ranked chain of paragraphs to find the evidence.\\n\\nOverall, the paper presents a well-designed system for handling multi-hop queries and the explicit recurrent state is a nice contribution and addition to the IR model proposed in Godbole et al., 2019. The paper is clearly written for the most part.\\n\\n===Update (11/12/2019)===\\nThe authors have addressed all my comments and have improved the results since. I am recommending acceptance. Nice work.\", \"strengths\": \"\\u2014 The proposed method has demonstrated strong results on 2 datasets in challenging open-domain settings. The ablation results are helpful.\\n\\u2014 The paper is clearly written and was straightforward to follow\", \"weaknesses\": \"1. The paper mentions that it studies the interplay between the retriever and reader. It is unclear how it is doing so, since the retriever and the reader are not explicitly interacting with each other. Cant the retriever and the reader be trained separately? \\n2. It is unclear / not motivated, why there is an extra step of re-ranking required in the reading stage? In other words, what kinds of extra inductive bias is this additional step of re-ranking providing since the same kind of supervision was used while training the retriever model. I do note that the ablation study is helpful and it is clear that it is effective, but it would be nice to see a discussion regarding why this second step of re-ranking helps.\\n3. Since the reader model (BERT reader) takes the top scoring chain of paragraphs concatenated together, that would imply that it is currently limited by the number of positional embeddings in the BERT model (512 tokens). I think this limitation should be explicitly mentioned and possible remedies discussed.\\n4. The current approach is heavily dependent on Wikipedia graph and will not work if the hyperlink graph is not provided. It would have been nice to have an entity linker component that could also create the graph structure. I believe concurrent work such as Godbole et al., 2019 has addressed this and the paper should mention this while contrasting with their work. \\n5. 
From figure 2, I got an impression that since the reader scored the span in \\\"Top 2 reasoning path\\u201d higher, that was selected. But after section 3.2, I was left confused because it looks like the reader model consumes the top scoring chain after the second stage of re-ranking. This is not clear from the figure and should be fixed.\\n6. Discussion on scalability: Although the retriever is clearly very effective for such questions, the running time would be prohibitive (for open domain QA) as at test time, query dependent context representations is constructed for each of the paragraph in the reasoning chain. I would like to see a discussion / some running time comparison where query independent paragraph representations are constructed and the network just encodes the query independently at test time.\", \"minor\": \"Typo liked -> linked (Sec 4.3, line 5)\"}",
"{\"comment\": \"Huge thanks for your detailed and valuable information, they are extremely helpful, much appreciated!\\n\\nWith your explanation, all numbers make sense to me now, thanks a lot for your time!\", \"title\": \"thanks for your clarification\"}",
"{\"comment\": \"Hi Siqi,\\n\\nThank you for your interest in our work and such an encouraging comment. We agree that our current description of paragraph recall (PR) might be not clear enough, and we will update our manuscript once the discussion phase begins and edits are allowed. In summary, our PR evaluates the percentage of questions for which at least one gold paragraph is retrieved. We additionally contrast our PR metric with PEM, which evaluates the percentage of questions for which both gold paragraphs are retrieved. \\n\\n> For us we just use PR = number of retrieved gold paragraph / total number of gold paragraph, then average PR across different questions.\\n> (1) The PR for TF-IDF on dev set is way too high. The hits@10 for TF-IDF retriever in HotpotQA paper is 56.06 on dev, while your TF-IDF achieves 66.9 recall on top 2... We also implemented our own TF-IDF retriever and achieved 55.71 recall if we retrieved 10 paragraphs per question, which is similar to original HotpotQA paper\\n\\nBy PR, we evaluate if at least one of the ground-truth paragraphs for each question is included among the retrieved paragraphs. In particular, the score is calculated as follows: \\n\\nPR = (the number of questions where a retriever finds at least one of the gold paragraphs) / (the total number of questions in the development set).\\n\\nWe assume that the PR would be estimated higher than the recall score calculated by you or the hits@10 calculated by the HotpotQA authors. \\n\\n> (2) We ran the CogQA retriever and achieved 69.98 PR (in table 4 the number is 87.6). Since the retrieved documents should be almost the same for different runs, can you help us find out what might be the reason that we failed to replicate the results?\\n\\nWe expect that this happens due to the same reason we described above. \\n\\nIn addition, we would like to mention the motivations behind the metric design. Here, we aim at evaluating (1) if a retriever can find at least one paragraph of the gold paragraphs (corresponding to PR), and (2) if a retriever can find all of the gold paragraphs (corresponding to P EM). We expect that even a non-parameterized retriever (e.g., TF-IDF retriever) can find one of the gold paragraphs based on lexical matching, but it is likely to fail to find one or more of the gold paragraphs consisting of little lexical overlap or semantic relationship to the original questions. \\n\\nAs in Table 4, the PR is relatively high across several retrievers, but the TF-IDF retriever or the Re-rank show low P EM, as it cannot access the paragraphs that are ranked lower by TF-IDF but entailed or the relationships between paragraphs. \\n\\nWe will add more detailed explanation about how we calculate the scores and why we design the metrics in the way in our next version. We also consider changing the names (PR and P EM). \\n\\nAgain, thank you so much for your interest and insightful comments.\", \"title\": \"Re: question about table 4 (Retrieval evaluation)\"}",
"{\"comment\": \"First of all, thanks for posting your amazing work here! The paper's novelty, presentation and results are all excellent to me!\\n\\nHowever we do have one question about your results in table 4, may I ask how do you compute the AR and PR (especially PR) in table 4 (some rows are copied below) ? Note that we absolutely trust your results because they are evaluated on a hidden test set, we just couldn't reproduce the numbers in this specific table and hope you could help if possible :)\\n \\nModels AR PR PEM EM\\n--------------------------------------------------------------------\\nTF-IDF 39.7 66.9 10.0 18.2\\nCognitive Graph 76.0 87.6 57.8 37.6\\n\\nFor us we just use PR = number of retrieved gold paragraph / total number of gold paragraph, then average PR across different questions. The numbers in table confuse me because\\n(1) The PR for TF-IDF on dev set is way too high. The hits@10 for TF-IDF retriever in HotpotQA paper is 56.06 on dev, while your TF-IDF achieves 66.9 recall on top 2... We also implemented our own TF-IDF retriever and achieved 55.71 recall if we retrieved 10 paragraphs per question, which is similar to original HotpotQA paper (of course, way below your number, especially considering your model only retrieves top 2)\\n\\n(2) We ran the CogQA retriever and achieved 69.98 PR (in table 4 the number is 87.6). Since the retrieved documents should be almost the same for different runs, can you help us find out what might be the reason that we failed to replicate the results?\\n\\nThanks again for your time.\", \"title\": \"question about table 4 (Retrieval evaluation)\"}"
]
} |
rylXBkrYDS | A Baseline for Few-Shot Image Classification | [
"Guneet Singh Dhillon",
"Pratik Chaudhari",
"Avinash Ravichandran",
"Stefano Soatto"
] | Fine-tuning a deep network trained with the standard cross-entropy loss is a strong baseline for few-shot learning. When fine-tuned transductively, this outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS and FC-100 with the same hyper-parameters. The simplicity of this approach enables us to demonstrate the first few-shot learning results on the ImageNet-21k dataset. We find that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes. We do not advocate our approach as the solution for few-shot learning, but simply use the results to highlight limitations of current benchmarks and few-shot protocols. We perform extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode. This metric can be used to report the performance of few-shot algorithms in a more systematic way. | [
"few-shot learning",
"transductive learning",
"fine-tuning",
"baseline",
"meta-learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rylXBkrYDS | https://openreview.net/forum?id=rylXBkrYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"EWtmZ6vRUWL",
"xBowwmbwZYS",
"r0-iLfhst_",
"Yxff_yFlvdf",
"enSC6Eok6n",
"SyxCU0c3oS",
"SJxmJaqnoB",
"H1gn4s5hsS",
"S1gBtOq2ir",
"B1lohwq2jB",
"rJejGJP_qH",
"SJlVVzkRFB",
"HJgnmTMTYS"
],
"note_type": [
"official_comment",
"official_comment",
"comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1603035398343,
1603034875484,
1603016934955,
1588317208330,
1576798729853,
1573854805874,
1573854426844,
1573854004330,
1573853309416,
1573853107433,
1572527890713,
1571840555976,
1571790115700
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1686/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1686/Authors"
],
[
"~Mayank_Lunayach1"
],
[
"~Jackie_Cheung1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1686/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1686/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1686/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1686/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1686/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1686/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1686/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1686/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"response\", \"comment\": \"Thanks for your comment. Yes, the transductive setting and the inductive setting are different; accuracy of these methods should not be compared directly. We also show very strong results with non-transductive fine-tuning in Table 1 and clearly indicate in the narrative the difference between the two.\\n\\nLet us note that semi-supervised setting is a different than (it is a subset of) transductive learning. Transductive learning is powerful because while inductive learning seeks to achieve accurate predictions over the entire distribution of test data, transductive is about achieving accurate predictions only on a few particular samples of test data; in the few-shot learning problem this is the query shot. Transduction is particularly suited to problem settings when one is interested in getting accurate predictions only on the query samples of a particular episode. Semi-supervised learning is therefore a particular technique for implementing transduction.\"}",
"{\"title\": \"please write to us and we will be happy to help\", \"comment\": \"Thank you for your comment. The code is going through internal reviews before it can be released. If you write to all the authors, we will be happy to guide you on your implementation over email.\"}",
"{\"title\": \"Source code\", \"comment\": \"Congrats on a fantastic work! When would the source code for the paper be released?\"}",
"{\"title\": \"is it fair to compare Transductive setting vs inductive setting?\", \"comment\": \"hi. thanks for the nice baseline provided in this paper.\\nI have a few questions regarding your reported performance. I have gone through your paper and think that your proposed Transductive fine-tuning is based on the transductive setting/ semi-supervised setting. In this setting, the prediction of a query sample is based not only on the support images (training images) but also on many other unlabeled query images.\\nOn the other hand, many compared methods in Table1 is based on the inductive setting, where the prediction of individual query images is solely based on the support images without any other unlabeled data. As far as I know, the performance gap between the benchmarks in these two settings is not small. For example, in this work\", \"https\": \"//arxiv.org/pdf/1911.06045.pdf,\\nthe performance reaches 78% for 1-shot 5-way mini magnet.\\nWould it be better to indicate this more clearly or to make comparisons in different tables?\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper introduces a simple baseline for few-shot image classification in the transductive setting, which includes a standard cross-entropy loss on the labeled support samples and a conditional entropy loss on the unlabeled query samples.\\n\\nBoth losses are known in the literature (the seminal work of entropy minimization by Bengio should be cited properly). However, reviewers are positive about this paper, acknowledging the significant contributions of a novel few-shot baseline that establishes a new state-of-the-art on well-known public few-shot datasets as well as on the introduced large-scale benchmark ImageNet21K. The comprehensive study of the methods and datasets in this domain will benefit the research practices in this area.\\n\\nTherefore, I make an acceptance recommendation.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3 (Part 2)\", \"comment\": \">>> Considering the randomness of the sampled few-shot tasks, the authors can consider evaluating over more episodes (e.g., 10,000 trials) than 1000 in the paper.\\n\\nWe evaluated on 10,000 episodes for Mini-ImageNet and Tiered-ImageNet for 5-way, 1-shot and 5-shot test protocols. The numbers are consistent with our reported results. We will add this table to the main paper.\\n\\n\\n+-------------------------+------------------------+----------------------+-----------------------+----------------------+\\n| | 1-shot, 5-way | 5-shot, 5-way |\\n+ +------------------------+----------------------+-----------------------+----------------------+\\n| | 10,000 episodes | 1,000 episodes | 10,000 episodes | 1,000 episodes |\\n+-------------------------+------------------------+----------------------+-----------------------+----------------------+\\n| Mini-ImageNet | 67.77 +/- 0.21 | 68.11 +/- 0.69 | 80.24 +/- 0.16 | 80.36 +/- 0.50 |\\n| Tiered-ImageNet | 72.36 +/- 0.23 | 72.87 +/- 0.71 | 85.70 +/- 0.16 | 86.15 +/- 0.50 |\\n+-------------------------+------------------------+----------------------+-----------------------+----------------------+\\n\\n>>> It's better for the authors to emphasize and differentiate the transductive fine-tune and the inductive counterpart in the paper.\\nThanks. We will expand upon Section 3.2 with an example on transductive learning, similar to Figure 2 in the paper \\\"Transductive Inference for Text Classication using Support Vector Machines\\\". We will also clarify the difference between transductive fine-tuning and fine-tuning (its inductive counterpart) in Section 4.1.\\n\\n[Thorsten Joachims, 99] Transductive Inference for Text Classication using Support Vector Machines\"}",
"{\"title\": \"Response to Review #3 (Part 1)\", \"comment\": \"We thank the reviewer for their feedback. Please also see our response to all the reviewers in the comment above.\\n\\n>>> The authors propose to use the logits instead of embedding as the main bridge between the pre-trained model and the meta-learning model. Does it mean we represent novel classes based on the properties of the meta-train classes?\", \"we_are_not_sure_of_the_meaning_of_the_first_sentence\": \"perhaps the reviewer means \\\"bridge between the pre-trained model and the fine-tuned/adapted model\\\". Yes, we use the logits instead of the features as inputs to few-shot classifier: Remark 2 and Appendix C.6 explain the rationale for doing so.\\n\\n>>> If so, does this method requires more meta-train classes to enrich the representation ability?\\n\\nNo, this method does not require more meta-training classes. We use the same number of meta-training classes as all other methods for all benchmarks. Having more meta-training classes certainly helps; our accuracy for 5-way 5-shot testing on ImageNet 21K (7,491 meta-training classes) is as high as 95%. Appendix C.6 reports the performance when embeddings are used instead of the logits.\\n\\n>>> How will the method perform when working on few-shot learning problems with a large distribution shift?\\n\\nBoth logits and features are a property of the meta-training set, both may suffer from distribution shift. Fine-tuning adapts the network explicitly and safeguards against distribution shift, we therefore expect our method to retain its performance gains with large distribution shift. See also the results on Meta-Dataset (discussed in the comments for all reviewers and Appendix C.10)\\n\\n>>> To make a fair comparison: instead of citing the values in the published papers directly and comparing different methods with different architectures, the authors should also apply the pre-trained model with the famous baselines, such as Matching Network, Prototypical Network, and MAML. Now there exists a very lap gap between the Matching Network values and the newly proposed one. For example, fine-tune the Matching Network on the pre-trained backbone in both train and train+val settings. Therefore, it is more clear to show the improvement of the proposed baseline models. Q/A 2-3 in appendix D do not fully solve this problem.\\n\\nWe are not sure we completely understand what the reviewer is saying here.\\n\\nIf the reviewer means \\\"run famous baselines with your backbone architecture\\\".\\nAs the reviewer can appreciate, it is very difficult to reproduce these published results without access to original author\\u2019s source code, or run them for newer architectures. In particular, we have not been able to reproduce the results of MAML, or obtain good results with others, e.g., Prototypical Networks on new backbone architectures. Table 3 in Appendix C.6 includes results of transductive fine-tuning on the architectures of these above algorithms, where we have similar performance gains on these algorithms.\\n\\nIf the reviewer instead means that we should pre-train our backbone with other algorithms.\", \"pre_training_the_backbone_using_other_meta_training_approaches_will_defy_our_baseline_effort\": \"it will not only make the \\\"baseline\\\" as complicated as the existing algorithms but is also prohibitively difficult to do without access to the original author's source code. 
Our baseline is to do standard supervised learning (no episodic meta-training) and then fine-tune transductively. This method is very simple to implement and thus can be considered a \\\"baseline\\\".\\n\\n>>> Therefore, it is more clear to show the improvement of the proposed baseline models\\n\\nThe main point of the paper is to devise a (simple) baseline, and show that a trivial form of meta-training surpasses state-of-the-art methods. This is a statement about the current methods and the evaluation benchmarks, rather than an attempt at creating a plausible state-of-the-art system.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for their feedback. Please also see our response to all the reviewers in the comment above.\\n\\n>>> Add reference to \\\"Semi-supervised Learningby Entropy Minimization. Grandvalet et al. NIPS 2015\\\"\\n\\nThanks. We already have this reference in our draft (Section 2.1).\\n\\n>>> In fact, I suggest the authors to extend Section 3.2 a bit more because this is the main technical contribution.\\n\\nWe agree. We will expand upon Section 3.2 with an example on transductive learning, similar to Figure 2 in the paper \\\"Transductive Inference for Text Classication using Support Vector Machines\\\".\\n\\n>>> The main argument of this paper is that accuracies over episodes have high variance. But isn't it expected that different episode can include samples with different difficulties, leading to high variance of accuracies? I do not think it is realistic to have one algorithm that achieves similar accuracies on both easy and difficult tasks.\\n\\nThe fact that episodes have high variance is _not_ our main argument. Figure 1 simply seeks to demonstrate that the way we measure the performance in few-shot learning may be fallible because of high variance across episodes. Identifying this is one of our results although not the main result of the paper. This is important because the standard deviation being so high has never been reported in the literature before. We agree with the reviewer completely on their second point: it is unlikely that one single algorithm will have similar accuracies on both easy and difficult tasks.\\n\\n>>> I am not convinced by the necessity of the proposed hardness metric\\n\\nAs your previous question (and our response) says, intuitively few-shot tasks can be of diverse difficulty. The hardness metric is our attempt at answering the question: \\\"how does one characterize the difficulty of a few-shot task\\\"? This contribution is important, and we believe necessary, for two reasons.\\n\\n1. The current accepted procedure of reporting mean and standard error does not capture the diversity of few-shot tasks. The proposed metric allows sampling tasks of a specific hardness and measuring their accuracy. One may thereby report a histogram of the accuracies, even for single few-shot protocol, like we have done in Figure 3.\\n2. Current algorithms train different models with different hyper-parameters for different few-shot protocols. Doing so is detrimental to ascertaining real-world few-shot performance where we do not control the way and shot. The metric provides a way to measure the performance of an algorithm across multiple protocols. This is similar to using an ROC curve for ascertaining both Type I and Type II errors in standard supervised learning.\\n\\nPlease also see our response to the next two, related, comments.\\n\\n>>> The authors also fail to evaluate different methods with the proposed metric and show if this metric makes the ranking of algorithms different.\\n\\nAs the reviewer can appreciate, it is difficult to evaluate the previous algorithms on all these ways and shot without access to the original published models and source code of the authors. We have not been able to reproduce numbers of famous algorithms, e.g., MAML, or obtain good results with others, e.g., Prototypical Networks on new backbone architectures.\\n\\nWe have compared two algorithms discussed in our paper, namely support-based initialization and transductive fine-tuning using this metric. 
The former is better across all test protocols except for ImageNet-21K, where both are comparable. This is a sanity check for the hardness metric.\\n\\nWe are proposing that this is one way to measure hardness and report results systematically. It is not the only way: ascertaining the efficacy of this metric and comparisons to other metrics will be part of future work.\\n\\n>>> I find Figure 3 hard to interpret because there are too much information in it, including different colors, a lot of markers and lines\\u2026. I believe writing of Section 4.4 could be further improved.\\n\\nWe will improve the clarity of Section 4.4.\\n\\nWe agree that there is a lot of information in Figure 3. The caption however explains every part of the figure, colors, markers and the lines. The purpose of plotting the data in one figure is to demonstrate that the hardness is a valid metric for all the 5 datasets, all the different shots and ways, and two different algorithms. This is an ambitious goal but proposing a new evaluation metric demands being thorough.\\n\\n>>> Why is the range of hardness 1-3 for some datasets and 1-5 for other datasets?\\n\\nThe fact that Mini-ImageNet, CIFAR-FS and FC-100 have hardness 1-3 indicates that they are easy. This is primarily due to their evaluation datasets having fewer classes. For bigger datasets we can go to higher ways (we tested up to 160-way for Tiered-ImageNet and ImageNet-21K) and the maximum hardness is almost 5.\\n\\n[Thorsten Joachims, 99] Transductive Inference for Text Classication using Support Vector Machines\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for their feedback. Please also see our response to all the reviewers in the comment above.\\n\\n>>> How can the simple baseline work so well?\\n\\nWe have included an answer to this question in Appendix D, question 1. The main reasons for the strong performance are:\\n1. It is critical to have efficient algorithms for adaptation in the case with few-labeled data. The metric-learning based initialization is important.\\n2. Not all existing algorithms use state-of-the-art backbone architectures, e.g,. the (conv-64)_{x4} network that is popular has only about 225,000 parameters (for comparison, LeNet for MNIST has about 130,000).\\n\\n>>> Is this because of some bias from the datasets?\\n\\nWe do not believe our strong results are due to \\\"bias\\\" in the datasets. The hardness metric in Section 4.4 and Figure 3 shows that none of the datasets are unduly easy; their hardness is spread across the X-axis, only constrained by the size of the datasets themselves.\\n\\n>>> I also suggest that the author can try their method on some new dataset, like Meta-Dataset (Triantafillou et al. 2019).\\n\\nThanks for this suggestion. We ran experiments on Meta-Dataset which we will add to the main paper. \\n\\nTransductive fine-tuning is better, most times significantly, than SoTA on 6/8 tasks in Meta-Dataset, the average rank across all tasks is 1.4375. We did not change hyper-parameters for transductive fine-tuning and kept them to the same values as our original submission. We could not find the link to the Fungi dataset, the original link does not seem to work anymore. Using the Quick Draw dataset requires us to accept certain legal conditions; we are working on getting the approval to use this dataset.\\n\\n+----------------------------+---------------------------+------------------------------------+-------------------------------------------------+\\n|Dataset | Best performance | Transductive Fine-tuning | Rank for Transductive Fine-Tuning |\\n| | in Meta-Dataset | | (based on Meta-Dataset) |\\n+----------------------------+---------------------------+------------------------------------+-------------------------------------------------+\\n| ImageNet (ILSVRC) | 51.01 +/- 1.05 | 55.57 +/- 1.02 | 1 |\\n| Omniglot | 63.00 +/- 1.35 | 79.59 +/- 0.98 | 1 |\\n| Aircraft | 68.69 +/- 1.26 | 67.26 +/- 0.98 | 1.5 |\\n| Birds | 68.79 +/- 1.01 | 74.26 +/- 0.82 | 1 |\\n| Textures | 69.05 +/- 0.90 | 77.35 +/- 0.74 | 1 |\\n| VGG Flowers | 86.86 +/- 0.75 | 88.14 +/- 0.63 | 1.5 |\\n| Traffic Signs | 66.79 +/- 1.31 | 55.98 +/- 1.32 | 2 |\\n| MSCOCO | 43.41 +/- 1.06 | 40.62 +/- 0.98 | 2.5 |\\n+----------------------------+---------------------------+------------------------------------+-------------------------------------------------+\\n| Average Rank | 1.4375 |\\n+----------------------------+---------------------------+------------------------------------+-------------------------------------------------+\\n\\nThe original Meta-Dataset paper samples few-shot episodes for ImageNet and Omniglot by sampling classes that are far away from each other, this therefore creates easier episodes (easily distinguishable). We did not do this for our experiments and simply sampled the classes uniformly at random, which also creates harder episodes. 
An interesting thing to note above is that transductive fine-tuning has consistently lower standard error in the accuracy than the original results (for the same number of few-shot episodes).\\n\\n[Triantafillou et al.] Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank the reviewers for their feedback. We first summarize our response and the results of additional suggested experiments here. We have responded to the concerns of the reviewers as individual comments below.\\n\\nAll the reviewers were in agreement that our method, transductive fine-tuning, is sound and effective. They also agree that the results of the paper have been validated effectively and thoroughly on several datasets along with a large-scale experiment on ImageNet-21K.\\n\\nAs suggested by Reviewer 2 we have added an additional experiment on Meta-Dataset, a summary of the results (full results in Appendix C.10 and individual comment) is:\\n\\nTransductive fine-tuning is better, most times significantly, than SoTA on 6/8 tasks in Meta-Dataset, the average rank across all tasks is 1.4375. We did not change hyper-parameters for transductive fine-tuning and kept them to the same values as our original submission. We could not find the link to the Fungi dataset, the original link does not seem to work anymore. Using the Quick Draw dataset requires us to accept certain legal conditions; we are working on getting the approval to use this dataset.\", \"the_main_concern_of_reviewer_3_is\": \">>> To make a fair comparison, the authors should apply pre-trained model with the famous baselines, such as Matching Network, Prototypical Network, and MAML\\n\\nWe are not sure we completely understand what the reviewer is saying here.\\n\\nIf the reviewer means \\\"run famous baselines with your backbone architecture\\\".\\nAs the reviewer can appreciate, it is very difficult to reproduce these published results without access to original author\\u2019s source code, or run them for newer architectures. In particular, we have not been able to reproduce the results of MAML, or obtain good results with others, e.g., Prototypical Networks on new backbone architectures. Table 3 in Appendix C.6 includes results of transductive fine-tuning on the architectures of these above algorithms, where we have similar performance gains on these algorithms.\\n\\nIf the reviewer instead means that we should pre-train our backbone with other algorithms.\", \"pre_training_the_backbone_using_other_meta_training_approaches_will_defy_our_baseline_effort\": \"it will not only make the \\\"baseline\\\" as complicated as the existing algorithms but is also prohibitively difficult to do without access to the original author's source code. Our baseline is to do standard supervised learning (no episodic meta-training) and then fine-tune transductively.\\n\\nIn a field that may be crucial for ushering in personalized machine learning, we believe our work is essential to ascertain the empirical performance of current algorithms and valuable to the community.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a transductive learning baseline for few-shot image classification. The proposed approach includes a standard cross-entropy loss on the labeled support samples and a Shannon entropy loss on the unlabeled query samples. Despite its simplicity, the experimental results show that it can consistently outperform the state-of-the-art on four public few-shot datasets. In addition, they introduce a large-scale few-shot benchmark with 21K classes of ImageNet21K. Finally, they point out that accuracies from different episodes have high variance and develop another few-shot performance metric based on the hardness of each episode.\", \"positive_comments\": \"1. The proposed transductive loss that minimizes entropy of query samples is novel in few-shot learning. Given limited labeled samples, finetuning with unlabeled query samples via proper loss is a good idea to tackle few-shot learning. \\n2. The evaluation is thorough. A significant number of few-shot methods are compared on 4 exisiting few-shot benchmarks. An additional large-scale benchmark is also introduced to facilitate\\u00a0 the few-shot learning research. \\n3. A novel evaluation metric is proposed to evaluate few-shot learning methods under different difficulties level. Although I am convinced by the importance of such metric, it is interesting to supplement the averaged accuracy because it tells how the methods work under easy and difficult classes.\", \"negative_comments\": \"1. The folloing important reference of the Shannon entropy on unlabeled data is missing. In fact, I suggest the authors to extend Section 3.2 a bit more because this is the main technic contribution. \\nSemi-supervised Learningby Entropy Minimization.\\u00a0 Grandvalet et al. NIPS 2015\\n2. I am not convinced by the necessity of the proposed hardness metric. The main argument of this paper is that accuracies over episodes have high variance. But isn't it expected that different\\u00a0 episode can include samples with different difficulties, leading to high variance of accuracies? I do not think it is realistic to have one algorithm that achieves similar accuracies on both easy and difficult tasks. The authors also fail to evaluate different methods with the proposed metric and show if this metric makes the ranking of algorithms different. Moreover, I find Figure 3 hard to interpret because there are too much information in it, including different colors, a lot of markers and lines. Why is the range of hardness 1-3 for some datasets and 1-5 for other datasets? I believe writting of Section 4.4 could be further improved.\\n\\nOverall, I think this paper has significant contributions of proposing a novel few-shot baseline that establishes a new state-of-the-art and would recommend weak accept.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a fine-tune-based few-shot classification baseline, which has been validated effectively on several datasets, including Mini-Imagenet, Tiered-Imagenet, CIFAR-FS, FC-100, and Imagenet-21k. In addition to the method, the authors also provide concrete experimental setting and new evaluation proposals.\\n\\n1. The authors propose to use the logits instead of embedding as the main bridge between the pre-trained model and the meta-learning model. Does it mean we represent novel classes based on the properties of the meta-train classes? If so, does this method requires more meta-train classes to enrich the representation ability? How will the method perform when working on few-shot learning problems with a large distribution shift?\\n\\n2. To make a fair comparison: instead of citing the values in the published papers directly and comparing different methods with different architectures, the authors should also apply the pre-trained model with the famous baselines, such as Matching Network, Prototypical Network, and MAML. Now there exists a very lap gap between the Matching Network values and the newly proposed one. For example, fine-tune the Matching Network on the pre-trained backbone in both train and train+val settings. Therefore, it is more clear to show the improvement of the proposed baseline models. Q/A 2-3 in appendix D do not fully solve this problem.\\n\\n3. Considering the randomness of the sampled few-shot tasks, the authors can consider evaluating over more episodes (e.g., 10,000 trials) than 1000 in the paper.\\n\\n4. It's better for the authors to emphasize and differentiate the transductive fine-tune and the inductive counterpart in the paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper provided a baseline method for few-shot learning. It utilizes a simple but effective approach via a transductive fine-tuning. The experimental results on several benchmarks show the improvements over state-of-the-art approaches.\\n\\nIt is a comprehensive study of the methods and datasets in this domain. The motivation, experimental details and result analysis are clear to me. Overall, the paper is well written and the author is very transparent to show what they have. \\n\\nThe only drawback of this paper is it does not provide insight/explanation. How can the simple baseline work sowell? Is this because of some bias from the datasets? I also suggest that the author can try their method on some new dataset, like Meta-Dataset (Triantafillou et al. 2019). \\n\\nI agreed with the author that the paper is not novel. However, I think the acceptance of the paper could benefit the community and I encourage the author can try this on some new benchmark. Therefore, I made my recommendation.\"}"
]
} |
ByxQB1BKwH | Abstract Diagrammatic Reasoning with Multiplex Graph Networks | [
"Duo Wang",
"Mateja Jamnik",
"Pietro Lio"
] | Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems. In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks. MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks. MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels. MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates. We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM). For an Euler Diagram Syllogism task MXGNet achieves state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin. | [
"reasoning",
"Raven Progressive Matrices",
"graph neural networks",
"multiplex graphs"
] | Accept (Poster) | https://openreview.net/pdf?id=ByxQB1BKwH | https://openreview.net/forum?id=ByxQB1BKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Oe_UrZprqsU",
"vqKoVb0lZS",
"S1l2nYTLoS",
"SyxZXtpUoS",
"rJllgtaUsH",
"Syepv_6IjH",
"HJlUfdaIiB",
"H1lWI44d9H",
"BJgYu7iQqH",
"B1eGUyDCKB",
"r1xaSDIRKS",
"H1lN8OSAYr"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1632852596446,
1576798729824,
1573472692280,
1573472537319,
1573472488099,
1573472357463,
1573472269641,
1572516937400,
1572217712861,
1571872585857,
1571870532915,
1571866699741
],
"note_signatures": [
[
"~Yuan_Yang2"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1685/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1685/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1685/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1685/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1685/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1685/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1685/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1685/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1685/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1685/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Question about confusing notation\", \"comment\": \"In Section ''4 METHOD'', the first paragraph, line 4, it says ''for each diagram $d_i \\\\subset C \\\\cup A$'', but shouldn't diagrams be elements of $C$ and $A$? If so, shouldn't it be $d_i \\\\in C \\\\cup A$ ?\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper a new method of constructing graph neural networks for the task of reasoning to answer IQ style diagrammatic reasoning, in particular including Raven Progressive Matrices. The model first learns an object representation for parts of the image and then tries to combine them together to represent relations between different objects of the image. Using this model they achieve SOTA results (ignoring a parallely submitted paper) on the PGM and Raven datasets. The improvement in SOTA is subtantial.\\n\\nMost of the critique made for the paper is on writing style and presentation. The authors seem to have fixed several of these concerns in the newly uploaded version of the paper. I will further request the authors to revise the paper for readability. However, since the paper presents both an interesting modeling and improved empirical results, I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes made in the revised version\", \"comment\": \"In the revised version, we improved structuring and writing quality according to reviewers' comments. The major changes are:\\n\\n1. Combined Figure 1 and 2 to give more space for other sections.\\n2. Moved parts of dataset description to Appendix to give more space for other sections.\\n3. Improved explanation of model naming in \\\"Introduction\\\" section.\\n4. Improved \\\"Method\\\" section. Specifically we improved explanations in 'Search Space Reduction', and added more explanation and motivation for multiplex edges and cross multiplexing gating function. We also added more details for the 'reasoning network'.\\n5. Added more results both in section 5.1 'search space reduction' and in Appendix D 'More details on search space reduction'.\\n6. Improved presentation in Appendix. Specifically we now represent architecture configurations with figures instead of tables, which makes it more reader friendly. We also added more detailed descriptions for all the modules.\\n7. Fixes typos and other minor issues such as formatting.\"}",
"{\"title\": \"Thank you for your valuable comments\", \"comment\": \"Thank you for your valuable comments. We improved the structuring and writing quality in this revised version. We reduced non-important parts of the paper, such as dataset description, and provide more details on the architectures. We also improved the Appendix by changing some of the table architecture representation to diagrammatic representations, which should make it more readable.\"}",
"{\"title\": \"Thank you for your valuable comments\", \"comment\": \"Thank you for your valuable comments. In our revised version, we improved explanations of our models with more details. Here we address some of your concerns:\\n\\n1. \\\"the statistics in terms of the search space reduction as to how many subsets get pruned\\\":\\nWe have added more statistics of the search space reduction experiments in Appendix D, such as the top-16 subsets with the highest gating values.\\n\\n2. Further, there may be subsets of graphs that could span across rows and columns. The decision in terms of restricting the reduction to span specific rows or columns may result in pertinent nodes also being pruned:\\n\\nThe subsets are not constrained to rows and columns. During search space reduction we only make the weak assumption that edges in the same subset must be adjacent (defined as two edges linking the same node). This allows for subsets other than rows and columns, such as diagonal of the matrix. The search space reduction experiments however give lower scores for subsets other than rows and columns. This is why we hard-gate only row and columns subsets in the final architecture. We explained this more clearly in the revised version.\\n\\n3. \\\"However, the performance is slightly lesser than another paper simultaneously submitted that achieves similar results. That approach uses transformer network for spatial attention while here the spatial attention is just based on object level representation\\\":\\n\\nWe have just noted this parallel submission and compared it with our results. We found that our model performs better for PGM dataset(89.6% against 88.2% in neutral split with beta=10). In their response to comments, they stated that their model achieved performance of 19.67% for RAVEN-10000, which is the public dataset we used in the experiments. We achieved 83.91% accuracy. They did not make it clear how did they obtain 50k samples for each figure configuration, but our guess is that they used the open-source code to generate more data than available in RAVEN-10000.\"}",
"{\"title\": \"Thank you for your valuable comments (Part 2/2)\", \"comment\": \"7.\\\" If interlayer connections are between objects in different layers (diagrams), what is this supposed to capture? Clearly, there may not be any unique correspondence between objects across diagrams\\\":\\n\\nThe interlayer connections are supposed to capture relations in the attributes of objects. For example, in the simple case of 'Progression' relation of object sizes, the connections can capture the fact that objects in later layers are smaller than objects in earlier layers. For relations such as \\\"AND\\\" in object colors, the connections can capture whether for any node in the last diagram there is node of equal color in previous two diagrams. We improved explanation in section 'Multiplex Graph Network' of the revised version.\\n\\n8. \\\"What\\u2019s a cross-multiplexing gating function? If it\\u2019s a known concept, please provide a reference else explain\\\":\\n\\n'Cross-multiplexing gating function' is not a known concept but newly introduced in this paper. As discussed in the paper, this gating function accepts a set of summarised node embeddings as input, and output gating variables for each layer of node embeddings in the set. It is 'cross-multiplexing' because each embedding in the set is 'multiplexing' other embeddings in the set with gating variables that regulate which stream of information passes through. In the revised version, we added more explanations on this gating function.\", \"additional_citations\": \"1. Santoro, Adam, et al. \\\"A simple neural network module for relational reasoning.\\\" Advances in neural information processing systems. 2017.\\n2. Andreas, Jacob, et al. \\\"Neural module networks.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.\\n3. Perez, Ethan, et al. \\\"Film: Visual reasoning with a general conditioning layer.\\\" Thirty-Second AAAI Conference on Artificial Intelligence. 2018.\\n4. Ren, Shaoqing, et al. \\\"Faster r-cnn: Towards real-time object detection with region proposal networks.\\\" Advances in neural information processing systems. 2015.\\n5. Eslami, SM Ali, et al. \\\"Attend, infer, repeat: Fast scene understanding with generative models.\\\" Advances in Neural Information Processing Systems. 2016.\"}",
"{\"title\": \"Thank you for your valuable comments (Part 1/2)\", \"comment\": \"Thank you very much for your valuable comments! In the revised version, we improved the writing, removed jargon and provided better explanations for concepts. Below are explanations for a few points mentioned in your review:\\n\\n1. \\\"Why is this approach called 'Multiplex Graph Networks'? What information is being multiplexed and how?\\\":\\n\\nThis approach is called 'Multiplex' Graph Networks because the architecture use graph neural networks with 'Multiplex' edges, which means that edges contain multiplex sub-connections that capture relations with different attributes, such as color and size. We mentioned this in the introduction with a citation for multiplex networks(Kao & Porter 2018). 'Multiplex' here means that multiple types of relations exist in a multi-layer network. This is slightly different from the concept of 'Multiplexing' in digital electronics and communications. We improved the discussion of the naming in the revised version.\\n\\n2.\\\"Once the module is run for search space reduction, the set of edges or relations (node pairs) become well-defined (in adjacent rows, columns) as well diagram subsets (edge pairs). The corresponding modules are just computing vectorial embeddings.\\\":\\n\\n The graphs exist both on a diagram level and object level. While we performed search space reduction to trim edges of the graph of 'diagrams', we still construct graphs of objects for each of the diagram subsets. This is visualized in Figure 1(b) where each object is a node, and relations are inferred between objects and embedded in the edge embeddings. Thus the corresponding modules are processing a graph of objects rather than vector embeddings. In the revised version, we improved expalanations in 'search space reduction' section to make it clearer.\\n\\n3. \\\" there is no reasoning that\\u2019s taking place. Reasoning requires tokens and grammar over such tokens which is not there in this case. The proposed model is non-interpretable.\\\"\\n\\nWhile we agree that the model lacks certain interpretability, we argue that the model can still be considered as undertaking 'reasoning'. We followed a recent line of work (e.g., Andreas et al 2016 , Santoro et al 2017 and Perez et al 2018) which use differentiable neural modules to model relations (equivalent to grammars) between entities (equivalent to tokens). While the black-box neural modules lack interpretability, they are still performing the 'reasoning' tasks such as in Visual Question Answering and in Raven Progressive Matrices. We do agree that improving interpretability is an important direction of future work for our models.\\n\\n4. \\\"The reasoning module can also be considered as another graph processing module?\\\":\\n\\nThere are two hierarchical graph levels, which are graphs of diagram subsets and graph of objects in each diagram subsets. The reasoning module can be considered as processing the graphs of diagram subsets, with each diagram subset summarized by the previous graph processing module. We agree that this statement is confusing and requires substantial explanation, and thus have removed it in the revised version.\\n\\n5. \\\"... 
we use spatial attention to iteratively attend...\\\":\\nWe use 'iteratively' here because in some work on spatial attention, such as R-CNN (Ren et al 2015) and AIR models (Eslami et al 2016), there is the idea of iteratively processing each area of attention, either with inherently iterative Recurrent Neural Nets, or with a convolutional kernel that sweeps across the images. But we agree that in practice, particularly with GPUs, the attentions are run in parallel. Thus, we removed the word 'iteratively' to avoid confusion.\\n\\n6. \\\"What do the \u2018N\u2019 nodes in each layer correspond to? They are clearly not objects or diagram primitives as they can vary in number in each diagram.\\\"\\n\\nThe 'N' nodes correspond to the number of extracted object representations in each diagram. 'N' can be both static (for CNN grid features) and dynamic (for spatial attention). For CNN grid features, there is a fixed number of locations in the feature maps, and thus 'N' is fixed and equal to H*W, where H and W are the height and the width of the feature maps. For spatial attention, as explained in Appendix A.1, 'N' can vary from diagram to diagram because the spatial attention module outputs a variable number of object representations. This is because even though the number of attended locations is fixed, for each location a binary presence variable 'z_pres' is computed indicating if an object is present at that location. In this case, 'N' is equal to the sum of all 'z_pres' variables.\\n\\nAdditional comments are addressed in the next reply.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper the authors solve for the task of Raven Progressive Matrices (RPM) reasoning. They do so by considering multiplexed graph networks. They present an architecture for the same. The basic premise is a combination of object level representation that is obtained by a method similar to region proposal and combining them with graph network. The approach uses gated graph networks that also uses an aggregation function. These are combined and result in node embeddings. Detailed analysis of the network is provided. This provides improved results over earlier WREN method. However, the performance is slightly lesser than another paper simultaneously submitted that achieves similar results. That approach uses transformer network for spatial attention while here the spatial attention is just based on object level representation.\\n\\nOver all while the contribution is useful, not much analysis is provided on the interpretability of the results. For instance, the statistics in terms of the search space reduction as to how many subsets get pruned. Further, there may be subsets of graphs that could span across rows and columns. The decision in terms of restricting the reduction to span specific rows or columns may result in pertinent nodes also being pruned. Certain aspects that relate to object level representation are not very clear. I am not fully aware about results in this specific area and that may also be a reason for the same.\\n\\nTo conclude, I believe this paper provides a useful contribution by modeling the diagrammatic abstract reasoning as a graph based reasoning approach. The multiplex graph network could be a useful component that is also relevant for other problems. The paper provides sufficient analysis to convince us regarding the claims.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a novel, feedforward, end-to-end trainable, deep, neural network for abstract diagrammatic reasoning with significant improvements over the state of the art. The proposed model architecture is reasonable and is designed to exploit the information present at multiple granularities \\u2013 at the level of objects in the diagram, their relations across diagrams, and diagram subsets. As a multimodule neural pipeline, it seems a reasonable design. Further, it shows significant performance gains over the state of the art.\\n\\nHowever, the writing quality is poor and is the primary reason for my giving it a low score. The paper is difficult to read and it\\u2019s hard to figure out the terminology and it\\u2019s grounding in the problem; the high-level abstract design and design choices that address the nature of the problem from the low level details, etc. \\n\\nThe paper uses terminology without explaining the reason for it - for example, why is the approach called \\u2018Multiplex Graph Networks\\u2019? What information is being multiplexed and how? Graphs are conceptual in the proposed approach \\u2013 there doesn\\u2019t seem to be any graph algorithms or graph based processing. Once the module is run for search space reduction, the set of edges or relations (node pairs) become well-defined (in adjacent rows, columns) as well diagram subsets (edge pairs). The corresponding modules are just computing vectorial embeddings. Similarly, there is no reasoning that\\u2019s taking place. Reasoning requires tokens and grammar over such tokens which is not there in this case. The proposed model is non-interpretable. \\n\\nThe technical writing is loose and hand-wavy. The appendix is a lot of grammatical mistakes.\", \"a_few_clarifications_may_be_helpful\": [\"\\u201cThe reasoning module can also be considered as another graph processing module\\u201d?\", \"\\u201c\\u2026 we use spatial attention to iteratively attend \\u2026\\u201d \\u2013 there is no iterative attention. It\\u2019s all parallel.\", \"What do the \\u2018N\\u2019 nodes in each layer correspond to? There are clearly not objects or diagram primitives as they can vary in number in each diagram.\", \"if interlayer connections are between objects in different layers (diagrams), what is this supposed to capture? Clearly, there may not be any unique correspondence between objects across diagrams.\", \"What\\u2019s a cross-multiplexing gating function? If it\\u2019s a known concept, please provide a reference else explain.\", \"Finally, I\\u2019m open to revising my score upwards if it turns out that I\\u2019m the only one who had difficulty with the writing. The architecture design makes sense for the addressed class of problems (though the proposed network is non-interpretable and doesn\\u2019t do any reasoning nor uses graphs or graph based processing in a meaningful way), the results are good and the experimental evaluation sufficient.\"]}",
"{\"title\": \"Comparison with parallel submission and generalization experiments\", \"comment\": \"Thank you very much for your comments. We were not aware of this paper but will definitely include it in revised version of our paper.\\n\\nWe have in fact performed the generalization experiments, as discussed in section 5.4 and also in Appendix F. In section 5.4 we have performed experiments on generalization regime of 'interpolation' and 'extrapolation'. We put results of other generaliztion regimes in Appendix F because of the page limit. If there is anything unclear in the description, please let us know and we will improve it.\\n\\nThanks again for taking time reading and commenting on our paper.\"}",
"{\"title\": \"Another ICLR 2020 submission on PGM\", \"comment\": \"It's perhaps useful to compare this paper with Paper 1456, which also shows results on PGM and Raven.\\n\\nThis paper gets slightly better numbers.\\n\\nA public comment there faults the paper for not showing results on generalization settings. I think this flaw is true for this paper as well.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes using a new version of graph networks \\u2013 multiplex graph networks \\u2013 which do object representation followed by some form of graph processing and reasoning to answer \\\"IQ test\\\" style diagrammatic reasoning, in particular including Raven Progressive Matrices that have been previously studied (a little).\", \"The paper shows very strong results on multiple datasets, much stronger than previous results (from strong groups) on these datasets. On these grounds, I believe the paper should be accepted.\", \"However, the structure and writing of the paper was very frustrating to me. The paper just didn't make much of an attempt to explain and then motivate/analyze the model used. I mean, if I were writing the paper, I would have considered and done many things, such as:\", \"shortening the introduction\", \"shortening the related work\", \"making the presentation of the datasets more succinct\", \"having only one figure that covers most of what is currently in figures 1 and 2\", \"putting details of what seem more ancillary details like the treatment of background lines objects in an appendix\", \"remove Figure 3, which didn't convey much to me in the absence of more careful explanation of the model.\", \"so that I could motivate, carefully explain, and evaluate the main model in the paper. But here, all these things fill the main text, and we're told that we have to read the appendices to understand the model.... And the presentation in the appendix is more a dump-all-the-facts presentation than a careful development of the design.\", \"Nevertheless, the general direction of the architecture seems sound, and the results look very strong, and there are even some useful ablations in the appendix.\"]}"
]
} |
SklGryBtwr | Environmental drivers of systematicity and generalization in a situated agent | [
"Felix Hill",
"Andrew Lampinen",
"Rosalia Schneider",
"Stephen Clark",
"Matthew Botvinick",
"James L. McClelland",
"Adam Santoro"
] | The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI. Here, we consider tests of out-of-sample generalisation that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room. We first describe a comparatively generic agent architecture that exhibits strong performance on these tests. We then identify three aspects of the training regime and environment that make a significant difference to its performance: (a) the number of object/word experiences in the training set; (b) the visual invariances afforded by the agent's perspective, or frame of reference; and (c) the variety of visual input inherent in the perceptual aspect of the agent's perception. Our findings indicate that the degree of generalisation that networks exhibit can depend critically on particulars of the environment in which a given task is instantiated. They further suggest that the propensity for neural networks to generalise in systematic ways may increase if, like human children, those networks have access to many frames of richly varying, multi-modal observations as they learn. | [
"systematicitiy",
"systematic",
"generalization",
"combinatorial",
"agent",
"policy",
"language",
"compositionality"
] | Accept (Poster) | https://openreview.net/pdf?id=SklGryBtwr | https://openreview.net/forum?id=SklGryBtwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"APCxqwApje",
"rkecRlD2jB",
"HkxvT5U2jH",
"HyxGIMUnoS",
"H1eAp6B2jH",
"SygQojS3sS",
"B1g6djrhiB",
"ByxW-PBniS",
"HyxG-PZooH",
"BJlsCL1siS",
"BJxmcLyioB",
"HJeNbA-qjB",
"H1xOA3DFiB",
"S1geqFcuoH",
"SkxuMtcujB",
"SJxtvO5uoB",
"BJgVAR1ziS",
"HJg8zgAbjS",
"rklIVGaWjB",
"BJe59P2Wsr",
"rJxuMyU1ir",
"Byeq7rzTYr",
"HyeAa6v9Fr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729795,
1573839058054,
1573837502602,
1573835337910,
1573834181698,
1573833627483,
1573833589478,
1573832440709,
1573750522151,
1573742290813,
1573742219245,
1573686780279,
1573645520187,
1573591431814,
1573591312057,
1573591136710,
1573154507683,
1573146637855,
1573143086198,
1573140370423,
1572982543522,
1571788066278,
1571614149585
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1684/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1684/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper studies out-of-sample generalisation that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room, and analyzes factors which promote combinatorial generalization in such environment.\\n\\nThe paper is a very thought provoking work, and would make a valuable contribution to the line of works on systematic generalization in embodied agents. The draft has been improved significantly after the rebuttal. After the discussion, we agree that it is worthwhile presenting at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"New curves were traces of replicas in each condition\", \"comment\": \"Thanks for revising this so quickly! The light curves were an artifact of the plotting library; each dark line was a mean over multiple agent replicas, which were still rendered very lightly. We have removed these and uploaded a new version.\"}",
"{\"title\": \"Noted, and thank you\", \"comment\": \"Thank you again for continued engagement with our paper. We agree that fine-grained isn't the correct term and have removed this as you suggest.\"}",
"{\"title\": \"\\\"fine-grained\\\" control\", \"comment\": \"I appreciate the revisions. I have yet to read the revised paper more carefully, but upon a quick read, I noticed that in several places you claim that your tasks require \\\"fine-grained\\\" control. I disagree with this claim. The control required in your tasks is extremely coarse (compared to controlling a robotic hand, for instance) and this in fact seems to be crucial for generalization to work in your tasks: for instance, what the agent needs to do to lift an object isn't very sensitive to the precise shape of the object. This point is explicitly acknowledged on p. 4 (last paragraph of section 3), and I appreciate that. But then references to your tasks requiring \\\"fine-grained control\\\" elsewhere in the paper become misleading. So, I would suggest that you remove those references from the paper (doing a quick ctrl+F, I could find two such instances on p. 2 and p. 3 respectively).\"}",
"{\"title\": \"Concerns resolved and change of score\", \"comment\": \"I appreciate the efforts that the authors have undertaken to address my concerns. While the paper is far from perfect, it is still a very thought provoking work and I believe that it would make a valuable contribution to the line of works on systematic generalization in embodied agents. I will be updating my score to reflect the same shortly.\", \"minor_feedback\": \"In the newly added plot in figure 5, there are four curves annotated in the legend (light green, light black, dark green and dark black). But apart from those, I can also see several other very lightly colored curves in the actual plot. Can you clarify what those are (or remove them if they are not needed)?\"}",
"{\"title\": \"Please let us know if you are happy with our resolution of your concerns\", \"comment\": \"If you have further concerns in light of the revisions that we have made, we would really appreciate knowing this while there is still time to make improvements to the paper. Many thanks again for your efforts and engagement!\"}",
"{\"title\": \"Please let us know if your concerns have been addressed\", \"comment\": \"If you have further concerns in light of the revisions that we have made, we would really appreciate knowing this while there is still time to make improvements to the paper. Many thanks again for your efforts and engagement!\"}",
"{\"title\": \"Experiments re-run and passages edited to resolve final concerns\", \"comment\": \"Thank you. Regarding your concern (1), the topic of whether neural networks (or connectionist models) can learn an adequate treatments of logical operators has a long history, which can be traced back to the PDP books (as noted in the paper), discussed at length by Steedman [1]. These treatments consider the ability learn to approximate negation symbolic inputs like logical expressions or natural language.\\u00a0Here we extend these analyses by considering a fully-situated, behavioural metric for the comprehension of negation, and consider ability to generalize as evidence of an adequate representation. To further underline why this may be of interest to the research community, consider the following proposal from Steedman (who is certainly not an avowed connectionist) for a situated\\u00a0model of language processing.\\n\\n=================\\n\\nIt is likely that such a research program would proceed by first conceptualizing primarybodily actions and sensations, then coordinating perception and primary actions likereaching, then conceptualizing identity, permanence and location of objects, first independent of their percepts, then of the particular actions they are involved in, amounting tothe internalization of the components of a stable world independent of the child\\u2019s actions.Later stages would have to include the conceptualization of more complex eventsincluding intrinsic actions of objects themselves (such as falling), translations and eventsinvolving multiple participants, intermediate participants including tools, and goals. Atthis final stage of purely sensory-motor development most of the prerequisites forlanguage learning would be established, perhaps embedded in RAAM or some otherassociative memory, and could be used to support a program of inducing a similarlylayered sequence of linguistic categories such as: deictic terms based on a proximal/distaldimension (whose central place in language development with respect to reference anddefiniteness is discussed by Lyons, 1977\\u2014cf. Freud, 1920, pp. 11\\u201316 for a revealing casestudy), markers of topic, comment and contrast, common nouns, spatial and path terms,causal verbs, modal and propositional attitude verbs, and temporal terms. It is likely thatthe semantic theory that would emerge from this work would be rather unlike anythingproposed so far within standard logicist frameworks. Such a semantics would be likely tomake us view phenomena like quantification, modality, negation, and variable-binding innew ways, within a unified theory combining symbolic and neurally-grounded levels\\\" [1, p630]\\n\\n==============\\n\\nMore practically, we started on negation because our agent was generalizing very well on other tests of systematicity, and we wanted to consider the limits of the experiential approach we advocate here. We hope that including an experiment where our best agent is imperfect may stimulate new ways to improve systematicity, either through environmental or agent-based methods; indeed, when we have shared this work with others, many have been most engaged with trying to improve on this aspect.\", \"in_order_to_express_these_sentiments_more_clearly_we_have_added_the_following_sentences\": \"\\\"Of course, the mere fact that larger training sets yield better generalisation in neural networks is not novel or unexpected. 
On the other hand, we find the emergence of a logical operator like negation in the agent in a reasonably systematic way (noting that adult humans are far from perfectly systematic (Lake et al. 2019)), given experience of 100 objects (again, not orders of magnitude different from typical human experience), to be notable, particularly given the history of research into learning logical operators in connectionist models and the importance of negation in language processing [Steedman, 1999].\\\"\\n\\n\\\"We choose to consider negation it is an example of an operator on which we found that, for our standard environment configuration, our agent unequivocally fails to exhibit an ability to generalize in a systematic way.\\\"\\n\\nRegarding your concern (4), we have re-run the experiment to save the learning curves, and put a plot of these dynamics into the final figure in the paper. As you suggest, generalization does start to take-off more quickly in the language condition. Interestingly, a large amount of training the two conditions converge on the test trials, even though a small gap remains on the training episodes. We have updated the conclusions in that section too.\\n\\nWe hope that our efforts resolve your final concerns. Thank you for your engagement with the paper; it has helped to improve it substantially. We hope you agree that it's now in shape to make a valuable contribution to the growing literature and debate on generalization and representation in embodied agents.\"}",
"{\"title\": \"Some concerns resolved\", \"comment\": \"Alright, I consider that a good explanation :)\\n\\nI have also personally given more thought as to how to separate the finer-grained effects for my concerns (2) and (3), but also couldn't devise good practical experiments to do so (without either coming up with heavily contrived situations or introducing even more new factors which would then need to be disentangled further). So at this point I will consider these concerns resolved.\\n\\nI still request the authors to please provide a good justification for how exactly to assess sec 4.1 results as being interesting enough (concern 1) and provide the required plots for concern (4).\"}",
"{\"title\": \"Paper now fully amended for consistency with new title and abstract\", \"comment\": \"Please see new manuscript\"}",
"{\"title\": \"Reason why the proposed new experiments would not resolve important questions\", \"comment\": \"We understand your views. To be clear, we are not trying to avoid running new experiments. However, we simply cannot see a way to practically disentangle the factors further, and also are a little unsure about the ecological or theoretical value of doing so (even if it were possible). To illustrate the issue we're facing, consider factors (a-c) in your reply (thank you for laying these out!). As you acknowledge, we have ruled (a) out as contributing to the effects that we observe in the appendix. What exactly is the experiment that would separate (b) and (c)? Clearly, since if the agent moves, its coordinates will move, (b) implies (c). So we need a situation in which (c) but not (b). This would require an environment in which a field of view was centred on the agent, and moved when the agent moved, but moved in a direction that was random (or not quite random?) compared with that of the agent. But what debate, scientific question or model of animal learning would this result inform us about beyond what we have already showed? For any cause and any effect, one might go in search of finer-grained causes, but it feels to us like this should only be done if there is some theory or wider reason for finding a particular level of analysis or explanation important.\\n\\nTaking a step back, relative to the other contemporary literature in this space, our results are the first empirical results that demonstrate the effects of more naturalistic environmental factors (coarse they may be) on an agent\\u2019s systematic generalization. Our results are a direct consequence of recent work exploring these questions in completely abstract domains, where even the \\u201ccoarse\\u201d factors studied here were not explored at all, or were impossible to explore by design.\\n\\nFor concern (3), we are a bit more clear on the precise question you would like us to explore. We share your view that it would be nice to somehow disentangle the effect of intentional motion and interaction over time with passively modelling a scene with a temporal aspect. However, when designing the details of the experiment to run, things very quickly get murky. How exactly should we record a passive view of two objects that was in some sense neutral with respect to the agent's policy? The only option that we can think of would be a camera that orbits the objects at a fixed distance and moves its lens to focus on the objects in question. Even if it were possible for us to set this up, it feels quite a contrived situation that has limited ecological or practical validity. Indeed, it would not definitely answer the question - one could continue to doubt the outcome by questioning the radius of the orbit or the control program that we must implement for moving the lens to fixate on the objects, or other such design decisions. Given these issues, we feel that by far the most important modification that we have made to the paper is that the claims and conclusions that we reach are now entirely aligned with the facts of the original experimental effects that we observe.\"}",
"{\"title\": \"More experiments/results needed to address my concerns\", \"comment\": \"I appreciate your response and the clarifications provided but I want to emphasize that in an experiment-focused work like this one, it is important to go the extra mile and disentangle the effects of the factors being studied to as much an extent as possible. Currently, while the claims you make are technically correct and are now also worded appropriately, the factors that are being studied are in some sense \\\"compound factors\\\" and have not been disentangled appropriately.\\n\\nFor instance, for my concern 2), a frame always centered on an agent in the 2D grid world is a frame which:\\n(a) masks some information far away from the agent,\\n(b) is fixed relative to the agent's coordinates, and\\n(c) moves as the agent moves.\\nBy looking at the results in the paper, any reader would agree that such a frame improves generalization, but it is still unclear in what proportion do the above three features contribute in improving the generalization performance. I acknowledge your efforts in performing control experiments to study the masked information part (partial observability), but the effects of motion still remain to be disentangled from others via appropriate control experiments.\\n\\nSimilarly for concern 3), just choosing to change the section names to be technically correct, leaves the contribution of random motion vs intentional motion unclear and thereby leaves significant room for improvement in the paper.\\n\\nFor concern 1), I understand that negation is a harder operator to generalize over since it is significantly non-compositional, but since we agree that using more training data improves generalization, is it really that surprising/interesting if the generalization on the negation operator improves somewhat? The test accuracy doesn't seem to be increasing all the way to the training accuracy which re-affirms the fact that negation can be hard to generalize on, but how does one assess that the amount of generalization observed by including more words in training was more interesting than one would have expected for other operators? Lacking this assessment, I'm still somewhat unsure about the utility of Sec 4.1. At least a clarification about why this section is interesting is needed to understand the contribution of this section.\\n\\nFor concern 4), I am not misunderstanding your sentence or trying to interpret it out of context. My concern is that language benefits generalization in ways that cannot really be explored in the context of current work (more details in my first post on this). While I understand the limited context in which you are experimenting for the role of language here, I also understand that the task is solvable with/without language. In other words, it is not surprising for an RL agent to learn an optimal policy even without language, given enough frames/trajectories for training (disregarding trajectories where the agent chooses the first object incorrectly). However, is it possible to show how much training experience was required (in terms of frames and/or trajectories) before the language-based and vision-only agents achieved this level of generalization? Did the use of language commands increase or reduce the amount of experience required to generalize to the extent shown? 
Providing this information would be very useful since it would let the reader know if language helped speed-up (or slow-down) convergence to the final generalization performance in terms of the amount of experience required, while not necessarily effecting the performance level at convergence.\\n\\nOverall, I really like the effort that the authors have undertaken to perform the initial experiments and to re-write certain sections of the paper. But there is currently significant scope of improvement in the paper in terms of the experiments performed. I would be happy to accept the paper once the above concerns have been addressed.\"}",
"{\"title\": \"Please note; paper still subject to improvement\", \"comment\": \"We just wanted to make clear, we have not quite finished revising the paper yet. By the end of the rebuttal period we will have comprehensively re-written the introduction to remove the focus on systematicity, in line with the new abstract and title.\\n\\nPlease also note that the title in Open Review (above) is no longer the title of the paper, but we cannot change that here.\"}",
"{\"title\": \"Please verify that your concerns have been addressed.\", \"comment\": \"Thank you for your review! We hope your main concerns are mitigated by the reframing described above. We have also thoroughly edited the remainder of the text thoroughly to make sure no other claims could be misconstrued in ways that you describe. We won\\u2019t list all minor edits here, but, as an example, in Section 1.1 we have added the sentence. Please also take a look at the revised manuscript. \\n\\n\\\"Given that human reasoning is often not perfectly systematic (O.Reilly et al, 2013), here, we consider systematicity to be a question of degree, rather than an absolute characteristic of an intelligent system.\\\"\\n\\nAnd in Section 5, we have modified a sentence into the following: \\n\\n\\\"We also emphasize that our agent in no way exhibits complete systematicity. The forms of generalization it exhibits do not encompass the full range of systematicity of thought/behaviour that one might expect of a mature adult human, and that none of our experiments reflect the human ability to learn (rather than extend existing knowledge) quickly (as in, e.g Lake et al, 2015).\\\"\\n\\nWe have also added a table with the action set to the appendix, and a short passage describing the implications of the action set (e.g. what behaviour is specifically required to lift and place an object).\"}",
"{\"title\": \"Please verify that your concerns have been addressed\", \"comment\": \"Thank you for your review! Please take a look at the revised manuscript to verify that your concerns have been addressed.\\n\\n1) The more objects we use, the longer the agent takes to learn the training task so we didn\\u2019t work with all of them. Comparing the 4.2 and 3 we see that generalization on the \\u2018putting\\u2019 task is better with more objects in the training set (Fig 2b vs Table 2). We are sure that the effect would only be stronger if we ran the experiment with more objects than currently in 3 (poor performance if the number of objects involved during training is large is certainly not a failure mode of this agent!). The objects in the train/test set were chosen at random.\\n \\n2) We agree that 4.1 shows that that *for a given size of test set* increasing the size of the training set improves generalization (or, equivalently, increasing the train-to-test ratio). We have edited two sentences in the paper to make this clear, and to link to the passage in Bahdanau et al. 2018. The experiments do not provide information about how generalization changes when the ratio stays the same but the size of both increases; do you have a link to literature / theory for why this is an interesting thing to investigate? \\n\\n3) We agree that our experiments in 4.2 say nothing about whether the effects of what we call \\u201cego-centric\\u201d perspective rely on the camera being centred on (rather than just tied to) the agent, nor if the effect might be the same if the camera was just moving randomly. Indeed, due to the spatial invariance afforded by the convolutional architecture, it's likely that the centering is less important than the fact that the agent is in some consistent location in the visual input. We cannot say if the effect might be the same if the camera were moving randomly, but this does not seem to have the same ecological validity or intuitive basis for investigation as an egocentric or allocentric frame of reference. We have added a sentence which should hopefully temper these claims: \\u201cThis suggests that the visual invariances introduced when bounding the agents perspective, which in our case was implemented using an ego-centric perspective, may improve an agent's ability to factorise experience and behaviour into chunks that can be re-used effectively in novel situations.\\u201d We have also changed the title of the section to simply \\u201cVisual invariances in agents' perspectives\\u201d rather than \\u201cEgocentric frame of reference\\u201d, and the appropriate sections in the abstract (e.g.,\\u201d...the visual invariances afforded by the agent's perspective, or frame of reference\\u201d)\\n\\n4) We agree that the experiment does not allow distinction between interaction (RL) and merely learning from a video. To make this clearer, we have changed the term \\u201cActive perception over time\\u201d to \\u201cTemporal aspect of perception\\u201d. Our 3D environment does not have the functionality to enable the control experiment that you describe (e.g. guaranteeing objects in view), and it would be very hard to control for the various factors at play (the length and quality of the sequences of frames and interaction with the world are essentially entangled in some way). We would like to explore this question further in future projects, but for now we will make the conclusions tighter. 
\\n\\n5) We follow Fodor and Pylyshin\\u2019s definition of systematicity in terms of what \\u2018thoughts\\u2019 can be \\u2018understood\\u2019. As we understand it (see discussion above), systematicity is not defined in terms of internal representations (some work has argued that disentangled representations should lead to better systematicity, but this has not been conclusively shown empirically as far as we know). We make no claims about the internal representations of our agent in this work. In any case, thorough analysis and interpretation of internal representations in models is very hard (an active research area) and beyond the scope of this experimental study. \\n\\n6) Yes, well spotted - it should only be in the test set. Amended\\n\\n7) Noted and amended\\n\\n8) Noted and amended\"}",
"{\"title\": \"Please verify that your concerns have been addressed in the revised manuscript.\", \"comment\": \"Thank you for your review! Please take a look at the revised manuscript to verify your concerns have been addressed.\\n\\n1) You are right that the main finding of 4.1 is should not be surprising (this is why we wrote \\u201cthe fact that larger training sets yield better generalization in neural networks is not novel or unexpected\\u201d in the section). However, we find that the context in which it is shown (negation, a problem with a long history in neural net research, and an operator which is, in some sense, maximally non-compositional) is interesting, to us at least. \\n\\n2) We agree that the experiments in 4.2 say nothing about whether the effects of what we call \\u201cego-centric\\u201d perspective rely on the camera being centred on (rather than just tied to) the agent, nor if the effect might be the same if the camera was just moving randomly. All we claim is that \\u2018if the window is centred on the agent (or the agent has first person perspective in 3D) then generalisation improves\\u2019. We will add a sentence to make this clearer.\\n\\n3) You are correct that the experiment does not allow distinction between interaction (RL) and merely learning from a video (see also our response to Reviewer 2). To make this clearer, we have changed the term \\u201cActive perception over time\\u201d to \\u201cTemporal aspect of perception\\u201d. We would like to explore this question further in future projects / when possible. \\n\\n4) We agree that the conclusion that you cite as \\\"language is not a large factor (and certainly not a necessary cause) of the systematic generalisation...\\\" would be entirely unwarranted based on this experiment. However, the complete sentence from which those words are taken reads \\\"While not conclusive, this analysis suggests that language is not a large factor (and certainly not a necessary cause) of the systematic generalisation that we have observed emerging in other experiments.\\\" We don\\u2019t think this is a hasty or unwarranted conclusion given the evidence, but would be happy to discuss further. To remove any room for doubt about this, we have changed it to \\\"While not conclusive, this analysis raises the possibility that language may not be playing a significant role (and is certainly not the unique cause) of the systematic generalisation that we have observed emerging in other experiments\\\"\\n\\nWe have also added a description of the action set to the appendix and fixed up Figure 2.\"}",
"{\"title\": \"Systematicity may not be a binary property\", \"comment\": \"While I am not an expert at psychological texts on systematicity, a point that I wanted to discuss was if systematicity is indeed believed to be a binary property. As I understand it, systematic generalization is not understood to be a purely binary concept and can develop/improve over time. This is also expressed in prior work in psychology [Vygotsky, 1987] which discusses young kids' inability to generalize and often hold \\\"spontaneous concepts\\\" (which can be self-contradictory when applied to different situations). As kids grow over time and are subjected to rich and diverse inputs from their environment, systematic generalization slowly emerges and improves over the childhood years. So it might be just fine to retain the word systematicity, unless it is being a cause of confusion to a majority of readers.\\n\\n[Vygotsky, 1987] Vygotski, Lev Semenovitch, Robert W. Rieber, Aaron S.. Carton, and Lev Semenovitch Vygotski. The Collected Works of LS Vygotsky: Problems of General Psychology. Plenum Press, 1987.\"}",
"{\"title\": \"Environmental drivers of generalization in a situated agent\\n\\nThe question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI. Here, we consider tests of out-of-sample generalization that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room. We first describe a comparatively generic agent architecture that exhibits strong performance on these tests. We then identify three aspects of the training regime and environment that make a significant difference to its performance: (a) the number of object/word experiences in the training set; (b) the invariances afforded by a first-person or agent-centric perspective; and (c) the variety of visual input inherent in the perceptual aspect of the agent\\u2019s perception. Our findings indicate that the degree of generalization that networks exhibit can depend critically on particulars of the environment in which a given task is instantiated. They further suggest that the propensity for neural networks to generalize in systematic ways may increase if, like human children, those networks have access to many frames of richly varying, multi-modal observations as they learn.\\n\\n======================\\n\\n\\nFor what it's worth (and at risk of descending into the weeds with an old and sticky debate) it's clear from the original papers that Fodor and Pylyshyn were making an empirical observation (i.e.., the fact that we can understand Mary loves John if we understand John loves Mary is an observation about humans that led to the hypothesis that our models should be systematic in a similar way). F&P do not set out a rigorous definition beyond this intuition that we treat \\u201csemantically similar contents\\u201d in \\u201csimilar ways\\u201d. One can quickly see that their empirical observation is not without controversy, and indeed, it may ultimately not be representative of a universal phenomenon of cognition. For example, there are numerous known counter-examples, e.g., from the Stanford Encyclopedia of Philosophy: do those who understand \\u201c\\u2018within an hour\\u2019 and \\u2018without a watch\\u2019 also understand \\u2018within a watch\\u2019 and \\u2018without an hour\\u2019\\u201d? Moreover, it is not entirely clear what \\u201csemantically similar\\u201d contents are, whether these must be learned, or are to somehow be known a priori (who determines them, if so?). \\n\\nSo, to treat systematicity as an abstract binary property that can be \\u201cattained\\u201d by a model may not be a view that appropriately considers the controversy behind it. We instead take the view that there may be a lot of important factors that contribute to whether *we observe* a system to be systematic. Enumerating these factors, and exploring the ways in which models appear to be systematic when these factors exist is an important empirical research topic. Examples of such factors may be previous learning, the surrounding context, implicit knowledge, and so on.\", \"comment\": \"Thank you for your rapid engagement. We have thought about it and, while we believe the work to address and contribute to the wider systematicity debate, we are happy to adopt your recommendation to remove mention of systematicity (excepting the final sentence, where the connection to our results is very indirect, below) in the title and abstract. 
We can ensure similar amendments in the paper itself:\\n\\n===================\", \"we_also_note_a_very_relevant_quote_from_the_stanford_encyclopedia_entry_on_this_topic\": \"\\u201c\\\"Jansen and Watter note however, that the sensory-motor features of what a word represents are apparent to a child who has just acquired a new word, and so that information is not off-limits in a model of language learning. They make the interesting observation that a solution to the systematicity problem may require including sources of environmental information that have so far been ignored in theories of language learning. This work complicates the systematicity debate, since it opens a new worry about what information resources are legitimate in responding to the challenge. However, this reminds us that architecture alone (whether classical or connectionist) is not going to solve the systematicity problem in any case, so the interesting questions concern what sources of supplemental information are needed to make the learning of grammar possible.\\\"\"}",
"{\"title\": \"why not remove \\\"systematicity\\\" altogether?\", \"comment\": \"Thanks for the clarification. I think the problem is that a lot of people (most people?) understand systematicity to be a binary, all-or-nothing property (myself included). This is certainly how Fodor & Pylyshyn (1988) originally defined the concept: \\\"... the ability to produce/understand some sentences is intrinsically connected to the ability to produce/understand certain others.\\\" It is clear from the context what they have in mind is some kind of logical implication: if you understand \\\"lift X\\\" and \\\"find Y\\\", you cannot fail to understand \\\"lift Y\\\" (this is also why I suggested more rigorously \\\"proving\\\" systematicity in my review (point 5) instead of trying to infer it from a limited set of experiments). So I would personally prefer that you didn't use the word \\\"systematicity\\\" at all. To me, what you demonstrate in the paper is more accurately described as simply improved generalization or improved out-of-sample (or out-of-distribution) generalization.\"}",
"{\"title\": \"Environmental drivers of systematicity in a situated agent\\n\\nThe question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI. Here, we consider tests of systematicity that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room. We first describe a comparatively generic agent architecture that exhibits strong performance on these tests. We then identify three aspects of the training regime and environment that make a significant difference to its performance: (a) the number of object/word experiences in the training set; (b) the invariances afforded by a first-person or agent-centric perspective; and (c) the variety of visual input inherent in the perceptual aspect of the agent\\u2019s perception. Our findings indicate that the degree of systematicity that emerges in neural networks can depend critically on particulars of the environment in which a given task is instantiated. They further suggest that the propensity for neural networks to behave in systematic ways can increase if, like human children, those networks have access to many frames of richly varying, multi-modal observations as they learn.\\n\\n\\n======================\\n\\nPlease let us know whether, with changes along these lines would resolve your concerns around this issue? We intend to answer and amend the finer points of the reviews as well, but felt it important to try to reach consensus on this central issue first.\", \"comment\": \"Thank you for your thoughtful reviews. Based on these comments, we believe we can move towards a version of the manuscript that would be more acceptable for you. Before responding to the finer points, we hoped to discuss what we think is the single clearest perceived limitation of our work. This involves the idea that our agent exhibits \\u2018complete\\u2019 systematicity (forgive loose terminology here) -- i.e. we account for all of the ways in which a human might generalise in a systematic way. To be clear, this is certainly not a claim we intended to make. In particular, we consider systematicity to be a question of degree rather than absolute, and the strongest claim we intend is that in several very specific circumstances we observe an agent exhibiting *more* systematic behaviour than in other circumstances. That is, we have revealed a relative difference in systematicity across training conditions. We tried to be explicit about this in the final paragraph with the sentence:\\n\\n\\u201cWe also emphasize that our results in no way encompass the full range of systematicity of thought/behaviour that one might expect of a mature adult human, and that none of our experiments reflect the human ability to learn quickly\\u201d\\n\\nHowever, we have the impression that the title, abstract and intro are the main contributors to this misunderstanding. If you agree, we therefore propose changing the title and abstract as follows (and will edit the introduction along the same lines).\\n\\n===================\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This work studies factors which promote combinatorial generalization in a \\\"neural network agent\\\" embodied in a 3d simulation environment. The authors present interesting experiments and some insightful empirical findings on how a richer environment and a first-person egocentric perspective can aid a simple neural net to generalize better over previously unseen tasks. While I truly commend the effort undertaken to perform the experiments, I have several concerns which I explain below and would be happy to raise my score if they can all be addressed satisfactorily:\\n\\n1) While the authors interpret the experiment results in sec 4.1 in a positive way, the results don't seem to necessarily indicate good systematic generalization. For instance, after learning with 40 words the agent only achieves 60% test accuracy. While the accuracy increases to 78% on training with 100 words, the training and test accuracy gap indicates that the performance is still far from any kind of systematic generalization. The results instead seems to be hinting that neural nets don't indeed perform combinatorial generalization on their own, but can be forced towards it by supplying them huge amounts of diverse data (which is not true for humans). Also, the fact that increasing the number of words helps in generalizing better is true for most ML models and does not come as a surprise. So the results in this subsection are somewhat trivial and do not necessarily contribute any new understanding.\\n\\n2) For the experiments regarding egocentric frame in sec 4.3, I feel that the results are not really conclusive (even including the control exps in appendix D). Could it be that if one uses any frame rigidly attached (i.e. fixed displacement and rotational coordinates) to the agent's egocentric frame, one would achieve the same generalization performance? It is also possible that as suggested by authors in sec 4.4, it is just the motion of the egocentric frame which might be giving diverse views of the environment to the agent. So the frame might not even need to be egocentric, but just a moving frame which gives richer and diverse views whenever the agent moves. Please include experiments to test for these possibilities.\\n\\n3) In section 4.4, the authors have trained the non-embodied classifier with just a single image frame. But this does not necessarily justify the conclusion that active perception helps in generalization. This is because the motion of the RL agent gives it both a varied set of views AND also control over what views to obtain by taking actions. In order to better understand which of these factors (or perhaps both) aid in generalization, another set of experiments is required which shows the classifier agent more images while keeping the desired object in view. In one experiment, these images should be chosen with random movements but the number of such images provided to the classifier should be increased in sub-experiments to gauge if giving more varied views bridges the performance gap between the classifier and the RL agent's generalization performance. 
In a second experiment, one might want to first train the RL agent, then extract a few (say 10) frames out of its enacted policy for all pairs of objects and use these frames as a part of the training set for the classifier agent. This would allow one to gauge if both varied views and actively selecting to interact with the environment can help bridge the generalization gap.\\n\\n4) Lastly, sec 4.5 seems to be hinting at a potentially very incorrect conclusion: \\\"language is not a large factor (and certainly not a necessary cause) of the systematic generalisation...\\\". This cannot be said from the small single experiment presented in sec 4.5. For instance, that experiment has been devised in a way that an optimal policy can be found with/without language. However, if a language input is provided to explicitly state the desired object, that might speed up the training of the RL agents significantly. In such a case, it might be helpful to see if learning the policy with the language input is being accomplished with a much lower number of frames during training, as opposed to when no language input is provided. Please provide the training error plots. But regardless of the plots, the experiments can still be quite inconclusive since language helps in systematic generalization in a variety of other ways apart from what has been tested for. In general, language starts helping humans once it has been acquired to a sufficient extent since one needs noun-concept linkages, verb-action linkages etc. to have been acquired a priori before the benefits of language emerge in combinatorial generalization. Training an LSTM to understand the language commands in tandem with learning policies for picking desired objects could lead to sub-optimal or heavily over-fitted language models which may not help in generalization. Testing for the true role of language will require many more experiments, which may be somewhat out of scope for this paper given the space constraints for a single paper. But, I would advise the authors to refrain from drawing hasty inferences about the role of language without thorough experimentation.\", \"minor_issues\": \"1) What are the 26 actions in the Unity 3D environment in section 3? It is important to know the action space to understand how easy or hard it is for the agent to learn generalizable policies.\\n2) The x-axis of Figure 2 is not readable at all. Please rectify those graphs and reduce the number of ticks.\\n\\n\\n-------------------------- Update after interaction during author feedback period -------------------------------\\nI appreciate the efforts that the authors have undertaken to address my concerns. While the paper is far from perfect, it is still a very thought provoking work and I believe that it would make a valuable contribution to the line of works on systematic generalization in embodied agents. I am updating my score to reflect the same.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"=============================== Update after revisions =====================================================\\n\\nIn my initial review, I had raised some issues with the interpretation of the results and suggested some control experiments to tighten the conclusions. The authors chose to weaken their initial claims by rephrasing their conclusions instead. I understand that there may not have been enough time to run many of the experiments I suggested, but I still think they are worth considering for the future. I'm mostly satisfied with the rephrasing of the conclusions in the revised paper, so as promised, I'm happy to increase my score and recommend acceptance. \\n\\nI spotted several typos in the revised paper, however: section 4.1: \\\"we choose to consider negation ...\\\", p. 5: \\\"for for ...\\\", a citation on p. 5 is not compiled correctly. There may be more. For the final version please make sure to go through the paper thoroughly a couple of times and fix all the typos.\\n\\n========================================================================================================\\n\\nThe authors present a systematic study of generalization in agents embedded in a simulated 3d environment. I think there are some interesting results in this paper that might be useful for people to know about. I appreciate the thoroughness of the experiments, in particular. I have, however, some issues with the interpretation of several of the main results. I would be happy to increase my score if we can resolve some of these issues. Here are my main concerns:\\n\\n1) In the experiments in section 3, only a limited test set is used. How is the train/test split decided in these experiments? Table 6 suggests that you have a much larger repository of objects. Why not use all possible objects in the test set? It is a bit premature to declare your results as systematic generalization if you can\\u2019t show that it actually works for a much larger set of test objects (ideally for all possible objects). \\n\\n2) Section 4.1: in these experiments, the training set size is increased, but the test set size is kept constant (and small), so the train/test size ratio also increases. So, an alternative explanation of the results in this section is that the model behaves largely according to visual similarity and as the training set size is increased, it becomes easier to find a training set object that is visually similar to any test set object. I think the authors should run an experiment where both training and test set sizes increase by the same amount so that the train/test set size ratio stays constant. If the model can\\u2019t achieve systematic generalization in that case, it would be wrong to conclude, as the authors do now, that increasing the training set size itself improves systematic generalization. The correct conclusion would rather be that increasing the train/test size ratio improves generalization, which is a weaker conclusion. Please note that the results in this section are quite similar to those in Lake & Baroni (2018) and in Bahdanau et al. (2018) (see their Figure 3). Bahdanau et al. (2018), for example, also show that increasing train/test set size ratio (their \\u201crhs/lhs\\u201d ratio) improves generalization in generic neural networks. 
It is interesting to note, however, that neither Lake & Baroni (2018) nor Bahdanau et al. (2018) interpret these results positively (i.e., these results don\\u2019t show systematicity), whereas the current paper seems to put a more positive spin on essentially the same result. I think these earlier results should be explicitly discussed here and the authors should justify why they are interpreting the results differently (if they are). It should also be noted that in the real world the train/test size ratio for humans is presumably very small, perhaps zero (given the compositional abilities of humans).\\n\\n3) Section 4.3: I don\\u2019t think the results in this section are sufficient to establish the egocentric frame per se as the key factor. One possibility is that perhaps the frame doesn\\u2019t have to be centered on the agent, but as long as it has some systematic relationship to the agent\\u2019s location (for example, the center of the visibility window could be some distance away from the agent, and the agent itself may or may not be inside this window), that\\u2019s good enough to get generalization improvements. An even weaker possibility is that simply a moving frame is enough for improved generalization. In this case, the reference frame doesn\\u2019t even need to have a systematic relationship to the agent\\u2019s location. For example, the frame could be relative to a fictitious agent that randomly explores the environment. I think the authors should run some experiments to rule out these possibilities if they want to claim that the egocentric frame itself is responsible for generalization improvements. \\n\\n4) Section 4.4: In the experiments in this section, I think there are two relevant factors that need to be better disentangled: 1) the number and variability of image frames experienced by the two models; 2) the active perception aspect (the fact that the agent interacts with the environment and affects its own perceptual experience in one case). The authors claim the second factor as the key aspect enabling better generalization, but 1) is equally likely (this would be more in line with a standard data augmentation type result). A good control experiment here would be to not just use the first frame but a larger number of more variable frames for training the non-situated agent (for example, one can use image frames that would be seen by a camera that more or less randomly moves in front of the objects perhaps with the constraint that both objects are always at least partially visible). If the classification model generalizes as well as the situated agent in this control condition, you cannot claim active perception as the key factor.\\n\\n5) As a more general point, it\\u2019s a bit frustrating to have to judge systematic generalization by only looking at the results of some limited set of experiments. How do I interpret the results if the agent achieves only 84% accuracy in some experiment (as opposed to 100%)? It would be much better if the authors could somehow more rigorously prove systematicity. 
Here, I don\\u2019t necessarily mean \\u201cprove\\u201d in a mathematical sense, but just analyzing the learned representations a bit more rigorously and being able to say something along the lines of: here\\u2019s exactly how the trained model represents \\u201clift\\u201d; because of reason X, Y, Z, this representation is completely disentangled from all object representations in the dataset (and ideally from all possible object representations, because that\\u2019s really what true systematicity entails, although I highly doubt that any generic model of the type studied in this paper will be able to achieve this, regardless of the amount and type of input it receives).\", \"more_minor_issues\": \"6) In Table 5, \\u201ctable lamp\\u201d appears both in training and test sets. Is this a typo?\\n\\n7) Some results are presented in the appendix without any mention in the main text (Appendix D. 2). I think this is not a good practice in general. In the main text, please make sure to mention, however briefly, every result that appears in the appendix (something along the lines of \\\"This result could not be explained by confound X or Y (Appendix Z)\\\" would suffice).\\n\\n8) Font size in Figure 2 is tiny (axis labels are impossible to read), please make it bigger. You don\\u2019t need that many ticks on the axes.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies systematic generalization in a situated agent. The authors examine the degree to which various factors influence systematic generalization, including 2D vs. 3D environments, egocentric vision, active perception, and language. The experiments reveal that the first three factors, but not language, promote systematic generalization. The experiments are well-done and worthwhile, and identifying the key factors that affect generalization is a strength of the paper.\\n\\nI have two main criticisms. First, the model's abilities for systematic generalization are overstated. Second, critical details about the experiments are omitted that make them difficult to evaluate.\\n\\nLet's start with the abilities of the model. The title of the paper is \\\"Emergent systematic generalization in a situated agent,\\\" which of course implies that the agent has \\\"systematic generalization.\\\" The authors go on to say, in the abstract, that \\\"we demonstrate strong emergent systematic generalisation in a neural network agent\\\". The results, however, fall short of these statements.\\n\\nThe strongest results pertain to generalizing a highly-practiced action such as \\\"lifting\\\" or \\\"putting\\\" to novel objects. In this case, highly-practiced means that the actions have been trained on 31 unique objects for millions of steps. However the paper does not study whether or not the agent can learn a novel action (e.g. \\\"lifting\\\" or \\\"putting\\\" with only a few examples) and generalize it systematically to familiar objects. Nor does it study whether novel actions can be combined systematically in new ways using relations and modifiers such as \\\"finding the toothbrush ABOVE the hat\\\" or \\\"finding AND putting\\\" or \\\"putting to the right of.\\\" Benchmarks for systematic generalization such as SQOOP and SCAN include these types of generalizations, and an agent with systematic generalization should handle them as well. To be clear, I don't think it's necessary to add additional experiments to the paper, but the current results should not be overstated in their generality.\\n\\nEven within the reported experiments, the results suggest that systematicity is lacking in several places. In the negation task, where chance is 50% accuracy, the agent achieves only 60% correct after learning from 40 unique words and 78% performance with 100 unique words (doesn't systematic generalization imply 100%?) For the putting tasks, the agent achieves 90% correct in one experiment (section 3.2) and then only achieves 63% correct in another (section 4.2). Again, the generalization abilities seem far from systematic.\\n\\nCritical details about the action space and the simulation parameters are needed. The action-space has 26 actions, but the paper does not say what these actions are. 
These details are crucial to understanding what is required to generalize \\\"lift\\\" or \\\"put\\\" to new objects -- instead the paper only says that \\\"in particular the process required to lift an object does not depend on the shape of that object (only its extent, to a degree)\\\" and that \\\"shape is somewhat more important for placement\\\" compared to lifting.\\n\\nI would consider updating my evaluation if the authors make revisions to ensure that the evidence supports their conclusions. The paper's title should also be supported by the findings; to offer a suggestion, something like \\\"Richer environments promote systematic generalization in situated agents\\\".\\n\\nOther suggestions\\n- The axis on Fig. 2 is too small to read. Also, it should be mentioned in the text that the network is trained for 100 million+ steps (also, what is a step? how many episodes was it trained for?)\\n- The number of objects in sets X_1 and X_2 is important and should be mentioned in the main text.\\n\\n------\\n\\n** Update to review **\\n\\nThanks for your response to my review. It's clear that the authors have made considerable effort to improve the paper. In particular, the revised title, abstract, and introduction now more accurately reflect the contributions of the paper. It's not perfect, but the paper is improved and I revised my score accordingly.\\n\\nWhile it did not affect my final score, not all my suggestions were incorporated and I hope the authors will make further improvements in their revisions. The number of objects in sets X_1 and X_2 (Sections 3.1 and 3.2) is not mentioned and is tucked away in the appendix; this should be in the main text. Thanks for providing the list of 26 actions, but it's still not completely clear what makes a successful \\\"lift\\\" or \\\"put\\\" in terms of the sequence of actions. Finally, rather than simply saying that your agent \\\"in no way exhibits complete systematicity\\\" (Discussion), I hope the authors will expand on this and discuss the limitations of their experiments, and the kinds of systematicity not addressed, which could be the focus of future work.\"}"
]
} |
Skgfr1rYDH | SoftAdam: Unifying SGD and Adam for better stochastic gradient descent | [
"Abraham J. Fetterman",
"Christina H. Kim",
"Joshua Albrecht"
] | Abstract Stochastic gradient descent (SGD) and Adam are commonly used to optimize deep neural networks, but choosing one usually means making tradeoffs between speed, accuracy and stability. Here we present an intuition for why the tradeoffs exist as well as a method for unifying the two in a continuous way. This makes it possible to control the way models are trained in much greater detail. We show that for default parameters, the new algorithm equals or outperforms SGD and Adam across a range of models for image classification tasks and outperforms SGD for language modeling tasks. | [
"Optimization",
"SGD",
"Adam",
"Generalization",
"Deep Learning"
] | Reject | https://openreview.net/pdf?id=Skgfr1rYDH | https://openreview.net/forum?id=Skgfr1rYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BDotz4oli",
"rkl8RuKjir",
"HJlSAwFoir",
"ByxbUvYoiS",
"S1ghLD70FH",
"ByeabQxpFS",
"rkx2eAItFH",
"Bylf0S_7KB",
"Skek3-qxYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798729766,
1573783757682,
1573783500529,
1573783369097,
1571858259541,
1571779333506,
1571544564076,
1571157449710,
1570967975309
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1683/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1683/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1683/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1683/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1683/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1683/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1683/Authors"
],
[
"~Hao_Jin1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers all agreed that the proposed modification was minor. I encourage the authors to pursue in this direction, as they mentioned in their rebuttal, before resubmitting to another conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to review #3\", \"comment\": \"Thank you very much for the detailed feedback. Overall, we want to note that the algorithm significantly outperforms Adam and even outperforms SGD in computer vision tasks. The changes here have a significant effect on the generalization performance and constitute novel research.\", \"below_are_responses_to_your_points\": \"- \\\"There is many modifications proposed, but most are secondary corrections for stability, such as the warm-up schedule with the redefinition of beta_2. These modifications could as well be incorporated in Adam without the core modification that is the smoothing presented in section 3. These additional modifications also make it difficult to measure the importance of the core contribution. Without getting rid of them, an ablation study on toy problems (even synthetic data) would be necessary for a better understanding.\\\"\\n- \\\"In section 3.1, the temporal definition of beta_2t is integrating a warm-up. While the reason for doing so is supported in introduction of section 3, the effect of this modification should be weighted against no warm-up, and also compared with its effect on Adam.\\\"\\n\\nThe warmup schedule itself has been studied fairly extensively, especially recently by Liu (2019) and Ma (2019). The modification to the beta_2 for debiasing does not significantly impact the performance and is only used to make the algorithm better align with the theoretical warmup schedule\\u2014using the traditional Adam m_t debiasing does not have any impact on the results. We added a comment to this effect.\\n\\nWe also updated the results to specifically use AdamW, which is more comparable to our algorithm.\\n\\n- \\\"There is an error in algorithm 1. The last element of the last line (Perform the update) should be \\\\alpha (1 - \\\\eta) m_t/d_t. The code in Appendix corroborates this correction. Minor related note, the use of d_t to define the denominator of what d_i represents in section 3 is very confusing. I would suggest to use the ratio notation of d_i from the equations in the algorithm for coherence.\\\"\\n\\nThis is correct, thank you for the suggestion. We have adjusted the notation based on your feedback.\\n\\n- \\\"If we get pass the warm-up scheduling, by massaging the equation we get that the algorithm is different from Adam on 2 points, 1) the bias are not corrected and 2) the denominator sqrt(v) + epsilon is replaced by sqrt(v) + sqrt(mean(v) + epsilon^2). I have difficulty convincing myself that smoothing by the average is solving the issues raised in the paper and there is no experiments to study its effect directly.\\\"\\n\\n(1) the second order bias is corrected, just in a different way. As mentioned before, there is no practical difference between the debiasing methods. \\n\\n(2) In addition to changing the denominator to sqrt(v) + eta * sqrt(mean(v) + epsilon^2), there is also a scaling of the learning by 1 + eta * sqrt(mean(v)), which changes the implicit learning schedule created by Adam to be like SGD. This is what allows the Adam-like limit of eta\\u2192 infinity and the SGDM limit of eta\\u21920. We intend future studies that will dis-entangle these two effects (learning rate schedule versus smoothing direction). 
The purpose of this paper is to share the algorithm that allows such a study in the first place, which is unique in this regard.\\n\\n- \\\"The experiments are on 3 datasets, but only the computer vision ones are run on multiple architectures...\\\"\\n\\nWe regret we cannot train on every architecture, but we focused on representative samples for different problems. We have added a comment that the models have different initializations and the hyper-parameters are tuned separately for each algorithm. More runs have been generated to provide standard deviations for the mentioned parameters.\\n\\nMinor comments have been addressed in the updated draft. Note both z and \\\\xi_i are vectors, so the notation is already correct.\"}",
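For concreteness, the SoftAdam-style update discussed in this thread can be sketched as follows. This is a minimal, hedged NumPy reconstruction taken literally from the formulas quoted above (denominator sqrt(v) + eta * sqrt(mean(v) + eps^2), step rescaling 1 + eta * sqrt(mean(v))); it is not the authors' released code. Note that with these literal formulas, small eta recovers the Adam-like per-coordinate step while large eta approaches plain momentum SGD, so the paper's eta convention may be the reverse of what is shown here.

    import numpy as np

    def softadam_step(p, g, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eta=1.0, eps=1e-8):
        # Sketch only: interpolates between a per-coordinate Adam-style step and a
        # uniformly rescaled SGD-with-momentum step via eta (convention assumed).
        m = beta1 * m + (1 - beta1) * g        # first-moment EWMA (momentum)
        v = beta2 * v + (1 - beta2) * g * g    # second-moment EWMA
        v_bar = np.sqrt(v.mean() + eps ** 2)   # shared smoothing term from the thread
        denom = np.sqrt(v) + eta * v_bar       # smoothed per-coordinate denominator
        scale = 1.0 + eta * v_bar              # step rescaling quoted in the response
        p = p - lr * scale * m / denom
        return p, m, v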
"{\"title\": \"Response to review #2\", \"comment\": \"Thank you for your feedback. Here is a response to your points:\\n\\n- The novelty of this work is limited....\\n\\nThank you for the feedback. We intend future studies that will provide more explanation of why softAdam outperforms other existing algorithms. This is not a trivial problem to answer, as we do not know why SGD and Adam generalize well in the first place. Although theoretical guarantees are not proven in this paper, it can be seen by inspection of the algorithm that the step size is always smaller than the SGD step size, and so the SGD convergence guarantees will be held.\\n\\n- The theoretical analysis about existing adaptive methods provides nothing new....\\n\\nThe theoretical analysis of the quadratic model here provides intuition for the algorithm\\u2019s behavior on convex optimization. An additional study of adaptive methods for non-convex optimization is well described in the reference you provided, \\u201cOn the convergence of stochastic gradient descent with adaptive stepsizes.\\u201d\\n\\n- The settings of experiments are limited....\\n\\nCIFAR has provided an adequate testing ground for optimization algorithms for many years. We wanted to have the most comparable results with those other optimization papers.\\n\\n- This paper lacks some references in this area. \\n\\nThank you for these references, we have included them.\"}",
"{\"title\": \"Response to review #1\", \"comment\": \"Thank you for the feedback. Here is a response to your points:\\n\\n1. Section 2 helps to understand the intuition for the paper by defining SGD and Adam in a consistent way. \\n\\n2. Thanks for the feedback. We have added some clarifications in the paper. z is the input to the learning function. n_t and n_\\\\infty are now defined again in section 3. We removed the equations adding weight decay and nesterov momentum as these were unnecessarily confusing.\\n\\n3. The optimal learning rate (eq 1) must take into account both min and max eigenvalues or else one will dominate the time to convergence. The modification to Adam allows the algorithm to significantly outperform Adam and SGD in computer vision tasks. \\n\\n4. The colors were not switched, but the legend order was not consistent. This has been fixed, and the confidence range has been added to the results where available.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new algorithm which brings closer SGD and Adam while also incorporating new corrections to improve behavior of the optimizer in contexts where there is very small or very large eigen values.\\n\\nDecision\\n\\nI vote for weak rejection because the core modification proposed to Adam is minor and is mostly supported by intuition and preliminary experiments.\\n\\nJustification\\n\\nThere is many modifications proposed, but most are secondary corrections for stability, such as the warm-up schedule with the redefinition of beta_2. These modifications could as well be incorporated in Adam without the core modification that is the smoothing presented in section 3. These additional modifications also make it difficult to measure the importance of the core contribution. Without getting rid of them, an ablation study on toy problems (even synthetic data) would be necessary for a better understanding.\\n\\nIn section 3.1, the temporal definition of beta_2t is integrating a warm-up. While the reason for doing so is supported in introduction of section 3, the effect of this modification should be weighted against no warm-up, and also compared with its effect on Adam.\\n\\nThere is an error in algorithm 1. The last element of the last line (Perform the update) should be \\\\alpha (1 - \\\\eta) m_t/d_t. The code in Appendix corroborates this correction. Minor related note, the use of d_t to define the denominator of what d_i represents in section 3 is very confusing. I would suggest to use the ratio notation of d_i from the equations in the algorithm for coherence.\\n\\nIf we get pass the warm-up scheduling, by massaging the equation we get that the algorithm is different from Adam on 2 points, 1) the bias are not corrected and 2) the denominator sqrt(v) + epsilon is replaced by sqrt(v) + sqrt(mean(v) + epsilon^2). I have difficulty convincing myself that smoothing by the average is solving the issues raised in the paper and there is no experiments to study its effect directly.\\n\\nThe experiments are on 3 datasets, but only the computer vision ones are run on multiple architectures. Caption of figure 1 explains that each model is trained 3 times, but the source of variation between each run is not described. Are the models initialized differently? In any case, there is an overlap for 3 of the 5 models between SGD and SoftAdam which makes the comparison rather unconvincing. There is no standard deviation for Adam, and none on Penn Treebank dataset and IWSLT. For a better comparison, all hyper-parameters of the algorithms should be optimized for each run. I understand that SoftAdam is meant to be close to both SGD and Adam, but using the same hyper-parameters may induces misleading results by favoring some (model, optimizer) combination nevertheless.\\n\\nMinor comments\\n\\nIn section 2, second paragraph, the term 'mini-batch' should be used instead of 'batch'.\\nIn section 2, last sentence, the betas should have no t.\\nIn section 2.1, fourth equation (unnumbered), the eigen vector xi_i is presented as a vector and then used as a scalar. 
Notation should be made uniform.\\nIn Section 2.1 around equation (2), the use of i and j is inconsistent.\", \"in_section_3\": [\"Overall, this understanding *of* has\", \"we consider the *an* update\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\n\\nThis work proposed a new algorithm called softAdam to unify the advantages of both Adam and SGD. The authors also showed experiments to backup their theoretical results.\", \"pros\": \"The authors provided analysis of different algorithms including Adam and SGD on simple quadratic function, then proposed a new algorithm called softAdam which outperforms both Adam and SGD. Experiment results backup their theory.\", \"cons\": [\"The novelty of this work is limited. The main contribution of this work is to provide a new adaptive gradient method called softAdam, which changes the update rules for some parameters including \\\\beta. However, neither intuition or theoretical guarantees are provided in this paper. I recommend the authors to add some explanation about why softAdam outperforms other existing algorithms. Besides, the difference between softAdam and original Adam method is little.\", \"The theoretical analysis about existing adaptive methods provides nothing new. The authors showed some analysis on quadratic model, which is a very simple model and hence can not reflect the true model we face in the practice. I suggest the authors provide some analysis on more general model, including convex functions and non-convex functions.\", \"The settings of experiments are limited. The authors should at least compare softAdam with other baseline algorithms on some modern deep learning tasks including Imagenet.\"], \"minor_comments\": \"- Page 4, section 3, 'this understanding of has'... lacks object.\\n- This paper lacks some references in this area. \\n\\nJ. Chen and Q. Gu. Closing the generalization gap of adaptive gradient methods in training\\ndeep neural networks. arXiv preprint arXiv:1806.06763, 2018.\\nWard, R., Wu, X. and Bottou, L. (2018). Adagrad stepsizes: Sharp convergence over nonconvex\\nlandscapes, from any initialization. arXiv preprint arXiv:1806.01811 .\\nLi, X. and Orabona, F. (2018). On the convergence of stochastic gradient descent with adaptive\\nstepsizes. arXiv preprint arXiv:1805.08114 .\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper tends to explain how the tradeoffs between convergence speed and convergence performance are made by different optimization methods. Moreover, the paper modifies Adam-like updating rules and proposes a novel optimization methods, SoftAdam. Finally, the paper performs numerical experiments on traditional image classification tasks as well as language modeling tasks.\\n\\nPros\\nThe paper involves the language modeling tasks in empirical results besides traditional image classification tasks, which helps to explain the convergence performance of optimization methods in a wider range of applications.\\n\\nCons\\n1. The writing of this paper is not well organised. Section 1 lacks detailed description of the main idea and the proposed optimization methods, which actually confuses the reader. Section 2 describes too much details of SGD and Adam, and lacks a clear \\\"intuition\\\" which readers exactly expect.\\n2. The notation in the paper is little confusing. In the update rule of z_{t+1} in Section 2, what is meaning of z? In Section 3, n_t and n_\\\\infty are used before a proper defination, and what is relationship between \\\\alpha and \\\\alpha_t in the implementation of the SoftAdam?\\n3. The motivation of the proposed method is weak. Such a weak motivation is mainly because of the insufficient \\\"intuition\\\" in Section 2. The author mentions \\\"the Hessians in deep learning problems are not diagonal\\\", but does not provide further explanation on why more importance should be lay on serving both max and min eigenvalues.\\n4. There are also several minor problems on the numerical results. Firstly, why the colors of \\\"softAdam\\\" and \\\"sgd\\\" are switched several times in Figure 1? Secondly, the figural result in Figure 1 and he numerical results on language processing models both lack a display of the confidence range.\"}",
"{\"comment\": \"$n_t$ represents the effective number of elements in the average for $v_t$. Using an exponential weighted average as is used in Adam, for large t, $n_t \\\\approx n_\\\\infty \\\\approx 2/(1-\\\\beta_2)$. However, if $t \\\\ll n_\\\\infty$, $n_t \\\\approx t$.\", \"title\": \"Definition of n_t and n_\\\\infty\"}",
"{\"comment\": \"What do you mean by n_t and n_\\\\infty in Section 3?\", \"title\": \"Several Problems about Notations\"}"
]
} |
r1xMH1BtvB | ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators | [
"Kevin Clark",
"Minh-Thang Luong",
"Quoc V. Le",
"Christopher D. Manning"
] | Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.
| [
"Natural Language Processing",
"Representation Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=r1xMH1BtvB | https://openreview.net/forum?id=r1xMH1BtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"gvQ94s6q0S",
"Skxd42Y2oB",
"SkxNfzO9or",
"SkeFTOXIsS",
"B1eYvDmUiH",
"r1ebpQmLoS",
"rJxvxzXIiS",
"Skg95kdCcS",
"BkxelNPJqB",
"r1xUDhfRtB",
"SJgKAmShKH",
"HJe2AfI4dB",
"HJlI-6LZdr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798729733,
1573850160298,
1573712395687,
1573431489256,
1573431136893,
1573430200993,
1573429742876,
1572925329748,
1571939304255,
1571855453655,
1571734480609,
1570165459610,
1569971453840
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1682/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1682/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1682/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1682/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1682/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1682/Authors"
],
[
"~Feng_Yu6"
],
[
"ICLR.cc/2020/Conference/Paper1682/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1682/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1682/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1682/Authors"
],
[
"~Jules_Gagnon-Marchand1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": [\"This paper investigates the tasks used to pretrain language models. The paper proposes not using a generative tasks ('filling in' masked tokens), but instead a discriminative tasked (recognising corrupted tokens). The authors empirically show that the proposed method leads to improved performance, especially in the \\\"limited compute\\\" regime.\", \"Initially, the reviewers had quite split opinions on the paper, but after the rebuttal and discussion phases all reviewers agreed on an \\\"accept\\\" recommendation. I am happy to agree with this recommendation based on the following observations:\", \"The authors provide strong empirical results including relevant ablations. Reviews initially suggested a limitation to classification tasks and a lack of empirical analysis, but those issues have been addressed in the updated version.\", \"The problem of pre-training language model is relevant for the ML and NLP communities, and it should be especially relevant for ICLR. The resulting method significantly outperforms existing methods, especially in the low compute regime.\", \"The idea is quite simple, but at the same time it seems to be a quite novel idea.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Authors\", \"comment\": \"Thank you for the clarifications. The response provides a lot more information and clarifies a few things. After reading through the recommendations from other reviewers as well as the GLUE scores for various snapshots of Electra-small, I'm inclined to bump up my score to a 6: weak accept if the updated descriptions can be integrated into the paper and tables with these training curves can be provided for your models!\"}",
"{\"title\": \"Paper Updates\", \"comment\": [\"We want to thank the reviewers again for their suggestions! We have updated the paper with the following changes:\", \"Addressing Reviewer 1\\u2019s concern, we added results for SQuAD (both 1.1 and 2.0). ELECTRA-Large matches RoBERTa on SQuAD (getting 0.2 exact-match points worse on 1.1 but 0.4 points better on 2.0) despite using less than \\u00bc of the pre-training compute. ELECTRA-Base scores much higher than BERT-Base/XLNet-Base and even outperforms BERT-Large.\", \"Addressing Reviewer 2\\u2019s question, we added some discussion and empirical analysis of the problems with adversarial ELECTRA.\", \"Addressing Reviewer 3\\u2019s question, we added results for ELECTRA-Small with even fewer GPU-hours in Table 1. ELECTRA performs well even when using as little as 6 GPU-hours of pre-training.\", \"We also updated the appendix with descriptions of the GLUE dataset and test-set results of the Base/Small ELECTRA models. A new finding is that ELECTRA-Small outperforms TinyBERT, a model of comparable size that uses a complicated distillation procedure to learn from BERT-Base during both pre-training and fine-tuning, as well as data augmentation (whereas ELECTRA-Small uses no distillation or data augmentation).\", \"Lastly, as suggested by Reviewer 2, we are in the process of training ELECTRA for longer (with compute comparable to RoBERTa), although it is still early along in training.\"]}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"Thank you for the comments! We address some of the concerns and questions below:\\n\\n>\\u201cI'd like to see a little more investigation into Table 3. I don't have intuition over why these results are the way that they are and the text nor the experimentation really gives me an indication.\\u201d\\n>\\u201cOverall I'd like to see more clarity in the overall analysis because I'm still unsure how to interpret your results...\\u201d\\n\\nThe ablations in Table 3 are a series of \\u201cstepping stones\\u201d between BERT and ELECTRA designed to show where ELECTRA\\u2019s improvements come from. To reiterate the discussion in the paper:\\n - \\u201cReplace MLM\\u201d slightly outperforming BERT suggests that a small amount of the gains can be attributed to solving BERT\\u2019s pre-train/fine-tune mismatch due to [MASK] tokens, as \\u201cReplace MLM\\u201d is essentially BERT with this mismatch fixed.\\n - \\u201cELECTRA 15%\\u201d matching \\u201cReplace MLM\\u201d suggests that ELECTRA is benefitting a lot from learning from all input tokens. ELECTRA 15% is essentially ELECTRA with this advantage over BERT removed and indeed the gains over BERT mostly go away. \\n - \\u201cAll-Tokens MLM\\u201d outperforming \\u201cReplace MLM\\u201d further demonstrates the benefit of learning from all input tokens, as we substantially improve BERT\\u2019s masked language model objective when incorporating this idea into BERT.\\n - Lastly, ELECTRA outperforming \\u201cAll-Tokens MLM\\u201d shows the additional value of our discriminative second stage classifier rather than replacing it with a BERT-style generative model. \\n\\nPlease let us know if there are other results you find unclear or if you have suggestions for further experiments/discussion that would help clarify the results. \\n\\n\\n>\\u201cHow well does this model work with very very little compute; lets say you have only a couple of gpu hours. Whats the degradation in performance?\\u201d\\n\\nHere are GLUE scores for ELECTRA-Small for various training times, which we will add to the paper.\", \"4_days\": \"79.9 (slight improvement over the number in the submission due to some additional hyperparameter tuning)\", \"2_days\": \"79.0\", \"1_day\": \"77.7\", \"12_hours\": \"76.0\", \"6_hours\": \"74.1\\nWe note that even the full 4 days is already a tiny fraction (~1/50th) of the compute used to train BERT, but these results show that an effective model can be built with even more limited resources.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Thank you for the comments! We address some of the concerns and questions below:\\n\\n>\\u201cIt will be helpful if the authors provide more empirical analysis why the adversarial ELECTRA perform worse or failed.\\u201d\\n\\nYes, that is a good question! We did not have too much discussion on this in the submission because it is a negative result and we were limited for space. We found two problems with the adversarially trained generator. The main one is that the adversarial generator is simply worse at masked language modeling. For example, a size-256 adversarial generator after 500k training steps achieves 58% accuracy at masked language modeling compared to 65% accuracy for an MLE-trained one. We believe the worse accuracy is mainly due to the poor sample efficiency of reinforcement learning when working in the large action space of generating text. As evidence for this, the adversarial generator's MLM accuracy was still increasing towards the end of training while the MLE generator\\u2018s accuracy vs train step curve had mostly flattened out. The second problem is that the adversarially trained generator produces a \\u201cpeaky\\u201d low-entropy output distribution where most of the probability mass is on a single token, which means there is not much diversity in the generator samples. Both of these problems have been observed in GANs for text in prior work (see \\u201cLanguage GANs Falling Short\\u201d from Caccia et al., 2018 and \\u201cEvaluating Text GANs as Language Models\\u201d from Tevet et al., 2019). We will add this additional discussion to the paper.\\n\\n\\n>\\u201cBut, it will be a big plus if the authors can show ELECTRA can outperform RoBERTa with the same amount of training time.\\u201d\\n\\nWe are working on training ELECTRA for longer (it just takes lots of compute!). We want to emphasize that our focus is on compute efficiency. However, given the trend of the accuracy vs compute curve in Figure 1, we are confident that ELECTRA will continue to improve and therefore outperform RoBERTa when given the same training time.\"}",
"{\"title\": \"Reply to Reviewer #1\", \"comment\": \"Thank you for the comments! We address some of the concerns and questions below:\\n\\n>\\u201cThe authors limit their investigation of downstream performance to the GLUE set of tasks, which are classification tasks. This is a significant limitation of the current version of the paper\\u2026\\u201d\\n\\nWe definitely agree with the reviewer\\u2019s point that evaluating on diverse tasks is useful! We initially focused on GLUE because it contains a variety of tasks and has been the main benchmark for evaluating pre-trained representations from GPT onwards. However, we have since run experiments on SQuAD (both 1.1 and 2.0) and will add them to the paper. Results are consistent with the GLUE ones (e.g., ELECTRA-Base outperforms BERT-Large and ELECTRA-Large matches RoBERTa). ELECTRA appears to be slightly better at SQuAD 2.0 (where it outperforms RoBERTa by 0.4 exact-match points) than 1.1 (where it slightly underperforms RoBERTa by 0.2 exact-match points). We think the best way of evaluating pre-trained encoders is still an open question - more challenging tasks might better distinguish models, but require more sophisticated classifiers on top of the transformer that can complicate the analysis. \\n\\n\\n>\\u201cIn contrast with BERT, there is no mention of any plan to release ELECTRA (big or small versions), which is a disappointment, lowers the significance of the work.\\u201d\\n\\nWe absolutely will release the code and pre-trained weights (for all model sizes)! We apologize for not making that clear in the submission.\"}",
"{\"title\": \"Answers to the questions\", \"comment\": \"You're welcome!\\n\\n1. The generator is a small network only used to help train the discriminator. In fact, it could be as simple as a unigram LM, in which case there would be no way to fine-tune it for downstream tasks. At any rate, it\\u2019s performance for downstream use is usually much below that of the discriminator.\\n\\n2. Could you please rephrase the question? We don\\u2019t quite understand what you are asking.\"}",
"{\"title\": \"Some questions\", \"comment\": \"Thanks the authors for the small model. Some questions:\\n\\n1. 'we throw out the generator and only fine-tune the discriminator on downstream tasks', what is the motivation?\\n\\n2. Can be changed to this: originalSentence --> [HardMaskModel] --> MaskedSentence --> ELECTRA, moreover there can be more discriminator tasks?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose replaced token detection, a novel self-supervised task, for learning text representations.\\n\\nThe principle advantage of the approach is that, in contrast with the standard masked language model (MLM) objective used by BERT and derivatives, there is a training signal for all tokens of the input (rather than a small fraction, when 10-20% of the input tokens are masked and then reconstructed under the MLM objective).\\n\\nA smaller MLE-trained BERT-style generator is used to replace masked words with plausible alternatives, which the ELECTRA discriminator (the part that is retained and finetuned on downstream tasks) must detect (unmodified word slots are also in the objective, and must be detected as such).\\n\\nIn general the paper reads well, and the authors present ablations to reveal the source of gains. ELECTRA matches the performance of RobBERTa on the popular GLUE NLP task, with just 1/4 of the training compute.\", \"strengths\": \"-Simple but novel self-supervised task for learning text representations, strong results, adequate ablation.\", \"limitations\": \"-The authors limit their investigation of downstream performance to the GLUE set of tasks, which are classification tasks. This is a significant limitation of the current version of the paper, as it may be that replaced token detection is more suitable for these tasks, but inferior to MLM (a higher precision self-supervised task) for more involved tasks like question answering. The latter is arguably of much higher importance to the NLP research community at this point, and some consider the GLUE task to be essentially solved for all practical purposes, given inherent noise levels.\\n-In contrast with BERT, there is no mention of any plan to release ELECTRA (big or small versions), which is a disappointment, lowers the significance of the work\", \"overall\": \"An okay paper. Results on SQUAD or another more elaborate NLP task and/or the release of the ELECTRA models would make the paper much stronger.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposed a novel sample-efficient pretraining task. One inefficiency of BERT is that only 15% tokens are used for training in each example. The paper introduced a generator+discriminator framework to optimize the utility of training examples. The generator task is the MLM which predicts the masked word. The author adds a discriminator to further learn from the example by classifying each word to be either generated or original. In this way, more words can be used. This method looks as only adding the discrimination task after BERT pretraining task. But, the authors later show that the best GLUE scores can be obtained only when both generator and discriminator are co-trained. Moreover, the adversarial ELECTRA perform worse. All these observations are interesting. It will be helpful if the authors provide more empirical analysis why the adversarial ELECTRA perform worse or failed. Is it because the GAN is hard to train or the adversarial task doesn't fit the pretraining?\\n\\nOverall, I think this is a good paper. The studied problem is important, the idea is new and the experimental results are positive. Specifically, it shows that ELECTRA can outperforms BERT and match RoBERTa with less training time. But, it will be a big plus if the authors can show ELECTRA can outperform RoBERTa with the same amount of training time. Analysis are also provided to give audience insights in this method.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: Authors offer an alternative for masked LM pretraining that's more sample-efficient called replaced token detection. Their method basically replaces certain input tokens with alternatives which are sampled from a generator and train a discriminative model to determine whether its generated or real. The work shows empirical success getting better results than GPT with a fraction of the compute on GLUE and others.\", \"positives\": \"Idea is simple and makes sense intuitively, but not something one would think immediately would work better with a such a small fraction of the compute. I think the formulations of the experiments and ideas to develop this are adequate.\\n\\nConcerns & Questions: I'd like to see a little more investigation into Table 3. I don't have intuition over why these results are the way that they are and the text nor the experimentation really gives me an indication. How well does this model work with very very little compute; lets say you have only a couple of gpu hours. Whats the degradation in performance?\\n\\nOverall I'd like to see more clarity in the overall analysis because I'm still unsure how to interpret your results on the why certain choices/experimental groups get the performance numbers they get.\\n\\n------------------------------------------------------------------------------------------------------------------------\\n\\nAfter the author response, I have changed my score to a 6. I think the paper merits acceptance.\"}",
"{\"comment\": \"Hi! We considered a token \\\"hard to predict\\\" if it had low probability under BERT's output distribution when it was masked out. The model learning which tokens were hard to predict was a small transformer network taking the unmasked text as input. For each token, it predicted (using a sigmoid output layer) how much probability BERT would assign that token if it was masked. We trained it by running BERT as normal on the input and then minimizing the sigmoid cross-entropy loss between the model\\u2019s output and the probability BERT gave to the correct token for the 15% of tokens that were masked. It\\u2019s hard to come up with an easily interpretable evaluation metric for this model, but the per-token KL divergences were ~0.25 for our model versus ~0.5 for the baseline of saying all tokens are equally hard to predict.\", \"title\": \"Strategic Masking Model\"}",
"{\"comment\": \"Hello, I want to thank the authors for a very interesting and useful paper.\\n\\nIn the paper, in the negative results section, the first bullet says :\\n\\n\\\"We initially attempted to make BERT more efficient by strategically masking-out tokens\\n(e.g., masking our rarer tokens more frequently, or training a model to guess which tokens\\nBERT would struggle to predict if they were masked out). This resulted in fairly minor\\nspeedups over regular BERT\\\"\\n\\nI was wondering if you could mention which model they used to \\\"training a model to guess which tokens\\nBERT would struggle to predict if they were masked out\\\", and some measure of how well the model it could do such a task.\\n\\nThanks.\", \"title\": \"Curious About Negative Results, Strategically Masking of Tokens\"}"
]
} |
SJxbHkrKDH | Evolutionary Population Curriculum for Scaling Multi-Agent Reinforcement Learning | [
"Qian Long*",
"Zihan Zhou*",
"Abhinav Gupta",
"Fei Fang",
"Yi Wu†",
"Xiaolong Wang†"
] | In multi-agent games, the complexity of the environment can grow exponentially as the number of agents increases, so it is particularly challenging to learn good policies when the agent population is large. In this paper, we introduce Evolutionary Population Curriculum (EPC), a curriculum learning paradigm that scales up Multi-Agent Reinforcement Learning (MARL) by progressively increasing the population of training agents in a stage-wise manner. Furthermore, EPC uses an evolutionary approach to fix an objective misalignment issue throughout the curriculum: agents successfully trained in an early stage with a small population are not necessarily the best candidates for adapting to later stages with scaled populations. Concretely, EPC maintains multiple sets of agents in each stage, performs mix-and-match and fine-tuning over these sets and promotes the sets of agents with the best adaptability to the next stage. We implement EPC on a popular MARL algorithm, MADDPG, and empirically show that our approach consistently outperforms baselines by a large margin as the number of agents grows exponentially. The source code and videos can be found at https://sites.google.com/view/epciclr2020. | [
"multi-agent reinforcement learning",
"evolutionary learning",
"curriculum learning"
] | Accept (Poster) | https://openreview.net/pdf?id=SJxbHkrKDH | https://openreview.net/forum?id=SJxbHkrKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"kP5i2Czt9P",
"B1xuPXkKjH",
"B1x_YM1tjH",
"SyxjEpAOoB",
"HkeDFh0OoS",
"B1lZmZmkqH",
"ryxYvoLRYH",
"H1lUoOjaYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729702,
1573610336435,
1573610112308,
1573608754834,
1573608574949,
1571922200749,
1571871585200,
1571825822263
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1681/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1681/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1681/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1681/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1681/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1681/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1681/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a curriculum approach to increasing the number of agents (and hence complexity) in MARL.\\n\\nThe reviewers mostly agreed that this is a simple and useful idea to the MARL community. There was some initial disagreement about relationships with other RL + evolution approaches, but it got resolved in the rebuttal. Another concern was the slight differences in the environments considered by the paper compared to the literature, but the authors added an experiment with the unmodified version.\\n\\nGiven the positive assessment and the successful rebuttal, I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"We tackle a completely different problem from those cited evolution-in-RL references\", \"comment\": \"We don\\u2019t think our work can be directly compared against the references in the \\u201cevolution\\u201d paragraph of the related work section since the evolution in EPC is addressing a *different problem*.\\n\\nIt is a continuous trend of applying evolution to solve a variety of different challenges in RL and our paper simply tackles a novel one. Our primary focus is to propose a general curriculum learning paradigm to effectively scale MARL to a larger number of agents while the use of evolution is to address a particular *objective misalignment issue* within this paradigm, which has not yet been studied in the literature to our best knowledge. We have compared with the papers which also study scaling MARL while it is out of the scope of our paper to investigate every parallel technique in the domain.\\n\\nWe have updated the related work section to better express the above message. \\n\\nTo be more clear about the differences between the problem we studied and those cited related works, here are detailed summarization/discussion of each paper we mentioned:\\n> Salimans et al. (2017) show that population-based training can be better parallelized than standard RL algorithms. It focuses on single-agent RL benchmarks and suggests evolution can be an alternative to PG.\\n> Jaderberg et al. (2017) is the first population-based training paper from Google which boosts the performance of many benchmark tasks by running evolution on hyper-parameter tuning.\\n> Houthooft et al. (2018) propose to directly run evolution to learn a neural loss function to replace the PG loss, which is extremely expensive, but experiments on small single-agent tasks show that policies learn by the evolved loss can generalize better.\\n> Khadka et al. (2018) run population-based training and off-policy RL training together to leverage the diverse samples collected by the population to improve off-policy learning.\\n> Conti et al. (2018) proposed an improved exploration technique for running evolution algorithm in RL.\\n\\nThanks for mentioning the Capture-The-Flag paper (Jaderberg et al. 2018) from Google. It uses exactly the same training framework from Jaderberg et al. (2017) to solve the game. Particularly, 30 agents are trained as a population and the evolution algorithm is performed to tune each agent\\u2019s intrinsic reward to overcome the sparse success reward of the game. While in our work, we use evolution to tackle the objective misalignment challenge when the number of agents is increased. These are two very different problems of interest. We have cited this paper in the revision. Notably, even in this mentioned paper, there is no direct comparison with any evolution references. Instead, the proposed method is compared with only two baselines, i.e., (1) pure self-play + sparse reward and (2) self-pay + hand-tuned intrinsic rewards.\"}",
"{\"title\": \"We have updated our paper with additional experiments and details in Appendix D and E.\", \"comment\": \"\\u2014-\\u201dwhy did the authors propose new challenges ...\\u201d \\u201c new environments be released\\u201d \\u201dwithout grounding the results in a known environment it is hard to \\u2026 are fair reproductions\\u201d\\n>>> We will release all our source-code. In fact, each of the three environments is either from or slightly adapted from standard benchmarks. We pick two games suitable for many agents from the MADDPG game suites and one from the mean-field MARL paper. We have clarified this in Appendix A in the revision. \\n\\nThe Food Collection game is exactly the same as simple-spread in the original MADDPG paper. \\n\\nThe Grassland Game is a slightly enhanced version of the original predator-prey game in the MADDPG paper, with two enhancements to make it more challenging: (1) agents can die; (2) there are resources. \\n\\nThe Adversarial-Battle game is adapted from the mean-field MARL paper. It is the only high-dimensional experiment in the mean-field paper. The original environment is (1) a grid world and (2) every single agent can kill an enemy (this makes the problem easy since agents need little coordination). We convert it to the particle-world environment with food and further constraint that only two agents can kill an enemy.\\n\\nBesides, we also add another experiment on the original predator-prey game in Appendix E in the updated paper and all of our conclusions still hold.\\n\\n\\n\\u2014- \\u201cPlease provide evidence that this is a fair comparison\\u201d\\n>>> Our MADDPG implementation is based on the open-source code of MADDPG from OpenAI. \\n\\nFor mean-field, we have carefully studied their open-source implementation and tried our best to re-implement their algorithm within the MADDPG framework (their code is on Q-learning). We have even obtained confirmation directly from the authors on all our implementation details.\", \"the_poor_performance_of_mean_field_does_not_surprise_us_for_the_following_reasons\": \"1. In their original paper, it states that \\u201cWe exclude MADDPG as baselines, as the framework of centralized critic cannot deal with the varying number of agents for the battle\\u201d. That is, they only compared with single-agent RL baselines but *did NOT compare with MADDPG* at all. We think this comparison can be simply conducted by masking out dead agents (i.e., set them 0). So, we present this missing result in our paper. \\n2. Mean-field takes the average actions of only *nearby* agents within a hand-tuned distance. As we discussed in the introduction, its fundamental assumption is that the Q function can be *linearly approximated by local interactions* (they use Taylor expansion in the proof), which is not guaranteed in many games requiring complex distant coordinations. This is why it performs significantly poor in Food Collection (as we discussed in the texts below fig.8) while on a par with MADDPG in the other two games. Notably, mean-field was not tested in any high-dimensional games other than the grid-world battle game in their original paper.\\n \\n\\n\\u2014- \\u201call methods were placed against the EPC method\\u201d\\n>>> We have included the full pairwise competition results in Appendix D.3 in our revision. We show the rewards by competing against every two methods. This pairwise results show that EPC is consistently better than baselines. 
\\n\\nThe purpose of the histograms is to show that EPC can \\u201cdefeat\\u201d opponents trained by other methods. The reason for just showing the results against EPC in the main paper is solely for visualization simplicity (since we have 5 methods to compare).\\n\\n\\n\\u2014- \\u201cplease quantify variance ... How many repeats ...\\u201d\\n>>> For evaluation, as in Appendix C, we run 10000 games to compute each normalized score. \\n\\nFor training, we have included training variance for all methods in fig 9 over 3 seeds (baselines are trained much longer and the full curves are in Appendix D.1).\\nFor the main results (Fig. 6,7,8), we only did 1 training seed at submission time (similar to the MADDPG paper). Although the training is empirically stable, we agree that it would be better to repeat the training process and include variance. For now, due to compute and time limit, we presented the variance for food collection game in Appendix D.4, which again shows consistent results. We promise to include variance results for all the games in the final version.\\n\\nWe also include the raw reward numbers for Fig. 6,7,8 in Appendix D.2. \\n\\n\\n\\u2014- \\u201chyperparameter settings without discussion\\u201d\\n>>> All hyper-parameters for the MADDPG algorithm are exactly the same as the original MADDPG paper (clarified in Appendix B).\\nFor the number of iterations, all baselines are trained for a number of episodes that equals the *accumulative* episodes EPC has taken. The purpose of Fig 9(def) is simply showing the *transfer* performance, i.e., the initialization produced by EPC from the previous stage is effective and can indeed leverage past experiences to warm-start. The x-axis of the plot is actually shrunk (Att-MADDPG is trained much longer than one curriculum stage of EPC). We have included the full curves in Appendix D.\"}",
"{\"title\": \"Thanks for the valuable comments\", \"comment\": \"All the typos in the paper have been fixed accordingly.\\nWe really appreciate the suggestions for testing in more complex environments. NeuralMMO is an excellent option. In the original NeuralMMO paper, even though OpenAI has spent massive compute resources on this project, only maximally 8 individual policies are trained (most of the agents have shared weights). We are particularly curious to see what will emerge if we can have a much more diverse set of policies deployed in NeuralMMO via EPC. We are working on extending our implementation and plan to apply our work to NeuralMMO in future work.\"}",
"{\"title\": \"We have updated our paper with changes in red color\", \"comment\": \"We have made the following revisions to our paper with all the changes colored in red. We promise to release all the code in the final version.\\n1. New Appendix D: We add more evaluation details and additional experimental results. \\n2. New Appendix E: Since our Grassland game is adapted from the original zero-sum Predator Prey game from the MADDPG paper, we conduct additional experiments on the unmodified predator prey game to validate our implementations.\\n3. Fig.9 updated: we visualize training variances for all baselines.\\n4. Related work section updated: In the \\u201cevolutionary learning\\u201d paragraph, we put more details of existing literature.\\n5. We clarify that all baselines are trained with the same accumulative episodes as EPC took.\\n6. More details in Appendix A & B.\\n7. All typos are fixed\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a kind of curriculum for large-scale multi-agent learning. The related work section mentions some obvious points of comparison (note: see also https://science.sciencemag.org/content/364/6443/859.abstract). However, the authors do not compare with ANY of this work (either in terms of algorithm design or performance). It is therefore difficult to evaluate the contribution.\\n\\nIn more detail, the paper combines RL, multi-agent learning and evolution. This is an extremely challenging domain, with many moving parts. How does this approach relate to the work of Salimans et al, Jaderberg et al, Houthooft et al, etc? Without detailed discussion and experiments it is impossible to tell if this is an advance. Improving on the baselines is a useful sanity check. Showing the work is an actual contribution requires comparing against other algorithms in the same space. \\n\\n---- ----\\nAfter reading the rebuttal and other reviews and comments, I've modified my score to weak accept. The paper makes an interesting contribution that is distinct from other approaches.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new method of scaling multi-agent reinforcement learning to a larger number of agents using evolution. Specifically, the procedures (EPC) involves starting with a small number of agents, training multiple sets in parallel, and doing crossover to find the set of agents that generalize best to a larger number of agents. This is motivated by the intuition that agents that perform best in small groups may not be the ones that perform best in larger groups. These claims are empirically verified in three games based on the particle world set of environments.\\n\\nI\\u2019m a fan of \\u2018automatic curriculum learning\\u2019-style methods designed to gradually add complexity to improve final agent performance, and this paper is no exception. The proposed method is simple, but it makes sense. I like the fact that it is both RL algorithm agnostic, and that it can be largely executed in parallel, which means that it introduces only small training time overhead. I also like the proposed method of making the Q function agnostic to the number of agents and entities using attention (although whether these policies incorporate information from previous time steps, or if they can be made to do so). The experimental results are thorough, comparing to MADDPG, a simpler version of their curriculum without evolution, and a recently proposed method for scaling up MADDPG, showing that EPC consistently outperforms all of them, and is more stable. I think the complexity of the environments is also suitable for this style of paper, although it would be nice to see results in a more open and complex domain such as the recent NeuralMMO game.\\n\\nOverall, I think this is a good paper and I recommend acceptance.\", \"small_fixes\": [\"\\u2018asThank yoTha\\u2019 -> not sure what this means\", \"N1 agents for of role 1 -> N1 agents of role 1\", \"\\u201cWe adopt the decentralized learning framework\\u201d --- even if each agent has their own Q function, if that Q function is centralized (uses the observation of all agents) then training is still centralized\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Review Update: Thank you for the detailed response, it raised my opinion of the paper as it reduced my concerns on the rigor of the experiments performed. I believe the changes increase the significance of the contribution and may help it reach a broader audience.\\n\\n--\\nThis paper proposes evolving curriculums of increasing populations of agents to improve multi-agent reinforcement learning with large number of agents. The topic is of relevance to the ICLR community and the results are tending towards publishable contributions, but I have some concerns that I would like the authors to discuss in their rebuttal.\\n\\nThe method is evaluated on a good range of suitably challenging environments. However, why did the authors propose new challenges within the particle environments and not those used in the original publication? This change makes it harder to compare results across publications, whilst not seeming to add a significant change in the learning problem beyond what was present in the original benchmark tasks. The food collection task sounds like it may be equivalent to simple spread. Will the new environments be released as open source for others to build upon this work? \\n\\nThe method is compared against a good range of baseline methods and ablations of the proposed method. However, without grounding the results in a known environment it is hard to place whether the implementations of MADDPG and Mean-Field are fair reproductions. Presuming the authors are using the open source implementation of MADDPG (please confirm) this is most significant for Mean-Field particularly given its poor performance in Section 5.4. Please provide evidence that this is a fair comparison.\\n\\nTo evaluate the resultant agents in the competitive environments, all methods were placed into games against the authors proposed EPC method. Was this the same EPC opponents the evaluated EPC team were trained against? If so, this is an unfair advantage to the EPC team as it has time to optimize against this opponent whilst the other methods have not. Even if it is an EPC team from a different training run, there may be outstanding biases in the joint policies EPC tends towards that benefit the EPC team evaluated. This could be overcome by evaluating all methods in competition with all other methods.\\n\\nFor all experimental results, please quantify variance in performance as well as average value (currently only done in Figure 9 a and b). How many repeats of evaluation and training were performed for each? Without these details the claim on page 9 that \\\"EPC is always the best among all the approaches with a clear margin\\\" is too strong. Are these differences statistically significant? Additionally, please also include the maximum scores (where normalized score = 1.0) for all experiments, as presenting results with only normalized scores unnecessarily reduces the reproducibility of the work. \\n\\nFinally, in Appendix B, the authors provide a list of hyperparameter settings without discussion of how these were chosen. Were they optimised for one specific method or set to defaults from the literature? 
In particular, as the performance of Att-MADDPG is still improving at the end of the plot in Figure 9e, I am concerned that the #episodes was chosen to optimise the performance of EPC.\\n\\nOverall, this is an interesting approach with promising initial results. I believe the contribution would be significantly improved by addressing the issues above and look forward to the authors responses which could increase my rating to acceptance.\", \"things_that_could_improve_the_paper_but_did_not_impact_the_score\": \"1) On page 5 it is noted that the authors do not share parameters between the Q-function and policy. It would improve the paper to justify why this choice was made.\\n2) Page 5: \\\"N_1 agents for of the role\\\" -> N_1 agents of the role\\n3) Page 5: \\\"as follows to evolved these K parallel sets\\\" -> evolve\\n4) Page 7: \\\"resources asThank yoTha green landmarks\\\"\\n5) Page 8: \\\"understand how the trained sheep behavior in the game\\\" -> how the trained sheep behave in the game\\n6) Page 14: There are repeated grammatical issues in Appendix A e.g. \\\"is more closer to grass / sheep / other agents\\\" -> is closer to grass / sheep / other agents and \\\"will less negative reward\\\" -> will receive less negative rewards\"}"
]
} |
SJe-HkBKDS | Amharic Text Normalization with Sequence-to-Sequence Models | [
"Seifedin Shifaw Mohamed",
"Solomon Teferra Abate (PhD)"
] | All areas of language and speech technology, directly or indirectly, require handling of real text. In addition to ordinary words and names, real text contains non-standard words (NSWs), including numbers, abbreviations, dates, currency amounts, and acronyms. Typically, one cannot find NSWs in a dictionary, nor can one find their pronunciation by applying ordinary letter-to-sound rules. In several NLP applications, it is desirable to normalize text by replacing such non-standard words with a consistently formatted and contextually appropriate variant. To address this challenge, in this paper we model the problem as character-level sequence-to-sequence learning, where we map a sequence of input characters to a sequence of output words. The model consists of two neural networks: the encoder network and the decoder network. The encoder maps the input characters to a fixed-dimensional vector, and the decoder generates the output words. We have achieved an accuracy of 94.8%, which is promising given the resources we use. | [
"Text Normalization",
"Sequence-to-Sequence Model",
"Encoder-Decoder"
] | Reject | https://openreview.net/pdf?id=SJe-HkBKDS | https://openreview.net/forum?id=SJe-HkBKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"z8RctS87so",
"Skg1Jt4msB",
"HkgF8KJT9r",
"BkxPOKMe9r"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729671,
1573238998667,
1572825425094,
1571985774746
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1680/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1680/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1680/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a text normalisation model for Amharic text. The model uses word classification, followed by a character-based GRU attentive encoder-decoder model. The paper is very short and does not present reproducible experiments. It also does not conform to the style guidelines of the conference. There has been no discussion of this paper beyond the initial reviews, all of which reject it with a score of 1. It is not ready to publish and the authors should consider a more NLP focussed venue for future research of this kind.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper describes a method for word normalization of Amharic text using a word classification system followed by a character-based GRU attentive encoder-decoder model.\\n\\nThe paper is very short and lacks many important details, such as where the data is collected from, how it is processed and split into training and evaluation sets, and how the initial token classification is performed. The paper also doesn't adhere to the conference paper template, which is grounds for desk rejection.\\n\\nThe authors should revise the paper with this information and consider submitting to a different venue, as the task considered, while interesting, seems far from the core focus of ICLR.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Text normalization or the transformation of words from the written to the spoken form is an important and realistic question in natural language processing. This paper aims to use sequence-to-sequence models to perform text normalization.\\n\\nHowever, this paper does not use the official template and the content is too short to be a conference paper.\\nI suggested resubmitting to another (NLP) conference after extending the content with detailed description for the model and the method, and conducting more experiments on public acceptable benchmarks.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper addresses the text normalization problem, where the special processing is required for different kinds of non-standard words (NSW\\u2019s).\", \"dataset\": \"a new dataset is collected, including many types of non-standard words from different Amharic news Media\\nand websites, FBC more than eighty percent, VOA and BBC.\", \"model\": \"Bidirectional GRU with the size of 250 hidden units both are used for encoding and decoding layers.\\n\\nThis paper is not ready to publish. Please consider to complete the project, polish the writing, and submit to a different venue.\"}"
]
} |
SJexHkSFPS | Thinking While Moving: Deep Reinforcement Learning with Concurrent Control | [
"Ted Xiao",
"Eric Jang",
"Dmitry Kalashnikov",
"Sergey Levine",
"Julian Ibarz",
"Karol Hausman",
"Alexander Herzog"
] | We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving." | [
"deep reinforcement learning",
"continuous-time",
"robotics"
] | Accept (Poster) | https://openreview.net/pdf?id=SJexHkSFPS | https://openreview.net/forum?id=SJexHkSFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"TjaTTCcVEw",
"BygtJHzqsB",
"BkexhKk5iB",
"H1x8xt1coH",
"SkgRRIJqsS",
"Hye7aUyciB",
"ByxoiRtUcB",
"ryg6d_-MqB",
"rygdpEwhKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729643,
1573688544950,
1573677480018,
1573677294296,
1573676757636,
1573676730683,
1572408994525,
1572112501380,
1571742912212
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1679/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1679/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1679/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1679/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1679/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1679/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1679/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1679/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies the setting in reinforcement learning where the next action must be sampled while the current action is still executing. This refers to continuous time problems that are discretised to make them delay-aware in terms of the time taken for action execution. The paper presents adaptions of the Bellman operator and Q-learning to deal with this scenario.\\n\\nThis is a problem that is of theoretical interest and also has practical value in many real-world problems. The reviewers found both the problem setting and the proposed solution to be valuable, particularly after the greatly improved technical clarity in the rebuttals. As a result, this paper should be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Rating improved\", \"comment\": \"Thanks for the efforts taken to address my concerns. I have checked the updated draft and I think most of my major concerns have been addressed. And I have improved my rating accordingly. Thanks.\"}",
"{\"title\": \"Response to Blind Review #3\", \"comment\": \"We thank the reviewer for their constructive feedback and exceptionally detailed review.\\n\\nWe agree with the reviewer that although our theoretical justification is based on continuous-time RL and discusses a general framework for handling delays in Q-learning (continuous or discrete), our actual experiments return to the \\u201cwell-trodden\\u201d regime of discrete-time RL with an auxiliary VTG input to the critic network. Regardless of whether the setting is continuous or discrete time RL, the problem of dealing with delays in RL persists. To the best of our knowledge, most of the SOTA DRL implementations are based on discrete-time RL formulations. We are not aware of any image-based DRL results that use a continuous-time RL formulation. While that may be an interesting avenue, we believe this is outside of the scope of our work and believe our method to adapt discrete methods to handle delays is another way to approach the problem. We clarified this in Section 2.\\n\\nWe agree with the reviewer that since \\u201cpolicy duration\\u201d is a quantitative metric that we use to support our claim of faster learned trajectories, a reasonable baseline method should incorporate this optimization goal. The baseline model we compare against penalizes slower policies that take more episode steps through reward discount gamma as well as an timestep penalty, a hyperparameter that returns a fixed negative reward every timestep. This timestep penalty was tuned through a hyperparameter search, and is described in further detail in the Appendix. Additionally, in Table 1 we add two baselines that do not utilize this reward penalty. \\n\\nWe also thank the reviewer for suggesting that we clarify the motivations behind restricting the design space. We focus our study on model-free methods because image-based model-based methods, such as video prediction models, are challenging to learn and an active area of research that is tangential to our main focus of studying concurrent environments. We limit our environments to known latency regimes because this is motivated by real-world robotics setups, where latencies can often be constrained within known upper bounds.\\n\\nWe appreciate the reviewer introducing additional related work and improvements that would help contextualize our contribution. These are exciting research directions that we think show much promise for when they are applied to vision-based robotic control tasks. We added a description of spiking neurons, point processes, and adaptive skip intervals in Section 2. \\n\\n\\\"1. The description of the Vector-to-go is insufficient.\\\"\\nThank you for this suggestion. We have clarified the description of concurrent knowledge representations with a section in the appendix as well as with Figure 5.\\n\\n\\\"2. The results of the simulated experiments are given in the form of distributions and it is very difficult to discern the effect of individual features in Figure 1. Additionally, due to missing error bars, or other measures of uncertainty, the claim that the performance of models with and without the delayed-actions is comparable to the blocking setting seems tenuous at best.\\\"\\nWe assume the reviewer is referring to Figure 2, not Figure 1, as our robotic grasping experiments do indeed report confidence intervals computed over multiple random seeds or real-world evaluations. 
Figure 2 is a hyperparameter sensitivity plot obtained by performing a hyperparameter tuning experiment across many training runs of the CartPole and Pendulum control tasks. The hyperparameter configurations are then sorted from best to worst, with the X axis plotting the sorted rank of the experiment. One can interpret the entire plot as a distribution over returns over N experiments, where shorter-tailed distributions imply that the method more \\u201crobust\\u201d. Larger area-under-curve means that obtaining good performance is less sensitive to choice of hyperparameters (which is crucial for getting RL algorithms to work on real robots, where sample complexity is prohibitive). \\n\\nBecause this computationally expensive hyperparameter optimization procedure does not yield multiple i.i.d. experiments w.r.t a single hyperparameter configuration (for computational efficiency), we cannot estimate per-experiment uncertainty from this dataset as commonly done in RL. \\n\\n\\\"3. Could the authors describe why the gap can be completely covered through simulations but not in the real world?\\\"\\nThank you for this suggestion. To be completely frank, we are not sure exactly why the large-scale grasping success could be covered in the simulated but not in the real world. However, real-world robotic tasks are difficult and sensitive to many parameters. Given that, we still felt it important to report the full results. We also added more experiment details in the Appendix.\\n\\nFinally, we would like to thank the reviewer for summarizing the main concerns. We felt these insightful comments could be useful to other reviewers, and added our response to the general comment.\"}",
"{\"title\": \"General Summary of Changes\", \"comment\": \"We thank the reviewers for insightful and useful suggestions. We have incorporated feedback into our manuscript and uploaded a new draft.\\n\\nThe writing changes are highlighted in red text.\", \"the_main_changes_we_made_are\": \"- More clearly specifying our contribution and the relationship of our work to continuous-time RL methods (Section 3 and Section 3.4)\\n- Simplifying notation in the derivations (New Section 3, Section A.1, Section A.2)\\n- Introducing simulated robotic grasping baselines that incorporate a timestep penalty that encourages faster policies (Table 1 and Section A.4)\\n- Clarifying the concept of concurrent actions (Section 3.1 and new Figure 4).\\n- Describing concurrent knowledge representations in more depth (New Section A.3, new Figure 5)\\n- Adding an algorithm frame (New Algorithm 1)\\n- Contextualizing biologically-inspired related work with temporally-aware architectures (Section 2)\\n\\nWe believe our responses to Reviewer 3\\u2019s high-level concerns may potentially be useful to the other reviewers, so are including our responses here:\\n\\n\\u201c1. Theoretically relatively straight forward\\u201d\\nWe believe that the simplicity of our contribution is a feature, not a bug. We wanted to provide the simplest extension to existing image-based DRL implementations and measure the impact of making RL delay-aware. The simplicity of the implementation allows any discrete-time RL framework to be easily extended to handle delays by simply changing the network architecture inputs.\\n\\n\\u201c2. Are not expressive enough to capture the problem in its full generality\\u201d\\nWe believe that time-continuous solutions that generalize to more unconstrained problem settings are very promising future extensions, but outside the scope of this work. We add revisions to the manuscript to reflect this.\\n\\n\\u201c3. Need more empirical justification with problems where their modification is indeed indispensable\\u201d\\nWe believe that concurrent knowledge models are indispensable in speed critical vision-based real robot grasping tasks. A metric such as picks per minute are an important task for both research and practical use cases, and depends on policy speed as well as grasp success. As the reviewer suggested previously, optimizing for policy speed during reward-shaping is one baseline approach, but our experiments show that even with reward-shaping alone we reach an upper bound bottleneck on picks per minute that we were only able to surpass with concurrent action models.\"}",
"{\"title\": \"Response to Blind Review #2\", \"comment\": \"We thank the reviewer for their positive review and an accurate summary of the paper.\"}",
"{\"title\": \"Response to Blind Review #1\", \"comment\": \"Thank you for a thorough review. Please see our responses to your comments below:\\n \\n\\\"1. The settings in sections 3.1 and 3.2 are not clear.\\\"\\nWe agree with the reviewer that the clarity of the setup and the method can be improved. We addressed the reviewer\\u2019s comments in the updated version of the paper. In particular, we: i) added the missing definitions (e.g. trajectory \\\\tau), ii) clarified the exact problem setup considered, iii) added missing superscripts to the notation and iv) simplified the notation by removing the distinction between bolded and unbolded symbols.\\n\\n\\\"2. The explanation of concurrent actions in continuous and discrete time is not clear.\\\"\\nThank you for this excellent suggestion. We added a section (i.e. Section 3.1) and illustrative figure (i.e. Figure 4) that hopefully clarify the main aspect of the paper. We also added the definition of the episode as suggested by the reviewer.\\n\\n\\\"3. The concurrent Bellman equation does not make much sense to me. \\\"\\nThank you for your suggestions. We updated the manuscript to provide the details about the exact algorithm that was used in the experiments, which hopefully clarifies how the introduced method fits into a bigger robot-learning framework. We also added an algorithm frame in the Appendix that should allow readers to fully understand the contribution of this paper. We also applied the changes to the concurrent Bellman operator that include taking the maximum over actions.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a first step in the direction of 'real-world relevant RL approaches' in the sense of considering environments that don't halt their execution until an agent has finished its optimal action computation and execution but actually just go on being an environment. For this, the notion of a concurrent action is introduced.\\n\\nThe paper focuses on value-based RL approaches. It introduces modifications to the classical MDP formulation such that concurrent actions can be handled. From a theoretical perspective the resulting Bellman operators (for both continuous and discrete time) remain contractions and thus maintain q-learning convergence guarantees. Qlearning models are adopted to support concurrent actions and the experiments demonstrate that the suggested enhancements are working well.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers the theoretically interesting and practically important problem of concurrent deep reinforcement learning (DRL), i.e., DRL in which the agent has to decide the next action while performing the previous one. This introduces several significant challenges, including delays/latency and interruption of an on-going action. To address this issue, this paper proposes to consider the continuous time formulation of the concurrent control problem, derive a continuous-time Bellman equation for the concurrent control scenarios, and then derive its discrete-time counterpart. Contraction properties are shown for both the continuous-time and discrete-time concurrent Bellman equations, and a value-based DRL algorithm based on the concurrent Bellman equations is proposed and tested on a few tasks.\\n\\nThe high level idea of this paper is very interesting and attractive, and in particular, the introduction of continuous-time reinforcement learning is novel. In addition, the numerical experiments do show that a consistently improved performance of the proposed approach on both synthetic and more real-world robotic control tasks. However, there are several significant issues about technical clarity or even correctness in this paper, which I elaborate below:\\n\\n1. The settings in sections 3.1 and 3.2 are not clear. In particular, for 3.1, the author may want to specify the policy clearly, including whether it is stationary or non-stationary. And in addition, Q and V functions should either come with a \\\\pi superscript, indicating which policy they use, or a \\\\star superscript to indicate optimality. Section 3.2 does not make sense to me in general. It is not clear what the index i and the state value s_i(t) are. And it is not clear why we need to differentiate between values of states/actions and the functions themselves. The trajectory \\\\tau is also not clearly defined. The authors need to make these much more clear, and should clearly state the main setting/model that the paper is considering (which seems to be the concurrent discrete-time case, but also not very clear to me).\\n\\n2. The explanation of concurrent actions in continuous and discrete time is not clear. In particular, Section 3.3 only speaks of the settings on a high level, and only brief explanations are given in Figure 1b and the beginning of Section 3.4. Since the concurrent action setting is the central theme in this paper, I think a much more formal explanation should be given about how the system proceeds, instead of just a graphical example illustration. In addition, the concurrent actions in discrete-time setting part is not even clearly mentioned (but is stated in the title of Section 3.3 and discussed subsequently). The authors may also want to explain clearly what the episode is at the beginning of Section 3.4.\\n\\n3. The concurrent Bellman equation does not make much sense to me. In particular, I think to define the optimal Q function, the bellman equation (7) and (9) should have a \\\\max operator included. Otherwise, it is only for policy evaluation. 
Since the authors didn't clearly specify what the exact algorithm they are using (apart from a brief explanation by words in Section 3.5), I'm not sure whether I'm missing anything or not. But the authors should definitely include a algorithm frame at least in the appendix, to clearly specify which of and how the concurrent Bellman equations are applied in their algorithm.\\n\\nSo in sum, although I think the paper is interesting and novel on the high level, I don't think it's ready for publishing.\\n\\n############ post rebuttal comment ############\\nAfter reading the authors' rebuttal and the modified version of the paper, I think most of my concerns have been correctly addressed. So I decide to improve my score to weak accept.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper tackles the problem of making decisions for the next action while still engaged in doing the previous actions. Such a delay could either be part of the design (like a robot deciding the next action before its actors and motors have come to full rest after the current action) or an artefact of the delays inherent in the system (i.e. latency induced by calculations or latency of sensors). The paper shows how to model such delays within the Q-learning framework, show that their modelling preserves desirable contraction property of the Bellman update operator, and put their model into practice by an extensive set of experiments: learning policies for several simulated and a real-world setting.\\n\\nThe authors claim that that addition of this \\\"delay\\\" does not hinder the performance much of the RL method is given sufficient \\\"context\\\" about the delay, i.e., given extra features as input in order to learn to compensate for it. The writing of the paper is lucid and sufficient background is provided to make the paper self-sufficient in its explanations.\\n\\nHowever, there are some reasons which do not allow me to fully support the paper's acceptance.\\n\\nThe changes made to the basic Q-learning setup, albeit novel and with desirable properties, in my opinion, are (i) theoretically relatively straight forward, (ii) are not expressive enough to capture the problem in its full generality (explained later), and (iii) need more empirical justification with problems where their modification is indeed indispensable. The authors touch on several different research areas cursorily (viz. continuous reinforcement learning, Bellman contractions, feature engineering) while providing grounds for their idea, but in the end return to the familiar domain of discrete Q-learning with semi-hand-crafted (though theoretically motivated) features where the latency of actions can take a set of fixed values and the state is sampled at fixed intervals. \\n\\nIf the actions are continuous, then could method from Doya (2000) directly be used to solve these problems? Can the value-based models which he describes be augmented and extensions developed which build on Lemma 3.1 instead of the well-trodden ground of Lemma 3.2? Especially, if one of the objectives which the authors claim their policies are better is \\\"policy duration\\\", then the absence of purely continuous policies is particularly egregious. Further, reducing the policy duration seems like an independent objective which perhaps can be used for reward shaping for the traditional policy methods, which will also lead to different baselines.\\n\\nThe authors explicitly say that their method focuses on \\\"optimizing for a specific latency regime as opposed to being robust to all of them;\\\" and that they explicitly avoid learning forward models by including additional features. However, the advantages of placing such restrictions on the design space are unclear at best. Would it be the case that the high-dimensional methods will fail in this setting? Are there theoretical advantages to working on limiting the attention to known latency regimes? 
I suspect that the authors have concrete reasons for making these design decisions, but these do not come across in the paper in the writing, or by means of additional baselines.\\n\\nAs an example of a different approach towards the problem, which the authors overlook in their related work section, is that of learning with spiking neurons and point processes. These areas of research have also been interested in problems of the \\\"thinking while moving\\\" nature: that of reinforcement learning in the context of neurons where the neurons act by means of spikes in response to the environment and other \\\"spikes\\\" [1, 2]. More recently, with point processes, methods have been developed to attain truly asynchronous action and state updates [3, 4]. A differently motivated work which ends up dealing with similar problems is in the direction of adaptive skip intervals [5], where the network also chooses the \\\"latency\\\" in the discrete sense. Adding such related work would help better contextualize this paper.\\n\\nSome other ways the authors can improve the paper are (in no particular order):\\n\\n - The description of the Vector-to-go is insufficient; some concrete examples will help.\\n - The results of the simulated experiments are given in the form of distributions and it is very difficult to discern the effect of individual features in Figure 1. Additionally, due to missing error bars, or other measures of uncertainty, the claim that the performance of models with and without the delayed-actions is comparable to the blocking setting seems tenuous at best, just looking at the rewards.\\n - In particular, for the real experiments, we need more details about the experiment runs to determine why the performance of the policies in the real world is so vastly different. Could the authors describe why the gap can be completely covered through simulations but not in the real world?\\n\\n[1]: Vasilaki, Eleni, et al. \\\"Spike-based reinforcement learning in continuous state and action space: when policy gradient methods fail.\\\" PLoS computational biology 5.12 (2009): e1000586.\\n[2]: Fr\\u00e9maux, Nicolas, Henning Sprekeler, and Wulfram Gerstner. \\\"Reinforcement learning using a continuous time actor-critic framework with spiking neurons.\\\" PLoS computational biology 9.4 (2013): e1003024.\\n[3]: Upadhyay, Utkarsh, Abir De, and Manuel Gomez Rodriguez. \\\"Deep reinforcement learning of marked temporal point processes.\\\" Advances in Neural Information Processing Systems. 2018.\\n[4]: Li, Shuang, et al. \\\"Learning temporal point processes via reinforcement learning.\\\" Advances in Neural Information Processing Systems. 2018.\\n[5]: Neitz, Alexander, et al. \\\"Adaptive skip intervals: Temporal abstraction for recurrent dynamical models.\\\" Advances in Neural Information Processing Systems. 2018.\"}"
]
} |
BJxeHyrKPB | RATE-DISTORTION OPTIMIZATION GUIDED AUTOENCODER FOR GENERATIVE APPROACH | [
"Keizo Kato",
"Jing Zhou",
"Akira Nakagawa"
] | In the generative model approach of machine learning, it is essential to acquire an accurate probabilistic model and to compress the dimension of the data for easy treatment. However, in conventional deep-autoencoder-based generative models such as VAE, the probability of the real space cannot be obtained correctly from that of the latent space, because the scaling between the two spaces is not controlled. This has also been an obstacle to quantifying the impact of the variation of latent variables on data. In this paper, we propose a method to learn a parametric probability distribution and an autoencoder simultaneously, based on Rate-Distortion Optimization, to support scaling control. It is proved theoretically and experimentally that (i) the probability distribution of the latent space obtained by this model is proportional to the probability distribution of the real space, because the Jacobian between the two spaces is constant; (ii) our model behaves as non-linear PCA, which makes it possible to evaluate the influence of latent variables on data. Furthermore, to verify its usefulness in a practical application, we evaluate its performance in unsupervised anomaly detection and outperform current state-of-the-art methods. | [
"Autoencoder",
"Rate-distortion optimization",
"Generative model",
"Unsupervised learning",
"Jacobian"
] | Reject | https://openreview.net/pdf?id=BJxeHyrKPB | https://openreview.net/forum?id=BJxeHyrKPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"XgeyUMBMJm",
"rJeBEtbnsB",
"BklKgdWnsS",
"BkeYHLbnsr",
"SylLp4W3jB",
"rkg7tMbnsS",
"r1lHzzZ3iH",
"SylJiIjpqr",
"HJewScvRYH",
"r1xu3tgSYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729611,
1573816620737,
1573816305024,
1573815873274,
1573815486337,
1573814906574,
1573814797119,
1572873879129,
1571875391468,
1571256752302
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1678/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1678/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1678/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1678/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1678/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1678/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1678/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1678/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1678/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Agreement by the reviewers: although the idea is good, the paper is very hard to read and not accurately enough formulated to merit publication.\\n\\nThis can be repaired, and the authors should try again after a thorough revision and rewrite.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to the comments (part 1)\", \"comment\": \"Thank you for your time and valuable comments. Please understand that we revised to some extent in order to respectfully deal with your comments and to make our claim more persuasive.\\n\\nFirst of all, we added an explicit discussion about our motivation, idea, and connection among prior works which we overlooked before. Figures 1 and 2 give an overview. \\n\\n>>Why the introduced method is better than VAE as a generative model for capturing the latest representation is not explained well. It is not also used as a baseline in most of the experiments.\\n\\nWe added section 3 to make the relation and difference between VAE and our method much explicit.\\nBy this section, we believe the following two points stated repeatedly in the entire text became easy to follow. These are difference not only from VAE but also from other autoencoders without Jacobian control. Since you seem not to care about the second point very much, we would appreciate your attention to it. We insist the second point promotes the interpretation of latent variables which has been discussed as one of the most important problems of deep learning. \\n\\n(i) the probability distribution of the latent space obtained by this model is proportional to the\\nprobability distribution of the real space because Jacobian between two spaces is constant;\\n(ii) our model behaves as non-linear PCA, where the energy of acquired latent space is concentrated on several principal components and the influence of each component can be evaluated quantitatively\\n\\nThe experiment in section 5.1(in the revised version) demonstrates the first feature. Furthermore, an experiment in section 5.2 shows the validity of the practical task. The second feature is examined in the experiment in section 5.3. Figure. 4 (Fig. 6 in revised ver.) demonstrates the second property. \\n\\nDAGMM is known as a model to estimate the density better than (beta-)VAE and suitable for baseline though, we added the result of the experiment in section 5.1 (toy data task) in Appendix G. As it is mentioned before, in VAE, Jacobian is not constant and Px(x) and Pz_\\\\psi(z) have no correlation. We can also move this to the main text if it is necessary.\\n\\nIn the anomaly detection task, we added the score of VAE cited from Liao et al. (2018). Actually GMVAE is also a VAE based method. Unfortunately, we couldn\\u2019t reproduce the result by ourselves though, our model performs significantly better compared with that. Since GMVAE does not care about Jacobian and maximizing ELBO as well, it essentially includes disorder in the density estimation.\\n\\n>>The motivation for having the third term in Equation (4) needs to be explained. Also, what is h() in the second term? The authors only describe briefly both terms together after they used it here but failed to describe what each term is. Why there is an h for the second term but not for the third term. h() becomes more clear much later in the paper but when it is used the first time, it not defined.\\n\\nWe added the explanation the third term and h() when it is used the first time. The second and third terms are actually decomposition of D(x, x_\\\\breve) as shown in Rolinek et al., 2019. By this decomposition, we can independently control the reconstruction loss and scaling Jacobian and lead to better performance.\\n\\n>> I believe A in Equation (5) should be also positive-definite.\\nYes, it is. We added the description.\\n\\n>>What is L(x) in Equation (8). 
It needs to be defined.\\nWe added the definition of L(x).\"}",
"{\"title\": \"Response to the comments (part 2)\", \"comment\": \"Regarding experiments:\\n>>1- It is useful to also plot the original data in space s to see how the results in Figure 2 make sense. \\nThanks for your point. We added the plot of the original data source.\\n\\n>>2- Figure 3 is not clear. \\nFigure 3 (Figure 5 in revised version) depicts plot of Px(x) (x-axis) and Pz_\\\\psi (z) (y-axis). A linear plot means that the probability density of Px(x) is tidily mapped into the latent space. Thanks to this property, Px(x) can be estimated by Pz_\\\\psi (z) in our model. It is obvious that DAGMM does not have this trait. This is also quantitatively evaluated. The correlation coefficients are 0.691 (baseline) vs 0.997 or more (ours).\\nFor the easy following, we revised the caption and description in the main text.\\n\\n>>3- In the Anomaly detection experiments, the authors make two assumptions that usually do not exist in real-worlds: (1) they assume that they have access to a training set that only contains normal cases. (2) They assume that they know the correct rate of anomaly. I think both these assumptions are very restrictive and unreal. While these assumptions are used for all the comparing methods, it is not obvious how different algorithms behave in a real scenario. \\n\\nI understand that there is an unrealistic assumption, but this setting is established and widely admitted in this anomaly detection task.\\nRegardless of this assumption is realistic or not, density estimation remains a critical issue, and better estimation provides better performance. Although investigating the performance in a truly real scenario might be future work, we argue this point is not a defect to show the validity of our method. \\n\\n>>4- Figure 4 and what it represents is not clear.\\nThis is caused because we could not tell you the purpose of this experiment sufficiently. This is an experiment to show an important property of our model: our model behaves as PCA, where the energy of acquired latent space is concentrated on several principal components and the influence of each component can be evaluated quantitatively\\nThe two on the left of Fig. 4 (Fig. 6 in the revised version) is the variance of the latent space. Since our model works as PCA, the variance is concentrated in a few dimensions. Two on the right shows that the influence of minute displacement of each z to the real image is the almost constant in our model while it is varied in beta-VAE. Thus, we can evaluate the importance of latent variables by variance like PCA. We added the caption and enhanced the purpose of the experiment.\\n\\n>>Regarding writing issue\\nThank you for pointing. We fixed them.\"}",
"{\"title\": \"Response to the comments (part 1)\", \"comment\": \"Thank you for your time and valuable comments.\\nFrom your comments, we found that our work would be closely related to a practical method of isometric embedding of Riemannian manifold. \\nBecause our background is not only deep autoencoders but also image compression, we have overlooked that there is a gap between image compression and VAE. \\nPlease understand that we revised to some extent in order to respectfully deal with your comments and to make our claim more persuasive.\\n\\n>>Regarding our motivation and its connection with prior works \\nWe added an explicit discussion about our motivation, idea, and connection among prior works. Figures 1 and 2 give an overview. Please find the following discussion are added.\\nThe term \\u201cRate-distortion optimization\\u201d or \\u201cRDO\\u201d is a method to improve quality in image compression with orthonormal transform coding.\", \"https\": \"//en.wikipedia.org/wiki/Rate-distortion_optimization\\nPrior works of deep image compression such as Balle et al., 2018 also used RDO. We added an overview of RDO and our motivation. The derivation of our idea is based on the analogy of orthonormal transform coding.\\nWe also added the analogy and difference between VAE and our idea. The summary is as follows. \\n\\nAccording to RDO theory, the condition of optimization in transform coding is that: (i) transform data deterministically using orthonormal basis (orthogonal is not enough) such as DCT, KLT, and so on (ii) quantize by uniform quantizer for all channels which cause uniform noise (iii) assign the optimum entropy code.\\nOur intuition is that if the equivalent noise is added to latent variables z and rate-distortion is optimized, z should have orthonormality. Consequently, Jacobian becomes constant automatically.\\nObeying this flow, z is obtained deterministically and the entropy is used rather than KL divergence between an encoder and a prior.\\n\\nRate-distortion optimization condition for VAE and our model is contrasted as follows. In VAE, because PDF is fixed as prior, noise should be variable and scaling between data and latent spaces is also variable, meaning Jacobian is inconstant. In ours, because noise is uniform, PDF should be variable(parametric) and scaling is constant. Thus, in our model, there is not fixed prior.\\n\\nAbout the loss function, The second and third terms in eq (5) are an approximate decomposition of D(x, x_\\\\breve) as shown in Rolinek et al., 2019. By this decomposition, we can independently control the reconstruction loss and scaling Jacobian and lead to better performance.\\n\\n>> Relation to [1] http://proceedings.mlr.press/v84/chen18e.html\\nThank you for introducing an interesting paper to us. Thanks to your comment, we found that our work, especially Eqs. (10) and (11), would be related to an isometric embedding of Riemannian manifold where A(x) is a Riemannian tensor. In this paper, the authors discussed the distance between two points is the shortest path on a Riemannian manifold induced by the transformation. Then, the impact on the domain data caused by the variance of latent variables is measured. This is related to the discussion of Fig. 6 (c) and (d). While VAE needs to find a winding road in the latent space that corresponds to the shortest path, in our model, a linear path in the latent space expected to be connected to that. 
While we did not include this discussion this time because of page limitation though, this will be our future work.\\n\\n\\n>> Do your parameters need to be in the optimum for your analysis to hold true?\\nStrictly speaking, yes. Although, as experiment result showed, when parameters are optimized decently, it works almost as in theory even though there remains the left behind margin.\"}",
"{\"title\": \"Response to the comments (part 2)\", \"comment\": \">>Regarding the experimental result.\\nFirst of all, actually, our model does not increase model complexity even though you concerned about this point. \\nWhen we compared with our model and DAGMM (baseline model), the number of network parameters is completely the same. We added this point explicitly.\\nNevertheless, our model provides a significant performance boost in the anomaly detection task.\\n\\nExperiment with toy data is executed to confirm our model\\u2019s property though, it also supports the result of anomaly detection. In DAGMM, the relation PDF of x and z is unclear. On the other hand, in our model, the PDFs of x and z are close to proportional. That means, our model can capture the probability of real data methodically in the latent space. This fact should be very intuitive to explain the performance boost in the anomaly detection task in which PDF estimation is a critical issue. Other comparison methods also essentially lead the disorder in the density estimation like DAGMM because the Jacobian is not controlled.\\n\\nIn the analysis of the latent state in CelebA, we assume that since we could not tell you the purpose of the experiment enough, it was not convincing for you.\\nThis is an experiment to confirm that the latent variable in our model works as PCA components, and the influence of each component can be evaluated quantitatively as in theory while (beta-)VAE does not have this property. We revised this sections and captions for easy following.\\nThe two on the right of Fig. 6 in the revised version show the scaling between the latent and metric dependent data space. (c) shows the scaling in VAE is anisometric, and (d) shows the scaling in ours is isometric.\\nThe two on the left of Fig. 6 in the revised version is the variance of the latent space. Since the scaling of z in our model is isometric, the variance shows the importance of each latent variable like PCA. \\n\\nConsequently, we believe our experimental results demonstrate the validity of our method decently. \\n\\nPCA can simultaneously disentangle the data and estimate the importance of latent variables by variance. We believe this trait is very helpful to the interpretation of the latent variable of deep models. \\n\\n\\n>>minor issues\\nWe fixed the minor issues you pointed (we would appreciate if you could be indulgent of a bit long model acronym). \\nWe also promise to request a grammatical check by a native no later than the camera-ready version.\"}",
"{\"title\": \"Response to the comments (part 1)\", \"comment\": \"Thank you for your time and comments. Thanks to your comments, we could improve the paper a lot! We hope the revised version and these rebuttal comments will solve your confusion. Please understand that we revised to some extent in order to respectfully deal with your comments and to make our claim more persuasive.\\n\\n>>Regarding our motivation and connection to prior works.\\nBecause we have a background not only about deep autoencoder but also about image compression, we have overlooked a gap between image compression and VAE as you pointed out. To make our motivation and the difference from previous work clear, we added section 3. Figures 1 and 2 describe the overview. Please find the following points are described.\\n\\nOur method is based on rate-distortion optimization(RDO) of transform coding for image compression. RDO is a method to improve quality in image compression with orthonormal transform coding. \\n(https://en.wikipedia.org/wiki/Rate-distortion_optimization)\\nFor the readers who are not familiar with RDO, we added the overview of RDO, its connection to VAE, and our motivation to introduce RDO to autoencoder (not VAE). \\n\\nHere is a summary of the added section.\\nAccording to RDO theory, the condition of optimization in transform coding is that: (i) transform data using orthonormal basis (orthogonal is not enough) such as DCT, KLT, and so on (ii) quantize by uniform quantizer for all channels which cause uniform noise (iii) assign the optimum entropy code.\\nOur intuition is that if the equivalent noise is added to latent variables z and rate-distortion is optimized, z should have orthonormality. Consequently, Jacobian becomes constant automatically.\\nAccordingly, z is obtained deterministically. The reason to add equivalent noise is based on this idea. Since our model minimizes entropy of z, there is no prior for p(z).\\n\\nActually, RDO can be analogously discussed in VAE and there are works considering rate-distortion trade-off into VAE. But in the way to assume fixed distribution as prior like VAE, even if rate-distortion is optimized, orthonormality is not guaranteed, and Jacobian is not constant.\\n\\nAs we mentioned in section 2., Flow-based models take Jacobian of into account (we assume this model would be \\u2018elsewhere\\u2019). Although, in Flow method, the encoder and the decoder need to be a bijection, which means the dimension of the data space and the latent space is the same. On the other hand, our model can compress data into a lower dimension with a constant Jacobian. \\n\\n>> Regarding theory part\\n>>It seems that eq 14 is the final result\\nThe final cost function is eq. (5) and it is substantially the same as eq. (14) ((15) in revised version). This equation is related to rather the needlessness of ELBO than orthonormality. \\nTo avoid confusion, we split the section into the method part and theoretical part. Also, we enhanced the purpose of each equation.\\n\\n>>orthogonal argument\\nAlthough we gave full proof in Appendix A, in the main text, we rely on Rolinek et al., (2019) for the argument of orthogonality since it is already proved. 
We show the proof of constant Jacobian and orthonormality, combining the orthogonality.\\n\\n>>Optimizing eq 14 seems trivial since we can always match Pz and Pz_\\\\varphi easily with neural networks, or similarly the two x-distributions.\\n\\nYou seem to imagine matching the KL-divergence of Pz as prior and pz\\\\varphi or something like that. As it is mentioned, Jacobian J is not constant in the most previous methods. Thus, even if just Pz and Pz_\\\\varphi were matched, it does not mean estimating p(x), which is our goal, is achieved.\"}",
"{\"title\": \"Response to the comments (part 2)\", \"comment\": \">>Regarding experimental results\\n>>In the experiment 4.1. the proposed method seems to achieve matching densities, although the distributions are wrongly normalized. How does the density matching improve? All three methods seem to have equally good scatters.\\n\\nAs written in the result section, even though the baseline method (DAGMM) also captured good scatter, the density is not estimated adequately. Figure 5 (in revised version) depicts plot of Px(x) (x-axis) and Pz\\u03c8(z) (y-axis). It is obvious that we can see the proportionality between Px(x) and Pz\\u03c8(z), while we cannot see the tendency in the baseline. This is also quantitatively evaluated. The correlation coefficients are 0.691 (baseline) vs 0.997 or more (ours). (Originally we showed residual of linear regression though, for intuitive understanding, we replaced it by correlation coefficients.)\\nFor the easy following, we also updated the caption and description in the main text. \\n\\n>>The face experiment is unconvincing since the VAE spreads variance across all latent dimensions while RADO seems to compress them to just first 20 or so. If one would visualize the z_100 there would be no variance in RADO and possibly some variance in VAE. \\n\\nYes, what you said happens. This means that our model correctly works as PCA as in theory. The point is the variance of latent variables directly correlated \\nThis is the experiment to confirm that latent variables of our model work as PCA components. To make it further explicit, we added this statement at the beginning of the section.\\n\\nLet\\u2019s say if we want to find an important latent variable in terms of the influence on the metrics function, what should we do?\\nIn our model, we can find that latent variable easily, since the variance of z is directly related to the visual. In (beta-)VAE, as we described, variance and impact to visual are uncorrelated. Thus, for example, we need to visualize all latent variables to find the important ones. Moreover, even if you come up with you run PCA for the latent variables of VAE, you need to set the number of PCA components and may struggle to decide how many components are appropriate. In our model, it is automatically optimized in terms of minimizing entropy.\\nConsequently, the latent variable in our model is quantitatively understandable. We believe this character is helpful for the interpretation of latent variables and meta-prior of data.\\n\\n>>The paper also should compare their model to simple MNIST/VAE to highlight what problems are there in standard approaches (such as VAE), and how does the proposed method alleviate them.\\n\\nAn Experiment with MNIST could be worth-doing though, we demonstrated the above characteristic clearly with CelebA dataset.\\n\\nIn terms of the interpretation of latent variables, some of the standard approaches are visualizing or evaluate independencies of variables as in (Lopez et al., 2018; Chen et al., 2018; Kim & Mnih, 2018; Chen et al., 2016). They do not directly evaluate the importance of latent variables on the metric function(such as MSE or SSIM). In our method, this can be quantitatively measured like PCA. \\nNote that, we do not intend to claim this way of analysis is always better than previous ways. We argue that making use of PCA like analysis as an option and incorporating them will promote further interpretation of latent variables. 
\\n\\n>>For minor comments\\no The point of eq 5 is unclear, it seems unnecessary. It also does not contain h(), which is claimed after eq6\\nEquation 5 (6 in the revised version) is a condition on the function D(\u30fb,\u30fb). As long as D(\u30fb,\u30fb) can be approximated as in eq 5, our method can be applied. We added this explanation.\\n\\no The log pz(z) in eq 4 is not entropy\\nWe fixed it.\\n\\no eq 8 is unclear, is the dx a derivative, distance or change?\\nIt is a derivative. We added the notation.\\n\\no the t prefix notation is confusing, what does it mean?\\nThe t denotes the transpose of a matrix. We added the notation.\\n\\no what is the \\\\sim and line notation in eq 5\\n\\\\simeq denotes approximation.\\n\\no what are the products in eq9, are these inner products?\\nThey are ordinary multiplications. We removed the dots.\\n\\no in eq 13 pz, pxd or hat(pxd) have not been introduced or defined\\nhat(pxd) is defined as \u201clet hat(pxd) be the estimated probability of xd.\u201d We added definitions of pz and pxd as the true PDFs of z and xd.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary of the paper: The authors propose a latent variable model RaDOGAGA, a generative autoencoding model. The model is trained via a tradeoff between distortion (the reconstruction error) and the rate (the capacity of the latent space, measured by entropy). The paper provides an analysis of theoretical properties of their approach, and presents supporting experimental results.\\n\\nReview tl;dr: weak reject, for three main reasons:\\n(i) While the existing literature around VAEs, beta-VAEs, and Rate-Distortion theory is mentioned in the related work, the connections are not nearly discussed sufficiently. \\n(ii) On top of (i), the derivation of their loss function and architecture is not sufficiently motivated. This is in astonishing contrast to 1.5 pages of main text and 8 pages of (much appreciated!) analysis of properties.\\n(iii) Given the paper is clearly related to existing approaches in the literature, the experiments would require a much more careful comparison to existing models. It remains unclear why an interested user should favor your model over conceptually simpler generative models with fewer hyperparameters.\", \"detailed_review\": \"\", \"nota_bene\": \"This review is a late reassignment. While I reviewed the paper to the best of my ability, time constraints did not allow me to review parts of the paper in depth. I am open to reassess my review during the second stage.\", \"connection_to_prior_art\": \"As a probabilistic, neural autoencoding model, the connections to the family of VAE models are obvious. The loss function (eq. (4)) still looks very much like the ELBO, where the typical conditional log-likelihood was split into two distortion terms. How is this different from e.g. a beta-VAE? Particularly, what is the connection between the rate-distortion analysis of beta-VAE by Alemi et al. and yours? These things need to be discussed explicitly, with more than a sentence or two in the related work section.\\nA lesser, but still important omission in your discussion of prior work: The Jacobian of the generator has also been studied, even for the VAE, cf. e.g. [1]. I believe this deserves more attention in your assessment of prior art.\", \"motivation\": \"You use two distortion terms: actual sample vs. undistorted reconstruction. Why is that? What is the interpretation of the multipliers? How do I choose them? Why is a large part of your architecture (the pipeline from x to \\\\hat(x)) actually deterministic? Why are you using the entropy of the prior over the latents, rather than the KL divergence between encoder and a prior? I think an interested reader could learn much more from your paper if you discussed your model embedded in th related work rather than in isolation.\", \"theory\": \"Due to aforementioned time constraints, I was not able to review the extensive theoretical analysis in depth. Still, I would strongly recommend structuring the respective sections more clearly. Separate model and architecture description from the theoretical analysis; precisely formulate your claims. In particular, state your assumptions clearly. For instance, you assume \\\"that each function's parameter is rich enough to fit ideally\\\" (and similar e.g. in Appendix A). 
Does this only mean that the true distributions are part of the parametric family? What if this is not the case? Do your parameters need to be at the optimum for your analysis to hold true?\\n\\nGiven that the full 20-page manuscript spends 10 pages on theory, I think this contribution is not given appropriate space in the main text.\", \"experiments\": \"There are three experiments: a simple 3D proof of concept; anomaly detection; analysis of the latent state in CelebA. As mentioned in my review of the methods section, I believe the approach to be very similar to established models. None of the experiments provides convincing evidence why I should prefer the new, arguably more complex model.\\nFor instance, I would have much preferred that you investigate properties of your model against alternatives over the anomaly detection experiments, which did not further my understanding of the proposed model.\", \"summary\": \"The paper tackles an important problem, namely the lack of control over the latent embedding in autoencoding generative models. I believe the authors' contribution can be valuable, and I particularly appreciate the effort to investigate theoretical properties. As is, the case is not sufficiently convincing to be accepted, but I encourage the authors to improve the paper.\", \"minor_comments\": \"1. While I appreciate a pun, I would recommend renaming the model along with the acronym to a more concise name.\\n2. Please revise your notation and typesetting. Examples: x_1 instead of x1, f or f(\\\\cdot) instead of f(), \\\\log instead of log.\\n3. Introduce acronyms before using them (e.g. VAE, MSE, SSIM), even when they seem obvious to you.\\n4. Please carefully check the manuscript for typos, missing articles, missing spaces etc.\\n5. Your citations are inconsistent, in that they sometimes use first names, sometimes first name initials, and sometimes no first names.\\n6. To my knowledge, the term scale function does not have an obvious definition. I think you are simply referring to monotonically increasing functions. Please clarify!\\n7. Your figures should be understandable without too much context; they need more detailed captions.\\n\\n[1] http://proceedings.mlr.press/v84/chen18e.html\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper aims to obtain latent representation of data such that probability density for the real space can be calculated correctly from that in the latent space. The authors optimize a loss function that has components related to parametric probabilistic distribution and auto encoder simultaneously. While this might be an important problem (I am not sure), the paper is not written and organized well which makes a through evaluation very difficult. I provide below some of the problems with this the paper:\\n\\nWhy the introduced method is better than VAE as a generative model for capturing the latest representation is not explained well. It is not also used as a baseline in most of the experiments.\\n\\nThe motivation for having the third term in Equation (4) needs to be explained. Also what is h() in the second term. The authors only describe briefly both terms together after they used it here but failed to describe what each term is. Why there is an h for the second term but not for the third term. h() becomes more clear much later in the paper but when it is used the first time, it not defined. \\n\\nI believe A in Equation (5) should be also positive-definite. \\n\\nWhat is L(x) in Equation (8). It needs to be defined.\", \"experiments\": \"1-\\tIt is useful to also plot the original data in space s to see how the results in Figure 2 make sense. \\n2-\\tFigure 3 is not clear.\\n3-\\tIn the Anomaly detection experiments, the authors make two assumptions that usually do not exist in real-worlds: (1) they assume that they have access to training set that only contains normal cases. (2) They assume that they know the correct rate of anomaly. I think both these assumptions are very restrictive and unreal. While these assumptions are used for all the comparing methods, it is not obvious how different algorithms behave in real scenario. \\n4-\\tFigure 4 and what it represents is not clear.\", \"writing_problems\": \"1-\\tIn the text of paragraph before Figure 1, Eq. (5) in \\u201cin the second term of Eq. (5)\\u201d is a typo and should be Eq. (4). \\n2-\\tIn the paragraph before Figure 1, the following sentence is not complete: \\u201cThen, averaging Eq. (4) according to distribution, x~P_x(x) and epsilon~ P(epsilon).\\u201d\\n3-\\tSection 4.2.1: \\u201cthere is a difference is PDF \\u2192 \\u201cthere is a difference in PDF\\u201d\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper propose a noisy autoencoder that considers the jacobian between data and latent spaces to match the corresponding densities. This idea has already been proposed elsewhere, and here it is applied to autoencoders. Overall I had hard time understanding the paper, the motivation, the main contribution or the claim, the model definition and the jacobian method. The paper is poorly written, with lots of issues in math notation and poor motivation and explication of what the sections are introducing, and what parts of the presentation is novel and what is already known. Lots of the math machinery is too vague to follow.\\n\\nThe distribution p(z) is unclear, and whether z is random variable or not. It seems that \\u201cz\\\" is a non-random variable, and then adding noise \\\\eps makes it stochastic. However, then p(z) without \\\\eps does not make any sense since z is not random. It seems that p(z) is maybe a prior distribution instead (or maybe the variational posterior?), but then adding \\\\eps noise to an already stochastic variable is strange. Overall I have hard time understanding the motivation of the two discrepancies in eq 4, what is the point of adding more noise to \\u201cz\\u201d? This seems some kind of noisy or perhaps robust AE variant, but the paper does not explicate this. I have hard time following the eqs 8-15. I am not convinced of the orthogonality argument, and I fail to see what this section tries to show or demonstrate. It seems that eq 14 is the final result, but its difficult to follow due to most terms in eq 14 being undefined. Optimizing eq 14 seems trivial since we can always match pz and pz_\\\\varphi easily with neural networks, or similarly the two x-distributions. \\n\\nIn the experiment 4.1. the proposed method seems to achieve matching densities, although the distributions are wrongly normalized. How does the density matching improve? All three methods seem to have equally good scatters. The benchmarks on table 1 show clear improvement with the method. The face experiment is unconvincing since the VAE spreads variance across all latent dimensions while RADO seems to compress them to just first 20 or so. If one would visualise the z_100 there would be no variance in RADO and possibly some variance in VAE. The paper also should compare their model to simple MNIST/VAE to highlight what problems are there in standard approaches (such as VAE), and how does the proposed method alleviate them.\\n\\nOverall the paper is poorly presented and difficult to follow. Despite this the method does seem to work remarkably well, and the Jacobian idea is clearly very promising. Nevertheless in its current form the paper is badly premature for ICLR, and needs a lot more work and polish to be made understandable for wider ML audience.\\n\\nMinor comments\\no Px(x), x1, x2 are probably missing subscripts\\no The point of eq 5 is unclear, it seems unnecessary. 
It also does not contain h(), which is claimed after eq6\\no The log pz(z) in eq 4 is not entropy\\no eq 8 is unclear, is the dx a derivative, distance or change?\\no the $^t$ prefix notation is confusing, what does it mean?\\no what is the \\\\sim and line notation in eq 5?\\no what are the products in eq9, are these inner products?\\no in eq 13 pz, pxd or hat(pxd) have not been introduced or defined\"}"
]
} |
SygkSkSFDB | On the expected running time of nonconvex optimization with early stopping | [
"Thomas Flynn",
"Kwang Min Yu",
"Abid Malik",
"Shinjae Yoo",
"Nicholas D'Imperio"
] | This work examines the convergence of stochastic gradient algorithms that use early stopping based on a validation function, wherein optimization ends when the magnitude of a validation function gradient drops below a threshold. We derive conditions that guarantee this stopping rule is well-defined and analyze the expected number of iterations and gradient evaluations needed to meet this criterion. The guarantee accounts for the distance between the training and validation sets, measured with the Wasserstein distance. We develop the approach for stochastic gradient descent (SGD), allowing for biased update directions subject to a Lyapunov condition. We apply the approach to obtain new bounds on the expected running time of several algorithms, including Decentralized SGD (DSGD), a variant of decentralized SGD, known as \textit{Stacked SGD}, and the stochastic variance reduced gradient (SVRG) algorithm. Finally, we consider the generalization properties of the iterate returned by early stopping. | [
"non-convex",
"stopping times",
"statistics",
"gradient descent",
"early stopping"
] | Reject | https://openreview.net/pdf?id=SygkSkSFDB | https://openreview.net/forum?id=SygkSkSFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ExJmOU7UHz",
"B1ghnfO49B",
"rJgTmIO6Kr",
"r1gFw9KcFH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729583,
1572270771888,
1571812901102,
1571621472876
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1677/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1677/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1677/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors made no response to reviewers. Based on current reviews, the paper is suggested a rejection as majority.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an optimization approach in which the optimizer computes the gradient on a given function yet uses another to decide a stopping time. Conceptually those functions are empirical errors on train and validation folds in the most common setting, although the authors seem to use other settings later in the paper to consider decentralized optimization schemes. The authors introduce a bound on the Wasserstein distance between the train and validation distributions in their analysis which plays a crucial role in their results. The authors use these results to motivate variants of existing optimization algorithms.\\n\\nThe paper is interesting but its message is a bit blurred to me. I had trouble pinpointing one main contribution, since the paper is split as theory (with some results) and a collection of slightly modified SGD type algorithms that are now impacted by this \\\"gradient somewhere / monitor progress elsewhere\\\". The theoretical results are worth reading and the idea appealing. \\n\\nThe paper also requires a *lot* of polishing. It has been sloppily written. For these reasons I am inclined to reject the paper and encourage the authors to improve their draft with a better formulation.\", \"minor_comments\": [\"I have found the \\\"main contributions\\\" paragraph to be poorly phrased. Since the authors only monitor the validation loss and not the training loss, I do not think this falls into the \\\"standard\\\" definition of early stopping.\", \"please use citet and citep consistently.\", \"please add labels to figures and format them properly (e.g. SSGD on p.6)\", \"unsure about the format used to display f_T(x_t) in p.6\", \"Villani 2008 has over 900 pages. any page in particular?\", \"Assumption 2.3 requires significantly more work... All bounds scale as G^2 (e.g. eq.9, 10,11), therefore an idea of what G's impact on the analysis sounds crucial. In Example 2.4 the authors start working out an example, but wouldn't it be more interesting to carry that out completely, e.g. for the KL? What kind of bound would that result in?\", \"I find it disturbing that important comments on some of the crucial quantities (such as descent direction Eq.3) are left out of the algorithmic box... This defeats the purpose of having an algorithmic box.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors consider stochastic optimization in the setting where a validation function is used to guide the termination of the algorithm. In more details, the algorithm terminates if the gradient of the validation function at an iterate is smaller than a threshold. In this framework, the authors consider several variants of SGD, including distributed variant and SVRG, for each of which the authors study the expected number of iterations for a prescribed accuracy under an assumption between the training and validation set.\\n\\nWhile the use of a validation function is useful for early stopping, it introduces additional cost.\\n\\nWhile bounds on the expected number of iterations are derived for several variants of SGD, it seems that most arguments are adapted from the existing analysis to take into account the validation function.\\n\\nIn Corollary 3.4 and Corollary 3.5, the bound is an increasing function of m. This suggests that the best choice would be m=1. However, in this case, one needs to calculate the gradient of the validation function at each iteration, which may wastes a lot of computation.\\n\\nThe authors consider constant step sizes. In practice, step sizes are often decreasing along the optimization. Can the analysis be extended to cover the case with decreasing step sizes?\\n\\nIn eq (30), there is a missing factor of 2.\\n\\nThere is a required $\\\\epsilon>G62d_1(\\\\mu_V,\\\\mu_T)^2$ in the results. Therefore, to achieve a high accuracy we need $d_1(\\\\mu_T,\\\\mu_T)$ to be small. How many numbers of sample size to make $d_1(\\\\mu_V,\\\\mu_T)$ small? This has an influence on the computational cost.\\n\\n\\n----------------------\", \"after_rebuttal\": \"The authors do not respond. I would like to keep my original score.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper studies the problem of the number of first-order-oracle calls for the SGD type of algorithms to find a stationary point of the objective function. The main results in the paper are built upon a new, general framework to analyze the SGD type of algorithms.\", \"the_main_framework_can_be_summarized_as_follows\": \"At each iteration, the algorithm receives h_t, a (potentially biased) estimator of the gradient at a given point x_t, and performs a simple update x_{t + 1} = x_t - \\\\eta * h_t. The framework says that as long as the norm (V_t) of \\\\Delta_t = h_t - v_t (where v_t is an unbiased estimator of the true gradient with bounded variance) satisfies a particular Lyapunov-type inequality, then the algorithm can find an epsilon-stationary point as long as epsilon is not too small.\\n\\n\\nThe analysis of the framework is quite standard, one only needs to write the decrement in function value at each iteration into the following three terms: the norm of the true gradient of the function, \\\\delta_t: the difference between v_t and the true gradient (so E[\\\\delta_t] = 0) and \\\\Delta_t: the difference between the received gradient h_t and v_t.\\n\\n\\nThe authors showed some application of this framework in Stacked SGD and decentralized SGD. The main intuitions of these applications are (1). \\\\Delta_t comes from the synchronization difference of the nodes when computing the gradient. (2). The shrinking of V_t is due to the (better) synchronization at each iteration. (3). The increment of V_t is due to the gradient update. \\n\\nOverall, I find the general framework quite interesting and potentially useful for future research and could be used as a guide for choosing the proper algorithm in distributed computation. The bounds in this paper are also in principle tight. The only question I have about this result is the dependency of m (the number of iterations between each evaluation of the gradient norm of the underlying function). (1). How can this (the evaluation of the gradient norm of the underlying function)) be done in a decentralized environment? What is the computation overhead? (For example in DSGD, how can we compute \\\\bar{x}_t?) (2). It seems that the computation cost (number of IFO) scales quadratically with respect to m. What is the intuition for this scaling? It appears to me that the scaling should be linear or better (the worst case is that within the \\\"m\\\" iterations, only one iteration has gradient >= epsilon). The authors should elaborate more on this point.\"}"
]
} |
ryl1r1BYDS | Multiagent Reinforcement Learning in Games with an Iterated Dominance Solution | [
"Yoram Bachrach",
"Tor Lattimore",
"Marta Garnelo",
"Julien Perolat",
"David Balduzzi",
"Thomas Anthony",
"Satinder Singh",
"Thore Graepel"
] | Multiagent reinforcement learning (MARL) attempts to optimize policies of intelligent agents interacting in the same environment. However, it may fail to converge to a Nash equilibrium in some games. We study independent MARL under the more demanding solution concept of iterated elimination of strictly dominated strategies. In dominance solvable games, if players iteratively eliminate strictly dominated strategies until no further strategies can be eliminated, we obtain a single strategy profile. We show that convergence to the iterated dominance solution is guaranteed for several reinforcement learning algorithms (for multiple independent learners). We illustrate an application of our results by studying mechanism design for principal-agent problems, where a principal wishes to incentivize agents to exert costly effort in a joint project when it can only observe whether the project succeeded, but not whether agents actually exerted effort. We show that MARL converges to the desired outcome if the rewards are designed so that exerting effort is the iterated dominance solution, but fails if it is merely a Nash equilibrium. | [
"multiagent",
"reinforcement learning",
"iterated dominance",
"mechanism design",
"Nash equilibrium"
] | Reject | https://openreview.net/pdf?id=ryl1r1BYDS | https://openreview.net/forum?id=ryl1r1BYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"pvMiT0yc4",
"BkloUdRjjS",
"S1gushOisH",
"H1x1FlvssB",
"SygfZk8oiH",
"r1emt9Q5jr",
"SyxVGcXqiH",
"ByeQsYX9iS",
"rylLgtQ5oS",
"H1luZksD9r",
"HJeqKrkBcS",
"H1x2QH4-cB",
"SyxMP-S9tB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729554,
1573804114617,
1573780639921,
1573773430957,
1573768953993,
1573694074855,
1573693963884,
1573693851095,
1573693678052,
1572478719596,
1572300161877,
1572058404170,
1571602778422
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1675/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1675/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1675/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1675/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1675/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1675/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1675/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1675/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1675/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1675/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1675/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1675/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proofs that reinforcement learning (using two different algorithms) converge to iterative dominance solutions for a class of multi-player games (dominance solvable games).\\n\\nThere was a lively discussion around the paper. However, two of the reviewers remain unconvinced of the novelty of the approach, pointing to [1] and [2], with [1] only pertaining to supermodular games. The exact contribution over such existing results is currently not addressed in the manuscript. There were also concerns about the scaling and applicability of the results, as dominance solvable games are limited. \\n\\n[1] http://www.parisschoolofeconomics.eu/docs/guesnerie-roger/milgromroberts90.pdf\\n[2] Friedman, James W., and Claudio Mezzetti. \\\"Learning in games by random sampling.\\\" Journal of Economic Theory 98.1 (2001): 55-84.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Theorem 8 relates only to supermodular games.\", \"comment\": \"Milgrom and Roberts's Theorem 8 statement says that:\\n\\\"Let {x(t)} be an adaptive dynamic process and let x = inf(S) andX = sup (S). Then for every *supermodular* game T...\\\"\\ni.e. the theorem relates only for supermodular games. \\n\\nGiven this theorem and a *supermodular game*, one could indeed run through the iterated elimination sequence. However, the theorem only applies for supermodular games (which is again a restricted class of game).\\n\\nYou've asked where specifically in the proof of their Theorem 8 Milgrom and Roberts use the fact that the game is supermodular. Their proof of theorem 8 applies their Theorem 5 (or rather, their Lemma 1 which is the key element of Theorem 5). In the proof of Theorem 8, in the last transition (top of page 1270), the authors explicitly state the last transition follows from Lemma 1.\", \"their_lemma_1_again_relates_only_to_supermodular_games\": \"note that the Lemma is proved using Theorem 1 and Theorem 2, which require a supermodular function f and a lattice (the definition of a supermodular game requires having these). Note that x \\\\wedge y and x \\\\vee y in the definitions of the paper relate to the supermum and infimum of a lattice. Their Lemma 1 leverages the fact that the smallest and largest best response are defined, which requires invoking their Theorem 1 and their Theorem 2, which again relate only to *supermodular* games (the definition of a supermodular game requires having such a lattice structure and supermodular function).\\n\\nIn short, Theorem 8 applies only to supermodular games and not general games - the condition A6 is used along their Lemma 1 which relates to supermodular games. Indeed, for the *restricted class* of supermodular games Milgrom and Roberts results are sufficient to show that many learning dynamics converge to the iterated elimination sequence. However, our results hold for *general games* that are dominance solvable, which is a much larger class of games (though our proof is only for specific learning dynamics of REINFORCE and IW-MCPI).\"}",
"{\"title\": \"Response\", \"comment\": \"Can you explain why the Proof of Thm 8 doesn't apply to *all* games with an iterated dominance solution? It's a simple induction that says that given the weak condition (A6) the agents will eventually be restricted to playing only best-responses to the initial strategies B(x), and then best responses to those B^2(x), and so on for any B^k.\\n\\nIn iterated dominance solvable games, B^k(x) is the Nash as k --> \\\\infty, right?\"}",
"{\"title\": \"This additional paper relates only to supermodular games, not any arbitrary game. Our results are for general games.\", \"comment\": \"Thanks for reading our response and further discussion!\\n\\nThe paper you cite focuses on supermodular games, which is a restricted class of games. \\n\\nAs the authors themselves note (in the beginning of section 3, page 1268):\\n\\n\\\"Recently, Fudenberg and Kreps (1988) have investigated limiting behavior in a class of learning models for general extensive form games. ...\\nThey conclude that learning may, even in the long-run, yield a larger set of strategies than is identified by Nash equilibrium.\\nShapley and Fudenberg-Kreps establish the rather negative conclusion that Nash equilibrium play is not the only possible outcome of learning in general games. For *supermodular* games, however, sharper and more positive results are possible...\\\"\\n\\nIn other words, their additional results relate only to this class of games. Supermodular games are games where the strategies of each player have a lattice structure such that the incentives of one agent to choose a higher strategy (in their lattice) increases as other players switch to higher strategies (on their lattices). See more details here:\", \"https\": \"//ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-254-game-theory-with-engineering-applications-spring-2010/lecture-notes/MIT6_254S10_lec07.pdf\\n\\nSupermodular games are an interesting class as they are dominance solvable if and only if they have a unique Nash equilibrium (this is clearly not the case for general games, e.g. Rock-Paper-Scissors has a unique Nash but is clearly not dominance solvable). However, this is a *restricted* class of games (see discussion in the bottom of page 2 here: https://pdfs.semanticscholar.org/e889/fcddbe6bb87ee9a0de6a93ef25fa17f366ee.pdf )\\n\\nOur results hold for *general games* that are dominance solvable, but only to the specific learning rules we've examined (REINFORCE, IW-MCPI). In contrast, the results you discuss hold for a much wider class of learning rules, but only for the restricted class of supermodular games. \\n\\nAgain, we hope this helps address your concerns about our results being subsumed by earlier work.\"}",
"{\"title\": \"Response\", \"comment\": \"Hi, thanks for the detailed response. I still believe that the class of learning rules known to converge to Nash in iterated dominance (supermodular) games is known to be quite large. Lets look at what I believe is the original paper on this, [1] reading from (6A) to Theorem 8 and it's corollaries, which prove convergence to Nash in supermodular games for a very wide class of learning rules:\\n\\n\\\"It requires only that, for any date T, there is a later date after which each player selects either a strategy that is \\\"justifiable\\\" in terms of the competitors' play since T ... Here, \\\"justify\\\"is used in a very weak sense. A strategy choice is justified if there is no other strategy that would have done better against every combination of strategies [from] the competitors' recent past play.\\\"\\n\\nConverting the terminology here to RL terminology, I believe it says:\\n\\nAny learning rule that eventually chooses only among actions that are the BR to *some* action played by the agents in the last T turns (for some T), will converge to Nash.\", \"stating_the_converse_of_this\": \"As long as a learning rule eventually stops playing actions that are the BR to *no* actions played in the last T turns (for some T), it will converge to Nash. \\n\\nI think all reasonable RL learning rules (all reasonable learning rules?) will have this property. Your example illustrates this: action 1 is the best response to *no* action played by B and therefore the learning rule will stop playing it. \\n\\n[1] http://www.parisschoolofeconomics.eu/docs/guesnerie-roger/milgromroberts90.pdf\"}",
"{\"title\": \"See comments to reviewer 1 regarding novelty. We will discuss scaling to real world applications.\", \"comment\": \"Thank you for the helpful comments.\\n\\nFirst, regarding novelty: we point out in response to Reviewer 1 that the previous results they noted relate to Fictitious Play and Replicator Dynamics, which are fundamentally different from the RL algorithms we consider. Please see the full comments in our response to Reviewer 1. As we point out, our paper investigates very basic RL methods that are the foundation of many popular RL agents, and we are not aware of any existing work that shows convergence of these methods to the iterated dominance solution. We added a section on \\u201cComparison of Our Results and Existing Results on the Convergence of Other Algorithms to the Iterated Dominance Solution\\u201d.\", \"regarding_scaling_to_real_world_applications\": \"we wholeheartedly agree this is an important issue! Indeed, there are known polynomial algorithms for computing strict iterated dominance solutions (see, e.g. Conitzer and Sandholm, Complexity of iterated dominance, 2005, who also note that things are trickier for *weak* iterated dominance). However, these are based on a normal-form representation of a game. Games with multiple timesteps (extensive form) can be translated into normal form, but with the size growing very quickly in the number of timesteps. This means that for practical applications, we might only be able to show convergence for restricted classes of games / environments. We are adding a discussion of this in the paper, and expanding the discussion on mechanism design, where we may want to *design* a game so as to guarantee it has an iterated dominance solution.\\n\\nWe will of course fix the typos you noted and extend the captions in the Appendix - much appreciated!\"}",
"{\"title\": \"We will condense the definitions and preliminaries sections, and fix the typos. Thanks!\", \"comment\": \"Thank you for your helpful comments.\\n\\nWe are condensing the the sections on definitions and preliminaries to get to the point more quickly. \\nWe are also fixing the typos and the notation inconsistencies you noted - much appreciated!\"}",
"{\"title\": \"We have added an analysis of convergence rates. Also, we DO cover the case of simultaneous updates.\", \"comment\": \"Thank you for the comments.\\n\\nFirst, regarding item 3, as we wrote in the original submission (see first paragraph of Section 3 on page 4), our results hold for *both* the \\u201cround-robin\\u201d setting where one agent learns at a time (which we call the serial mode) and for the case where all agents learn simultaneously (which we call the parallel mode). As we discuss there, our analysis is more elaborate, as it covers the parallel mode as well. In other words, we agree the more realistic setting is the one where all agents learn simultaneously, and our results hold for this more realistic case as well. We\\u2019ll emphasize this earlier in the paper. \\n\\nSecond, as you suggest in item 2 regarding convergence rates, we have added a discussion of the convergence rate in an appendix (see \\u201cConvergence Rates for IW-MCPI\\\"), which we briefly discuss here. \\n\\nOur convergence result was an asymptotic one, showing that eventually IW-MCPI almost surely converges to an iterated elimination solution. Note that even in a single bandit settings, the number of samples required to discern that one action x yields a better reward than another action y with high probability 1-\\\\delta depends on the difference in rewards r_x - r_y. Similarly, in our case the rate of convergence depends on the game\\u2019s payoffs, with the key factor being the degree to which dominated actions are suboptimal, as captured by the g in the proof our Theorem on IW-MCPI convergence. \\n\\nIn our proof g denotes to the gap in rewards between a dominated action i and any dominating action j (see Theorem 3.4, and note rewards are normalized to be in [0,1]). Denote the total number of actions across all the players as S = \\\\sum_i |S_i| (and note this bounds the number of elimination steps). Denote by epsilon the minimal gap between the dominated action and the other actions across all the elimination steps (i.e. in all elimination steps, the difference between the reward of the eliminated action and the reward of other actions is at least epsilon). Then the required number of steps so that IW-MCPI reaches the iterated elimination solution with high probability is: \\nO( S / \\\\epsilon^(2 / (1-p))) \\nwhere p is the decay rate of our exploration.\\nFor instance, setting p=1/2 we get a convergence time of:\\n O(S / epsilon^4). \\n\\nFinally, regarding 1, as we wrote in the paper, we acknowledge that many games are not dominance solvable and are not covered by our results. We thus emphasized the implications of our work to mechanism design settings, where one can, under some costs or restrictions, design the game as to make it dominance solvable. We will expand the discussion to emphasize the implications of this. \\n\\nWe hope this addresses your concerns regarding the paper.\"}",
"{\"title\": \"The work you mention relates to Fictitious Play and Replicator Dynamics which are very different from the RL algorithms we consider. Our results are not subsumed by existing work.\", \"comment\": \"Thank you for the comments, and for pointing out this related work.\\n\\nOur paper deals with Monte-Carlo Policy Improvement and REINFORCE (a policy gradient method). In general, convergence results on one type of algorithm may not apply to other types of algorithms. In other words, though different RL algorithms may sometimes converge to the same outcome (if they converge at all), this is certainly not an automatic guarantee. \\n\\nThe results in Fudenberg and Levine [1] relate to Fictitious Play (FP) and the Replicator Dynamics (RD); as they mention, their results are actually due to Nachbar 1990, see Evolutionary selection dynamics in games: Convergence and limit properties. The work by Bowling [2] mentions Q learning, but merely says that in some cases multi-agent Q learning can find Nash equilibria, such as in fully collaborative games where all agents had identical rewards. It then points out that *other* naive learning methods (i.e. not Q-learning) converge to iterated dominance solutions, and point to the publication by Fudenberg and Levine. In short, these existing results apply to Fictitious Play and Replicator Dynamics, rather than the algorithms we considered, which are different (and again, all of these are different from Q learning), so our results are not covered by this existing work. \\n\\nFurther, we note that the algorithms covered by this existing work (FP and RD) originate from evolutionary game theory, and were designed to compute equilibria. Thus, they are *fundamentally different* from the RL algorithms we consider. There exist good algorithms for computing iterated dominance solutions (see, e.g. Conitzer and Sandholm, Complexity of iterated dominance, 2005), but our motivation is not to compute the iterated dominance solution, but rather to study how commonly used RL algorithms behave in dominance solvable games. We focused on policy gradient methods as these lie at the heart of many popular agents, and on policy iteration as it is among the simplest and most basic methods. \\n\\nFP does not rely on computing gradients, but rather on repeatedly finding the best response to the empirical distribution of actions taken by the opponents so far. Similarly, RD remains a different dynamical system from policy gradient methods; RD is a *regret minimizing* approach which is known to have significant differences with policy gradient methods (see, e.g. Neural Replicator Dynamics, Omidshafiei et al. and Cycles in Adversarial Regularized Learning, Mertikopoulos, Papadimitriou and Piliouras). We are not aware of any results on the RL methods we study in this paper. \\n\\nIn short, the results you mention deal with very different algorithms from those we consider (which are the ones relating to frequently used RL agents). We thus believe our results are novel and valuable to the RL community. \\n\\nBased on your comments, we are adding a section on \\u201cComparison of Our Results and Existing Results on the Convergence of Other Algorithms to the Iterated Dominance Solution\\u201d, discussing how our results differ from the papers you discussed (and other work). \\n\\nFinally, you pointed out that you expect any \\u201creasonable\\u201d learning rule would converge on this iterated dominance solution. 
To emphasize why the analysis is tricky, we have added an appendix discussing convergence issues with REINFORCE, which we briefly discuss here (see the new appendix \u201cConvergence Issues for REINFORCE with More Than Two Actions\u201d for full details). \\n\\nConsider the case of player 1 having 3 actions, where the reward from these actions depends on the action choice of player 2, with the rewards being r^b1 = (0, 0.5, 1) (respectively for player 1\u2019s actions) when player 2 takes the first action, or r^b2 = (0, 0.5, 0.1) when player 2 takes the other action. In other words, in this case the first action is always dominated for player 1, but depending on player 2\u2019s action, the optimal action for player 1 may be either its second or third action. \\n\\nPerforming the REINFORCE update for player 1 always reduces the logit of the first action. However, for some action logits, the update under r^b1 increases the logit of action 3 and decreases that of action 2, where *the decrease in the logit of action 2 is larger than the decrease in the logit of the dominated action 1*. Similarly, the update under r^b2 increases the logit of action 2 and decreases that of action 3, where *the decrease in the logit of action 3 is larger than the decrease in the logit of the dominated action 1*.\\n\\nWhen player 2 constantly switches between the two actions, we oscillate between the first and second updates. This shows that the REINFORCE update does *not* guarantee that the logit of the dominated action decreases relative to those of the non-dominated actions, highlighting why the analysis is tricky.
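\\n\\nTo make this concrete, here is a minimal numerical sketch of the expected REINFORCE update under a softmax policy (the initial logits below are hypothetical, chosen only to exhibit the effect):\\n\\n```python\\nimport numpy as np\\n\\ndef softmax(x):\\n    e = np.exp(x - x.max())\\n    return e / e.sum()\\n\\ndef expected_update(logits, r):\\n    # Expected REINFORCE gradient on logit k: p_k * (r_k - E[r])\\n    p = softmax(logits)\\n    return p * (r - p @ r)\\n\\nlogits = np.array([-1.5, 0.9, 0.6])   # action 1 is the dominated action\\nr_b1 = np.array([0.0, 0.5, 1.0])      # player 2 takes its first action\\nr_b2 = np.array([0.0, 0.5, 0.1])      # player 2 takes its other action\\nprint(expected_update(logits, r_b1))  # logit 2 drops faster than dominated logit 1\\nprint(expected_update(logits, r_b2))  # logit 3 drops faster than dominated logit 1\\n```\\n\\nWe hope the discussion above addresses your concerns regarding the novelty and technical contribution of the paper.\"}",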
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies reinforcement learning algorithms in a specific subset of multi-agent environments that are 'dominance solvable'. This means that, given an initial set of strategies in the game, if we iteratively remove 'dominated strategies' (those whose utility is strictly less than another strategy independent of the strategies used by other agents), then only one strategy remains for each player. The remaining strategy is called the iterated dominance solution. The paper proves the convergence of certain RL algorithms (REINFORCE in the 2-action case, and importance weighted monte-carlo policy iteration in the multi-action case) for normal-form games. The paper demonstrates the utility of this via mechanism design: in a principal-agent problem where one can design the rewarding scheme given by a 'principal agent' to various (RL) sub-agents, rewarding schemes motivated by iterated dominance guarantees the best solution for the principal agent, whereas schemes motivated by Nash equilibria do not.\\n\\nThe paper is quite well-written and understandable. To my knowledge, the idea is novel and has not yet been explored in the RL literature (UPDATE: based on Reviewer #1's review, this may not be the case. I'll wait to hear the author response to this). I did not check the proofs thoroughly. However, the experiments in the principal-agent problem make sense, and it's interesting to see that iterated dominance reward schemes results in good performance for the principal agent. I appreciate that, while the main results in the paper are limited to normal-form games (which are quite restricted), there are empirical results in the appendix showing the extension to Markov games with multiple timesteps, suggesting that the applicability of iterated dominance reward schemes extend beyond the simple two-action case, where no temporally extended decisions need to be made. Even so, the Markov game considered is fairly simplistic. \\n\\nMy personal curiosity about this paper revolves around scaling to real-world applications. This is not really discussed in the paper; the conclusion talks about directions for future work, for example expanding the number of RL algorithms where convergence can be proven, or producing complexity bounds for convergence. What I want to know is: what sorts of games can we compute the iterated dominance reward schemes for? How can this be applied when the space of policies becomes too large to be enumerated (and thus determining whether a policy is strictly dominated becomes impossible)? I don't expect this paper to solve these issues, but it would be nice to have a discussion of them. \\n\\nOverall, I'd say this paper is interesting to the multi-agent RL community and I could imagine others building off of this work, so I err on the side of acceptance.\", \"small_fixes\": [\"Our proof of Theorem 3.1 -> Theorem 3.2\", \"I'd recommend extending the captions of figures 6-8 and 9-11 in the Appendix.\", \"Close bracket in Section 6.3 title\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies independent multi-agent reinforcement learning (MARL) in dominance solvable games. The main contribution of this paper is that the authors have proved the convergence to the iterated dominance solution for two RL algorithms: REINFORCE (Section 3.1, binary action case only) and Importance Weighted Monte-Carlo Policy Improvement (IW-MCPI, Section 3.2). Empirical analysis for principal-agent games is demonstrated in Section 4.\\n\\nThe paper is interesting in general, however, I do not think this paper has quite met the (very high) standard of ICLR, due to the following limitations:\\n\\n1) As the authors have mentioned, the dominance solvable games are quite limited.\\n\\n2) This paper only has *convergence* results, but does not have *convergence rate* results. In other words, the authors have not proved how fast the agents converge to the iterated dominance solution. Might the authors establish a convergence rate result such as a regret bound?\\n\\n3) This paper assumes an unrealistic setting in which when one agent learns, the strategies (policies) of all the other agents are fixed. In other words, the agents learn in a round-robin fashion, rather than learn simultaneously. I do not think this setting is realistic in most practical problems.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The main idea of this paper is to solve multi-agent reinforcement learning problem in dominance solvable games. The paper reviewed general multi-agent reinforcement learning and general norm-form game in game theory. The authors aim to recover multi-agent policies through independent MARL in norm-form dominance-solvable games. The paper states that one of solution concepts of dominance-solvable games is iterated dominance solution, which is different from Nash Equilibrium and may be more suitable under certain scenarios. Furthermore, the paper considers two common RL methods for control and learning policy: REINFORCE and Monte-Carlo policy iteration. The main contribution of the paper is to prove that both REINFORCE in binary action case and Monte-Carlo algorithms find the agents\\u2019 policies converging to the iterated dominance solution. The interesting aspect of this paper is that iterated dominance solution based reward scheme can guarantee convergence to the desired agents policies at a cheaper cost in practical principal-agent problems. In appendix, the paper extended its conclusion to Markov games and three possible action cases. To the current status of the paper, I have a few concerns below.\\n\\n1.\\tIt takes too much space for preliminary work and basic concepts, in Sec 1.1 (preliminary) and Sec 2 (MA-RL and Dominance-Solvable Games).\\n2.\\tThe notations are inconsistent and unnecessarily complicated. For example, for agent i \\u201cits possible actions are the strategies in S_i\\u201d (section 2); any action \\u201ca \\\\in S_i\\u201d (section 2,1); for agent i \\u201cfor all s_i \\\\in S_l\\u201d (Algorithm 1 line 2). It can be consistent to use the same notation to describe the same term. Moreover, \\u201ca score per action, x_1, \\u2026, x_{m_i}\\u201d and \\u201ceach agent starts with initial logits for x_1, \\u2026, x_n\\u201d. Formally, the corner mark in the same location should represent the uniform meaning. \\n3.\\tTypos: lemma 3.1 proof \\u201cg = \\u2026 (r_{s_h}-r_{s_h})\\u201d should be \\u201cg = \\u2026 (r_{s_h}-r_{s_l})\\u201d; above section 3.2 \\u201cour proof of Theorem 3.1\\u201d, should be \\u201cLemma 3.1\\u201d.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work studies learning under independent MARL, and shows theoretically and experimentally that two independent MARL algorithms converge for games that can be solved by iterated dominance.\\n\\nThis work is clear and well-written, but I do not understand what the contribution of this work is to the literature. The fact that standard MARL learning rules (e.g. independent Q learning) converge in games with iterated dominance solutions is a very well-known result in Learning in Games (see [1], [2]). The authors examined slightly different learning rules (REINFORCE and MCPI), but I would expect that almost any reasonable learning rule would converge in iterated-dominance-solvable games; if anything, it would be surprising if this were *not* the case. The applications of the convergence result result to \\\"noisy effort\\\" games is pretty standard and the results expected based on the theory.\", \"question_to_the_authors\": \"- How does this work differ from the known results about convergence of naive learners in iterated-dominance-solvable games?\\n\\n[1] Michael Bowling, \\\"Convergence Problems of General-Sum Multiagent Reinforcement Learning\\\", Sec. 5.2\\n[2] Fudenberg & Levine, 1999\"}"
]
} |
HylA41Btwr | CP-GAN: Towards a Better Global Landscape of GANs | [
"Ruoyu Sun",
"Tiantian Fang",
"Alex Schwing"
] | GANs have been very popular in data generation and unsupervised learning, but our understanding of GAN training is still very limited. One major reason is that GANs are often formulated as non-convex-concave min-max optimization. As a result, most recent studies focused on the analysis in the local region around the equilibrium. In this work, we perform a global analysis of GANs from two perspectives: the global landscape of the outer-optimization problem and the global behavior of the gradient descent dynamics. We find that the original GAN has exponentially many bad strict local minima which are perceived as mode-collapse, and the training dynamics (with linear discriminators) cannot escape mode collapse. To address these issues, we propose a simple modification to the original GAN, by coupling the generated samples and the true samples. We prove that the new formulation has no bad basins, and its training dynamics (with linear discriminators) has a Lyapunov function that leads to global convergence. Our experiments on standard datasets show that this simple loss outperforms the original GAN and WGAN-GP. | [
"GAN",
"global landscape",
"non-convex optimization",
"min-max optimization",
"dynamics"
] | Reject | https://openreview.net/pdf?id=HylA41Btwr | https://openreview.net/forum?id=HylA41Btwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"60wBNwWgB",
"ByezNv2hjB",
"Bke2tkj2jB",
"SJgnsRq3iH",
"BJeGUR9njH",
"rkl8N05hiH",
"HJleOiRaFr",
"SygX86JaYS",
"rygZ-NwDKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729525,
1573861161769,
1573855108117,
1573854884289,
1573854793806,
1573854765955,
1571838824050,
1571777866744,
1571415032771
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1674/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1674/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1674/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1674/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1674/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1674/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1674/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1674/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper is proposed a rejection based on majority reviews.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Meta-response: modification of papers\", \"comment\": \"Dear reviewers,\\n\\nThank you for your effort and time.\", \"we_mainly_added_the_following_parts_in_the_revised_paper_to_address_the_comments\": \"1) Add Appendix A (with 4 figures), to explain why learning n-points can be viewed as the \\\"macro-learning\\\" part of learning n-modes. \\n 2) Add Appendix B on some related works. Especially elaborate a fundamental work on learning single-Gaussian (single-mode). Will include more in the final version. \\n 3) Other parts: \\n --add Appendix A.1 to briefly discuss generalization;\\n --add interpretation of the loss in Sec 2.2;\\n --add E.5 to extend dynamical analysis to convex-linear D.\"}",
"{\"title\": \"Our response for Reviewer2\", \"comment\": \"We thank the reviewer for the detailed comments. The comments lead us to add Appendix A to explain the motivation in detail, and make the paper stronger. We really appreciate the comments.\\nBelow, we provide detailed responses to each concern and question.\\n\\n----------------\", \"comment_1\": \"\\u201cThe paper is mainly on analyzing the case when the true data has n points instead of on a continuous support. It would be more interesting to see theoretical guarantee on even Gaussian mixture model.\\u201c\", \"response_1\": \"Thank you very much for the insightful comment.\", \"short_reply\": \"(1) We add Appendix A to explain the motivation in much detail (1.5 pages with 6 figures), to elaborate our previous point \\u201cn-point mimics n-modes\\u201d. In particular, we highlight the \\\"macro-learning\\\" perspective.\\n (2) Multi-Gaussian is probably a more difficult problem than ours, and may rely on some techniques of this paper. It is an interesting future work.\", \"longer_reply\": \"1)The n-point model mimics the n-Gaussian model. It captures the macro-learning behavior. Consider learning a two-mode distribution P, starting from an initial two-mode distribution Q. There are two differences between P and Q: first, the locations of the two modes are different; second, the distribution within each mode is different. To learn the distribution, we want to eliminate both differences: first, move the two modes of Q to roughly overlap with the two modes of P which we call \\\"``macro-learning\\\"; second, adjust the distributions of each mode to match those of P, which we call ``\\\"micro learning\\\". Our analysis captures the macro-learning part.\\n\\n2)The n-point model generalizes the 1-point model in Mescheder et al. ICML\\u201918. Our analysis is already much more general than the 1-point model. \\n\\n3) As R1 pointed out, there is a paper on learning a single Gaussian, but we did not notice an extension to 2-Gaussians yet. We think our analysis can be combined with 1710.10793 for future analysis of multi-Gaussian. Currently, this paper on the n-point case is already 34 pages.\\n We kindly remind the reviewer the current optimization theory for GAN is quite rare. Even the 2-point case was not proved for global convergence before (to our knowledge). \\n\\n--------------------------------\", \"comment_2\": \"Also since GANs are mostly known for generalizing what is seen to generate new data, whether converging only to the n points are good or not still worth debating.\", \"response_2\": \"Thank you for this comment. We add Appendix A.1 to explain. To summarize A.1: \\n(1) Generalization is proved in a classical work Arora et al'17, and can be easily extended to our setting. More specifically, for JS-GAN and other GANs, the generalization error of \\\"fitting n data points\\\" is bounded in Arora et al'17. If the reviewer insists, we can even add a proof of generalization bound for CP-GAN in the final version (this is probably just simple exercise adopted from Arora et al.'17). \\n(2) We provide some intuition about why it generalizes, by using a figure.\\n(3) Anyways, fitting the training data is what GAN is doing in practice, and is also what neural-nets doing for image classification. However, fitting can be difficult and requires analysis. 
More broadly, generalization and optimization are orthogonal issues.\\n\\n--------------------------------\", \"comment_3\": \"In claim 4.2 and 4.3, what if the initialization of y is completely random? Then the claim cannot say anything on mode collapse. So is the formulation in the paper the real characterization of mode collapse?\", \"response_3\": \"First, if the initialization of y is random, then it depends on the gaps between y_i and y_j. Due to randomness, there could be one close pair (y_i, y_j) at initialization and stay close throughout, which causes mode collapse. If the random y turns out to be spread out (e.g. choose y_1 in [-1,0] and choose y_2 in [10,11]), then it either gets close to \\u201cmode collapse\\u201d and then stuck, or it never gets close to mode collapse and converges to global-min.\\n \\\"Random\\\" is hard to control (in GAN with neural-nets, we do not control Y directly). In our experiments of JS-GAN, we see some \\\"random\\\" initial point causes mode collapse and some causes success. With a better global landscape, the training shall be more stable to initialization. \\n\\nSecond, as a prediction of our theory, for mode collapse, discriminator gradients are small compared to the successful cases. We did experiments on 2-Gaussian and 5-Gaussian data, and see the phenomenon. \\n As reviewed in Appendix B, existing works conjectured \\\"improper loss function\\\" and \\\"weak discriminator\\\" cause mode collapse. There is no rigorous definition of the cause. Given the context, we think we provided a more concrete hypothesis for future study.\\n\\nThird, characterizing mode collapse as bad local-min is just a partial goal of our paper. Our major motivation is to get a global landscape for further theoretical analysis.\"}",
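A minimal toy sketch of the linear-discriminator dynamics discussed above (illustrative only; the 1-D WGAN-style setup, step size, and loss are assumptions, not the paper's CP-GAN). It shows mechanically why a pair of generated points initialized close together can never spread apart: a linear discriminator gives every generated point the identical gradient.

```python
import numpy as np

# 1-D min-max game with a linear discriminator D(x) = w * x.
x = np.array([-1.0, 1.0])       # two real points (two "modes")
y = np.array([0.45, 0.55])      # generated points initialized close together
w, lr = 0.0, 0.05
for _ in range(1000):
    w += lr * (x.mean() - y.mean())  # ascent step on E[D(x)] - E[D(y)]
    y += lr * w                      # descent step for the generator: every
                                     # y_i receives the identical gradient -w
print(y[1] - y[0])  # the gap stays at 0.1: the pair can never spread apart
```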
"{\"title\": \"Our response to reviewer 3\", \"comment\": \"Reviewer 3:\\nWe thank the reviewer for the detailed comments and support. Below, we provide detailed responses to each concern and question.\", \"comment_1\": \"The primary question I am left with after reading the paper is: is there a probabilistic interpretation of the new loss function (equation 4a). The authors justify this formulation because it allows analysis via Lyapunov functions, but it would be very useful to know if it itself is the maximum likelihood estimate under an alternate data model. Such an explanation would improve the understandability of this method.\", \"response_1\": \"Thanks for the comment. See more discussions in Sec 2.2 in revised paper. \\n The JS-GAN is using binary classification loss as the \\u201cshell\\u201d, and a probabilistic interpretation is that p(x_i) is the probabilities of x_i being in class 1. Another interpretation is that D wants to find a single hyperplane that separates {x_i}, {y_i}, and -log(1 + exp(-x_i)) and -log(1 + exp(y_i)) mimics the hinge loss. However, the final goal is not binary classification, but generating y_i\\u2019s to fool D. We do not need to use a single hyperplane to separate them. Instead, we can have multiple hyperplanes.\", \"comment_2\": \"The fourth bullet point under the contributions section should specific the sense in which the new GAN \\\"performs better\\\"\", \"response_2\": \"Thanks for the comment. We add \\u201cin terms of Inception Scores and FID\\u201d.\"}",
"{\"title\": \"Our response to reviewer 1\", \"comment\": \"\", \"q4\": \"Finally, there are a few works in the literature about understanding the optimization landscape of GANs. For a sample, see https://arxiv.org/abs/1706.04156 and https://arxiv.org/abs/1710.10793. The later uses a Lyp function to analysis the global convergence of a GAN. Also there is a few papers about the mode collapse issue in GANs. See for example https://arxiv.org/abs/1712.04086\", \"a4\": \"Thanks a lot for pointing out these references. We add detailed discussions in the Appendix B \\u201cRelated Work\\u201d. They help a lot in positioning our work in the context. They are mostly complementary, and may lead to interesting new works when combined with our analysis. We summarize the contents of Appendix B here.\\n\\n[R1] https://arxiv.org/abs/1706.04156: This work is cited in our paper. It only analyzed local convergence. Moreover, as pointed out by Mescheder et al.\\u201918, \\u201cthe assumption of absolute continuity is not true for common use cases of GANs, where both distributions may lie on lower dimensional manifolds\\u201d. This is why [R1] proved local convergence, while Mescheder et al.\\u201918 proved that even for single point the local convergence does not hold. The fundamental difference is that [R1] considers the micro-learning effect of \\\"letting density change continuously\\\", while Mescheder et al.\\u201918 considers the macro-learning effect of \\\"letting a single mode move\\\". \\n\\n[R2] https://arxiv.org/abs/1710.10793 Thank you for pointing out this paper. It looks very interesting, and we have explained in detail the relation to this paper in the revised version. \\n --It is a very different paper. The major difference with our paper is that they considered the single-mode case, while we consider the multi-mode case (a simplified version; see detailed discussions in Appendix A). There are a few other differences: (a) They consider the population version, and we consider the empirical version. (b) They consider the quadratic discriminator, and we analyze both the powerful discriminator case and the linear discriminator.\\n --It is complementary to ours. This paper and ours capture two somewhat orthogonal aspects of the problem. Our comment in the revised paper is: \\u201cTo extend to the multi-mode case such as multi-Gaussian, as we discussed earlier, there is a macro-learning effect and micro-learning effect. Our work on n-point distributions captures the macro-learning effect, and Feizi et al. captures the micro-learning effect for Gaussian data. In the future, it would be quite interesting to combine the analysis of Feizi et al. and our analysis to the multi-Gaussian case.\\u201d (To be more rigorous, we think [R2] captures both the micro-learning studied in [R1] and macro-learning studied in Mescheder et al.\\u201918; but that is just the single-mode macro-learning, and we study multi-mode macro-learning. Single-mode learning is somehow easy according to Mescheder et al.\\u201918, so the major challenge of [R2] may be to capture the micro-learning effect. Anyhow, we would read [R2] more carefully later, to make the claim more precise.).\\n\\n It is quite interesting that [R2] also used a Lyapunov function. However, the underlying mathematics of [R2] and our paper are quite different. The formulation of this paper (16) is a matrix factorization version of a bi-linear zero-sum game (19). Our formulation involves logarithmic (and extendable to convex), and is a non-zero sum game. 
\\n\\nMode collapse and [R3] https://arxiv.org/abs/1712.04086. Most papers on mode collapse are empirical; [R3] has a rigorous theory. We discuss in detail the differences in revised paper. \\n First, they did not provide theoretical analysis for a specific GAN; in contrast, we prove theoretical results of specific JS-GAN and CP-GAN formulations. Second, we provided an explanation for ``why mode collapse happens'', by linking mode collapse \\nto a fundamental optimization subject ``bad basin''. Third, their focus is to mitigate ``bad basin'', and our starting point is to analyze the global landscape, and the link to mode collapse is a natural byproduct of the analysis.\", \"the_major_difference_is\": \"they analyze in the \\\"statistical distance level, andn proves TV(P^m, Q^m) is better than TV(P,Q)\\\", and borrow the insight to use packing. We directly analyze the GAN min-max problem (or game), not analyzing a general distribution distance like [R3].\"}",
"{\"title\": \"Our response to reviewer 1\", \"comment\": \"We thank the reviewer for the detailed comments. These comments are very helpful for improving the paper. The comments lead us to add Appendix B to explain related work. We really appreciate the comments.\\nBelow, we provide detailed responses.\\n\\n-------------------------------------------------\", \"q1\": \"Most of the analysis is tailored for a very simple linear discriminator case which for the WGAN means just matching the first moments.\", \"a1\": \"First, we would like to kindly remind the reviewer that most of the proofs (10 pages, page 23-34) are for the powerful-D case.\\n\\nSecond, while an ideal result is to analyze practical cases with neural-net discriminators, such an analysis for GANs is rare (if any). Most current analyzes are for linear-D. We provide results on both extremes: powerful-D and linear-D. We believe this provides convincing evidence regarding the nice properties of CP-GAN.\\n\\nThird, our analysis can be easily extended to a convex case for CP-GAN. We added Appendix E.5 to provide details. Note that the analysis of WGAN-GP is not our focus, and we just showed a small negative result for it. At a high level, the analysis is one part of the big picture, trying to validate the benefit of CP-GAN. \\n\\n-------------------------------------------------\", \"q2\": \"Even in this simple setup, they consider d=1 (the scalar case). I am not sure how one can generalize this analysis to a more realistic case.\", \"a2\": \"Please note that we are not only considering d=1:\\n\\nFor general d, we proved the existence of a global Lyapunov function for CP-GAN and a negative result for JS-GAN and WGAN-GP. This differentiates CP-GAN from other GANs. \\n\\nFor general d, we have proved global convergence to the set of critical points of the Lyapunov function (but did not state explicitly). To present this more explicitly, we changed Proposition 3 to add a convergence result for general d. Currently, there is a technical difficulty for proving convergence to the set of stationary points for general d, and it is left as future work. \\n\\nWe think our results may be generalizable to GANs with neural-nets. The reason is that overparameterization analysis in recent advances such as https://arxiv.org/abs/1806.07572 relies heavily on a \\u201cshell problem\\u201d with a nice landscape. We provided such a \\u201cshell problem\\u201d. \\n\\n----------------------------------------------------------------------------------------\", \"q3\": \"Also the experimental gains seem incremental which makes me worried about such generalization.\", \"a3\": \"With only two lines of code change in PyTorch, we improve FID scores by 16 points on CIFAR10 over JS-GAN and 25 points on STL10 (also a few points better than WGAN-GP). We think this experimental gain is remarkable compared to the algorithmic changes. We think this is a very promising experimental result.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors propose a modification to the original GAN formulation, by coupling the generated samples and the true samples to avoid mode collapse.\", \"i_have_some_concerns_about_the_analysis_and_the_experiments_of_the_paper\": \"Most of the analysis is tailored for a very simple linear discriminator case which for the WGAN means just matching the first moments. Even in this simple setup, they consider d=1 (the scalar case). I am not sure how one can generalize this analysis to a more realistic case. Also the experimental gains seem incremental which makes me worried about such generalization. Finally, there are a few works in the literature about understanding the optimization landscape of GANs. For a sample, see https://arxiv.org/abs/1706.04156 and https://arxiv.org/abs/1710.10793. The later uses a Lyp function to analysis the global convergence of a GAN. Also there is a few papers about the mode collapse issue in GANs. See for example https://arxiv.org/abs/1712.04086\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper attempts to perform global analysis of GAN on the issue of sub-optimal strict local minima and mode collapse, and proposes a new GAN formulation (CoupleGAN) that enjoys nice global properties. The paper is overall well written and conveys an interesting new formulation of GANs. However, the reviewer is concerned with the following questions:\\nThe paper is mainly on analyzing the case when the true data has n points instead of on a continuous support. It would be more interesting to see theoretical guarantee on even Gaussian mixture model. Also since GANs are mostly known for generalizing what is seen to generate new data, whether converging only to the n points are good or not still worth debating.\\nIn claim 4.2 and 4.3, what if the initialization of y is completely random? Then the claim cannot say anything on mode collapse. So is the formulation in the paper the real characterization of mode collapse?\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors introduce a new training loss for GANs. This loss allows the outer optimization problem to have no spurious local minima, under an appropriate finite sample analysis. In contrast, the authors establish that there are exponentially many spurious local minima under the conventional GAN training loss. Under a linear discriminator model, the authors show that a standard GAN can not escape from collapsed modes in a finite sample analysis, whereas the new trining loss allows for such an escape (due to the presence of a Lyapunov functional with favorable properties). The authors use this new training loss to train GANS on MNIST, CIFAR10, CelebA, and LSUN datasets, and observe mild improvements in Inception Scores and Frechet Inception Distances of the resulting generated images.\\n\\nI recommend the paper be accepted because it provides a new formulation for training GANs that both demonstrates improved empirical performance while also allowing theoretically favorable properties (on spurious local minima and avoidance of mode collapse) that specifically do not hold for a standard GAN.\", \"the_primary_question_i_am_left_with_after_reading_the_paper_is\": \"is there a probabilistic interpretation of the new loss function (equation 4a). The authors justify this formulation because it allows analysis via Lyapunov functions, but it would be very useful to know if it itself is the maximum likelihood estimate under an alternate data model. Such an explanation would improve the understandability of this method.\", \"minor_comment\": \"The fourth bullet point under the contributions section should specific the sense in which the new GAN \\\"performs better\\\"\"}"
]
} |
Hke0V1rKPS | Jacobian Adversarially Regularized Networks for Robustness | [
"Alvin Chan",
"Yi Tay",
"Yew Soon Ong",
"Jie Fu"
] | Adversarial examples are crafted with imperceptible perturbations with the intent to fool neural networks. Against such attacks, adversarial training and its variants stand as the strongest defense to date. Previous studies have pointed out that robust models that have undergone adversarial training tend to produce more salient and interpretable Jacobian matrices than their non-robust counterparts. A natural question is whether a model trained with an objective to produce salient Jacobian can result in better robustness. This paper answers this question with affirmative empirical results. We propose Jacobian Adversarially Regularized Networks (JARN) as a method to optimize the saliency of a classifier's Jacobian by adversarially regularizing the model's Jacobian to resemble natural training images. Image classifiers trained with JARN show improved robust accuracy compared to standard models on the MNIST, SVHN and CIFAR-10 datasets, uncovering a new angle to boost robustness without using adversarial training. | [
"adversarial examples",
"robust machine learning",
"deep learning"
] | Accept (Poster) | https://openreview.net/pdf?id=Hke0V1rKPS | https://openreview.net/forum?id=Hke0V1rKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"qFEsPkhf64",
"6PeMCKQdB",
"Q45nRnJSGu",
"H1eCvDhiiH",
"rkllMwhsjr",
"SklSSInojB",
"SkeInMnsoH",
"S1l4Bz3soH",
"Byg5Bb6GiH",
"r1xkPfF6tr",
"ryxg23o2tS",
"rklhsmOwFH"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1578491451938,
1578474050750,
1576798729494,
1573795685984,
1573795591610,
1573795389261,
1573794478512,
1573794364462,
1573208386392,
1571816022589,
1571761319617,
1571419044013
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1673/Authors"
],
[
"~Haonan_Qiu1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1673/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1673/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1673/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1673/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1673/Authors"
],
[
"~Zhengyu_Zhao1"
],
[
"ICLR.cc/2020/Conference/Paper1673/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1673/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1673/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Model Used Same as Madry et al.'s Setting\", \"comment\": \"Thank you for your interest in our paper. We use the Wide-Resnet from Madry et al., which would be the 28-10 you mentioned. We initially called it 32-10 to be consistent with how a previous paper \\u201cAdversarial Training for Free!\\u201d, by Shafahi et al., named it. We will remove the name and emphasize our model\\u2019s similarity to Madry et al. for more clarity.\"}",
"{\"title\": \"Confused About Layers of Wide-Resnet.\", \"comment\": \"It's a nice paper. I just want to ask about a small problem. In experimental part 4.3, which Wide-Resnet do you use? 28-10 (Madry's setting) or 34-10? I never see the 32-10 before because WRN requires (N-4) % 6 == 0. Thanks.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper extends previous observations (Tsipars, Etmann etc) in relations between Jacobian and robustness and directly train a model that improves robustness using Jacobians that look like images. The questions regarding computation time (suggested by two reviewers, including one of the most negative reviewers) are appropriately addressed by the authors (added experiments). Reviewers agree that the idea is novel, and some conjectured why the paper\\u2019s idea is a very sensible one. We think this paper would be an interest for ICLR readers. Please address any remaining comments from the reviewers before the final copy.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"We would like to thank all the reviewers for their insightful and constructive comments to improve the paper. We have uploaded a revision of the paper with the following updates:\\n\\n> Added more discussion on why Jacobian saliency confers robustness Section 4.3.2, as suggested by Reviewer #1.\\n\\n> Included results on JARN\\u2019s computational efficiency compared to adversarial training in Section 4.3.4 and Table 4, as advised by Reviewer #1, 2 & 3.\\n\\n> Added experiments to show JARN\\u2019s stability across key hyperparameter changes in Section 4.3.5 and Appendix Figure 5, as recommended by Reviewer # 2 & 3.\\n\\n> Added experiments to more thoroughly evaluate JARN and show its effectiveness against black-box and transfer attacks in Section 4.3.6 and Table 5, as suggested by Reviewer # 2.\\n\\n> Included comparison with double backpropagation Jacobian norm regularization baseline and showed (Section 4.3 and Table 3) JARN\\u2019s approach for Jacobian saliency outperforms double backpropagation\\u2019s approach through minimizing Jacobian\\u2019s Frobenius norm values, as suggested by Reviewer # 3.\\n\\n> Added further discussion on differences between JARN and closely related prior art in Section 2: \\u201cNon-Adversarial Training Regularization\\u201d, as recommended by Reviewer # 3.\"}",
"{\"title\": \"Response to Comment\", \"comment\": \"We thank the commenter for mentioning this paper. The main contribution of [1] lies in an algorithm to efficiently approximate the input-class probability output Jacobians to minimize their Frobenius norms to improve prediction stability and adversarial robustness [1]. Similar to works like [2-4], [1]\\u2019s objective is to reduce the effect that perturbations at individual pixels have on the classifier\\u2019s prediction through the Jacobians\\u2019 norm. Different from our paper, the work in [1-4] did not aim to optimize Jacobians to explicitly resemble their corresponding images. In contrast, we propose JARN with the aim to make the Jacobian resemble input images more closely through an adversarial loss term, to explore whether this leads to improved robustness. We will add this paper to related work to make a better distinction (Section 2: Non-Adversarial Training Regularization). Furthermore, we have also included additional experiments to compare with double backpropagation [2-4] and found that JARN outperforms it in PGD attacks on CIFAR-10 (Section 4.3 and Table 3).\\n\\n[1] Judy Hoffman, Daniel A Roberts, and Sho Yaida. \\u201cRobust learning with jacobian regularization.\\u201d arXiv:1908.02729, 2019.\\n\\n[2] Drucker and Le Cun \\\"Double backpropagation increasing generalization performance\\\" IEEE Transactions on Neural Networks, 3(6):991\\u2013997, 1992.\\n\\n[3] Simon-Gabriel et al., \\\"First-order Adversarial Vulnerability of Neural Networks and Input Dimension\\\" ICML, pp. 5809\\u20135817, 2019\\n\\n[4] Ross et al. \\u201cImproving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients.\\u201d AAAI, 2018.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for the helpful comments and would like to respond to them as follows,\\n\\n> Cost of JARN training:\\nTo address the comment on computational cost, we have added experiments to compare JARN\\u2019s computational efficiency with PGD adversarial training in Section 4.3.4 and Table 4. Even when combined with 1-step adversarial training, JARN takes less than half the time compared to 7-step PGD adversarial training while outperforming it. JARN\\u2019s GAN component adds a relatively small computational burden since the discriminator only updates once every 20 classifier update steps in the CIFAR-10 experiments and is much smaller in parameter size compared to the main classifier. Moreover, we find that using JARN framework only on the last few epoch (25%) to train the classifier confers similar adversarial robustness compared to training with JARN for the whole duration, further supporting its efficiency.\\n\\nRegarding the concern of hyperparameter tuning, we have added more experiments to test JARN performance across a range of key hyperparameters ($\\\\lambda_{adv}$, batch size and discriminator update intervals) that are different from Section 4.3 and find that its performance is relatively stable across hyperparameter changes (Appendix Figure 5). In a typical GAN framework, each training step involves a real image sample and an image generated from noise that is decoupled from the real sample. In contrast, a Jacobian is conditioned on its original input image and both are used in the same training step of JARN. This training step resembles that of VAE-GAN [1] where pairs of real images and its reconstructed versions are used for training together, resulting in generally more stable gradients and convergence than GAN. We believe that this similarity favors JARN's stability over a wider range of hyperparameters.\\n\\n\\n> Why not test other Jacobian regularization methods?\\nFollowing the suggestion of the reviewer, we implemented double backpropagation (DBP) [2-4] as a baseline with additional results to compare in our paper (Section 4.3 and Table 3). We found that DBP provides robustness against FGSM attacks compared to standard training but is outperformed by JARN across all the attacks. While Proposition 3 in [5] shows that DBP is equivalent to training with $l_2$ adversarial examples, we believe they are single-step adversarial examples rather than the stronger multi-step adversarial examples generated by iterative methods like PGD. This explains DBP\\u2019s performance under the more recent and stronger PGD attacks. The comparison demonstrates that JARN\\u2019s approach to robustness through the saliency of Jacobian is fundamentally different and more effective than DBP\\u2019s approach through minimizing Jacobian's Frobenius norm. We further discuss the difference from the prior art in the following paragraphs for completeness.\\n\\nSince the contribution of our paper is to study if regularizing for image-resembling Jacobians can be a new way to improve robustness, we wish to study this by directly training for generated Jacobians to look like images. [2-4] employ a regularization term to minimize the Jacobian's Frobenius norm together with the standard training objective. 
While their approaches improve robustness by reducing the effect that perturbations in individual pixel have on the classifier\\u2019s prediction through the Jacobians\\u2019 norm, it does not have the aim to optimize Jacobians to explicitly resemble their corresponding images. We describe those studies in more detail here and in the revision (Section 2: Non-Adversarial Training Regularization): \\n\\n[2] first proposed this to improve model generalization on natural test samples and called it \\u2018double backpropagation\\u2019 while [3,4] are concurrent studies that evaluate this method against adversarial examples. [5] proved that double backpropagation is equivalent to adversarial training with $l_2$ examples. [6] trained robust models using double backpropagation to study the link between robustness and alignment in non-linear models but did not propose a new defense in their paper.\\n\\nAll in all, our work aims to open up Jacobian\\u2019s saliency as a new avenue to boost adversarial robustness.\\n\\n[1] Larsen et al., 2015 \\u201cAutoencoding beyond pixels using a learned similarity metric.\\u201d arXiv:1512.09300, 2015\\n\\n[2] Drucker and Le Cun \\\"Double backpropagation increasing generalization performance\\\" IEEE Transactions on Neural Networks, 3(6):991\\u2013997, 1992.\\n\\n[3] Ross et al. \\u201cImproving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients.\\u201d AAAI, 2018.\\n\\n[4] Jakubovitz et al. \\u201cImproving dnn robustness to adversarial attacks using jacobian regularization\\u201d ECCV, pp. 514\\u2013529, 2018.\\n\\n[5] Simon-Gabriel et al., \\\"First-order Adversarial Vulnerability of Neural Networks and Input Dimension\\\" ICML, pp. 5809\\u20135817, 2019\\n\\n[6] Etmann et al. \\u201cOn the connection between adversarial robustness and saliency map interpretability.\\u201d arXiv:1905.04172, 2019.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for the constructive comments and would like to respond to them as follows,\\n\\n> Computational cost of JARN:\\nFollowing the advice of the reviewer, we have added experiments to compare JARN\\u2019s computational efficiency with PGD adversarial training in Section 4.3.4 and Table 4. Even when combined with 1-step adversarial training, JARN takes less than half the time compared to 7-step PGD adversarial training while outperforming it. JARN\\u2019s GAN component adds a relatively small computational burden since the discriminator only updates once every 20 classifier update steps in the CIFAR-10 experiments and is much smaller in parameter size compared to the main classifier. Moreover, we find that using JARN framework only on the last few epoch (25%) to train the classifier confers similar adversarial robustness compared to training with JARN for the whole duration, further increasing its efficiency.\\n\\n\\n> Reproducibility and sensitivity to hyperparameters:\\nTo address the reviewer\\u2019s comment, we have added more experiments to test JARN performance across a range of key hyperparameters ($\\\\lambda_{adv}$, batch size and discriminator update intervals) that are different from Section 4.3 and find that its performance is relatively stable across key hyperparameter changes (Section 4.3.5 and Appendix Figure 5). \\n\\nIn a typical GAN framework, each training step involves a real image sample and an image generated from noise that is decoupled from the real sample. In contrast, a Jacobian is conditioned on its original input image and both are used in the same training step of JARN. This training step resembles that of VAE-GAN [1] where pairs of real images and its reconstructed versions are used for training together, resulting in generally more stable gradients and convergence than GAN. We believe that this similarity favors JARN's stability over a wider range of hyperparameters. To further ensure reproducibility, we will release the source code after the paper\\u2019s acceptance.\\n\\n\\n> More evaluation of the defense:\\nFollowing the reviewer\\u2019s suggestion to evaluate the JARN classifier\\u2019s robustness more comprehensively [2], we have added experiments of black-box transfer attacks on JARN (Section 4.3.6 and Table 5). Defenses relying on gradient masking will display lower robustness towards transfer attacks than white-box attacks [2,3]. When evaluated on such black-box attacks using adversarial examples generated from a PGD-AT7 trained model and their differently initialized version, both JARN and JARN-AT1 display higher accuracy than when under white-box attacks (Table 5), a sign that JARN's robustness does not rely on gradient masking. We would like to also point out Section 4.3.1 where evaluation is carried out on attacks over a range of different parameters to test the generalization of JARN\\u2019s robustness.\\n\\n\\n> Other ways to encourage Jacobian saliency:\\nSince our main research question is to study if regularizing for image-resembling Jacobians can lead to robustness, we wish to study this by directly training for generated Jacobian to look like images. Due to GAN\\u2019s remarkable success in synthetic image generation, we incorporate GAN into the JARN framework for this objective. 
While we agree that other ways that can encourage Jacobian saliency are interesting future work, we would like to point out that this paper is the first to show this approach can lead to improved robustness.\\n\\n\\n[1] Larsen et al., 2015 \\u201cAutoencoding beyond pixels using a learned similarity metric.\\u201d arXiv preprint arXiv:1512.09300, 2015\\n\\n[2] Carlini, Nicholas, et al. \\\"On evaluating adversarial robustness.\\\" arXiv preprint arXiv:1902.06705, 2019.\\n\\n[3] Papernot et al., 2015 \\u201cTransferability in machine learning: from phenomena to black-box attacks using adversarial samples.\\u201d arXiv preprint arXiv:1605.07277, 2016\"}",
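As background for the point above that the Jacobian is conditioned on its input image: an input-space Jacobian "image" can be produced with a single autograd call. The sketch below uses the input gradient of the classification loss as a stand-in; the exact quantity JARN feeds to its discriminator may differ.

```python
import torch
import torch.nn.functional as F

def input_jacobian(model, x, y):
    # Gradient of the loss w.r.t. the input pixels. It has the same shape
    # as x, so it can be treated as an image and shown to a discriminator.
    # create_graph=True keeps the graph so a GAN loss computed on this
    # tensor can backpropagate into the classifier's weights.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (jac,) = torch.autograd.grad(loss, x, create_graph=True)
    return jac
```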
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for the thoughtful comments and the interest in our work. Our detailed response to the questions follows.\\n\\n> The idea of Jacobian saliency from prior work:\\nIn contrast to prior work, to the best of our knowledge, our paper is the first with the aim to train the classifier\\u2019s Jacobian to resemble input images more closely as a way to improve robustness. We summarize those previous studies here to better contrast with our paper:\\n\\nThe focus of the work from (Tsipras et al., 2018) is on the trade-off between a model\\u2019s standard accuracy and its robustness while showing that robust models learn different feature representations than their non-robust counterparts. The authors observed from their experiments that the adversarial examples generated from robust models look perceptually different from the original images. They attributed this to the higher saliency of the Jacobian generated at each gradient step of the adversarial attack on robust models. While (Tsipras et al., 2018) is one of the first to point out this observation, the authors did not propose a new defense based on it. \\n\\n(Etmann et al., 2019) further investigated this observation by proving that linearized robustness (distance from samples to decision boundary) increases as the alignment (unit vector cosine similarity) between the Jacobian and input image grows in linear models. The authors train robust models using double backpropagation (Drucker & Le Cun, 1992) and show empirically the relationship between robustness and alignment weakens for non-linear models but did not propose a new defense in the paper.\\n\\n(Drucker & Le Cun, 1991; Ross & Doshi-Velez, 2018; Jakubovitz & Giryes, 2018; Hoffman et al., 2019; Simon-Gabriel et al., 2019) use a regularization term to reduce the Jacobian's Frobenius norm. While their approaches constrain the effect of individual pixels\\u2019 perturbation on the model prediction, our approach focuses on the intuition to improve robustness by training for salient Jacobians that look like images. Our newly added experiments (Section 4.3 and Table 3) show that JARN outperforms this earlier approach in adversarial robustness.\\n\\n\\n> Discussion on why Jacobian saliency confers robustness:\\nWe thank the reviewer for the insightful discussion and would like to share our interpretation. The input Jacobian indeed characterizes how the final logits are affected by small changes to the pixels. Indeed, regularizing the Jacobian to resemble the input would likely result in adversarial perturbations that perceptually change the labeled object. A possible explanation behind the improved robustness through increasing Jacobian saliency is that the space of input-output Jacobian shrinks under this regularization, i.e. Jacobians have to resemble non-noisy images. Intuitively, this means that there would be fewer paths for an adversarial example to reach an optimum in the loss landscape, improving the model\\u2019s robustness. As suggested by the reviewer, we added more discussion on this in Section 4.3.2.\\n\\n\\n> Computational efficiency of JARN compared to adversarial training:\\nFollowing the suggestion of the reviewer, we have added experiments to compare JARN\\u2019s computational efficiency with adversarial training in Section 4.3.4 and Table 4. Even when combined with 1-step adversarial training, JARN takes less than half the time compared to 7-step PGD adversarial training while outperforming it. 
In our experiments on CIFAR-10, the JARN discriminator only updates once every 20 classifier update steps and is much smaller in parameter size compared to the main classifier, explaining JARN\\u2019s efficiency. Moreover, we find that using JARN framework only on the last few epoch (25%) to train the classifier confers similar adversarial robustness compared to training with JARN for the whole duration.\", \"references\": [\"Tsipras et al., \\u201cRobustness may be at odds with accuracy.\\u201d arXiv preprint arXiv:1805.12152, 2018.\", \"Etmann et al., \\u201cOn the connection between adversarial robustness and saliency map interpretability.\\u201d arXiv preprint arXiv:1905.04172, 2019.\", \"Drucker and Le Cun \\\"Double backpropagation increasing generalization performance\\\" IEEE Transactions on Neural Networks, 3(6):991\\u2013997, 1992.\", \"Ross & Doshi-Velez, \\u201cImproving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients.\\u201d In Thirty-second AAAI conference on artificial intelligence, 2018.\", \"Jakubovitz & Giryes, \\u201cImproving dnn robustness to adversarial attacks using jacobian regularization\\u201d In Proceedings of the European Conference on Computer Vision (ECCV), pp.\", \"514\\u2013529, 2018.\", \"Hoffman et al., \\u201cRobust learning with jacobian regularization.\\u201d arXiv preprint arXiv:1908.02729, 2019.\", \"Simon-Gabriel et al., \\\"First-order Adversarial Vulnerability of Neural Networks and Input Dimension\\\" In International Conference on Machine Learning, pp. 5809\\u20135817, 2019\"]}",
"{\"title\": \"related reference\", \"comment\": \"Hi, I just came across one arXiv article that also addresses the same question that Jacobian Regularization leads to robustness to adversarial perturbations.\\n\\nIt would be good to compare the differences.\", \"https\": \"//arxiv.org/pdf/1908.02729.pdf\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nIt was previously observed that models that were more robust to adversarial perturbation had more interpretable jacobian. The authors attempt to train for interpretable jacobian in order to improve the robustness of the model.\\nThis is done by employing a GAN-like procedure where a discriminator attempts to distinguish between the transformed jacobian matrix (fake images, equivalent to generator) and real images.\\n\\nExperiments indicates that this improves robustness compared to unprotected models and approximately similarly to models trained with adversarial training.\", \"comments\": [\"The motivation given for this line of research is the cost of adversarial training (2nd paragraph of Section 3)\", \"No experimental comparison is given with regards to the time it takes to train a model with adversarial training, versus the time it takes to train a model with JARN. It is also important to note that this introduces additional complexity (needs to choose an architecture for the discriminator, tune proper learning rates, etc...), which is not mentionned.\", \"Why not test simpler jacobian regularization method as proposed by other papers (see below). Proposition 3 of Simon-Gabriel et al. shows that results similar to adversarial training can be obtained, and they don't need several iterations like adversarial training, nor do they need to train an additional discriminator like your method.\"], \"opinion\": \"The paper provides an interesting proof of concept for a method, showing that it is feasible. It however doesn't make the the case for why it is a good idea. Discussion and comparison to very significant related work is missing and experimental measurement of any advantages of the proposed method vs. adversarial training is lacking. I think that these aspects should be improved before the paper is ready for publication.\", \"typos\": \"Line 11 in Algorithm 1 -> The label is wrong, i assume it's \\\"Update the discriminator f_disc to maximize L_adv\\\"\", \"related_works_that_needs_discussing\": [\"Drucker, Lecun 91, \\\"Double backpropagation increasing generalization performance\\\" for other regularizer on the jacobian, discusses generalization rather than robustness.\", \"Simon-Gabriel et al., \\\"First-order Adversarial Vulnerability of Neural Networks and Input Dimension\\\"\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a novel regularization strategy for improving the robustness of networks to adversarial noise. A term is added to the standard supervised cross-entropy loss that encourages the Jacobian of the network to itself be interpreted as a valid image. This \\\"regularization\\\" term is constructed by running the input-output Jacobian of the classification network through an \\\"adapter network\\\" and then in turn interpreting its output as a \\\"generator\\\" in a GAN setup. A separate discriminator network is training to distinguish real input images from these adapter-processed input-output Jacobians. The overall regularization is the standard minimax GAN loss applied to this generator/discriminator setup. The impetus for this stems from a previous observation that salient or interpretable input-output Jacobians naturally arise for networks that have undergone adversarial training to increase robustness.\\n\\nAlthough this whole setup seems to be a little \\\"Rube-Goldberg\\\"-esque, I think there's some real sensible reasons for this sort of regularization to make intuitive sense. The input-output Jacobian characterizes how much the output (i.e. the logits) are affected by small changes to the input. The Jacobian, reinterpreted as living in the input image space (as the authors do), is a map of which input pixels have the strongest effect on the output of the network. If the Jacobian image looks like the underlying input image -- in particular, highlighting the labeled object -- this indicates that changing those pixels will result in the largest change on the network output. (This should be clear when looking at Figure 4 of the paper.) On the other hand, adversarial noise by definition leaves the underlying object alone (so that a human isn't aware of the perturbation) and modifies other pixels. Models that fall for such adversarial noise will not have salient Jacobians.\\n\\nThis is an amusing original idea, and I think this paper probably should be accepted to ICLR -- though I don't hold that position very strongly. However, I think the most interesting point is idea of Jacobian saliency, which is from prior work (Tsipras et al., 2018) that I haven't read. Therefore, I'm not sure how significant this paper is on it's own. Regardless, I would have liked to see more discussion in the paper of why Jacobian saliency should confer robustness (as I tried to do in the paragraph above), with perhaps some additional experiments designed around understanding whether this intuition (or something similar) is actually correct. There's some discussion of the theory behind the method in section 3.1, but it's not very intuitive to the situation at hand (non-linear neural networks), and I don't find it particularly informative.\\n\\nFinally, some effort is spent arguing that this method is more computational efficient than adversarial training -- I wonder if that's still true when the all the complexity of GAN training is taken into account or how to consider that point when part of the conclusion is that their method is best when it is also combined with some amount of adversarial training.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"I think the main contribution of this paper is that it introduces a new way of robust training by encouraging Jacobian saliency. Previous research like Etmann et al 2019 and Tsipras et al 2018 showed that robustness leads to saliency. But surprisingly, this paper shows the other way, saliency map can also lead to robustness, which indicates a stronger connection between these two. In general, I like the intuition behind this paper, since it introduces a new perspective of robust training.\\n\\nThe training method proposed in this paper is still kind of preliminary, though. I suspect that training a GAN together with the classifier will cost even more time than min-max adversarial training or some certified robust training methods. It would be great if the authors can provide the training time comparison between JARN and some state-of-the-art robust training methods. Another concern is reproducibility since the training process of GAN is sensitive to hyperparameter selection. It would be better if the author can have some discussion on the training process to show that the reported performance of the defense is easy to reproduce instead of cherry-picking. Also, there are too many works on robustness defense that have been proven ineffective (consider the works by Carlini). Since this is a completely new way of robust training and there is no certified guarantee, I suggest the authors refer [1] to evaluate the effectiveness of the defense more thoroughly to convince the readers that it really works. Especially, evaluation under adaptive attack is necessary.\\n\\nI think this is a very interesting work. But since this method is completely new, more detailed information is needed to convince me that it really works. If it does work, I believe there must exist better ways to encourage Jacobian saliency than using a GAN.\\n\\n[1] Carlini, Nicholas, et al. \\\"On evaluating adversarial robustness.\\\" arXiv preprint arXiv:1902.06705 (2019).\"}"
]
} |
BJg641BKPH | Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems | [
"Atsushi Nitanda",
"Geoffrey Chinot",
"Taiji Suzuki"
] | Recently, several studies have proven the global convergence and generalization abilities of the gradient descent method for two-layer ReLU networks. Most studies especially focused on the regression problems with the squared loss function, except for a few, and the importance of the positivity of the neural tangent kernel has been pointed out. However, the performance of gradient descent on classification problems using the logistic loss function has not been well studied, and further investigation of this problem structure is possible. In this work, we demonstrate that the separability assumption using a neural tangent model is more reasonable than the positivity condition of the neural tangent kernel and provide a refined convergence analysis of the gradient descent for two-layer networks with smooth activations. A remarkable point of our result is that our convergence and generalization bounds have much better dependence on the network width in comparison to related studies. Consequently, our theory significantly enlarges a class of over-parameterized networks with provable generalization ability, with respect to the network width, while most studies require much higher over-parameterization. | [
"gradient descent",
"neural network",
"over-parameterization"
] | Reject | https://openreview.net/pdf?id=BJg641BKPH | https://openreview.net/forum?id=BJg641BKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"lDo4NPzNoG",
"B1eXd8M2jH",
"r1eeznMtiB",
"rJgcQjGYir",
"HygV-iztiH",
"Syez85fKoB",
"S1lRoqAaKB",
"SygjVAKaYS",
"rylWgpohKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729465,
1573820011212,
1573624839831,
1573624610432,
1573624572087,
1573624394410,
1571838629573,
1571819058716,
1571761384783
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1672/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1672/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1672/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1672/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1672/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1672/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1672/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1672/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This article studies gradient optimization for classification problems with shallow networks with smooth activations, obtaining convergence and generalisation results under a separability assumption on the data. The results are obtained under much less stringent requirements on the width of the network than other related recent works. However, with results on convergence and generalisation having been established in other previous works, the reviewers found the contribution incremental. The responses clarified some of the distinctive challenges with the logistic loss compared with the squared loss that has been considered in other works, and provided examples for the separability assumption. Overall, the article makes important contributions in the case of classification problems. However, with many recent works addressing challenging problems in a similar direction, the bar has been set quite high. As pointed out by some of the reviewers, the contribution could gain substantially in relevance and make a more convincing case by addressing extensions to non smooth activations and deep models.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Our paper has been revised.\", \"comment\": \"We have also added the following comments to the revised version:\\n5. Different proof techniques for the squared loss and the logistic loss functions.\\n6. Difference with another separation assumption in [1-5] (in review #2) made for regression problems.\"}",
"{\"title\": \"Our paper has been revised.\", \"comment\": \"Dear reviewers,\\nWe have updated the paper. The main changes are as follows.\\n1. Add a comparison to [Cao & Gu (2019b)].\\n2. State the difference with [Allen-Zhu et al. (2018a)] and [Cao & Gu (2019a)] clearly, that is, we clarify that these study cover deep ReLUs (which are challenging problem) and do not include each other because of the difference of the problem setting and network structure.\\n3. Explain that there are a lot of examples such that Assumption (A4) is satisfied.\\n4. Add a result that achieve a improved sample complexity $O(\\\\epsilon^{-2})$ with an efficient network width $O(\\\\epsilon^{-3/2})$.\\n\\nAs for a new additional result.\", \"this_result_can_be_easily_obtained_based_on_our_result_as_follows\": \"Step 1. Show the convergence of the loss function (Theorem 3 and Corollary 2) based on the convergence analysis of the functional gradient norm (which is a result in the previous version).\\nA proof for this statement is not difficult and a slight modification of well-known technique in the convex optimization literature (which is also utilized in [Allen-Zhu et al. (2018a)] and [Cao & Gu (2019b)])\\n\\nStep 2. Provide a sharper bound (Proposition 2) on $\\\\|\\\\Theta^{(T)} - \\\\Theta^{(0)}\\\\|_2$ by utilizing the convergence of the loss function.\\nAs a result, Rademacher complexity is much reduced and an improved sample complexity is achieved.\\n\\nThis additional proof is simple and easy to follow. However, this leads to a significant improvement of the sample complexity at the price of slight increase of network width: $O(\\\\epsilon^{-1})$ -> $O(\\\\epsilon^{-3/2})$.\\nIn addition, a proof technique in Step 2 is quite interesting and might be useful for the community.\\n\\nWe would be grateful if reviewers would see the revised version.\\nWe will also include other suggestions in the final version.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for the positive feedback and suggestions.\\nWe will further revise our paper according to your suggestions to make our contributions clearer.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the feedback. As you said, our problem setting is different from those in [Allen-Zhu et al. (2018a)] and [Cao & Gu (2019a)] and our theory is restricted to two-layer network with smooth activation functions. We think the contributions of their papers are nice and they are not included in our study. So, we have clearly stated this difference (depth and activation functions) in the revised version to clarify each position in this context. Thank you for the suggestion.\\n\\nA margin-based generalization bound is useful when the convergence is shown only for the empirical classification error. In general, to show the convergence for the logistic function is somewhat difficult compared to the empirical classification error due to lack of the strong convexity, but derived generalization bound on the expected classification error is comparable with the standard generalization bound (if ignoring the margin effect).\\n\\nThere are a lot of examples that satisfy Assumption (A.4) because a tangent model in (A.4) includes a usual infinite-width two-layer network. Thus, Assumption (A.4) with a certain positive constant $\\\\rho$ is satisfied as long as a data distribution is separable by an infinite-width two-layer network with mild weights $w(\\\\theta)$. Note that this network can separate any region due to the universal approximation ability. We have emphasized this point in the revised version.\\n\\nAs for the assumption on the data distribution. We note that a data separation assumption in [1-5] is essentially the same as the positivity of the NTK as shown in [5] (see Proposition 3.6 in arXiv ver. of [5]). On the other hand, our separability assumption is weaker than the positivity of NTK as stated in Proposition 1, that is, we do not need the positivity of NTK on the whole space. Thus, our theory does not require a separation of the training dataset like [1-5] made for the regression problem. \\n\\nBesides the above comments, as commented in another post, we have included an additional result in the revised version, which improves the sample complexity with an efficient network width by slightly refining the proof. In this result, we utilize the convergence of loss function. Please see that post for detail.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the feedback. As you pointed out, [Oymak & Soltanolkotabi (2019)] also studies the over-parameterized network with the smooth activation function. However, their analysis is tailored to the squared loss function, so that direct comparison seems difficult. Generally speaking, the logistic loss is more challenging than the squared loss from the viewpoint of the optimization and generalization analyses because we cannot utilize the strong convexity (i.e., the linear convergence) and parameters will diverge. However, a much reasonable generalization bound can be obtained for the logistic loss compared to the case of the squared loss for the classification setting as shown in our study. This is because a separability assumption works effectively for the logistic loss while a more stronger assumption (i.e., the positivity of the NTK) is essentially required for the squared loss. Indeed, [Oymak & Soltanolkotabi (2019)] uses the positivity of NTK (see comment Reviewer 2). This is why their theory is not applicable to our setting and it has not been well studied that how small network width is sufficient for the classification problems with the logistic loss function. We would like to emphasize this point in the final version.\\n\\nWe have added a comparison to [Cao & Gu (2019b)] in the revised version. In addition, we have included an additional result that shows an improved sample complexity of $O(\\\\epsilon^{-2})$ with an efficient network width $O(\\\\epsilon^{-3/2})$ by refining our proof. An additional proof technique is interesting but simple, so it might be useful for the community. Please see another post for detail.\\n\\nAs you said, we focus on two-layer networks with a fixed second layer. However, this setting is essentially important to investigate the convergence behavior of the optimization method for the non-convex problems, and many studies (for instance [Du et al. (2019), Arora et al. (2019), Wu et al. (2019), Chizat & Bach (2018)], etc.) have been also considering the same setting. Therefore, we think two-layer networks are still an interesting research subject.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the training of over-parameterized two layer neural networks with smooth activation functions. In particular, this paper establishes convergence guarantee as well as generalization error bounds under an assumption that the data can be separated by a neural tangent model. The authors also show that the network width requirement in this paper is milder than the existing results for ReLU networks.\\n\\nIn terms of significance, I think this paper is slightly incremental. As is discussed in the paper, results on both convergence and generalization have already been established in earlier works even for deep networks. The major contribution of this paper is probably the weaker requirement of network width, as is shown in Table 1. However, all other results in Table 1 are for ReLU networks, and it has been discussed in Oymak & Soltanolkotabi, 2019 that the over-parameterization condition for smooth activation functions are naturally weaker than that for ReLU networks. Although Oymak & Soltanolkotabi, 2019 did not study generalization, based on their discussion, the result in this paper is not surprising. Moreover, the authors should probably add comparison to Cao & Gu, 2019b in Table 1.\\n\\nMoreover, the results in this paper is restricted to two-layer networks with fixed second layer weights. This seems to be a much simpler setting than many existing results. The definition of neural tangent kernel in equation (5), as a result, seems to be over simplified, compared to the original definition given in Jacot et al., 2018. The improvement of requirement in network width, which is the major contribution of this paper, might not be very meaningful if it only works for shallow networks.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studied the generalization performance of gradient descent for training over-parameterized two-layer neural networks on classification problems. The authors proved that under a neural tangent based separability assumption, as long as the neural network width is $\\\\Omega(\\\\epsilon^{-1})$, the number of training examples is $\\\\tilde\\\\Omega(\\\\epsilon^{-4})$, within $O(\\\\epsilon^{-2})$ iterations GD can achieve expected $\\\\epsilon$-classification error.\\n\\nOverall this paper is well written and easy to follow. The theoretical results on the neural network width and iteration complexity are interesting. \\n\\nMy major concern is that the comparison with Allen-Zhu et al and Cao & Gu seem somewhat unfair. First, Allen-Zhu et al and Cao & Gu both studied the generalization performance of GD for training multi-layer neural networks, which is fundamentally more difficult than two-layer networks. Second, they use ReLU activation functions, which brings in the nonsmoothness along the optimization trajectory. This would also make the condition on the neural network width become worse. Therefore, when claiming the advantage of the derived guarantees, the authors should clearly clarify such differences.\\n\\nAnother concern is that whether the derived theoretical results can be generalized to ReLU network?\\n\\nWhen proving the generalization result, this paper takes advantage of margin-based generalization error bound. However, the generalization results in Cao & Gu are proved via applying standard empirical Rademacher complexity based generalization error bound. I would wonder which technique can give a tighter bound?\\n\\nCan you provide some examples regarding which type of data can satisfy Assumption (A.4) with constant margin $\\\\rho$?\\n\\n The authors would like to briefly discuss another data separation assumption adopted in the following papers (although this assumption is typically made for regression problem).\\n\\n[1] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. arXiv preprint arXiv:1811.03962, 2018b.\\n[2] Allen-Zhu, Z., Li, Y. and Song, Z. (2018c). On the convergence rate of training recurrent neural networks. arXiv preprint arXiv:1810.12065 .\\n[3] Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep relu networks. arXiv preprint arXiv:1811.08888, 2018.\\n[4] Samet Oymak and Mahdi Soltanolkotabi. Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. arXiv preprint arXiv:1902.04674, 2019.\\n[5] Difan Zou and Quanquan Gu. An improved analysis of training over-parameterized deep neural networks. arXiv preprint arXiv:1906.04688, 2019.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors study the problem of binary logistic regression in a two-layer network with a smooth activation function. They introduce a separability assumption on the dataset using the neural tangent model. This separability assumption is weaker than the more Neural Tangent Kernel assumption that has been extensively studied in the regression literature. In that case, a certain Gram-matrix must be nonnegative. In the current work, the authors observe that the structure of the logistic loss in the binary classification problem restricts the functional gradients to lie in a particular space, meaning that nonnegative of the Gram-matrix is only needed on a subspace. This is the underlying theoretical reason for why they can get improvement over those methods in the setting they study. Under the separability assumption, the authors prove convergent gradient descent and generalization of the ensuring net, while assuming the two-layer networks are less overparameterized than what would have been possible under the Gram-matrix perspective.\\n\\nThis paper appears to be a significant contribution to the field of convergent gradient descent algorithms because of the introduction of a weaker condition that guarantees convergence. While the work only applies to smooth activations and to logistic loss classification problems, it can inspire additional work both in rigorous guarantees for training neural nets in regression and classification. As a result, I recommend the paper be accepted for ICLR.\", \"minor_comments\": \"(1) The abstract, title, and introduction emphasize the aspect of being \\\"less overparameterized\\\" than other methods. It would be helpful to readers to have an absolute claim instead of a relative claim.\\n(2) The abstract claims the separability assumption is \\\"more reasonable\\\" than the positivity condition. This claim is overly vague and should be clarified.\\n(3) There is a stray \\\\forall in the third line of Theorem 2.\"}"
]
} |
BkeaEyBYDB | Improving Federated Learning Personalization via Model Agnostic Meta Learning | [
"Yihan Jiang",
"Jakub Konečný",
"Keith Rush",
"Sreeram Kannan"
] | Federated Learning (FL) refers to learning a high quality global model based on decentralized data storage, without ever copying the raw data. A natural scenario arises with data created on mobile phones by the activity of their users. Given the typical data heterogeneity in such situations, it is natural to ask how the global model can be personalized for every such device individually. In this work, we point out that the setting of Model Agnostic Meta Learning (MAML), where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL. We present FL as a natural source of practical applications for MAML algorithms, and make the following observations. 1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm. 2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize. However, solely optimizing for the global model accuracy yields a weaker personalization result. 3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim. These results raise new questions for FL, MAML, and broader ML research. | [
"Federated Learning",
"Model Agnostic Meta Learning",
"Personalization"
] | Reject | https://openreview.net/pdf?id=BkeaEyBYDB | https://openreview.net/forum?id=BkeaEyBYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Lgv5ai_usX",
"Syxuj0h8iB",
"rklkJ0hIjB",
"ByxphT2UjB",
"B1eIla2IoH",
"HkxMzGThcS",
"r1xmrgpU9H",
"Byxju1vRFr",
"rkeipu_fOS",
"H1ehS-3-_B",
"r1eGfIbZdB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798729435,
1573469855838,
1573469654695,
1573469620518,
1573469422345,
1572815370442,
1572421691313,
1571872627406,
1570044098795,
1569993027631,
1569949193657
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1671/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1671/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1671/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1671/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1671/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1671/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1671/AnonReviewer1"
],
[
"~Stone_Jamess1"
],
[
"ICLR.cc/2020/Conference/Paper1671/Authors"
],
[
"~Stone_Jamess1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers have reached consensus that while the paper is interesting, it could use more time. We urge the authors to continue their investigations.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank the reviewers for their time, and\\n- Feel discouraged because none of the reviews provide us with feedback to the main message of our work - Section 4\\n- Agree that studying datasets other than EMNIST-62 is valuable, but argue that the presented results already challenge existing practices of the field.\\n\\nWe feel our submission may have been read with incorrect expectations. Our argument does not fit into the usual \\u201chere is a new algorithm, here is why it is better than the state-of-the-art\\u201d. Rather, it presents novel insights, which challenge what is the objective of the state-of-the-art, and argues that different measures should be the object of study of future works in this area.\\n\\nAs such, we hope the work has the potential to become influential, and we seek feedback on these arguments. Much of the reviews we received focus on what we don\\u2019t claim to be our contribution.\\n\\nThe most interesting observation, as summarized in Figure 2 - and motivating the main conclusions of our work - is that the same models, trained differently, to a similar initial accuracy, can have very different capacity to personalize to a task of the same type as it was trained on. We are not aware of any observation of this kind in the ML literature. As summarized in the concluding Section 4, we formulate concrete challenges to the main objectives of the existing FL and supervised MAML works, and also motivate questions beyond the areas of FL/MAML.\\n\\nWe also highlight that traditional measures predicting generalization/overfitting are a surprisingly misleading indicator of how well a model can personalize (Table 2).\\n\\nWe would like to ask the reviewers to re-read Section 4 - where we summarize why and how we think this paper can influence the future work of other researchers - and provide feedback on \\n- Do the presented results support the challenges? If not, why?\\n- Are the recommendations likely to impact future works in the field? If not, why?\\n- Do any of these already have an answer? If yes, which?\\n\\nWe would be disappointed to go through the review process without any feedback on these questions.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer #4 for their time.\\n\\nWe agree that the method we experiment with, Algorithm 2, is not particularly complex or novel. Note, however, this is not what we present as our main contribution, either. It is not mentioned in the abstract nor the concluding chapter.\", \"re\": \"Distributed Reptile\\nWe are not sure what you refer to. The paper https://arxiv.org/pdf/1803.02999.pdf has only a single short remark (end of Section 3) on anything related to distributed optimziation.\\n\\nPlease also see our shared response to all reviewers.\", \"let_us_restate_the_design_objectives_from_section_1\": \"(1) good initial global model, (2) good personalized model, and (3) fast convergence.\\n\\nIn Section 2, we show that FedAvg and Reptile are essentially the same, with the difference being that FedAvg handles different local data size differently, while this was non-existing concern in the setting in which Reptile was introduced.\\n\\nRunning with FedAvg with large epoch addresses (2) and, mainly (3), but lacks (1). Then switching to a smaller number of steps, independent of the amount of local data (i.e., Reptile) improves (1), without hurting (2) and not needing many additional iterations (3). This contrasts with the experiments in the original Reptile paper, where on the Omniglot task, 40,000 iterations are presented - which would be very expensive in the context of FL.\\n\\nWe also show that (and we don\\u2019t find this intuitively expected) Reptile with different number of steps show quite different performance - using K=1 degrades the personalized performance.\\n\\nIn summary, our claim is not that one algorithm \\u201cbeats\\u201d another in a narrow sense, but rather when focusing on the three objectives simultaneously, a combination works better than either of them separately. Fig 2 then shows that models of similar initial accuracy can have very different capacity to personalize, motivating the case for expanding the scope of MAML algorithms, as suggested in the concluding Section 4.\\n\\n---\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer #3 for their time.\\nPlease also see our shared response to all reviewers - we would like to ask for feedback to the main points presented in this work.\", \"responding_to_specific_points_made\": \"\", \"re\": [\"Q\\u2019s on Fig 2:\", \"75% on average, similar to Momentum, as presented.\", \"In Fig. 1, initial accuracy of FedAvg trained model is stable (in terms of mean/variance across experiments) during rounds 300-500. Further training with the same parameters did not produce a different result.\"]}",
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer #1 for their time.\", \"re\": \"point 3)\\nThis is what we formulate as the practical requirements in Section 1 - that we need to consider jointly all three of the following objectives - (1) Improved Personalized Model (2) Solid Initial Model and (3) Fast Convergence - with (2) motivated by the fact that many clients will not have data to personalize on. And it is the motivation for what we study in Section 3.2. - start with model with good personalization and improve the initial accuracy.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies the application of techniques from meta-learning (a method\\nto train a single model which can then be easily adjusted to perform well on\\nmultiple tasks) to federated learning (the task of distributed training of\\nmodels on distributed datasets). The paper notes that standard meta-learning\\nalgorithms are similar to standard federated learning algorithms, and uses\\nthis perspective to produce a merged method and evaluate it empirically.\\n\\nPros.\\n+ The motivation of the paper is clear and indeed these methods seems similar,\\n and meta-learning can help with federated learning.\\n\\nCons.\\n- The resulting method appears somewhat underdeveloped; it is simply to run\\n some amount of federated learning and then some amount of meta-learning,\\n whereas the first parts of the paper led me to believe that a single\\n simultaneous merge of the methods is the way to go. The paper does not\\n report any fine-grained evaluation of various such choices, thus I don't know\\n why they did that they did, and thus do not find their choices compelling.\\n- The Reptile method is already presented in the original paper with\\n a distributed counterpart, so why not just run that? I am not convinced that\\n some more minor modification of Reptile could not already do well on this\\n paper.\\n- The empirical evaluation is not very extensive, so I am also not convinced\\n there, and in particular I need convincing of this type to believe that\\n regular reptile is beaten by FedAvg+reptile.\\n\\nMinor comments.\\nPage 1, second paragraph, the word \\\"outperform\\\". I'm not sure what the\\nperformance measure is; in federated learning, we care about many things, for\\ninstance privacy, keeping the work on the distributed clients low, etc.\\nPage 2, the \\\"three objectives\\\". I feel meta-learning is doing all three too.\\nPage 3, Algorithm 1. I realize space is a concern, but this was hard to read.\\nPage 4, Algorithm 2. \\\"relatively larger\\\" is vague.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Update: I thank authors for the rebuttal. I agree that direction of exploring personalization in FL is interesting. With a stronger methodological contribution, this could become a good paper.\\n\\n----------------------------------------------------------------------------------------------------------------\\nThe main contribution of this paper is to notice the connection between Federated Averaging (FedAvg) and Model Agnostic Meta Learning algorithms (MAML). Authors also consider an algorithm that first trains with FedAvg and then continues training using Reptile.\", \"pros\": \"Interpretation of FedAvg as a meta-learning algorithm is interesting.\", \"cons\": \"Very limited methodological contribution. Proposed algorithm is essentially two existing algorithms applied one after another.\\n\\nExperiments are not conducted rigorously enough. There are many arbitrary hyperparameter choices which may bias the conclusions made in the paper. Statement \\\"We tried SGD with a range of other learning rates, and in all cases we observed this value to work the best.\\\" is alarming suggesting that authors tried a variation of settings observing test data performance and reported a few selected runs. Although \\\"each experiment was repeated 9 times with random initialization\\\", the train/test split of the clients was fixed. Randomizing over client train/test split could help to improve the reliability of the results.\\n\\nEMNIST-62 is the only dataset analyzed in some detail. This dataset has drastically varying P(y|x) across clients, i.e. some people write zeros as some others write 'o's. This suggests that it is very hard to train a good global model and personalization is necessary. However this doesn't mean that Shakespeare dataset \\\"does not provide any interesting insights\\\". Perhaps, it is indeed more interesting and challenging, demanding more advanced methodology.\\n\\nIn Figure 1, number of communication rounds may be impractical for FL (considering also addition 200 Reptile rounds). On Shakespeare, FedAvg paper reports 54% accuracy achieved in under 50 communication rounds in one of the settings. There are also recent works on improving communication efficiency that were not discussed or studied for personalization quality, e.g. FedProx from \\\"Federated Optimization in Heterogeneous Networks\\\" and PFNM from \\\"Bayesian Nonparametric Federated Learning of Neural Networks\\\".\", \"questions_about_figure_2_experiments\": \"1. Fine-tuning requires 200 extra epochs over the initially trained model. What's the initial model accuracy when FedAvg is further trained with Adam optimizer for 200 extra communication rounds?\\n2. The personalized test accuracy with FedAvg and Reptile fine-tuning reaches the same value in 10 update epochs, even when Reptile fine-tuning gets 200 extra initial training epochs. Does Reptile fine-tuning provide additional benefits to the initial model as compared to running FedAvg for more number of epochs?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers personalization federated learning problem in which the goal is to personalize the global model on a given client/device based on available data on that device/client. The paper claims not only their proposed method can lead to fast convergence time but also provide a solid initial model per device/client and results in a better-personalized model. To evaluate the performance of their method, EMNIST-62 and Shakespeare data are used.\\n\\nEven though personalization in federated learning is very interesting and challenging, I am not sure about the contribution of this paper and what is exactly proposed in this paper: \\n\\n1) Section 2: this paper shows the relationship between FedAvg and MAML. In my view, the connection is very straight forward and can be shown in a couple of sentences. I might be missing something here, but it is not obvious to me what this paper adds to the connection between MAML and FedAvg. \\n\\n2) Personalized FedAvg Section: The same is about section 3. In my view, Algorithm 2 doesn't say anything new rather than to use Adam in local machine and SGD on global models and to optimize for \\\"E\\\" steps. But what if we use other datasets rather than EMNIST-62 and Shakespeare? will these recommendations still hold, i.e. using SGD on server and Adam on the devices? Per section 3 of this paper, Algorithm 2 indeed is the result of the experimental adaptation of the FedAvg algorithm so generalization to other datasets won't be obvious and it is a big question to me.\\n \\n3) Also, the paper mentioned that this method can work even if there is no local data available on some of the devices/clients. I wasn't able to understand how personalization possible if there is no data to personalize. Wouldn't a device/client just use the global model?\\n\\nIn summary, I find the contribution and novelty of this paper limited and the empirical findings of this paper can't be always applicable to other datasets and scenarios. Plus, I am not convinced this paper shows anything different than FedAvg rather than some recommendations about local and global optimizer selections.\"}",
"{\"comment\": \"Thanks for your detailed information\", \"title\": \"Thanks for your reply\"}",
"{\"comment\": \"Thank you for your interest!\\n\\nYou are correct, the subsequent local gradients are computed with respect to different models, and averaged to form a new model only after a number of local steps. This is motivated by the usual high cost of such averaging operation in federated learning. See for instance Figure 1 in (McMahan, 2017) which proposed FedAvg, for empirical visualization that this idea makes sense if you start from the same point, but not if you have two random models. Our eq (5) is thus only a different view on this existing method, providing additional insight into what is it actually optimizing for.\", \"title\": \"Interpretation of Equation (5)\"}",
"{\"comment\": \"I quite don't understand why the eq(5) is derived.\\n\\nIf you choose the Fedsgd, it means every step you have to update the model and every step the gradient global(g_i) is computed based on the current average global model. \\n\\nIf you use the fedavg, then the local client will compute gradients k steps, but the gradient local(g_i) is computed based on the local model. \\n\\nIf you don't average at every step, then the global model parameter and local model parameter are different, So how can you connect this two together? \\nCan you explain about it?\", \"title\": \"Questions about Equation\"}"
]
} |
SyxhVkrYvr | Towards Verified Robustness under Text Deletion Interventions | [
"Johannes Welbl",
"Po-Sen Huang",
"Robert Stanforth",
"Sven Gowal",
"Krishnamurthy (Dj) Dvijotham",
"Martin Szummer",
"Pushmeet Kohli"
] | Neural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text. This paper aims to tackle under-sensitivity in the context of natural language inference by ensuring that models do not become more confident in their predictions as arbitrary subsets of words from the input text are deleted. We develop a novel technique for formal verification of this specification for models based on the popular decomposable attention mechanism by employing the efficient yet effective interval bound propagation (IBP) approach. Using this method we can efficiently prove, given a model, whether a particular sample is free from the under-sensitivity problem. We compare different training methods to address under-sensitivity, and compare metrics to measure it. In our experiments on the SNLI and MNLI datasets, we observe that IBP training leads to a significantly improved verified accuracy. On the SNLI test set, we can verify 18.4% of samples, a substantial improvement over only 2.8% using standard training. | [
"natural language processing",
"specification",
"verification",
"model undersensitivity",
"adversarial",
"interval bound propagation"
] | Accept (Poster) | https://openreview.net/pdf?id=SyxhVkrYvr | https://openreview.net/forum?id=SyxhVkrYvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Cqtd5aUKBs",
"B1xeKnauiH",
"rJe5coadiH",
"H1gdBspusH",
"HJeGB96doS",
"HJlme7pccr",
"ryxiFWbHcH",
"HJefc93Z5S",
"rkxuhTae5S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729406,
1573604472039,
1573604241681,
1573604160310,
1573603897583,
1572684522893,
1572307330600,
1572092554258,
1572031919521
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1670/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1670/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1670/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1670/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1670/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1670/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1670/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1670/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper deals with the under-sensitivity problem in natural language inference tasks. An interval bound propagation (IBP) approach is applied to predict the confidence of the model when a subsets of words from the input text are deleted. The paper is well written and easy to follow. The authors give detailed rebuttal and 3 of the 4 reviewers lean to accept the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"\\u201cSince the accuracy of the proposed model drops the most, I am wondering how the verfied accuracy and accuracy are related during training? For example, can you show what is the verified accuracy with accuracy being close to the standard training?\\u201d\", \"response\": \"In our experiments we observed that verified accuracy and standard accuracy correlate negatively with one another, and prior work has shown that there is a tradeoff between robustness and standard accuracy (Tsipras et al. (2019), https://arxiv.org/abs/1805.12152). It actively hurts standard test accuracy when the model becomes verifiably less under-sensitive. This indicates that the signal that the model uses to form its prediction under standard training cannot be exploited (to the same extent) during verifiable training, and that the NLI task is then much harder to learn.\", \"we_found_that_verified_accuracy_is_heavily_skewed_towards_one_label_in_snli\": \"contradiction (58.2%), compared to neutral (1.3%) and entailment (0.0%, though nonzero). We furthermore observed a notable negative correlation (-0.19) of verified accuracy with lexical overlap of premise and hypothesis, even within the contradiction label, which has a lower rate of lexical overlap than the other two classes.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"\\u201ctranspose should be written with (not ).\\u201d\", \"response\": \"Thank you for your suggestion regarding notation, we have updated it throughout the paper.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"\\u201cCan you explain a bit more for IBP-training? How that hinge loss applies to the objective function? Is the IBP training differentiable?\\u201d\", \"response\": \"There is an upper bound (IBP) for the model output corresponding to the gold label y, and a nominal probability P(y|x). Both depend on all model parameters, and the difference delta between the two is again a function of all model parameters. This difference delta is fed into a hinge function lambda * max{0, delta} and we add it to the standard training loss, again with a scalar hyperparameter lambda. The resulting loss is then differentiable, and during model training the parameters are tuned to minimise the loss via gradient-based optimisation.\"}",
"{\"title\": \"Our Model Choice, Context of this Work, and Concrete Contributions.\", \"comment\": \"\\u201cHence, this work does not make enough contribution to be accepted\\u201d\", \"response\": \"We agree that certification of other model architectures, such as the transformer, is a challenging and worthy goal. We see our work and model choice (smaller, and with transferable architectural components, such as attention) as a step in this direction. We believe that our contributions on:\\n1) Evaluating and treating undersensitivity\\n2) Use of bound propagation principles to verify attention-based layers\\n3) Training of models that are verifiably not undersensitive \\nare all useful steps towards understanding and extending the robustness of NLP models.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This work is an application of interval bound propagation on evaluating the robustness of NLI model. This work is well-motivated, assuming that the confidence of a neural model should be lower when part of the sentence is missed. However, the application of vanilla IBP is quite limited in certain model architectures. In this work, the author considers specifically the decomposable attention model, which is a very shallow network, and not a state-of-the-art model anymore. It is non-trivial to adapt the proposed method to other more advanced models, such as the ones based on the Transformer model. Hence, this work does not make enough contribution to be accepted.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"-- Overall --\\nThis submission tackles to verify the \\u201cunder-sensitivity\\u201d problem of neural network models in the natural language inference by ensuring modes do not become more confident in the predictions when arbitrary subsets of words from the input text are deleted. The authors developed new verification approaches based on decomposable attention mechanism with interval bound propagation (IBP), which can prove the under-sensitivity issue given a model and a particular sample. The experimental results on SNLI and MNLI show that the proposed approach leads to a much improved verified accuracy.\\n\\n-- In general, \\u201cunder-sensitivity\\u201d is a very critical problem for applying neural models in natural language understanding where powerful neural networks tend to capture spurious correlations from the biased datasets. This submission formulates \\u201cunder-sensitivity\\u201d as a mathematical specification and then try to verify it with IBP verification. Although the used technique IBP is not new, it would interesting to have the verification in NLI models.\\n\\n-- Section 5 is a bit unclear how to compute the IBP for deleting several words, and what is the output. It would be better to have a clear example for how this was computed.\\n\\n-- As the author mentioned, the verification of under-sensitivity can also be done by using beam-search, although it is costly and not accurate. IBP is another more efficient option, but not the optimal neigher. Maybe consider to change the title as \\u201cefficient verification\\u201d?\\n\\n-- Specific Questions -- \\nThe entire paper builds on decomposable attention. Is the same approach also applicable to other model types, or only single layer attention-based models? \\nAlso, how this methods work for other NLI or NLU tasks?\\nIn experiments, how the data augumentation penalize the model with a loss for specification violation? What does the equation look like?\\nCan you explain a bit more for IBP-training? How that hinge loss applies to the objective function? Is the IBP training differentiable?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This works considers the task of Natural Language Inference (NLI).\\nThe question addressed is that SOTA NLI models tend to lead to\\nhigher confidence when some parts are deleted from the \\\"premise\\\".\\nIt is a problem known as under-sensitivity.\\nA method based on IBP is proposed to address this issue.\\nThe idea of Interval Bound Propagation (IBP) is to use interval arithmetic to propagate\\nintervals and bound the variation of the target based\\non variation of the input. In other words, one propagates\\nupper and lower interval bounds through the network.\\nThe DAM model from (Parikh et al., 2016)\\nis studied in particular.\\n\\nThe paper is well written and easy to follow.\\n\\nMy only concern is about the relevance of approach based on DAM when\\nthere are now more accurate models for this task. The paper is however\\ninteresting and addressed a relevant topic.\", \"misc\": [\"transpose should be written with $^\\\\top$ (not $^T$).\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes a model to verify the robustness of NLP models (change in the original probability), more specifically DAM, in the case of word removals in the input. The idea is given the lower and upper bound on the hidden state at previous layer, compute the new bound by propagating the bounding box around the hidden state at previous layer. The upper bound at the final layer is then compared with the label probability of the original input to assess if the probability increases or not. By training model with a hinge loss based on this verification method, they show that the model becomes more robust to word removals.\", \"Overall, the paper is well written and the idea of using IBP with an attentive model seems to work empirically for SNLI datasets. But, the technical contribution feels incremental over previous approaches, especially Huang (2019). I have several questions related to some parts of the paper:\", \"Since upper and lower bounds are also propagated, do you backpropagate the gradients via these bounds or only via the original inputs?\", \"How sensitive is the label in SNLI dataset to word removal? For some label types, such as entailment, it might have less of an effect that for the others.\", \"How is the accuracy distributed wrt different label types?\", \"Since the accuracy of the proposed model drops the most, I am wondering how the verfied accuracy and accuracy are related during training? For example, can you show what is the verified accuracy with accuracy being close to the standard training?\"]}"
]
} |
Skx24yHFDr | Discovering Topics With Neural Topic Models Built From PLSA Loss | [
"sileye ba"
] | In this paper we present a model for unsupervised topic discovery in text corpora. The proposed model uses document, word, and topic lookup table embeddings as neural network model parameters to build probabilities of words given topics, and probabilities of topics given documents. These probabilities are used to recover, by marginalization, the probabilities of words given documents. For very large corpora where the number of documents can be in the order of billions, using a neural auto-encoder based document embedding is more scalable than using a lookup table embedding, as classically done. We thus extended the lookup based document embedding model to a continuous auto-encoder based model. Our models are trained using probabilistic latent semantic analysis (PLSA) assumptions. We evaluated our models on six datasets with a rich variety of contents. The conducted experiments demonstrate that the proposed neural topic models are very effective in capturing relevant topics. Furthermore, considering the perplexity metric, the conducted evaluation benchmarks show that our topic models outperform the latent Dirichlet allocation (LDA) model, which is classically used to address topic discovery tasks. | [
"neural network",
"topic model",
"neural topic model",
"bag-of-words",
"PLSA"
] | Reject | https://openreview.net/pdf?id=Skx24yHFDr | https://openreview.net/forum?id=Skx24yHFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Bu5HQgvfx",
"rJg_OocPcS",
"S1lgvje0Yr",
"rJlIw4RqKB",
"H1xBYH9OdH",
"r1xnaL5yur",
"B1ebr8W2DH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1576798729374,
1572477808416,
1571846999743,
1571640414060,
1570444668562,
1569855171874,
1569621560636
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1669/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1669/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1669/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1669/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1669/Authors"
],
[
"~pankaj_gupta1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a neural topic model with the goal of improving topic discovery with a PLSA loss. Reviewers point out major limitations including the following:\\n\\n1) Empirical comparison is done only with LDA when there are many newer models that perform much better.\\n2) Related work section is incomplete, especially for the newer models.\\n3) Writing is unclear in many parts of the paper.\\n\\nFor these reasons, I recommend that the authors make major improvements to the paper before resubmitting to another venue.\", \"title\": \"Paper Decision\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"I am unimpressed with the quality of writing and presentation, to begin with. There are numerous grammatical errors and typos that make the paper a very difficult read. The presentation also follows an inequitable pattern where the backgrounds and related works are overemphasized and the actual contribution of the paper seems very limited. In its current form, this paper is not ready for publication in ICLR.\\n\\nThe idea of representing a document as an average of the embeddings of the words is a rather crude idea. Paragraph2vec and many of its derivatives have shown significant improvements with document modelling. The perplexity improvements are nice to have, but I would have liked to see the embeddings being applied to some supervised problems to assess their utilities. \\n\\nThere are quite a few computationally expensive normalization terms. I am curious to understand how these summations do not slow the training process down without further approximations. The authors may present some computational complexity measures to convince readers about the practical applications of the proposed models.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a neural topic model that aim to discover topics by minimizing a version of the PLSA loss. According to PLSA, a document is presented as a mixture of topics, while a topic is a probability distribution over words, with documents and words assumed independent given topics. Thanks to this assumption, each of these probability distributions (word|topic, topic|document, and word|document) can essentially be expressed as a matrix multiplication of the other two, and EM is usually adopted for the optimization. This paper proposes to embed these relationships in a neural network and then optimize the model using SGD.\", \"i_believe_the_paper_should_be_rejected_because\": \"1) most aspects of this paper are a little dated 2) novelty is little 3) experimental section is very limited and unconvincing.\", \"to_elaborate_on_the_experimental_section\": [\"Only LDA has been presented as baseline. There's plenty of neural topic models to compare against (you mentioned some in your related work section) but no comparison with any of those is presented. If the concern is their training time on large datasets, they should be at least presented as comparison for the smaller datasets. For the large datasets there's other approaches that would scale and should be presented as baselines: 1) train on a sample of the dataset 2) co-occurrence based topic methods on sliding windows of text are extremely fast (eg see \\\"A Biterm Topic Model\\\", \\\"A Practical Algorithm for Topic Modeling with Provable Guarantees\\\", and \\\"A Reduction for Efficient LDA Topic Reconstruction\\\" which could fit your scenario with large datasets where topics most likely have small overlap with each other and are almost separable by anchor words.)\", \"Even regarding just LDA: what hyper-parameters \\\\alpha and \\\\beta did you set for LDA? Tuning \\\\beta to a small value might have an impact for large datasets.\", \"Metrics: only perplexity is presented and metrics but it's well known that perplexity on its own is quite limited and often is not correlated to human judgment. Consider adding topic coherence measures as well.\", \"The section on continuous document embeddings is confusing and the explanation should be improved and the formalism tightened.\", \"Other (did not impact the score):\", \"Biases: you're adding biases to your probability estimation equations. This is not in line with the PLSA assumption. What happens if no biases are used?\", \"The paper has several typos and grammatical errors, e.g.:\", \"page 2, L#1: networks -> network\", \"page 4, sec 3.2: set unobserved -> set of unobserved\", \"page 5, sec 5: pratise -> practice\", \"several places: it's -> its\"]}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"First, some minor issues. I didn't understand equation (3). It seems to be a variant of equation (4), and seems to be in disagreement with equation (6). Might be better if the equation was just dropped. For equation (9), you should have brackets \\\"()\\\" around the argument to the exp.\\n\\nSecond, in terms of comparisons, the paper lacks adequate related work. Some non-parametric but non-neural\\nmodels not implemented in GPUs substantially beat LDA, and will run on all the big data sets you list, though perhaps\\nnot quickly! There has also been a number of neural and hybrid topic models developed. \\nDocNADE and LLA (Zaheer etal), for instance, work very well in PPL. Then there are many new deep topic models. Some use the amortised inference that you adopt in section 5. Some incorporate word embeddings or document metadata to\\nfurther improve performance metrics. Note some of the earlier ICLR/NeurIPS papers with deep models didn't\\ndo extensive comparative empirical testing, so may not work well against DocNADE or more recent algorithms.\\n\\nIn terms of related work, topic models is a bit of a mine-field because there is a huge amount of work in\\na huge number of venues, and few authors do a good job of covering related work. What you have listed are mainly the\\nolder works. Recent work also includes Poisson Matrix Factorisation and its variants, as well as hierarchical\\nvariants of LDA, much better than the 2004 paper you mention.\\n\\nTo do the coherence comparisons, easiest way is to use the Palmetto software.\\nYou can also evaluate models by using them as features in a classification task.\\n\\nIt was interesting that you only did one layer for your networks, i.e., equations (4)-(6). Why was this?\\nI would have liked to have seen the impact of more layers. However, your model is remarkably simple \\nso if it works well, that is good.\\n\\nAnyway, the experimental evaluation shows good results on all three datasets for your models, but its hard to be sure\\nsince you only have one comparison, an old LDA, and nothing recent. So promising work, but\\nrelated work and experimental work need to be improved.\"}",
"{\"comment\": \"Dear Pankaj\\nAgain thank you for your feedbacks on our paper. Here we respond to your concerns.\\nFirst we accounted about missing reference you mentionned and added them to the paper. We note that Larochelle & Lauly was already cited in the related work section.\\n\\nAbout your question about related to the perplexity that are high. This is due to the fact that our vocabulary are not filtered: we used all the words appearing in the document. Just to show that, we designed an experiment on TwentyNewsGroup dataset where we used as vocabulary word appearing more than: 20, 40, 60, 80, and 100 times. These results will be added to the paper. When using words appearing more than 100, perplexity are much lower. But this did not change any conclusions.\\n\\nAbout your concerns related to coherence scores, we added results about UMAss coherence scores (Mimno et al Optimizing semantic coherence in topic models. EMNLP 2011).\\n\\nAbout your concerns related to comparison with neural topics models, some comparisons with such methods will be added to the paper. In the first version we compared mainly to LDA because it remains the most popular unsupervised topic model.\\n\\nWe will also display TSNE based document embedding for the TwentyNewGroupDataset which show that documents cluster according to their categories\\n\\nHope these responses answer your concerns.\", \"title\": \"Responses to Pankaj Gupta about missing references, comparisons, and evaluation\"}",
"{\"comment\": \"Thanks for your feedbacks Pankaj. They will be taken into accounts in the coming days. I will come back to you as soon as they are done.\", \"title\": \"Adding references, experiments and comparison\"}",
"{\"comment\": \"Following are the missing references, especially in Neural topic modeling:\\n\\n[1] Hugo Larochelle and Stanislas Lauly. A neural autoregressive topic model. In NIPS 2012.\\n[2] Pankaj Gupta, Yatin Chaudhary, Florian Buettner, and Hinrich Schuetze. Document informed neural autoregressive topic models with distributional prior. In AAAI 2019. \\n[3] Pankaj Gupta, Yatin Chaudhary, Florian Buettner, and Hinrich Schuetze. textTOvec: Deep Contextualized Neural Autoregressive Topic Models of Language with Distributed Compositional Prior. In ICLR 2019.\\n[4] Akash Srivastava and Charles Sutton. Autoencoding variational inference for topic models. In ICLR 2017.\\n\\nPlease include the reference [3] for the mentions of combining topic and language models (e.g. in conclusion).\", \"additional_comments\": \"1. Why are the perplexity values are too high? \\n2. Please include a quantitative comparison with other neural topic models [e.g., 1, 2, 3, 4]. \\n3. What do the high perplexity scores signify? \\n4. To better demonstrate the applicability of topic models, could you include additional evaluation such as topic coherence for quality of topics, document clustering or classification or retrieval, similar to [2, 3, 4]?\", \"title\": \"Missing References, missing comparisons with recent Neural topic models, incomplete evaluation\"}"
]
} |
rJehVyrKwH | And the Bit Goes Down: Revisiting the Quantization of Neural Networks | [
"Pierre Stock",
"Armand Joulin",
"Rémi Gribonval",
"Benjamin Graham",
"Hervé Jégou"
] | In this paper, we address the problem of reducing the memory footprint of convolutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather than its weights. The principle of our approach is that it minimizes the loss reconstruction error for in-domain inputs. Our method only requires a set of unlabelled data at quantization time and allows for efficient inference on CPU by using byte-aligned codebooks to store the compressed weights. We validate our approach by quantizing a high performing ResNet-50 model to a memory size of 5MB (20x compression factor) while preserving a top-1 accuracy of 76.1% on ImageNet object classification and by compressing a Mask R-CNN with a 26x factor. | [
"compression",
"quantization"
] | Accept (Spotlight) | https://openreview.net/pdf?id=rJehVyrKwH | https://openreview.net/forum?id=rJehVyrKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"J1gn3PlotrX",
"HJWNvyUr-P",
"nfzoubyRgVz",
"bky56UZZCC",
"OgcAdDx7Y9",
"jjnNUFHmw1",
"GfyAvLt4aF",
"B1ezIjyQiH",
"SJlzQjkXjB",
"rJlnnKJQiH",
"rkxdLt17jr",
"rklaWryJoH",
"S1e82d0HqB",
"S1ezZlil5B",
"Bkg1AQulcH",
"ryegpfeaYS",
"H1gAi6GdtH"
],
"note_type": [
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1587704256894,
1587642197058,
1587603606491,
1581332467210,
1577179532700,
1576879683600,
1576798729343,
1573219146029,
1573219097633,
1573218740279,
1573218639685,
1572955397079,
1572362414189,
1572020218451,
1572008903384,
1571779255610,
1571462565567
],
"note_signatures": [
[
"~Eunhui_Kim1"
],
[
"ICLR.cc/2020/Conference/Paper1668/Authors"
],
[
"~Eunhui_Kim1"
],
[
"ICLR.cc/2020/Conference/Paper1668/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1668/Authors"
],
[
"~Sourya_Basu1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1668/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1668/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1668/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1668/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1668/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1668/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1668/Authors"
],
[
"~Weihan_Chen1"
],
[
"ICLR.cc/2020/Conference/Paper1668/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1668/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"It works! I can evaluate the result as your paper.\", \"comment\": \"Thank you\\nAfter I change the code as given in the issue (https://github.com/facebookresearch/kill-the-bits/issues/9), I can get the accuracy as your paper.\"}",
"{\"title\": \"Answer\", \"comment\": \"Good morning Eunhui Kim,\\n\\nThanks for your interest in our work! To facilitate the collaboration and the debugging, could you please fill an issue here: https://github.com/facebookresearch/kill-the-bits/issues/new by copy-pasting what you wrote above? Also, could you indicate in the issue:\\n\\n- Relevant output logs?\\n- Clarify the following: do you refer to the validation accuracy as the one given by running inference.py on the compressed model obtained by running quantize.py? In this case, resolved issue #9 (https://github.com/facebookresearch/kill-the-bits/issues/9) should help you (you have to patch two lines of code in inference.py).\\n\\nThanks again for reaching out,\\n\\nThe authors\"}",
"{\"title\": \"Question about open code and different accuracy when infer by using quantized pth file\", \"comment\": \"Dear authors:\\n\\n Thank you for your efforts to make improved achivement in the perspective of high compression ratio with high accuracy. \\n\\n I tried to validate your code according to your README.md and your paper in my development environment.\", \"i_used_the_github_code_https\": \"//github.com/facebookresearch/kill-the-bits.\\n\\n After quantize with as follow args, I can get pth files per layer and state_dict_compressed.pth, finally. \\n\\n Thus using this compressed pth, I ran inference. \\n \\n The result accuracy is 10%, however When I used the given compresed pth - 'models/compressed/resnet18_small_block.pth', then it shows the accuracy as your paper inform.\", \"the_args_i_used_for_quantization_experiment_using_your_code\": \"model - resnet18\\n dataset - imagenet\\n n-iter - 100, # of EM iteration\\n n-activations - 1024, size of the batch of activations\\n block-size-cv - 9, quantization block size for 3x3 conv\\n block-size-pw - 4, quantization block size for 1x1 conv\\n block-size-fc - 4, quantization block size for fully-c layers\\n n-centroids-cv, 256, # of centroids for 3x3 conv\\n n-centroids-pw, 256, # of centroids for 1x1 conv\\n n-centroids-fc, 2048, # of centroids for classifier\\n n-centroids-t, 4, threshold for reducing # of centroids\\n eps, 1e-8, empty cluster resolution\\n n-workers, 20, # of workers for data loading\\n finetune-centroids, 2500, # of iters for layer-wise fine tuning of centroids\\n lr-centroids, 0.05, Learning rate to fine tune centroids\\n momentum-centroids, 0.9, momentum when using SGD\\n weight-decay-centroids, 1e-4, weight decay\\n finetune-whole, 10000, # of iters for global fine tuning of centroids\\n lr-whole, 0.01, learning rate to fine tune classifier\\n momentum-whole, 0.9, momentum when using SGD\\n weight-decay-whole, 1e-4, weight decay\\n finetune-whole-epochs, 9, # of epochs for global fine tuning of the centroids\\n finetune-whole-stepsize, 3, learning rate schedule for global fine tuning of the centroids\\n batch-size, 128, batch size for fine-tuning step\\n\\nThe development environment is on the pytorch 1.4.0 version with 32GB V100 2-GPUs.\\n\\nCould you kindly explain the reason I can not validate your notifed accuracy as your paper?\\n\\nThank you\"}",
"{\"title\": \"Final version\", \"comment\": \"Dear AC, Dear reviewers,\\n\\nThank you again for your constructive comments and feedback. We uploaded the final version of our manuscript. \\n\\nSee you in Ethiopia!\"}",
"{\"title\": \"Answer\", \"comment\": \"Thanks for pointing out the reference! The authors propose an interesting theoretical viewpoint on quantization, and also consider the activations to derive theoretical bounds on the MSE error when using scalar quantization. We will include this work in our final version.\"}",
"{\"title\": \"Similar theoretical work on quantization of neural networks\", \"comment\": \"I was wondering if the work in [1] is related to this paper. It seems to me that [1] is similar to this work but seen from a somewhat theoretical point of view using high-rate functional quantization.\\n\\n[1] Avhishek Chatterjee, and Lav R. Varshney. \\\"Towards optimal quantization of neural networks.\\\" 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017.\"}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper addresses to compress the network weights by quantizing their values to some fixed codeword vectors. The paper is well written, and is overall easy to follow. The proposed algorithm is well-motivated, and easy to apply. The method can be expected to perform well empirically, which the experiments verify, and to have potential impact. On the other hand, the novelty is not very high, though this paper uses these existing techniques in a different setting.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer\", \"comment\": \"We thank Reviewer 3 for raising important questions. We answer them below.\\n\\nUsing \\\\tilde x in the E- and M-steps. \\nWe agree with Reviewer 3 that \\u201cthe error arising from quantizing v into c is only affected by a subset of rows of \\\\tilde x\\u201d. However, we solve Equation (2) with this proxy algorithm for two reasons. First, using the full \\\\tilde x matrix allows to factor the computation of the pseudo-inverse of \\\\tilde x and thus allows for a much faster algorithm, see answer to Reviewer 2 and the details of the M-step in the paper (as well as footnote 2). Second, early (and slow) experiments suggested that the gains were not significant when using the right subsets of \\\\tilde x in this particular context. \\n\\nMinimizing the reconstruction error\\nOur method results in both better reconstruction error and better training loss than na\\u00efve PQ *before* any finetuning. As we state in the paper, applying naive PQ without any finetuning to a ResNet-18 leads to accuracies below 18% for all operating points, whereas our method (without any finetuning) gives accuracy around 50% (not reported in the paper, we will add it in the next version of our paper). \\n\\nChoosing the optimal number of centroids/blocks size\\nThere is some rationale for the block size, related to the way the information is structured and redundant in the weight matrices (see in particular point 1 of answer to Reviewer 1). For instance, for convolutional weight filters with a kernel size of 3x3, the natural block size is 9, as we wish to exploit the spatial redundancy in the convolutional filters. For the fully-connected classifier matrices and 1x1 convolutions however, the only constraint on the block size if to be a divisor of the column size. Early experiments when trying to quantize such matrices in the row or column direction gave similar results. Regarding the number of centroids, we expect byte-aligned schemes (256 centroids indexed over 1 byte) to be more friendly for an efficient implementation of the forward in the compressed domain. Otherwise, as can be seen in Figure 3, doubling the number of centroids results in better performance, even if the curve tends to saturate around k=2048 centroids. As a side note, there exists some strategies that automatically adjust for those two parameters (see HAQ for example). \\n\\nComparison with pruning and low-rank approximation\\nWe argue that both pruning and low-rank approximation are orthogonal and complementary approaches to our method, akin to what happens in image compression where the transform stage (e.g., DCT or wavelet) is complementary with quantization. See \\u201cDeep neural network compression by in-parallel pruning-quantization\\u201d, Tung and Mori for some works investigating this direction.\"}",
"{\"title\": \"Answer\", \"comment\": \"We thank Reviewer 2 for their support and questions. We answer them below.\\n\\nQuantization time\\nAs we state in our paper, quantizing a ResNet-50 (quantization + finetuning steps) takes about one day on one Volta V100 GPU. The time of quantization is around 1 to 2 hours, the rest being dedicated to finetuning. Thus, the time dedicated to quantization is relatively short, especially compared with the fine-tuning and even more with the initial network training. This is because we optimized our EM implementation in at least two ways as detailed below. \\n-\\tThe E-step is performed on the GPU (see file src/quantization/distance.py, lines 61-75) with automatic chunking. This means that the code chunks the centroids and the weight matrices into blocks, performs the distance computation on those blocks and aggregates the results. This falls within the map/reduce paradigm. Note that the blocks are automatically calculated to be the largest that fit into the GPU, such that the utilization of the GPU is maximized, so as to minimize the compute time. \\n-\\tThe M-step involves calculating a solution of a least squares problem (see footnote 2 in our paper). The bottleneck for this is to calculate the pseudo-inverse of the activations x. However, we fix x when iterating our EM algorithm, therefore we can factor the computation of the pseudo inverse of x before alternating between the E and the M steps (see file src/quantization/solver.py and in particular the docstring). \\n\\nWe provided pointers to the files in the code anonymously shared on OpenReview. To our knowledge, these implementation strategies are novel in this context and were key in the development of our method to be able to iterate rapidly. Both strategies are documented in the code so that they can benefit to the community. \\n\\nIncorporating the non-linearity\\nAs the Reviewer rightfully stated, optimally we should take the non-linearity in Equation (4) into account. One could hope for a higher compression ratio. Indeed, the approximation constraint on the positive outputs would stay the same (they have to be close to the original outputs). On the other hand, the only constraint lying on the negative outputs is that they should remain negative (with a possible margin), but not necessarily close to the original negative outputs. However, our early experiments with this method resulted in a rather unstable EM algorithm. This direction may deserve further investigation.\"}",
"{\"title\": \"Answer\", \"comment\": \"We thank Reviewer 4 for stating that \\u201cthe proposed method has a good compression ratio while maintaining competitive accuracy\\u201d. We provide clarification for the two main questions of the Reviewer below.\\n\\nNovelty of the paper\\nAs we state in our introduction, using codebooks to compress networks is not new, as well as using a weighted k-means technique. However, as we state in the paper: \\u201cThe closest work we are aware of is the one by Choi et al. (2016), but the authors use a different objective (their weighted term is derived from second-order information) along with a different quantization technique (scalar quantization). Our method targets a better in-domain reconstruction, as depicted by Figure 1\\u201d. \\n\\nNote that we already cite two of the suggested references by Reviewer 4, namely \\u201cTowards the limit of network quantization\\u201d and \\u201cThiNet: A filter level pruning method for deep neural network compression\\u201d in our work. We will further clarify our positioning in an updated version of the paper. \\n\\nCompression ratio\\nWe provide an example of the computation of compression ratio in Section 4.1, paragraph \\u201cMetrics\\u201d. Let us detail it further here. The memory footprint of a compressed layer is split between the indexing cost (one index per block indicating the centroid used to encode the block) and the cost of storing the centroids. Say we quantize a layer of size 128 \\u00d7 128 \\u00d7 3 \\u00d7 3 with 256 centroids and a block size of 9. Then, each block of size 9 is indexed by an integer between 0 and 255: such integer can be stored using 8 bits or 1 byte (as 2^8 = 256). Thus, as we have 128 x 128 blocks, the indexing cost is 128 x 128 x 1 byte = 16,384 bytes = 16 kB. Finally, we have to store 256 centroids of dimension 9 in fp16, which represents 256 x 9 floats (fp16) = 256 x 9 x 2 = 4,608 bits = 4.5 kB. The size of the compressed model is the sum of the sizes of the compressed layers. Finally, we deduce the overall compression ratio which is the size of the compressed model divided by the size of the non-compressed model.\"}",
"{\"title\": \"Answer\", \"comment\": \"We thank Reviewer 1 for their insightful questions and suggestions. We agree that Product Quantization (PQ) is key to get \\u201cimpressive compression ratio\\u201d while maintaining competitive accuracy, provided that there is some special structure and redundancy in the weights and the way we quantize them.\\n\\nWhich kind of redundancy does our method capture? \\nAs rightfully stated by Reviewer 1, choosing which elementary blocks to quantize in the weight matrices is crucial for the success of the method (what the Reviewer calls \\u201chorizontal/vertical/other\\u201d correlation). In what follows, let us focus on the case of convolutional weights (of size C_out x C_in x K x K). As we state in our paper: \\u201cThere are many ways to split a 4D matrix in a set of vectors and we are aiming for one that maximizes the correlation between the vectors since vector quantization-based methods work the best when the vectors are highly correlated\\u201d. We build on previous work that have documented the *spatial redundancy* in the convolutional filters [1], hence we use blocks of size K x K. Therefore, we rely on the particular nature of convolutional filters to exploit their spatial redundancy. We have tried other ways to split the 4D weights into a set of vectors to in preliminary experiments, but none was on par with the proposed choice. We agree with Reviewer 1 that the method would probably not yield as good a performance for arbitrary matrices. \\n\\nUsing row permutations to improve the compressibility?\\nThis is a very good remark. Indeed, redundancy can be artificially created by finding the *right* permutation of rows (when we quantize using column blocks for a 2D matrix). Yet in our preliminary experiments, we observed that PQ performs systematically worse both in terms of reconstruction error and accuracy of the network that when applying a random permutation to a convolutional filter. This confirms that our method captures the spatial redundancy of the convolutional filters as stated in the first point. \\n\\n[1] Exploiting linear structure within convolutional networks for efficient evaluation, Denton et al.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The suggested method proposes a technique to compress neural networks bases on PQ quantization. The algorithm quantizes matrices of linear operations, and, by generalization, also works on convolutional networks. Rather than trying to compress weights (i.e. to minimize distance between original and quantized weights), the algorithm considers a distribution of unlabeled inputs and looks for such quantization which would affect output activations as little as possible over that distribution of data. The algorithm works by splitting each column of W_ij into m equal subvectors, learning a codebook for those subvectors, and encoding each of those subvectors as one of the words from the codebook.\\n\\nThe method provides impressive compression ratios (in the order of x20-30) but at the cost of a lower performance. Whether this is a valuable trade-off is highly application dependent.\\n\\nOverall I find the paper interesting and enjoyable. However, as I am not an expert in the research area, I can not assess how state of the art the suggested method is.\\n\\nThere are a few other questions that I think would be nice to answer. I will try to describe them below:\\n\\nSuppose we have a matric W_{ij} with dimensions NxM where changing i for a given j defines a column. By definition, linear operation is defined \\ny_i = sum_j W_ij x_j . Now say each column of matrix W is quantized into m subvectors. We can express W_ij in the following way:\\nW_ij = (V^1_ij + V^2_ij + ... V^m_ij)x_j where V^m_ij is zero everywhere except for the rows covering a given quantized vector.\\nFor example, if W had dimensions of 8x16 and m=4, \\nV^2_{3,j}=0, for all j, V^2_{4,j}=non_zero, V^2_{7,j}=non_zero, V^2_{8,j}=0, V^2_{i=4:8,j}=one_of_the_quantized_vectors.\\n\\ny_i = sum_j W_ij x_j = sum_k sum_j (V^k_ij) x_j =def= sum_k z^k_i where z^k are partial products: z^k_i=0 for i<k*N/m and i>(k+1)N/m\\n\\nThus, the suggested solution effectively splits the output vector y_i into m sections, defines sparse matrices V^k_{ij} 1<=k<=m, and performs column-wise vector quantization for these matrices separately.\\n\\nGenerally, it is not ovious or given that the current method would be able to compress general matrices well, as it implicitly assumes that weight W_{ij} has a high \\\"correlation\\\" with weights W_{i+kN/m,j} (which I call \\\"vertical\\\" correlation), W_{i,k+some_number} (which I call \\\"horizontal\\\" correlation) and W_{i+kN/m,k+some_number} (which I call \\\"other\\\" correlation). It is not given that those kind of redundancies would exist in arbitrary weight matrices.\\n\\nNaturally, the method will work well when weight matrices have a lot of structure and then quantized vectors can be reused. Matrices can have either \\\"horizontal\\\" or \\\"vertical\\\" redundancy (or \\\"other\\\" or neither). It would be very interesting to see which kind of redundancy their method managed to caprture.\\n\\nIn the 'horizontal' case, it should work well when inputs have a lot of redundancy (say x_j' and x_j'' are highly correlated making it possible to reuse code-words horizontally within any given V^k: V^k_ij'=V^k_ij''). 
However, if thise was the case, it would make more sense to simply remove redundancy by prunning input vector x_j by removing either x_j' or x_j'' from it. This can be dome by removing one of the outputs from the previous layer. This can be a symptom of a redundant input.\\n\\nAnother option is exploiting \\\"vertical\\\" redundancy: this happens when output y_i' is correlated with output y_{i'+N/m}. This allows the same code-word to be reused vertically. This can be a symptom of a redundant output. It could also be the case that compressibility could be further subtantially improved by trying different matrix row permutations. Also, if one notices that y_i' ir correlated with y_i'', it might make sense to permute matrix rows in such a way that both rows would end up a multiple N/m apart. It would be interesting to see how this would affect compressibility.\\n\\nThe third case is when code words are reused in arbitrary cases. \\n\\nGenerally, I think that answering the following questions would be interesting and could guide further research:\\n1. It would be very interesting to know what kind of code-word reusa patterns the algorithm was able to capture, as this may guide further research.\\n2. How invariance copressibility is under random permutations of matrix rows (thus also output vectors)?\"}",
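The reviewer's decomposition of W into sparse matrices V^k can be checked numerically. Below is a small sketch in the reviewer's notation, with the k-th subvector covering rows kN/m to (k+1)N/m in zero-indexed form; this is our own illustrative verification, not part of the paper or review.

```python
import numpy as np

# Check that splitting each column of W into m row-blocks is equivalent to
# writing W as a sum of m sparse matrices V^k, so that y = Wx = sum_k V^k x.
N, M, m = 8, 16, 4
W = np.random.randn(N, M)
x = np.random.randn(M)
V = np.zeros((m, N, M))
for k in range(m):
    V[k, k * N // m:(k + 1) * N // m, :] = W[k * N // m:(k + 1) * N // m, :]
assert np.allclose(W @ x, sum(V[k] @ x for k in range(m)))
```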
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes to use codes and codebooks to compress the weights. The authors also try minimizing the layer reconstruction error instead of weight approximation error for better quantization results.\\nDistillation loss is also used for fine-tuning the quantized weight. Empirical results on resnets show that the proposed method has a good compression ratio while maintaining competitive accuracy.\\n\\nThis paper is overall easy to follow. My main concern comes from the novelty of this paper. The two main contributions of the paper: \\n(1) using codes and codebooks to compress weights; and \\n(2) minimizing layer reconstruction error instead of weight approximation error\\nare both not new. For instance, using codes and codebooks to compress the weights has already been used in [1,2]. A weighted k-means solver is also used in [2], though the \\\"weighted\\\" in [2] comes from second-order information instead of minimizing reconstruction error. In addition, minimizing reconstruction error has already been used in low-rank approximation[3] and network pruning[4]. \\nClarification of the connections/differences, and comparison with these related methods should be made to show the efficacy of the proposed method.\\n\\nIt is not clear how the compression ratio in table 1 is obtained. Say for block size d=4, an index is required for each block, and the resulting compression ratio is at most 4 (correct me if I understand it wrong).\\nCan the authors provide an example to explain how to compute the compression ratio? \\n\\n[1]. Model compression as constrained optimization, with application to neural nets. part ii: quantization. \\n[2]. Towards the limit of network quantization.\\n[3]. Efficient and Accurate Approximations of Nonlinear Convolutional Networks.\\n[4]. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression.\"}",
"{\"title\": \"Answer\", \"comment\": [\"Thanks for pointing out this reference! It is definitely relevant to our work, and therefore we will add it in our paper. We would like to point out that our method goes beyond this prior work on several aspects:\", \"We use Product Quantization (i.e quantizing chunks of columns) whereas in the cited work the authors use Vector Quantization (i.e. the authors quantize the columns). Our choice takes better advantage of the spatial redundancy of information in the convolutional filters as VQ is less likely to discover the mutual dependency except if using very large amount of data for learning.\", \"The cited work does not quantize the layers sequentially and does not finetune the learned centroids -- it finetunes the dense (non-compressed) weights of the classifier.\", \"The cited work does not use distillation to take advantage of the teacher, non-compressed network to help compressing the student network.\", \"The speed-up reported in the cited work assumes that the scalar products between the input activations of all layers and the centroids are already pre-computed, which is not the case in a real inference scenario.\"]}",
"{\"title\": \"similiar work without quoted\", \"comment\": \"https://arxiv.org/abs/1512.06473\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper suggests a quantization approach for neural networks, based on the Product Quantization (PQ) algorithm which has been successful in quantization for similarity search. The basic idea is to quantize the weights of a neuron/single layer with a variant of PQ, which is modified to optimize the quantization error of inner products of sample inputs with the weights, rather than the weights themselves. This is cast as a weighted variant of k-means. The inner product is more directly related to the network output (though still does not account for non-linear neuron activations) and thus is expected to yield better downstream performance, and only requires introducing unlabeled input samples into the quantization process. This approach is built into a pipeline that gradually quantizes the entire network.\\n\\nOverall, I support the paper and recommend acceptance. PQ is known to be successful for quantization in other contexts, and the specialization suggested here for neural networks is natural and well-motivated. The method can be expected to perform well empirically, which the experiments verify, and to have potential impact.\", \"questions\": \"1. Can you comment on the quantization time of the suggested method? Repeatedly solving the EM steps can add up to quite an overhead. Does it pose a difficulty? How does it compare to other methods?\\n2. Can you elaborate on the issue of non-linearity? It is mentioned only briefly in the conclusion. What is the difficulty in incorporating it? Is it in solving equation (4)? And perhaps, how do you expect it to effect the results?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper addresses to compress the network weights by quantizing their values to some fixed codeword vectors. The authors aim to reduce the distortion of each layer rather than the weight distortion. The proposed algorithm first selects the candidate codeword vectors using k-means clustering and fine-tune them via knowledge distillation. The authors verify the proposed algorithm by comparing it with existing algorithms for ResNet-18 and ResNet-50.\\n\\nOverall, I think that the proposed algorithm is easy to apply and the draft is relatively well written. Some questions and doubts are listed below.\\n\\n-In k-means clustering (E-step and M-step), is it correct to multiply \\\\tilde x to (c-v)? I think that the error arising from quantizing v into c is only affected by a subset of rows of \\\\tilde x. For example, if v is the first subvector of w_j, then I think that only 1-st, m+1-th, 2m+1-th, \\u2026 rows of \\\\tilde x affect to the error.\\n\\n-Does minimizing reconstruction error minimizes the training loss (before any further fine-tuning) compared to na\\u00efve PQ? If not, \\n\\n-Is there any guideline for choosing the optimal number of centroids and the optimal block size given a target compression rate?\\n\\n-Is there any reason not comparing the proposed algorithm with other compression schemes? (e.g., network pruning and low-rank approximation)\"}"
]
} |
rkesVkHtDr | Meta-Learning Runge-Kutta | [
"Nadine Behrmann",
"Patrick Schramowski",
"Kristian Kersting"
] | Initial value problems, i.e. differential equations with specific initial conditions, represent a classic problem within the field of ordinary differential equations (ODEs). While the simplest types of ODEs may have closed-form solutions, most interesting cases typically rely on iterative schemes for numerical integration such as the family of Runge-Kutta methods. They are, however, sensitive to the strategy by which the step size is adapted during integration, which has to be chosen by the experimenter. In this paper, we show how the design of a step size controller can be cast as a learning problem, allowing deep networks to learn to exploit structure in the initial value problem at hand in an automatic way. The key ingredients for the resulting Meta-Learning Runge-Kutta (MLRK) are the development of a good performance measure and the identification of suitable input features. Traditional approaches suggest the local error estimates as input to the controller. However, by studying the characteristics of the local error function we show that including the partial derivatives of the initial value problem is favorable. Our experiments demonstrate considerable benefits over traditional approaches. In particular, MLRK is able to mitigate sudden spikes in the local error function by a faster adaptation of the step size. More importantly, the additional information in the form of partial derivatives and function values leads to a substantial improvement in performance. The source code can be found at https://www.dropbox.com/sh/rkctdfhkosywnnx/AABKadysCR8-aHW_0kb6vCtSa?dl=0 | [
"odes",
"step size",
"initial value problem",
"mlrk",
"traditional approaches",
"local error function",
"partial derivatives",
"initial value problems",
"differential equations",
"specific"
] | Reject | https://openreview.net/pdf?id=rkesVkHtDr | https://openreview.net/forum?id=rkesVkHtDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1AbRf1Niw",
"ryxVF3rhoB",
"HklZ0EFFjB",
"BkeMBr8usS",
"HylH-GnVor",
"H1xtwynNsS",
"Hyll7CsNsB",
"BJeZoWc19r",
"Syl61bA3KB",
"Hke_qEKjtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729312,
1573833851848,
1573651657181,
1573573945876,
1573335548981,
1573334880572,
1573334551734,
1571951001265,
1571770596988,
1571685519567
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1667/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1667/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1667/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1667/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1667/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1667/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1667/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1667/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1667/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Summary: This paper casts the problem of step-size tuning in the Runge-Kutta method as a meta learning problem. The paper gives a review of the existing approaches to step size control in RK method. Deriving knowledge from these approaches the paper reasons about appropriate features and loss functions to use in the meta learning update. The paper shows that the proposed approach is able to generalize sufficiently enough to obtain better performance than a baseline.\\n\\n\\nThe paper was lacking in advocates for its merits, and needs better comparisons with other baselines before it is ready to be published.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for addressing my comments\", \"comment\": \"The authors have been very responsive to reviewer comments. I think this will make their paper much stronger. In particular, I would strongly suggest that they include the exhaustive table above in the body of the paper.\\n\\nRegarding (2), the authors make a good point---MLRK stays within the allowed error. I'm still not convinced that \\\"fewer steps\\\" and \\\"similar wall time\\\" constitutes an improvement over the baseline, since wall time is what we really care about. Can the authors make an argument that refinement of their method will lead to a method that is actually faster than the baseline? \\n\\nAs I mentioned above, I think the premise of this paper is really interesting, but I would like to see stronger experimental results. That said, I would be willing to raise my rating to a 5.\\n\\nWhether the paper is accepted or not, the authors should certainly keep pursuing this line of research.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thanks for the valuable feedback.\\n\\nConcerning (1), while pushing for more general problems is indeed interesting and on our agenda, one of the take-away messages of the present paper is to illustrate that DNNs can actually help speeding up classical initial value problem solvers. Our examples clearly show that classical engineers could benefit from current deep learning. This was also the feedback we got from colleagues from the engineering domain.\\n\\nConcerning (2), we disagree in the following sense. Thee errors are indeed higher but within the range of digits where one says that a solution has been found. So the main point is indeed the \\u201cfewer steps\\u201d and \\u201csimilar wall time\\u201d. Our goal is to speed up classical engineering techniques without compromising the quality, which our numbers do show. \\n\\nConcerning (3):\\n\\nint | steps | mean local error | time |\\n | baseline| err | partial| grad | baseline | err | partial | grad | baseline | err | partial | grad |\\n------------------------------------------------------------------------------------------------------------------------------------------------------\\n1 | 21.59 | 16.42| 12.40 | 12.09 | 7.17e-4 | 6.58e-4 | 4.01e-4 | 3.74e-4 | 0.0255 | 0.0263 | 0.0254 | 0.0221 |\\n3 | 33.74 | 29.12| 25.16 | 24.74 | 6.40e-4 | 5.45e-4 | 2.49e-4 | 2.28e-4 | 0.0405 | 0.0375 | 0.0460 | 0.0403 |\\n5 | 45.43 | 41.07| 36.42 | 36.02 | 5.18e-4 | 4.47e-4 | 1.95e-4 | 1.75e-4 | 0.0591 | 0.0517 | 0.0681 | 0.0596 |\\n7 | 56.84 | 52.57| 48.34 | 47.97 | 4.97e-4 | 4.16e-4 | 1.57e-4 | 1.39e-4 | 0.0858 | 0.0742 | 0.1036 | 0.0897 |\\n10 | 73.46 | 69.41| 65.40 | 65.04 | 4.59e-4 | 3.82e-4 | 1.32e-4 | 1.18e-4 | 0.0971 | 0.0825 | 0.1201 | 0.1065 |\\n\\n\\\"err\\\" is slightly faster and uses fewer steps than the baseline while producing smaller local errors during the integration as can be seen in Figure 3 of the paper and in this table. While \\\"partial\\\" and \\\"grad\\\" reduce the number of steps even further, wall time is increased. However one can clearly see that \\\"partial\\\" and \\\"grad\\u201c outperform both the baseline and \\\"err\\\" regarding the local error.\\n\\nThe method \\\"err\\\" produced the values in the table in (d) of the previous comment.\\n\\nFinally about (4), indeed, this is an interesting setting that we will definitely explore. Thanks for pointing this out. However, even the current results demonstrate already the benefit that learning to learn can have for classical engineering tasks.\"}",
"{\"title\": \"Response to Response to Reviewer 2\", \"comment\": \"Thank you for clarifying some of these points. I think adding wall clock times is particularly interesting. I still have a few concerns about the paper:\\n\\n1. It's unclear whether the Lagrange loss will ever be useful in practice. To apply it, we require a problem that is simple enough for either a closed form solution or a very accurate simulation. In this case, why would we want to use learned step sizes? I think the authors need to make the cases that there could be a family of problems with some \\\"easy\\\" members that could be used to train the model with Lagrange loss and some \\\"hard\\\" members that provide interesting applications. I appreciate that they are leaving a good deal of the work with Lagrange loss for future papers, but I think they need to describe a case where it could be useful or possibly leave it out altogether.\\n\\n2. To summarize the table in (e), their method yields\\n * Fewer steps\\n * Higher error\\n * Similar wall time\\nBased on this, it's hard to make a case for MLRK.\\n\\n3. For van der Pol, it's still not totally clear. Figure 3 depicts errors for the baseline, along with three versions of MLRK. It appears that 'partial' and 'grad' perform better than the baseline, but it would be nice to have this data in tabular form, along with timing information for each (it's not clear which method produced the values in the table in (d) above).\\n\\n4. It seems like the van der Pol oscillators in the training and test set all have $\\\\sigma ~ U(0, 1)$. I'm concerned that this doesn't give sufficient evidence that the model is generalizing. What happens if it is trained on vdP(0, 1) and tested on vdP(1, 2)?\\n\\n\\nAgain, I think this is a great premise for a paper, but I don't think the experimental evidence is strong enough at this point. I would still lean toward rejecting the paper, but I think the authors should certainly keep pursuing this idea. In it's current form, I also think this would make a really compelling workshop paper.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your time and feedback.\\n\\n1. You addressed your concern that we did not describe the problem of step size control well. We did not consider this as a fundamental preliminary in order to understand the general problem. However, we agree that for the ICLR community may be necessary. We will include a more thorough description.\\n\\n2. We think the reference to Butcher is sufficient in the main text, however we agree that an explanation would be helpful and we will add that to the appendix.\\n\\n3. In order to use the loss function in Equation 5, we either need a closed form solution as in the experiments with harmonic oscillators or we need to use global error estimation in order to approximate this value. Alternatively, one could consider solving the problem with a very small tolerance parameter to obtain a good approximation of the global error. However, with both these approaches we only obtain an expensive approximation of the global error. In order for our method to work well, we need a global error estimation algorithm that works reliably well, which is still an active area of research. For this reason we left further experiments with the Lagrange loss (Eq. 5) for future work.\\n\\n4.\\n(a) As you point out, we missed to reference the baseline used in the experiments. It is the one given in Equation (2) and is a standard step size controller used for Runge-Kutta. We will add a comment in the experiment section for the final version of the paper.\\n\\n(b) This is an interesting point. When we give both methods the same computational butget, they will arrive at different time points in the integration interval and hence, the resulting accuracy of the numerical solution is not comparable. We are uncertain about how to choose a tolerance that allows a clear comparison in both the number of steps and global error. We agree that the results are hard to interpret, however MLRK is within the given tolerance (0.001 * number of steps).\\n\\n(c) The intention of the experiment with van der Pol equations was that they are an interesting class of ODEs for step size control. As pointed out in 3. we left experiments with global error estimation to future work.\\n\\n(d) Agreed. We will include this in the final version. Here are the number of steps and computation time for van der Pol equations:\\nint | steps | time |\\n | Baseline | Our Method | Baseline | Our Method |\\n---------------------------------------------------------------------------\\n1 | 21.59 | 16.42 | 0.045 | 0.043 |\\n3 | 33.74 | 29.12 | 0.072 | 0.075 |\\n5 | 45.43 | 41.07 | 0.092 | 0.102 |\\n7 | 56.84 | 52.57 | 0.117 | 0.131 |\\n10 | 73.46 | 69.41 | 0.141 | 0.167 |\\n\\n(e) We agree that the clock times would be informative, here are some values:\\nTable for (low-freq) oscillators\\nint | steps | error | time |\\n | Baseline | Our Method | Baseline | Our Method | Baseline | Our Method |\\n--------------------------------------------------------------------------------------------------------------\\n1 | 3.28 | 3.19 | 0.000006 | 0.000007 | 0.0086 | 0.0074 |\\n3 | 6.82 | 6.04 | 0.000030 | 0.000103 | 0.0144 | 0.0130 |\\n5 | 10.35 | 8.18 | 0.000059 | 0.000326 | 0.0211 | 0.0207 |\\n7 | 13.70 | 10.15 | 0.000089 | 0.000608 | 0.0277 | 0.0293 |\\n10 | 18.85 | 13.03 | 0.000138 | 0.001083 | 0.0407 | 0.0460 |\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your time and feedback.\", \"how_mlrk_tackles_the_accuracy_vs_computation_challenge\": \"We argue that current hand-designed update rules are constructed based on certain assumptions that - if fulfilled - lead to a minimization of the step control objective. Replacing these hand-designed update rules by a learned one can significantly improve step size control as a learned method aims to minimize the objective without these assumptions.\", \"on_what_set_of_problems_do_we_expect_meta_learning_to_better_tackle_this_trade_off\": \"MLRK will lead to improvements whenever an ODE does not satisfy the assumptions made for the corresponding step size algorithm. Examples of ODEs where the assumptions are not met are given in the experiments; the baseline method is not able to control step sizes well for van der Pol equations or double pendulums. This is due to high spikes or chaotic behaviour in the local errors. Other types of ODEs with suddenly changing behaviour in the local errors will likely benefit from MLRK as well.\\n\\nNext, we want to address the distribution of classes of ODEs. As you point out our method may fail when applied to problems of very different distributions. A controller that is able to generalize to many different classes of problems is the ultimate goal and was proposed as future work in the conclusion. In particular, an approach similar to that of Wichrowska et al. is pointed out as a way to achieve a general step size controller. However, a controller that is specialized to a certain class of problems can also be of great interest for applications that require repeating numerical integrations of ODEs of similar form. For example, if the application continuously needs to integrate some parametric form of ODE with varying parameters our approach can lead to great improvement. We will make sure to point this out in the final version of the paper.\\n\\nAs pointed out, we missed to reference the baseline used in the experiments. It is the one given in Equation (2) and is a standard step size controller used for Runge-Kutta. We will add a comment in the experiment section for the final version of the paper.\\n\\nFurthermore, you propose an interesting idea to study the effect of a varying number of training instances. We do think this is a compelling idea that deserves further consideration. We are going to design accoring experiments and hope to be able to include them in the final version. Currently, we can not make any comment on the effect of a varying number of training data.\\nTo address the computational cost of our method, we want to point out that the ODEs of our current experiments are of rather low dimensionality and hence the additional cost of executing an LSTM is comparably high. For higher dimensional problems, the cost of an LSTM is comparably lower and hence we expect significant improvement in the computation cost.\\n\\nThe tolerance parameter tol corresponds to lambda, this is only very briefly touched on in the paragraph on \\\"The Objective of Step Size Control\\\" (Section 2), we will try to make this more clear in the discussion of the performance measures.\", \"reference\": \"Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio G\\u00f3mez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. Learned optimizers that scale and generalize. In Proceedings of the 34th International Conference on Machine Learning, pp. 
3751\\u20133760, Sydney, Australia, August 2017.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your time and feedback.\\n\\nYou argue that some of our arguments and ideas come without justifications. We motivate different aspects of our method.\\nFor example, we argue that current hand-designed update rules are constructed based on certain assumptions and that replacing these hand-designed update rules by a learned one can significantly improve step size control.\\nFurthermore, we justify the different performance measures and input features. The first performance measure in Equation (5) is based on the fact that in numerical integration we are interested in minimizing both the global error of the approximated solution as well as the computational cost. Our second performance measure in Equation (6) is the underlying objective of most common step size control algorithms and therefore qualifies as an approriate performance measure in our setup as well. The different sets of input features are justified and discussed as well. \\nFor this reason we are unsure which arguments you are refering to. Can you give a few more specific comments on that regard? \\n\\nThe distribution of training and test data is described in the appendix, for construction of the data an according number of samples were sampled from the distribution. In particular, a parametric form of the ODE is assumed and a distribution over the parameters is defined. We will make sure to include this description in the final version of the paper.\\n\\nYou mention an apparent contradiction in our experiments, however we think this is due to a confusion. The depicted local errors in Figure 3 show the local error of van der Pol equations, whereas the mean global error and number of steps in Table 1 and 2 are evaluated on harmonic oscillators - a different kind of ODE. Furthermore the models in these two experiments are trained with different loss functions. The loss functions and their relation are discussed at length in Section 2. Can you give us feedback if this clarifies your problem? Your comment would help us to decide if we need to clarify this in more detail in the paper.\\n\\nAs you point out, we missed to reference the baseline used in the experiments. It is the one given in Equation (2) and is a standard step size controller used for Runge-Kutta. We will add a comment in the experiment section for the final version of the paper. We agree that a comparison to other step size controllers, e.g. the ones in Equation (3) and (4), is appropriate and try to include them in the experiments of the final version.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to learn the step size for a Runge-Kutta numerical integrator for solving ordinary differential equations initial value problems. The authors frame the stepsize control problem as a learning problem, based on different performance measures, on ODE dependent inputs and on a LSTM for predicting the next step coefficient. Experiments are performed on 3 ODEs by training and testing in different contexts.\\nThe problem of designing adaptive controllers for ODE numerical schemes is interesting and is probably a new application for ML. The paper makes an effort to introduce the necessary background and for reviewing some classical adaptive controller techniques. The description of the method is relatively clear, but could however be largely improved in order to make it more accessible to the audience. Many of the arguments and proposed ideas come without justification, some definitions should be made more precise. The construction of the training and test sets should be better explained. The experiments show that the proposed approach leads to fewer evaluations but larger mean errors. The graphics also show that the local error is smaller for the proposed method than for the baselines which is in contradiction with the global error behavior. This should be clarified \\u2013 the relations between the two error types should be made clear. The baseline is not defined in the text so that it is difficult to judge the performance. Why not comparing to several adaptive baselines?\\n\\nIn conclusion, this is an interesting topic, the paper proposes new ideas. A more careful writing and especially a better comparison with sota baselines would greatly improve the paper. \\n\\n\\n------ post rebuttal -----\\nThanks for the answers. I still think that the ideas are interesting but that the experiments do not demonstrate enough of the proposed method. I will keep my score.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Updated review: Thanks to the authors for their response to my comments. I believe the strong point of this paper is the novel idea, however, I find the justification for that idea incomplete as author's seems to suggest that the proposed method is probably computationally more expensive (which is opposite to the original motivation of the paper).\\n\\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"summary\": \"This paper casts the problem of step-size tuning in the Runge-Kutta method as a meta learning problem. The paper gives a review of the existing approaches to step size control in RK method. Deriving knowledge from these approaches the paper reasons about appropriate features and loss functions to use in the meta learning update. The paper shows that the proposed approach is able to generalize sufficiently enough to obtain better performance than a baseline.\\n\\nI think this paper, in general, is clear and well-written. I believe the idea of the paper is interesting too. \\n\\nThe paper argues that the main challenge of solving the step size control problem for the RK method is balancing the computation vs accuracy trade-off. Existing methods tackle this problem in different ways and this paper proposes to solve it via meta-learning. However, the paper does not mention how and why meta-learning is expected to tackle this challenge?\\nSo a couple of comments on what set of problems do we expect meta-learning to better tackle this trade-off than the existing methods would have been useful. I am wondering if it is even possible to say something about this in principle? \\n\\nThe paper argues that the idea behind using meta-learning is to learn behaviour from a given class of problems and then generalize to new unseen problems (from the same or different classes). \\nHow do we know that these problems are even from same distributions? \\nWon't the proposed approach fail spectacularly when the problems are not from the same distribution? It would have been nice if the paper made this distinction even if empirically. \\n\\nIn the experiments section, I could not find/understand what exactly is the baseline the paper is comparing to. \\n\\nI was more interested in a study that compared the performance of MLRK as the number of instances of the training problems are varied. \\nThis again makes me come back to the original point of computational cost vs accuracy. What is the computational cost of collecting data on 30000 instances of problems? Should we not worry about this cost?\\nAlso, what is the computational cost of the proposed approach and why are we not comparing it to existing approaches/baseline?\", \"minor_comments\": \"what is tol? it tol the same tolerance as lambda.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"## Summary ##\\n\\nThe authors present a method for learning a step-size adaptation strategy for Runge-Kutta methods. The principal contributions of the paper are:\\n\\n1. They define a loss function that better captures global performance of the controller, rather than just local behavior.\\n2. They propose a set of input features to a step size controller. It includes the intermediate Runge-Kutta evaluation values, which allow the controller to approximate the derivatives of the Jacobian function $g$.\\n3. They describe a recurrent architecture for adapting step size over time.\\n\\nRunge-Kutta methods are a workhorse of ordinary differential equations and choosing step size is one of the central challenges involved in their application. Better methods for step size selection would definitely be of broad interest. As the authors point out, existing methods often consist of hand-tuned heuristics---a feature that often suggests machine learning could provide significant improvements.\\n\\nWhile the premise of the paper is very promising, I don't think it is ready to be accepted to ICLR at this time. Most significantly, the experimental results are not particularly compelling. I believe the authors should refine their method, aim for better experimental results and resubmit. I have included more specific comments below.\\n\\n## Specific Comments ##\\n\\n1. I think the paper would benefit from a clearer description of the RK step-size selection problem. For instance, for a p^th order RK solver, at each time step,\\n\\n * Inputs: t, y(t), g # Also possibly intermediate values from previous time steps.\\n * Select a step size h(t)\\n * Evaluate g at p different points based on h(t).\\n * Use these evaluations to compute a value of y_(t + h_t)\\n\\n For those that aren't familiar with these methods (at ICLR there will be many!) I think this would help explain where the authors' method (and the other methods your describe) fits into the larger algorithm.\\n\\n2. I think a short explanation of error estimation would be helpful in addition to the reference to Butcher. This estimation is critical to step size adaptation. In particular I think this would be clearer if the authors expanded the paragraph at the bottom of page 4, where they describe error in a polynomial in (t_n, h) whose coefficients are derivatives of g.\\n\\n3. The authors' proposed loss function (Eq. 5) includes the true value of y at t_n, y(t_n). They acknowledge that this may make it computationally prohibitive, but I think this point warrants further discussion. Does this mean that their loss function can only be used on problems for which we have a closed form solution (such as harmonic oscillators)? I noticed that in their van der Pol experiments, the authors switch to a more standard loss (Eq. 6). Is this because Eq. 5 is intractable in this example? Is there a reasonable approximation to Eq. 5 that could be used when a closed form solution is not known?\\n\\n4. There are a number of issues with the experiments that I think could use clarification or improvement:\\n\\n (a) The authors compare to a 'baseline' but I don't believe this baseline is defined anywhere. 
Is it one of the adaptation methods described in Section 2?\\n\\n (b) In Table 1, the baseline method achieves lower error, while MLRK uses fewer steps. It is difficult to assess if this is an improvement since this this cost-accuracy tradeoff is at the heart of the Lagrangian formulation. Ideally, shouldn't we be able to adjust 'tol' to trace out some Pareto frontier for the cost-accuracy tradeoff? In this case, wouldn't we hope for MLRK to be able to achieve better accuracy given the same computational budget?\\n\\n (c) In the van der Pol experiments, the authors switch to local L1 loss (Eq. 6). Was the intention to experiment with local L1 loss and van der Pol provided an interesting class of examples? Or was the reasoning that Eq. 5 is intractable for van der Pol so they had to use local L1 loss? If Eq. 5 can still be evaluated on these experiments, this might be a more convincing comparison.\\n\\n (d) The number of steps required was provided for the harmonic oscillator experiments, but not the van der Pol ones. This would be helpful for comparing the methods.\\n\\n (e) What is the computational overhead of running an RNN alongside your solver? Although it doesn't tell the whole story, it would be informative to report wall clock time along with the number of steps required for each method.\"}"
]
} |
HyxjNyrtPr | RGBD-GAN: Unsupervised 3D Representation Learning From Natural Image Datasets via RGBD Image Synthesis | [
"Atsuhiro Noguchi",
"Tatsuya Harada"
] | Understanding three-dimensional (3D) geometries from two-dimensional (2D) images without any labeled information is promising for understanding the real world without incurring annotation cost. We herein propose a novel generative model, RGBD-GAN, which achieves unsupervised 3D representation learning from 2D images. The proposed method enables camera parameter--conditional image generation and depth image generation without any 3D annotations, such as camera poses or depth. We use an explicit 3D consistency loss for two RGBD images generated from different camera parameters, in addition to the ordinary GAN objective. The loss is simple yet effective for any type of image generator, such as DCGAN and StyleGAN, to be conditioned on camera parameters. Through experiments, we demonstrated that the proposed method could learn 3D representations from 2D images with various generator architectures. | [
"image generation",
"3D vision",
"unsupervised representation learning"
] | Accept (Poster) | https://openreview.net/pdf?id=HyxjNyrtPr | https://openreview.net/forum?id=HyxjNyrtPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"WybA5-my0c",
"SJl2lItnjr",
"H1lhwI5KiB",
"rkgrqN9Fir",
"S1e_fN9Ysr",
"SJxNx3Optr",
"H1gPwuOTtH",
"r1lqcDSiYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729279,
1573848564351,
1573656164380,
1573655693302,
1573655567802,
1571814380319,
1571813470800,
1571669905731
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1666/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1666/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1666/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1666/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1666/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1666/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1666/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper has initially received mixed reviews, with two reviewers being weakly positive and one being negative. Following the author's revision, however, the negative reviewer was satisfied with the changes, and one of the positive reviewers increased the score as well.\\n\\nIn general, the reviewers agree that the paper contains a simple and well-executed idea for recovering geometry in unsupervised way with generative modeling from a collection of 2D images, even though the results are a bit underwhelming. The authors are encouraged to expand the related work section in the revision and to follow our suggestion of the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper modifications\", \"comment\": [\"We would like to thank all the reviewers for their valuable comments.\", \"We revised the paper. The major modifications are as follows.\", \"Normalize depth images and visualize them with colormaps\", \"Separate the related works section from the introduction section\", \"Add some related works and discussions\", \"Additional results in the appendix\", \"Add results for point cloud visualization\", \"Quantitative evaluation on color and depth\", \"Add the explanation for \\\"warp\\\" and \\\"projection\\\" in the appendix\", \"Add the explanation for initial K in the appendix\"]}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We would like to thank the reviewer for valuable comments.\\n\\nRelated works\\n- We will separate the related work section from the introduction section.\\n\\nDepth visualization\\n- We will normalize the depth maps and visualize it in colormap (as reviewer2 says) for better visualization.\\n\\n3D vs. 2.5D\\n- Our model can not only generate RGBD images, which is commonly considered 2.5D, but also explicitly control camera poses while preserving the image content. Therefore, it can be regarded that the model can learn full 3D geometry implicitly, though the output is not fully 3D. This is the intuition to use the word \\\"3D\\\".\\n\\nScalability of HoloGAN\\n- We agree that the badness of the scalability of HoloGAN is not supported by our experimental results. Therefore, we will delete the scalability part from the introduction.\\n\\nNecessity for 3D annotations\\n- Thank you for introducing related papers. Though they do not need annotations, both methods can only deal with synthetic primitive datasets. Our method, however, can work on natural images. We will add the discussion to the related works section.\\n\\nwavy flag\\n- This is a conceptual figure of learned DeepVoxels. DeepVoxels are implicit representations, and we cannot visualize what is acquired. We agree that the figure is ambiguous, we will replace the figure.\\n\\nAbout K\\n- Because learning K from single images is difficult, we initialize K with [[2s, 0, s/2], [0, 2s, s/2], [0, 0, 1]] (numpy-style order), where $s$ is image size. We will add the explanation in the paper.\\n\\nfloor/sky\\n- We did not try adding floor or sky to render the ShapeNet car dataset. We think adding simple sky or floor will help learning depth information to some extent, but it is difficult to learn consistent depth. This is because foreground regions have common salient concepts across views (eg. tire, headlight, window, ...) but the background does not. This is also problematic when we train the model on a car image dataset, which has floor and sky, as shown in Figure 7.\\n\\nEvaluation for depth\\n- Evaluating the generated depth is difficult because we cannot obtain ground truth depth for the generated images. A possible approach to evaluate depth images without ground truth images is calculating the inception score (IS) [5] or FID on the generated depth images, but we do not think it is appropriate. This is because IS and FID are estimated in the feature space of a pre-trained CNN, and they cannot consider the geometry in the 3D world. Therefore it is almost impossible to evaluate how the generated depth is plausible in 3D space. Instead, we will evaluate the depth consistency across views to quantitatively compare the generated depth among different methods. When we plot point clouds generated from the same $z$ but different $c$, all points should be on a single surface. Therefore, by calculating the variation of the generated depth, we can quantitatively evaluate the 3D consistency across views. It is expected that 3D-latent-feature-based models have better performance than other models. We will add the results in the paper.\\n\\n[5] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We would like to thank the reviewer for valuable comments.\\n\\nAdditional results\\n- We will add a large number of results to the appendix\\n\\nWriting style\\n- We will get English proofreading for our paper. If you do not mind, please let us know concretely which parts are difficult to read.\\u00a0\\n\\nDepth visualization\\n- For all depth images, we will normalize them (as reviewer1 says) and visualize them with colormap. Moreover, we will add a reference sphere in Figure 6.\\n\\nBackground depth\\n- Figure 6 shows the results on StyleGAN. The background depth seems small in Figure 6, but it does not cover foregrounds even if the image is rotated (within the angle range during training). This is contrary to the results of DeepVoxels in Figure 4, where the background pixels hide foregrounds when the generated image is rotated. I will add this explanation in Section 3.3, and comparison experiments in the appendix.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We would like to thank the reviewer for valuable comments.\", \"equation_5\": \"- Figure 3 in [1] shows the detailed illustration of the \\\"warp\\\" operation. First, we calculate the position of pixels in G_{RGB}(z, c_1) when they are viewed from c_2 (c_{1->2} is used here), and warp G_{RGB}(z, c_2) according to the calculated positions with bilinear interpolation. Therefore, the relative transformation matrix we need is c_{1->2}. We will add more explanation in the paper.\\n- Since the depth values in warp(G_{D}(z, c_2), c_{1->2}) are sampled from the depth values viewed from c_2, to compare G_{D}(z, c_1) with warp(G_{D}(z, c_2), c_{1->2}), we need to project the depth values of each pixel in G_{D}(z, c_1) to the viewpoint c_2. This is what we call \\\"projection\\\". We will add the explanation in the paper.\\n\\nDifferences between not using 3D loss and using it\\n- I will pick one generator and compare the generative results qualitatively in the paper. Because PGGAN and StyleGAN without 3D loss cannot control camera poses, we can only compare the random generation results. Comparisons on the 3D-latent-feature-based methods are already provided in Figure 4, 5 and Table 1, but we will add more results in the appendix.\\n\\nRepresentation learning\\n- Learning generative models is often called \\\"representation learning\\\" because it can learn latent representations of the images [2, 3, 4, ...]. We will add a discussion about this in the paper.\\n\\n\\n[1] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017.\\u00a0\\n[2] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.\\u00a0\\n[3] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.\\n[4] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. 2019.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"SUMMARY: Unsupervised/Self-supervised generative model for image synthesis using 3D depth and RGB consistency across camera views\", \"claims\": [\"New technique for RGBD synthesis using loss in 3D space\", \"Can disentangle camera parameters from content (I disagree slightly with \\\"disentangle\\\" since you are conditioning on camera parameters in the first place)\", \"Different generator architectures can be used\"], \"method\": \"Generate RGBD images of 2 different views, have an adversarial loss on the RGB image, have a content loss between RGB1 and warp(RGB2), have a depth loss between D1 and warp(D2)\", \"equation_5\": [\"Possibly either \\\"c_{1->2}\\\" needs to be replaced by \\\"c_{2->1}\\\", or \\\"G_{RGB}(z, c_1) - warp(G_{RGB}(z, c_2), c_{1->2})\\\" needs to be replaced by \\\"warp(G_{RGB}(z, c_1), c_{1->2}) - G_{RGB}(z, c_2)\\\" (or am I missing something?)\", \"Not entirely sure why there is a different \\\"projection\\\" operation, since both \\\"warp\\\" and \\\"projection\\\" are calculated from Equation 3. I understand that \\\"warp\\\" is the combined Rt matrix that is estimated using the two views and Equation 3, assuming that the \\\"d\\\"s are correct. Not sure what \\\"projection\\\" does though, possibly explain it better?\"], \"decision\": \"Very clearly written paper, simple idea executed well\\n\\nThe paper is clearly written and well organized. It uses a simple idea, and performs sufficient number of experiments to explore the idea. It is not very novel, but the paper shows its applicability with multiple architectures as a bonus.\\n\\nThe figures showed results almost only from their method. It would be great to pick one generator architecture, and elucidate more on the differences between not using their 3D loss and using it. Good attempt though.\", \"additional_feedback\": [\"Might not be \\\"representation learning\\\", instead it is learning a generative model.\", \"\\\"3 EXPERIMETNS\\\" -> \\\"3 EXPERIMENTS\\\"\", \"The appendix should have more details on the equations and the specific formulations of warp and projection operations\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The submission proposes a technique to learn RGBD image synthesis from RGB images. A distinctive feature proposed by the technique is the user-controllable camera rotation parameters, learned in an unsupervised manner. The technique can be used in conjunction with various models, such as PGGAN, StyleGAN, and DeepVoxels.\\n\\nThis paper provides an interesting approach that can be a useful building block for future investigations.\\n\\nThe main issue I see with the paper is the number of results provided. Only 2 different images are shown per combination of model and dataset, limiting the reader's ability to assess the technique's performance. Would it be possible to provide a large number of results in a supplementary material or appendix?\\n\\nIn my opinion, this may be due to a difference in writing style, but the paper, in general, is slightly hard to read.\\n\\nThe depth in figures 1, 4, 5, 7 and 9 would be easier to read if it was displayed as a colormap (with the corresponding color bar) instead of grayscale. Additionally, a reference sphere would be appreciated near the normal maps shown in fig. 6 to inform the reader of the coordinates system used.\\n\\nSec. 3.3 states that \\u201cthe depth of the background is smaller than that of the face [for DeepVoxels]; however, this does not occur when the proposed loss is used\\u201d, however fig. 6 seems to show the contrary. Is it due to the depth discontinuity?\\n\\nMinor details\\n- Sec. 3 \\u201cExperimetns\\u201d: typo.\\n- Sec. 3.3, \\u201c[...] use the 2D CNN\\u201d: I would replace \\u201cthe\\u201d by \\u201ca\\u201d.\\n- Sec. 3.3, Third paragraph, the first sentence is hard to read.\\n- Sec. B \\u201cthe later voxels are ignore[d]\\u201d\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"# Review ICLR20, RGBD-GAN\\n\\nThis review is for the originally uploaded version of this article. Comments from other reviewers and revisions have deliberately not been taken into account. After publishing this review, this reviewer will participate in the forum discussion and help the authors improve the paper.\\n\\n## Overall\\n\\nThe article proposes a method of modifying image-generating networks to also produce depth maps in an unsupervised way by enforcing rotational consistency.\\n\\nI enjoyed reading this work and I'm recommending it to be accepted. However, first there are some (in my opinion straight-forward) changes that need to be made to this work before I can recommend its publication: \\n\\n- The common \\\"Related Works\\\" section is missing and some of the literature is taking place in the introduction. I find this unorganized and I'd recommend keeping the intro shorter and just moving the literature either behind the intro or to the end of the paper.\\n- Most figures and especially your headline figure (1) suffer from not having the depth normalized and not having a scale to it. The fix for this is simple and two-fold: for each depth image, subtract the minimum value and divide by the range (to normalize it and increase contrast), then write in the caption or as a legend that white is closer to the camera and black is further back.\\n- 3D vs. 2.5D - If the common geometric definition of \\\"3D\\\" was applied here, the article's title was correct. However, in computer vision and especially 3D vision, the term is commonly used to refer only to models that include full scene geometry, including the occluded backs of objects and the term 2.5D is used to describe assigning depth values to pixels in an RGB image (and therefore only covering the view-dependent front of the object), which I think is the case here. However, this is not a hill that I'll die on so if you insist on that terminology, I won't block acceptance.\\n- When you first discuss HoloGAN, you mention one of its main downsides being scalability and then proceed to not only explain that but also use a HoloGAN-like architecture in one of your experiments. I'd either remove the scalability argument or justify not just that but also how that's not relevant to your experiments.\\n- The following phrase occurs multiple times throughout: \\\"camera parameter conditional image generation\\\". I _think_ you're missing a dash between \\\"parameter\\\" and \\\"conditional\\\".\\n\\n\\n## Specific comments and questions\\n\\n### Abstract\\n\\nAll good.\\n\\n### Intro\\n\\n- Fig.1 normalize image \\n- The literature section in intro mentions \\\"For all methods, 3D annotations must be used...\\\" - that's not true. 
See [Rezende, 2016][1] and [Rajeswar, 2018][2]\\n- I understand how some literature is required to position your method, but I think it's better to not have the entire literature section in the center of the introduction\\n\\n[1]: https://arxiv.org/abs/1607.00662\\n[2]: https://openreview.net/forum?id=BJeem3C9F7\\n\\n### Method\\n\\n- 2.1 clear + nicely written\\n- Figure 2 good, caption a bit too short - figure+caption should be able to stand on their own\\n- Illustration of Figure 3 nice, except for unclear DeepVoxel part: what's the wavy orange flag stand for?\\n\\n### Experiments\\n\\n- You mention K is fixed, but where does the initial K come from? I assume it's just neglected (since it's not important for StyleGAN/PGGAN), but then this needs to be mentioned in the methods sections closer to the formulas dealing with K.\\n- Figure 4 - the depth maps need to be normalized. All we see here is a grey mush, even worse in Fig. 7\\n- For ShapeNet cars, the model seems to suffer from not having a reference for the top and bottom of the image - have you tried adding floor/sky?\\n- Figure 6, the tire marker is a good idea but image still unclear - I recommend slightly less rotation or an intermediate step between generated image and e.g. front view\\n- For quantitative results/FID: try using Hausdorff or Chamfer distance on the rendered scenes' pixels. We don't care about the goodness of the RGB generation but the depth.\\n\\n### Conclusion\\n\\nAll good, albeit a bit short.\\n\\n### Appendix\\n\\nI don't think I saw any references to the appendix in the main paper.\"}"
]
} |
rkgqN1SYvr | Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks | [
"Wei Hu",
"Lechao Xiao",
"Jeffrey Pennington"
] | The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance. Yet despite significant empirical and theoretical analysis, relatively little has been proved about the concrete effects of different initialization schemes. In this work, we analyze the effect of initialization in deep linear networks, and provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights. We show that for deep networks, the width needed for efficient convergence to a global minimum with orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence with Gaussian initializations scales linearly in the depth. Our results demonstrate how the benefits of a good initialization can persist throughout learning, suggesting an explanation for the recent empirical successes found by initializing very deep non-linear networks according to the principle of dynamical isometry. | [
"deep learning theory",
"non-convex optimization",
"orthogonal initialization"
] | Accept (Poster) | https://openreview.net/pdf?id=rkgqN1SYvr | https://openreview.net/forum?id=rkgqN1SYvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"AuyFGF4L7a",
"S1l4JT3joH",
"B1x4O5hoiB",
"Hkl2Bu2jor",
"SJeRmfcAKH",
"Sygdb88AFr",
"SkgU7E06FS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729250,
1573797084313,
1573796460043,
1573795907936,
1571885606330,
1571870207822,
1571836957908
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1665/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1665/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1665/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1665/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1665/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1665/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper shows that initializing the parameters of a deep linear network from the orthogonal group speeds up learning, whereas sampling the parameters from a Gaussian may be harmful.\\n\\nThe result of this paper can be interesting to the deep learning community. The main concern the reviewers raised is the huge overlap with the paper by Du & Hu (2019). It would have been nice to actually see whether the results for linear networks empirically also hold for nonlinear networks.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your encouraging review!\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your valuable comments. Please see our response to each individual question below.\\n\\n\\n\\u2014 Similarity to Du and Hu (2019) \\u2014\\nWhile the proof technique for our result on orthogonal initialization (Thm 4.1) is similar to that of Du and Hu (2019), we\\u2019d like to stress that the contribution of this paper is much more than the specific proof of this theorem.\\n\\nFirst of all, we believe that the main contribution of this paper should be the results themselves rather than the specific proof techniques. We establish rigorously that orthogonal initialization can drastically speed up optimization compared with Gaussian initialization in deep linear networks, which was empirically observed in a lot of previous work without a formal theoretical justification except for initialization.\\n\\nFurthermore, our negative result on Gaussian initialization (Thm 5.1) is an important piece of the paper (which takes up at least half of the technical part of the paper). This result is novel - there was no such result in Du and Hu (2019), and its proof is not adapted from any previous work (see our response to Reviewer 3).\\n\\n\\n\\u2014 Can use identity matrices in intermediate layers & only the randomness of the last layer is used \\u2014\\nYou are correct. Thanks for pointing out! It is true that we can just use identity initialization in intermediate layers. We think this is due to the specialness of linear networks. When generalizing this result to deep non-linear networks (on-going work), the randomness in all layers becomes important. Such randomness was also used in previous work that studied orthogonal random initializations, e.g. Pennington et al. (2017, 2018).\\n\\n[Pennington et al. (2017)] Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice.\\n[Pennington et al. (2018)] The emergence of spectral universality in deep networks.\\n\\n\\n\\u2014 Distribution of $W_L(0)$ \\u2014\\n$W_L(0)$ has size $d_y \\\\times m (d_y \\\\le m)$ and is drawn from the uniform measure over all row-orthogonal matrices satisfying $W_L(0)W_L(0)^\\\\top = mI_{d_y}$. Then $\\\\frac{1}{\\\\sqrt m} W_L(0)$ has unit-length orthogonal rows. When $\\\\frac{1}{\\\\sqrt m} W_L(0)$ is multiplied by $z \\\\in \\\\mathbb{R}^m$, it\\u2019s effectively projecting $z$ onto $d_y$ random directions (corresponding to rows of $\\\\frac{1}{\\\\sqrt m} W_L(0)$), and as a result we can apply concentration bounds on the norm of this product. Thank you for raising the confusion, and we will explain this more clearly in the paper.\\n\\n\\n\\u2014 Experiment \\u2014\\nThank you for the suggestion! We have added a short experiment section (Section 6) to the paper. We train a family of deep linear networks with their widths ranging in [10, 1000] and depths ranging in [1, 700]. Each network is trained on the same dataset using gradient descent starting from both Gaussian and orthogonal initialization, and we produce heat-maps whose colors exhibit the losses after 10,000 steps for all configurations.\\n\\nThe heat-maps clearly demonstrate a sharp transition from untrainable to trainable when we increase the width of the network. For Gaussian initialization, this transition occurs across a contour characterized by a linear relation between width and depth; for orthogonal initialization, the transition occurs at a width that is approximately independent of the depth. These observations match our theory excellently. 
Please see Section 6 for details.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your valuable comments and for appreciating our work. Please see our response to each individual question below.\\n\\n\\n\\u2014 Similarity to Du and Hu (2019) \\u2014\\nWhile the proof technique for our result on orthogonal initialization (Thm 4.1) is similar to that of Du and Hu (2019), we\\u2019d like to stress that the contribution of this paper is much more than the specific proof of this theorem.\\n\\nFirst of all, we believe that the main contribution of this paper should be the results themselves rather than the specific proof techniques. We establish rigorously that orthogonal initialization can drastically speed up optimization compared with Gaussian initialization in deep linear networks, which was empirically observed in a lot of previous work without a formal theoretical justification except for initialization.\\n\\nFurthermore, our negative result on Gaussian initialization (Thm 5.1) is an important piece of the paper (which takes up at least half of the technical part of the paper). This result is novel - there was no such result in Du and Hu (2019), and its proof is not adapted from any previous work (see below).\\n\\n\\n\\u2014 Is the proof of Thm 5.1 a straightforward extension of Shamir (2018)? \\u2014\\nThe proof is not a straightforward extension of Shamir (2018). Shamir\\u2019s proof relies on several special properties of 1-dim linear networks, such as a gradient norm bound for weights near initialization (Lemma 3 in his paper). Due to such specialness, his proof cannot be extended to any dimension greater than 1, as explicitly mentioned in his paper. Our result not only deals with multiple dimensions, but also reveals a nearly tight relation between width and depth for trainability (see paragraph after Thm 5.1). Key to our analysis is a careful control on the spectral norm of $A_{j:i}$ at initialization (Lemma 5.3) and after perturbations during training (Lemma 5.4). An important element in establishing the result is identifying the right things to bound and the right bounds for them.\\n\\n\\n\\u2014 On $W_L(0)$ \\u2014\\nWe sample $W_L(0)$ from the uniform measure over all row-orthogonal matrices satisfying $W_L(0)W_L(0)^\\\\top = mI_{d_y}$. Then as a consequence, in expectation we have $\\\\mathbb{E}[W_L(0)^\\\\top W_L(0)] = d_y I_m$. Thank you for raising the confusion, and we will explain this more clearly in the paper.\\n\\n\\n\\u2014 Experiment \\u2014\\nThank you for the suggestion! We have added a short experiment section (Section 6) to the paper. We train a family of deep linear networks with their widths ranging in [10, 1000] and depths ranging in [1, 700]. Each network is trained on the same dataset using gradient descent starting from both Gaussian and orthogonal initializations, and we produce heat-maps whose colors exhibit the losses after 10,000 steps for all configurations.\\n\\nThe heat-maps clearly demonstrate a sharp transition from untrainable to trainable when we increase the width of the network. For Gaussian initialization, this transition occurs across a contour characterized by a linear relation between width and depth; for orthogonal initialization, the transition occurs at a width that is approximately independent of the depth. These observations match our theory excellently. Please see Section 6 for details.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the role of initialization for training deep linear neural networks. The authors specifically consider the orthogonal initialization, and prove that with the orthogonal initialization proposed in equation (4), the gradient descent can achieve zero training error in a linear convergence rate. The improvement of the orthogonal initialization lies at the dependence of the layer width $m$, which is independent of the network depth $L$.\\n\\nThe problem considered in this work is very interesting since there are lots of empirical studies show that good initialization can benefit the training of deep neural networks. However, my main concern about this work is its novelty, especially for the proof techniques used in the current paper. It seems that most of the proofs are similar to the previous work Du & Hu (2019), and the main reason that it can remove the dependence of $L$ seems to be Lemma 4.2, which can be derived using the orthogonal property of the initialization. In this sense, there is not too much contribution for the current paper given the previous work Du & Hu (2019). Are there any other significant changes need to be made in the proofs to get the main results? If the authors can provide the convergence guarantees of the stochastic gradient descent, the contributions would be strong. \\n\\nFor the proof of Theorem 5.1, is it a straightforward extension of the proofs in Shamir (2018)? What is the main challenge when prove the general $d$ case?\", \"minor_comments\": \"For the last equation in (4), why you need $W_L(0)W_L(0)^\\\\top=mI_{d_y}$ instead of $W_L(0)^\\\\top W_L(0)=d_yI$ as you used in the later proofs?\\n\\nThere is no experiment to verify the theory\", \"update\": \"I thank the authors for their response, I would like to keep my score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the convergence of deep linear networks under orthogonal initialization. Most of the problem setup, analysis techniques and proof roadmap are adapted from Du & Hu (2019). The difference is the orthogonal initialization compared with the Gaussian initialization in Du & Hu (2019). Since the product of orthogonal weight matrices is identity, the new initialization can remove the dependency of the number of nodes $m$ on the depth $L$ of the network. The authors also proved a lower bound of the loss function trained by Gaussian initialization within certain iterations. This justifies the disadvantage of Gaussian initialization.\\n\\nMy biggest concern is that this paper seems to be very similar to Du & Hu (2019) in many places. The whole Sections 3 & 4 are almost the same as Sections 3,4,5 & 7 in their paper. The contribution in this paper is too incremental given previous work. \\n\\nWhat is the advantage of using orthogonal matrices in (4) compared with just using identity matrices as initialization? Can we prove the same result with just identity initialization? In this case, what would we lose by restricting us to this special case?\\n\\nIn the proof of Lemma 4.2, it seems that only the randomness of the last layer $W_L(0)$ is used. Why do we need all the layers to be uniformly sampled?\\n\\nIt should be explained in more details that $W_L(0)$ is drawn from a uniform distribution over orthogonal matrices in $d_y\\\\times d_y$ space. Then $1/\\\\sqrt{m} W_L(0)\\\\cdot z /\\\\|z\\\\|^2$ is not distributed on the whole space of $d_y$-sphere. The argument in the proof of Lemma 4.2 thus needs more justification.\\n\\nIt would be interesting to see an empirical comparison of the proposed initialization and the Gaussian initialization. Due to the lower bound proved in this paper, the experiments are expected to show distinct difference between these two.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper rigorously proves that if a deep linear network is initialized with random orthogonal weights and trained with gradient descent, its width required for convergence does not depend on its depth. To compare, when weights in deep linear networks are initialized with Gaussian initialization, the minimal width required for convergence will depend on the depth of the network. This proof explains why orthogonal weight initialization can help to train networks efficiently, especially for those very deep ones.\\n\\nThe theoretical contribution of this paper is very important. Orthogonal initialization is found to be useful in deep network training. Although the theory in this paper is developed for linear networks, it still has important guidance meaning in practices in more areas of deep learning. The derivations are correct to my best knowledge. And the paper is well-written and easy to read.\", \"minor_points\": \"- typo in the last equation in (4)\\n\\n=======================\", \"update\": \"Despite the similarity with a previous paper, I still think the theoretical results and empirical observations important and thus I will keep my score.\"}"
]
} |
BkgF4kSFPB | Hallucinative Topological Memory for Zero-Shot Visual Planning | [
"Kara Liu",
"Thanard Kurutach",
"Pieter Abbeel",
"Aviv Tamar"
] | In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction. VP algorithms essentially combine data-driven perception and planning, and are important for robotic manipulation and navigation domains, among others. A recent and promising approach to VP is the semi-parametric topological memory (SPTM) method, where image samples are treated as nodes in a graph, and the connectivity in the graph is learned using deep image classification. Thus, the learned graph represents the topological connectivity of the data, and planning can be performed using conventional graph search methods. However, training SPTM necessitates a suitable loss function for the connectivity classifier, which requires non-trivial manual tuning. More importantly, SPTM is constricted in its ability to generalize to changes in the domain, as its graph is constructed from direct observations and thus requires collecting new samples for planning. In this paper, we propose Hallucinative Topological Memory (HTM), which overcomes these shortcomings. In HTM, instead of training a discriminative classifier we train an energy function using contrastive predictive coding. In addition, we learn a conditional VAE model that generates samples given a context image of the domain, and use these hallucinated samples for building the connectivity graph, allowing for zero-shot generalization to domain changes. In simulated domains, HTM outperforms conventional SPTM and visual foresight methods in terms of both plan quality and success in long-horizon planning. | [
"Visual Planning",
"Model-Based RL",
"Representation Learning"
] | Reject | https://openreview.net/pdf?id=BkgF4kSFPB | https://openreview.net/forum?id=BkgF4kSFPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1QTqY2y8w1",
"SJxsM052iB",
"BklEl09hoB",
"HkxNg4qnjH",
"H1g3bMqnjB",
"HJxse3m2ir",
"SkgfxqIAtB",
"Syx34y_sYB",
"SkePIyMquB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729217,
1573854738953,
1573854700171,
1573852140198,
1573851651981,
1573825523174,
1571871210436,
1571680051631,
1570541390857
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1663/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1663/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1663/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1663/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1663/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1663/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1663/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1663/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission presents an approach to visual planning. The work builds on semi-parametric topological memory (SPTM) and introduces ideas that facilitate zero-shot generalization to new environments. The reviews are split. While the ideas are generally perceived as interesting, there are significant concerns about presentation and experimental evaluation. In particular, the work is evaluated in extremely simple environments and scenarios that do not match the experimental settings of other comparable works in this area. The paper was discussed and all reviewers expressed their views following the authors' responses and revision. In particular, R1 posted a detailed justification of their recommendation to reject the paper. The AC agrees that the paper is not ready for publication in a first-tier venue. The AC recommends that the authors seriously consider R1's recommendations.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1 (Part 2/2)\", \"comment\": \"Context image: The context can in principle be a scene image, camera angles, lightning variables, or any other observation that contains information about the configuration space in the domain. While our experiment are very simple, we found that even this setting of giving the context as the full map is *very challenging* for visual planning methods, so in this work we did not experiment with more complex context variables.\\n\\nWe point the reviewer to our new experiments, where the context there contains the agent shape and not the obstacle configuration, further demonstrating the generality of our approach.\", \"long_term_planning\": \"Moving around an obstacle definitely requires longer horizon planning than pushing an object without obstacles, as in [1,4,6]. That said, investigating visual planning for even longer horizon plans is an important future direction, which our work here gives even better motivation to study.\", \"determining_that_goal_has_been_reached\": \"That\\u2019s a good point. In our simulated experiments, we know the groundtruth distance, and use that to stop the policy. In a real-world application, we would use other measures, such as pixel-distance or an image classifier trained to predict task success.\", \"disconnected_hallucinated_images\": \"Our method indeed builds a *fully connected* graph whose weights are the inverse of the normalized score function. High weights in the graph can effectively act as disconnection between nodes. Our experiments show that, at least in the domains investigated, our hallucination method is expressive enough to imagine enough diverse images to always find a smooth connected path.\\n\\nOur method explicitly prevents disconnected hallucinated images. Instead of computing edge-weights as binary value if the classifier score is above a certain threshold (as in SPTM), our method creates a *fully-connected graph*, so this is never an issue.\", \"human_evaluation\": \"We found that the variance is quite small among the 5 testers. However, we are happy to add more human subjects to the evaluation if the reviewer finds it important. We also attached a link to the planning comparison examples sent to all participants for evaluation, which shows significant distinction in HTM planning results: https://tinyurl.com/htm-visualplan.\\n\\nAs a final note, please observe that Figure 3 in our submission, which displays the visual plans of our algorithm against a baseline, was incorrect, and we have since posted the correct figure as a response several weeks ago (10/18). We also updated the PDF submission to reflect this.\\n\\nWe thank you again for the questions to improve our paper. We take each of them seriously and will update the paper accordingly. We hope that our response also help clarifying the paper contributions. \\nIn the context of visual planning algorithms, our paper (1) elucidates the difficulty of the problem in simple and easy to reproduce domains, (2) proposes a novel visual planning method, and (3) clearly demonstrates the benefits of the new method compared to previous state of the art. We kindly ask to re-evaluate our work in this context.\", \"additional_references\": \"[1a] Pinto, L. and Gupta, A., Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In ICRA 2016.\\n[2a] Nair, A., Chen, D., Agrawal, P., Isola, P., Abbeel, P., Malik, J. 
and Levine, S., Combining self-supervised learning and imitation for vision-based rope manipulation. In ICRA 2017.\\n[3a] Ebert, F., Finn, C., Lee, A.X. and Levine, S., Self-supervised visual planning with temporal skip connections. arXiv preprint arXiv:1710.05268, 2017.\"}",
"{\"title\": \"Response to Reviewer #1 (Part 1/2)\", \"comment\": \"Dear Reviewer #1,\\n\\nWe appreciate your effort reading, reviewing, and giving us valuable feedback.\", \"our_paper_tackles_the_visual_planning_problem\": \"given images from a dynamical system, learn to predict goal-conditioned image trajectories. This problem has been studied recently in several settings, e.g., SPTM for 3D navigation [22], CIGAN and Visual-MPC for robotic manipulation [1,4,6,13,29].\\n\\nHere we extend SPTM, but *we do not consider 3D navigation*. Our observations are simply the overhead view images as plotted in the paper. These simple domains allow us to easily assess the capabilities of VP algorithms.\\nWhile a navigation policy can indeed be manually coded using the pixel values, *this requires prior knowledge about the task!* - i.e., knowing that this is navigation between obstacles, knowing how obstacles look like, and knowing how to map the image to a relevant state space, action space, and configuration space. Our algorithm, and any other algorithm in the visual planning setting, *does not require any such knowledge in advance!*\\n\\nWhile these problems are indeed simple, and the planning horizons are relatively short, *state of the art VP methods cannot solve them*! This clearly demonstrates the (in-) capabilities of current VP methods, and demonstrates how difficult the visual planning problem really is.\\n\\nWe kindly ask the reviewer to re-evaluate our paper *in the context of its contribution to the study of visual planning algorithms*, and not for general navigation problems.\\n\\nTo further motivate our claims, we have added an additional experiment to show the generality of our approach. In this experiment, an object can be translated and rotated (in SE(2)) slightly per timestep. Different from the experiments in the paper, here we change the object shape (and give the object shape as a context image). The goal is to plan a manipulation of an unseen object through a narrow gap between obstacles in zero-shot. Thus, our algorithm has to learn how object geometry is related to movement between obstacles, and has to generalize this knowledge for planning. Of course, all this has to be learned only from raw images - we use exactly the same algorithm, regardless of the fact that now the state space, action space, and configuration space are different. Please see our general comment for more details and additional results.\", \"paper_writing\": \"We acknowledge that our writing should be improved, and we thank the reviewer for pointing out many unclear points in the paper. We promise to write the paper in a much more accessible format, and clearly relate the math to the application. We emphasize that this can easily be done, and in the following we address the specific unclear points.\", \"self_supervised_data_collection\": \"Following work on self-supervised learning in robotics [1a, 2a, 3a], we mean that data is collected by letting the agent randomly explore the world without any specific supervision/guidance. Thus, we *do not* assume walkthroughs or demonstrations of any task, and the data only contains image sequences without any prior knowledge about the agent or the task. 
This is similar in spirit to the setting of off-policy batch RL, where the data collection policy is different from the learned behavior policy.\\n\\nPositive/negative data samples: Given the image sequence data, we use 1-step transitions from the data as positive samples, and random pairs of images from the data (collected under the same context image) as negative samples.\\n\\nCPC objective in sections 2, 3.2: We will clarify (including math) exactly how we use CPC on our data. As described above, by training on positive image pairs from the data and negative random pairs, the CPC loss learns to assign low energy to feasible transitions and high energy to infeasible transitions. This lets us use it as a proxy for connectivity in the graph.\", \"ml_trajectory\": \"The main point here is that the CPC output of transition probability (i.e., the energy of the transition) is not normalized. We propose a simple way to normalize it, and this makes interpreting the shortest path as an ML trajectory correct. Thus, the ML interpretation is not novel, but relating it to CPC is.\", \"connection_between_planner_and_policy\": \"The policy is a simple inverse model. It is trained by supervised learning to predict the action needed to bring the current state to the next state on the trajectory dataset. We use this inverse model to track the hallucinated sequence of images outputted by HTM. The same method was used in [29] using a different visual planning algorithm.\"}",
"{\"title\": \"Additional Experimental Results\", \"comment\": \"Following the reviewers' feedback, we have added an additional experiment to show the generality of our approach. In this experiment, an object can be translated and rotated (SE(2)) slightly per timestep. Different from the experiments in the paper, here we change the object shape (and give the object shape as a context image). The goal is to plan a manipulation of an unseen object through a narrow gap between obstacles in zero-shot. Thus, our algorithm has to learn how object geometry is related to movement between obstacles, and has to generalize this knowledge for planning. Of course, all this has to be learned only from raw images.\\n\\nDue to time constraints of the rebuttal period, we were not able to perform a full evaluation, but we present encouraging preliminary results which are reflected in the resubmitted PDF in the Appendix. We trained a CPC energy model on the domain using the object shape as a context image, as shown in Figure 6 of the Appendix. We subsequently ran visual planning on samples collected in the test environment (similar to the SPTM setting) for shapes that were not seen during CPC training. We compare the CPC model with an SPTM-style classifier as a baseline. Note that the CPC energy model is able to generate smooth plans that mimic the proper rotational and movement constraints of unseen objects (see Figure 7). On the other hand, the plans produced by SPTM fail to assume such properties of the new object and often jump around (see Figure 8). We also present preliminary results for HTM, with a CVAE trained conditioned on the shape. The results of the CVAE hallucinated plans can be seen in Figure 9. Although the CVAE was unable to finish training, the results clearly show that HTM in a zero-shot generalization setting is able to generate a successful plan that rotates the object correctly in order to pass through a narrow opening.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Dear Reviewer #3,\\n\\nThank you very much for your constructive feedback. \\n\\nAs mentioned in your review, we recognize that our paper has only covered a small subset of possible experiments possible when testing visual planning (VP) problems.\", \"more_difficult_experiments\": \"Actually, one contribution of our work is investigating what \\u2018difficult\\u2019 exactly means in the context of visual planning. As we show, methods such as SPTM and visual MPC, which were demonstrated on seemingly more complex tasks that involve first-person navigation or real images, fail on the simple tasks we investigate. This, at the very least, requires us to better think about the different difficulty axes in visual planning. Concretely, along with tackling different view-points and visual clutter, there is the difficulty of understanding the planning problem from an image, and solving it; our work addresses the latter.\\n\\nThat said, following some of the reviewer\\u2019s suggestions, we have added an additional experiment to show the generality of our approach. In this experiment, an object can be translated and rotated (SE(2)) slightly per timestep. Different from the experiments in the paper, here we change the object shape (and give the object shape as a context image). The goal is to plan a manipulation of an unseen object through a narrow gap between obstacles in zero-shot. Thus, our algorithm has to learn how object geometry is related to movement between obstacles, and has to generalize this knowledge for planning. Of course, all this has to be learned only from raw images. Please see our general comment for more details and additional results.\", \"asymmetric_transitions\": \"Our framework does not assume symmetric transitions. We are using a directed graph, and the bilinear weight matrix in our energy cost function is not symmetric.\", \"sptm_on_already_explored_space\": \"If we were to test SPTM vs. HTM when the space has been explored already, then the only difference between the two models would be (1) the classifier and (2) the method of edge weighting during planning. In Figure 3 (Right), we show that SPTM (vanilla classifier + binary edge weighting) quantitatively averages to about 1.8 L2 distance from the goal, where as our method (CPC energy model + inverse of norm edge weighting) averages to about 0.4 L2 distance.\\n\\nNote in our experiment we actually test the SPTM and HTM on the same samples from the CVAE. We find that HTM chooses higher fidelity images and the plans are significantly more feasible. This results in a higher execution success rate.\", \"solution_path_cost\": \"Measuring the cost of visual plans is indeed an interesting question. One could measure the number of subgoals in the plans. However, an algorithm can output no subgoals at all and claim the shortest plan, or output many small steps to make sure that the low-level policy can follow. This is a tricky problem, and we defer it to human evaluation in this work. We find that the cost seems to be a function of feasibility, fidelity, and completeness given a low-level policy.\\n\\nDefinitions for fidelity,etc: As requested, we will provide definitions of fidelity, feasibility, and completeness in Table 1 along with the source of the data.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Dear Reviewer #2,\\n\\nWe are happy to hear that you enjoy our paper and appreciate our contributions. Thank you for your effort in reviewing the paper. \\n\\n(1) This is a great question, and you are exactly correct. Using the same pool of generated samples, we have found CPC to be more robust against poor generated images, e.g., irregular-shaped block, or double blocks, which give poor scores. On the other hand, a regular SPTM classifier tends to exploit these. We believe that this is due to the fact that the CPC estimates the mutual information which should be lower when the data are poorer. The CPC loss contrasts each positive pair against many negative pairs which make the score function more robust. \\n\\n(2)-(3) To simplify the learning problem, we modified the decoder architecture for domain 2 such that it learns a mask for combining the context and the CVAE output at the last layer. This helps reduce the number of parameters, speed up the training, and improve our sample quality. With better quality of image samples in the second domain, our algorithm was able to select from a larger pool of more realistic transitions, which elucidates the improvement in feasibility and fidelity scores. In conclusion, due to architecture differences, comparison between domains 1 and 2 is not fair, but comparison between different methods on the same domain is fair.\\n\\n(4) Yes. We plan to open-source the code for our algorithm, visual planning baselines, and the domains.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents HTM, an extension of the semiparametric topological memory method that augments the approach with hallucinated nodes and an energy cost function. The hallucination is enabled by a CVAE, conditioned on an image of the environment, and allows the method to generalize to unseen environments. The energy cost function is trained as a contrastive loss and acts as a robustness score for connecting the two samples. The underlying graph is then used to plan for several top view planning problems.\\n\\nThe paper is well written and clear. I believe such latent representations are an interesting approach to solving visual navigation and general planning. HTM provides an interesting and useful extension to SPTM, allowing both generalization to unseen environments and a more robust loss function. The \\n\\nMy primary concern is the lack of rigorous experimentation to validate the concept and push it\\u2019s limits. The results in Table 1 show HTM outperforms baselines clearly on the given problems, but how it performs on more complex problems is unclear. These problems are dynamically simple and the obstacles are easily identified. Some more difficult problems may be:\\n- The mazes in SPTM or environments from https://arxiv.org/pdf/1612.03801.pdf.\\n- The original SPTM paper focuses on visual navigation from first person views. How does this method apply to such situations? How does the context translate to this scenario?\\n- Planning in real environments with real images, as done in [6].\", \"other_comparisons_and_notes\": [\"Can the method be applied to higher dimensional problems (dimensionality of the underlying space) where planning may be more difficult? E.g. SE(2), robot arms or other agents from UPN [26]. Application with actual 3D workspace problems too would be interesting as the image context may underspecify the environment.\", \"The energy cost function acts as a proxy for connection probability when traversing an edge. This may also be useful for dynamical systems (e.g., the mujoco ant navigating a maze). Are there limitations for the method on such problems, e.g., edges may no longer be symmetric?\", \"How does SPTM compare when the space has been explored already?\", \"Can more quantitative results been shown such as solution path cost?\", \"Provide definitions for Fidelity, feasibility, and completeness and the source of data (polling human\\u2019s) in the Table 1 caption.\", \"\\u201cAs shown in Table 5.2, \\u201c, should be renamed to Table 1.\"]}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper propose a novel visual planning approach which constructs explicit plans from \\\"hallucinated\\\" states of the environment. To hallucinate states, it uses a Conditional Variational Autoencoder (which is conditioned on a context image of the domain). To plan, it trains a Contrastive Predictive Coding (CPC) model for judging similarities between states, then applies this model to hallucinated states + start/end states, then runs Dijkstra on the edges weighted by similarities.\", \"i_vote_for_accepting_this_paper_as_it_tackles_two_important_problems\": \"where to get subgoals for visual planning and what similarity function to use for zero-shot planning. Furthermore, the paper is clearly written, the experiments are well-conducted and analyzed.\", \"detailed_arguments\": \"1. Where to get subgoals for visual planning is an important question persistently arising in control tasks. SPTM-style solution is indeed limited because it relies on an exploration sequence as a source of subgoals. Every time the environment changes, data would need to be re-collected. Getting subgoals from a conditional generative model is a neat solution.\\n2. Benchmarking similarity functions is crucial. One productive way to approach zero-shot problems is to employ similarity functions, but the question arises: what algorithm to use for training them? The paper compares two popular choices: CPC and Temporal Distance Classification (in particular, R-network). It thus provides guidance that CPC might be a better algorithm for training similarity functions.\\n3. The paper is well-positioned in the related work and points to the correct deficiencies of the existing methods. It also features nice experimental design with controlled complexity of the tasks, ablation studies and two relevant baselines.\", \"i_would_encourage_the_authors_to_discuss_the_following_questions\": \"1) Fidelity in Table 3 - why is it lower for SPTM compared to HTM if both methods rely on the same generated samples? Is it because HTM selects betters samples than SPTM for its plans?\\n2) Why is fidelity larger for SPTM in a more complex task 2?\\n3) Same question about fidelity/feasibility for HTM1/2?\\n4) Are there any plans to open-source the code?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents a method for learning agents to solve visual planning, in particular to navigate to a desired goal position in a maze, with a learned topological map, i.e. a graph, where nodes correspond to positions in the maze and edges correspond accessibility (reachability in a certain number of steps). The work extends previous work (semi parametric topological memory, ref. [22]) in several ways. It claims to address a shortcoming of [22], namely the fact that the graph is calculated offline from random rollouts, by using a conditional variational auto-encoder to predict a set of observed images which could lie between the current position and the goal position, and, most importantly from a context image which describes the layout of the environment. These predicted images are then arranged in a graph through a connectivity predictor, which is trained from rollouts through a contrastive loss. Training is performed on multiple environments, and the context vector provides enough information for this connectivity network to generalize to unseen environments. At test time, the agent navigates using a planner and a policy. The planner calculates the shortest path on a graph where edges are connectivity probabilities, and the policy is an inverse model trained on the output of the planner.\\n\\nWhile the idea of a topological memory with dynamic graph creation is certainly interesting, the work is unfortunately not well enough executed and the paper structured written in a way which makes it up to impossible to grasp what has been really done, as much information is missing which would be required for understanding. \\n\\nAs a first example, we are never really told what the observations are, which the agent sees. The different figures of the paper show very small images with a 2D maze from a bird\\u2019s eye view consisting of a walls arranged in a single connected component (mostly 1 to 3 strokes) in red color and an agent shown as a position indicated as a green dot. Are these the observed images? In absence of any other information, this is what we need to assume, and then this problem is fully observable and does not seem to be very challenging. Given the figures, even a handcrafted algorithm should be able to calculate the optimal solution with Dijkstra\\u2019s algorithm on a graph calculated from the pixel grid.\\n\\nThis important missing information alone makes it difficult to assess the paper, but the rest of the writing is similarly confusing. The authors focus on very short and dense descriptions of mathematical properties, but seem to have forgotten to ground the different symbols and to connect them to physical entities of the problem. The technical writing is in large parts disconnected from the problem which is addressed by it.\", \"further_examples_are\": \"-\\t\\u201cthe data (\\u2026) is collected in a self-supervised manner\\u201d: what does this mean? Self-supervision is way of creating loss from data without labels, but I am not sure what is meant by collecting data this way.\\n-\\tThe paragraph on CPC in section 2 can only be understood if the contrastive loss is known. 
To make the paper self-consistent, this should be properly explained, and tied to a training procedure which details how exactly the positive and negative samples are defined \\u2026 and collected.\\n-\\tThe CPC objective in section 2 is only loosely connected to its usage in section 3.2. Barely writing \\u201cwe optimize a CPC\\u201d objective is not sufficient for understanding how this objective is really tied to the different entities of the problem. This paper contains maths (which is always a pleasure to read), but it is not a purely and abstract mathematical problem - a real task is addressed, so it needs to be connected to it. This connection has certainly been done by the authors while they were working on the problem, but they should also communicate it to the reader.\\n-\\tThe section on ML trajectory is too dense and should be rewritten. I don\\u2019t understand what the authors want to tell us here. Basically, a (generalized) Dijkstra is run on a graph, where edge weights are the density or density ratios learned by the CPC objective, and if the edge weights are probabilities, that the shortest path corresponds to a trajectory likelihood. This is known, and this information is buried in a dense set of equations which are difficult to decipher and do not add any further value to the paper.\\n-\\tThe connection between the planner (generalized Dijkstra) and the policy is never explained. We don\\u2019t know how the policy is trained and how it works.\\n\\nOne of the downsides of the method is that it requires a context image. This image is responsible for the generalization to unseen environments, but it is a major drawback, as the image must be created beforehand. The authors claim that the context image must only contain the layout in any format which makes it possible to extract information about navigational space from it, but in the experiments the context image corresponds to the full map \\u2013 and it is probably equivalent to the observed images, but we can\\u2019t be sure as we haven\\u2019t been told. In any case, it is far from sure how this could generalize to more complex environments, let alone 3D navigation as is currently addressed in standard simulators like VizDoom, GIBSON, Matterport, Deepmind Lab, Habitat AI etc. \\n\\nThe authors\\u2019 claim that the proposed environment requires long-term planning, but looking at the images this does not seem to be the case. \\n\\nThe paper claims to perform zero-shot generalization and to adapt to changes in the environment, like the slight changes in camera motion, variations in lightning, but it unclear how the solution solves this claim.\\n\\nHow does the agent determine that a goal has been reached, without ground truth information? \\n\\nWhat happens, if the hallucinated images are disconnected (form several connected components) or are disconnected from the current position and/or from the goal position?\\n\\nAs mentioned, the method is evaluated on an environment, which is too simple. The experiments are difficult to assess, as we don\\u2019t really know what the agent observes. An information asymmetry is mentioned (visual foresight having the object\\u2019s (=agent\\u2019s) position and the others not) \\u2026 but if the proposed method observes the bird\\u2019s eye view, it can infer the agent\\u2019s position (as the position of the green dot).\\n\\nSubjective evaluation by humans on this kind of simple data does not seem to be meaningful, in particular with a very low number of observers (5 people).\"}"
]
} |
HkgYEyrFDr | Learning Good Policies By Learning Good Perceptual Models | [
"Yilun Du",
"Phillip Isola"
] | Reinforcement learning (RL) has led to increasingly complex-looking behavior in recent years. However, such complexity can be misleading and can hide over-fitting. We find that visual representations may be a useful metric of complexity, and that they both correlate well with objective optimization and causally affect reward optimization. We then propose curious representation learning (CRL), which allows us to use better visual representation learning algorithms to correspondingly improve the visual representations in a policy through an intrinsic objective, both in simulated environments and in transfer to real images. Finally, we show that the better visual representations induced by CRL allow us to obtain better performance on Atari without any reward than other curiosity objectives. | [
"visual representation learning",
"reinforcement learning",
"curiosity"
] | Reject | https://openreview.net/pdf?id=HkgYEyrFDr | https://openreview.net/forum?id=HkgYEyrFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Q4JkXERKIv",
"SkepPxT0YB",
"Hyg8m3oTtr",
"B1lPAuj2tS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729185,
1571897444846,
1571826717920,
1571760334872
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1662/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1662/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1662/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper investigates using \\\"curiosity\\\" to improve representation learning. This paper is not ready for publication. The main issues was the reviewers found the paper did not support the claim contributions in terms of (1) evaluating the new representations and improvement due to the representation, and (2) the novelty of the method compared to the long literature in this area. In general the reviewers found the empirical evidence unconvincing, and the too many missing details.\", \"the_results_in_this_paper_have_many_issues\": \"claims of performance based on three runs; undefined error measures; bolding entries in tables which appear not significantly better without explanation; unclear/informal meta-parameter tuning.\\n\\nFinally, there are some terminology issues in this paper. I suggest an excellent paper on the topic: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3858647/\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a framework called curious representation learning (CRL) which uses a better visual representation in RL. They show that better visual representation helps reward maximization.\\n\\nI have to recommend rejection for this paper. It appears 1) the idea of using curiosity is not originated from this paper; 2) I do not see what is a \\\"better visual representation\\\"; 3) the comparison with baselines does not show that the new method is consistently better.\\n\\nThe paper is also very hard to read. I would think the name \\\"curious representation learning\\\" means \\\"representation learning is curious\\\". There are many inaccurate languages used in the paper. To list a few: \\\"complex behavior\\\", \\\"in curiosity\\\", \\\"disentanglement\\\"... I do not really understand what does it mean.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper formulates curiosity based RL training as learning a visual representation model, where the policy tries to maximise the loss of a shared visual model minimising an auxiliary task (such as autoencoding the input).\\n\\nCuriosity is an important topic in the RL field and this paper is well motivated. I also like the approach taken as it looks into this problem through the lens of better representation learning (LR) and arguing that with focusing on better LR and maximising model loss for novel scenes, we are going to get also better overall performance.\\n\\nHowever, there are a few key question marks that are still open and I would suggest them to be answered explicitly in the paper:\\n\\n1) What is the relationship with methods that use auxiliary tasks for unsupervised training in RL (e.g. Jaderberg et al, ICLR 2017)? It's clear that this method doesn't use any extrinsic reward function but the underlying architecture is similar.\\n\\n2) Similar to above, the comparisons and contrasts to Burda et al, ICLR 2019 could be made more explicit as the objective functions such as autoencoding which seems to be working well in this paper has also been studied in that work.\\n\\n3) Continuing with comparisons, it's not clear if this method delivers better performance compared to other curiosity based methods. For examples, the top scores in Fig 7 are considerably lower than those achieved in Burda et al, ICLR 2019 (Fig 2). Similarly, we don't know how the method compares to state-of-the-art on other tasks considered in the paper. As a result, the paper lacks good benchmarking against state-of-the-art in this space and discussion on pros and cons.\\n\\n4) It seems that in Tab 1 the correlation collapses for the last row, any reason why this is happening?\\n\\n5) It would be good to add both a system diagram as well as a network architecture to clarify how everything is wired.\\n\\n6) The training details are missing, both in terms of hyperparameters as well as optimisation strategy for solving minmax.\\n\\n7) Minor: RND is used in the experimental section to refer to both random feature prediction and random network distillation, so would be better to use different references.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents an empirical study of using error reduction as a curiosity measure. The authors consider an auto-encoder model, a colorization model and RND as intrinsic motivation signals. I find the write up very unclear and have trouble understanding what the claims are and how they are backed up.\", \"major_points\": [\"About the claims as stated on page 2: 1) The first claim I don't understand, I think what is meant is on navigation tasks they find \\\"their measure of representation learning\\\" (proposed in kolesnikov 2019) seems to correlate well with reward optimization. In 3.1 to back this claim they claim to test disentanglement but seem to test classification. I see no reason to not put that part in 4.1. 2) The authors claim to propose a new method: it can't be a separate representation network to derive rewards because that is what burda et al. and many many others do, it can't be the minimax formulation because that has been known for a while (e.g. predictability minimization schmidhuber) so I am not sure what the claim is about. What exactly is novel about the model. 3) CRL seems at best to outperform baselines on beamrider, qbert and riverraid but the results are impossible to assess. We don't know what the x axis is in figure 7 (is it frames, with or without repeats, is updates etc). Pong -12 is far from learnt for instance and its one of the easiest games.\", \"suprisal objective and modeling objectives are very high level concepts that we can talk about in the introduction and conclusion but much more precise terms need to be employed in the model exposition. The readers need to know what sort of properties they should have ideally easily identify examples (without having to read 2 papers and a large survey). I would start with the minimax formula and then explain what is considered, how the different intrinsic rewards are added, if they are normalized, how they are weighted etc. etc.\", \"the details about failed experiments, environments and architecture should IMO be relegated to experiments\", \"It should be very clear early on that the model is separate from the representation.\", \"auto-encoders are a large family of models and it is not clear from the paper which exact model is meant by the authors. Also, citing Bengio 2013 is NOT a valid citation for auto-encoders. The right citation depends on the model you apply please use that!\"]}"
]
} |
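The reviews in the record above all circle the paper's minimax formulation: the policy's intrinsic reward is the loss of a representation model (for example, an autoencoder) that is simultaneously trained to minimize that same loss. A minimal PyTorch sketch of one alternation of that loop; the tiny fully connected autoencoder, the observation dimensionality, and the batch shapes are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

# Toy autoencoder standing in for the representation model.
class AutoEncoder(nn.Module):
    def __init__(self, dim=64, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def intrinsic_reward(obs):
    """Per-observation reconstruction error: high for states the
    representation model has not yet learned to encode well."""
    with torch.no_grad():
        return ((model(obs) - obs) ** 2).mean(dim=-1)

def representation_update(obs_batch):
    """The model minimizes the same loss the policy maximizes."""
    loss = ((model(obs_batch) - obs_batch) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

obs = torch.randn(32, 64)       # stand-in for a batch of observations
r_int = intrinsic_reward(obs)   # rewards for the policy-gradient step
representation_update(obs)      # adversarial (minimizing) step
```

Any standard RL algorithm would then train the policy to maximize `intrinsic_reward`, while `representation_update` is the opposing minimizing step, pushing the agent toward states the representation model still encodes poorly.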
r1etN1rtPB | Implementation Matters in Deep RL: A Case Study on PPO and TRPO | [
"Logan Engstrom",
"Andrew Ilyas",
"Shibani Santurkar",
"Dimitris Tsipras",
"Firdaus Janoos",
"Larry Rudolph",
"Aleksander Madry"
] | We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO). Specifically, we investigate the consequences of "code-level optimizations:" algorithm augmentations found only in implementations or described as auxiliary details to the core algorithm. Seemingly of secondary importance, such optimizations turn out to have a major impact on agent behavior. Our results show that they (a) are responsible for most of PPO's gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function. These insights show the difficulty, and importance, of attributing performance gains in deep reinforcement learning. | [
"deep policy gradient methods",
"deep reinforcement learning",
"trpo",
"ppo"
] | Accept (Talk) | https://openreview.net/pdf?id=r1etN1rtPB | https://openreview.net/forum?id=r1etN1rtPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"WOPud-bMzfq",
"JMaznUjShLs",
"v2HLSZXW9Q",
"LqWqdhJGoy",
"rkxSRfF3iB",
"BkeUPN-2oS",
"rylcHmbnoS",
"HJxMBiZ9sS",
"HJgUz6HFor",
"SylZgZgFor",
"Byeyga6OjH",
"rJeMhnp_sr",
"HkeJq2ausH",
"Hyx12SsVsB",
"HJexEd0ZoB",
"HJeISRZAKS",
"BJeHgxi6tr",
"H1xMvclFtr"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1617591302518,
1617591009286,
1586817819793,
1576798729155,
1573847756888,
1573815390105,
1573815105646,
1573686073923,
1573637389645,
1573613801053,
1573604583031,
1573604521903,
1573604486808,
1573332391249,
1573148711974,
1571851837870,
1571823597255,
1571519065623
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"~Zelin_Zhao1"
],
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1661/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1661/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1661/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1661/Authors"
],
[
"~Erik_Wijmans1"
],
[
"ICLR.cc/2020/Conference/Paper1661/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1661/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1661/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Re: One issue about value clipping\", \"comment\": \"Thanks for the comment! This is actually just a typo in the paper---the equation should just say max instead of min. As you mentioned, the standard OpenAI implementation also uses max.\"}",
"{\"title\": \"One issue about value clipping.\", \"comment\": \"In your paper, you state that L^V = min(clipped value reward, unclipped reward), but in your implementation (https://github.com/MadryLab/implementation-matters/blob/5ee6ecb12545365d9178135e65576adfc0d82f52/src/policy_gradients/steps.py#L96) and in OpenAI standard implementation, you use L^V = max(clipped value reward, unclipped reward). Is this a problem?\"}",
"{\"title\": \"Camera ready and revision changes\", \"comment\": [\"We have now uploaded the camera-ready. While preparing to upload, we confirmed a bug in our code (precisely, in computing KL-divergences), and thus reran all of our experiments to ensure their validity. The vast majority of the results went unchanged, with the exception of the explicit plots of KL divergence for the PPO-NoClip algorithm, which have thus been removed (along with the respective 2-3 sentences in the text.) We also made a few improvements to the paper for clarity. A full summary is below:\", \"Added numbers from OpenAI baselines to compare to\", \"More thorough explanation of gridding procedure and hyperparamters used\", \"Increased sample size for both the ablation study (5 agents per config for a total of 320 agents trained) as well as the rewards tables (now > 80 agents per cell, with 95% bootstrapped confidence intervals)\", \"Fix typos/editing for clarity\"]}",
"{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper provides a careful and well-executed evaluation of the code-level details of two leading policy search algorithms, which are typically considered implementation details and therefore often unstated or brushed aside in papers. These are revealed to have major implications for the performance of both algorithms.\\n\\nThe reviewers are all in agreement that this paper has important reproducibility and evaluation implications for the field, and adds substantially to our body of knowledge on policy gradient algorithms. I therefore recommend it be accepted.\\n\\nHowever, a serious limitation is that only 3 random seeds were used to get average performance in the first, key experiment. Experiments are expensive, but that result is not meaningful without more runs, and arguably could be misleading rather than informative. The authors should increase the number of runs as much as possible, at least to 10 but ideally more.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for addressing my concerns. I am much happier knowing that 10 random seeds were used for Tables 1 and 2. After reading other reviews and discussions on this paper, I firmly believe that this paper should be accepted.\\n\\nThis, of course, does not mean that I find the paper entirely without fault. If I were to add anything to this paper, it would be to see if these results continue to hold even on smaller domains with simpler neural networks. Perhaps a domain on the scale of a simple gridworld task, where a small single layer neural network could appropriately solve the task. In this setting, dozens of random seeds could be tested overnight on a modern laptop and strong statements of statistical significance could be made. While less indicative of impact on real-world tasks, this would at least ease concerns that differences are due to chance. If the hypotheses hold true on a small gridworld with high statistical significance, and appear to hold true on a demonstration across larger tasks, then we can reasonably expect a meaningful contribution was found.\"}",
"{\"title\": \"Re: Re: Response\", \"comment\": \"We are glad to hear that our response addressed your concerns!\\n\\nSince you brought up the issue of code explicitly in your review, we wanted to point out our latest revision and general comment, where we posted code for reproducing our results, with toggles for all of the relevant code-level optimizations discussed in our work.\"}",
"{\"title\": \"Code Release\", \"comment\": \"Dear Reviewers,\\n\\nWe finished our cleaning and documentation of the codebase before the rebuttal deadline! As such, we have revised our paper to include a link to the (anonymized) codebase, which contains extendable, modular, and commented implementations of PPO and TRPO, with precise control over all the code-level optimizations. We also provide utilities for reproducing the results of this work.\", \"the_code_can_be_found_here\": \"https://github.com/implementation-matters/code-for-paper.\\n\\nWe will continue to improve the code over time to make it even easier to use, but we believe it is definitely in a good enough state (and is sufficiently important) to be added to the paper now.\"}",
"{\"title\": \"Updated revision\", \"comment\": \"We have moved all the optimizations listed to the main text, added a table summarizing the different algorithms, and have written clarifications in the sections describing each new algorithm we present. Thank you for your feedback, and let us know if there is any more clarification needed!\"}",
"{\"title\": \"Clarification of suggestions\", \"comment\": \"Thanks for your reply.\\n\\nWhat I mean by \\\"being more clear about what is meant by code-level optimization\\\" is not that you provide a definition but that you explicitly write which optimizations are included and which are not included directly at the point where you introduce PPO-M, PPO-NoClip, etc. Going back and forth between Sect. 2 and Appendix is inconvenient, and I was especially confused that the list of \\\"code-level\\\" optimizations in Sect. 2 is different from the one in Appendix; therefore, it was not clear to me what exactly is included and what not.\"}",
"{\"title\": \"Re: Response\", \"comment\": \"Thank you for addressing my concerns & remarks.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments on our paper. With regards to the reviewer\\u2019s main concern, we completely agree and are definitely planning to release code for this work along with the final version. We have been working on making the code more readable, modular, and easy to run, and will include a GitHub link with the final version of the paper. (We are almost done with cleaning up the codebase, if we are done before the revision deadline we will upload an anonymous copy and link it, but either way a link will appear in the final version.)\", \"we_address_the_other_minor_comments_below\": [\"We experimented with many different plot styles and visualization techniques for Figure 1 and converged on this version due to readability and its ability to express the relatively intricate data collected. However, we have updated the caption of Figure 1 to better describe the plot style (as it is somewhat unconventional), hopefully alleviating this concern.\", \"We have updated both the captions of Figure 2 and 3 to fix the noted issues, and also added (left), (middle), and (right) to our references to Figures 2 and 3\", \"We have fixed various typos/confusing wordings, including those found by the reviewer. Thank you for pointing them out!\"]}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your review and comments on our paper. We have made sure to more precisely define code-level optimizations (algorithmic changes that are predominantly found in codebases and not presented as core parts of their respective RL algorithms, see our reply to R1 for more detail), PPO-M (PPO but without any of the code-level optimizations mentioned in Sec 2 or the Appendix), and PPO-NoClip (PPO minus the clipping, including all the optimizations). We have also copied the optimizations listed in Section 2 to the appendix, so that the appendix contains a complete list of the optimizations.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your detailed review and comments, which we have taken into account in our (now uploaded) revision of the manuscript. We address each point raised in the original review below:\", \"three_random_seeds\": \"For the ablation study, we only used three random seeds for computational reasons (as every random seed requires running all of the hyperparameter configurations). However, it appears as though the reviewer is (rightfully) more concerned with Tables 1 and 2---both of these, however, actually used 10 random seeds rather than 3. We have added this to their captions to clarify this issue. We will also run a few more random seeds and update the means and variances accordingly when the results become available (we cannot guarantee this before the rebuttal deadline, however.)\", \"discussing_variances_in_table_2\": \"We agree that a discussion of a possible variance reduction induced by clipping would improve the paper. In order to make sure that the apparent reduction in variance is not spurious we will be sure to run more random agents\\u2014if the trend remains, we will certainly include a discussion of how clipping might serve to reduce the variance in PPO rather than its \\u201cconventionally perceived\\u201d purpose of ensuring monotonic reward increase.\", \"discussion_of_henderson_et_al\": \"We agree with the reviewer that our work builds on that of Henderson et al, and certainly did not intend to imply otherwise (hence the extensive citation noted by the reviewer). We have taken the reviewer\\u2019s advice and moved the related work to Section 2, and added the suggested mention of building on Henderson et al there.\\n\\n\\u201cCode-level optimizations\\u201d: We initially chose to use the term \\u201ccode-level optimization\\u201d to indicate that these were algorithmic optimizations that are for the most part found only in the code of RL algorithms. However, we appreciate the reviewer\\u2019s point that the term may cause confusion. To avoid this confusion and for lack of a better term, we have, in our revision, added a footnote which indicates precisely what we mean by \\u201ccode-level optimization.\\u201d We would be happy to amend this footnote or change the term if our fix has not alleviated the issue.\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thanks for the kind comment! We actually studied all algorithms with normalized advantage in place (in a sense we treated normalization as part of the advantage estimation process, rather than as part of the RL algorithm itself). Studying the effect of normalization would be interesting future investigation! (note that even TRPO/other RL algorithms use normalized advantage as well.)\"}",
"{\"title\": \"Effect of Normalized Advantage?\", \"comment\": \"Hi,\\n\\nI really like this paper, the implementation-level additions in TRPO and PPO have always been confusing as to whether or not they really matter (and it seems like they do really matter!). One addition that has always confused me is normalized advantage (in baselines: https://github.com/openai/baselines/blob/master/baselines/ppo2/model.py#L139 and in this popular pytorch implementation: https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/master/a2c_ppo_acktr/algo/ppo.py#L36), but I don't see it as one of the things that you investigated. Is this something you considered? If not, I would be very curious to know if that one matters!\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper investigates the impact of implementation \\\"details\\\", with existing implementations of TRPO and PPO as examples. The main takeaway is that the performance gains observed in PPO (compared to TRPO) are actually caused by differences in implementation, and not by the differences between the two learning algorithms. In particular, adding to the TRPO code the same implementation changes as in PPO makes TRPO on par with (and possibly even better than) PPO. The clipping objective of PPO is also found to have no significant impact on its performance. This calls for more careful comparisons between algorithms (by minimizing implementation changes and more in-depth ablation studies) than has typically been done until now in the RL research community.\\n\\nAlthough this paper is pretty straightforward and does not bring meaningful algorithmic improvements, I still believe it should be accepted as reproducibility and evaluation are a major issue in RL, and people need to be aware of these kinds of implementation differences that can affect the reported results.\\n\\nMy only important concern is that I could not find a link to the code, which I believe is a must for such a paper focusing on implementation. Could the authors please confirm that they will release their code?\", \"other_small_remarks\": [\"Fig. 1 is hard to read, I think more synthetic results could have easily conveyed more clearly the intended message\", \"When referring to Fig. 2 and 3 please specify \\\"left\\\", \\\"middle\\\" or \\\"right\\\"\", \"Fig. 2's caption should describe the plots in left to right order (also what does \\\"maximum versus mean KL\\\" mean?)\", \"Fig. 3's caption lists mean KL twice on its first line\", \"\\\"The trust region for PPO-NoClip bounds KL to a lesser degree\\\": this is confusing as it sounds like it is \\\"less bounded\\\" while it is actually \\\"more bounded\\\" (as said in Fig. 3's caption)\", \"It would help comparing Fig. 2 and Fig. 3 if they both used the same y axis range\", \"Typo: \\\"enforcing\\\" => enforces\"], \"update_after_author_feedback\": \"increasing score to \\\"Accept\\\" thanks to the release of the code\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"# Summary\\nThe papers studies the effects of code-level optimization on the performance of TRPO and PPO. Details, usually considered as implementation-level particularities, are shown to be of crucial importance for the algorithms' performance.\\n\\n# Decision\\nThe paper makes an important point, it is written clearly, and the body of evidence is convincing. Therefore, I recommend this paper for publication.\\n\\n# Suggestions\\nMake it more clear what is meant by code-level optimizations.\\n - In Sec. 2, there is a link to Appendix A.2 for a \\\"full list\\\", but the list in A.2 does not contain all points from Sec. 2.\\n - For PPO-M, it is said \\\"implements only the core of the algorithm\\\". What exactly does that mean?\\n - PPO-NoClip is defined as \\\"PPO without clipping\\\". Does it mean that it includes all other tricks apart from clipping? Please, be explicit in such places.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary\\n\\nThis paper calls to attention the importance of specifying all performance altering implementation details that are current inherent in the state-of-the-art deep policy gradient community. Specifically, this paper builds very closely on the work started by Henderson et al. 2017, building a conversation around the importance of more rigorous and careful scientific study of published algorithms. This paper identifies many \\\"code-level optimizations\\\" that account for the differences between the popular TRPO and PPO deep policy gradient algorithms. The paper then subselects four of these optimizations and carefully investigates their impact on the final performance of each algorithm. The clear conclusion from the paper is that the touted algorithmic improvement of PPO over TRPO has negligible effect on performance, and any previously reported differences are due only to what were considered unimportant implementation details.\\n\\nReview\\n\\nThis paper investigates the claims made by Schulman et al. 2017 carefully, by investigating the impact of PPO's clipping mechanism on maintaining a valid trust-region; the central claim made by PPO's originating paper. The empirical results suggest that PPO is not sufficient for maintaining a valid trust-region, however the \\\"code-level optimizations\\\" that differ between the TRPO implementation the PPO implementation are sufficient. The ablation study of the four optimizations studied by the paper shows dramatic and clear results suggesting that annealing stepsizes and normalize rewards make very strong differences in learning performance; much more effect than demonstrated by the differences between TRPO and PPO's core algorithmic contribution as demonstrated in Figure 2 and even more strongly in Figure 3. I find the work included in this paper to be novel and a valuable contribution to the field.\\n\\nFor the above reasons, I recommend to accept this paper for publication at ICLR. In the following paragraphs I will discuss why I only recommend a weak accept instead of a strong accept.\\n\\nMy primary concern with the empirical study is the use of only three random seeds. As demonstrated in Henderson et al. 2017 (which is heavily cited in this paper), using such a small number of random seeds can have very misleading results. Although the effects appear very strong in the empirical studies in this paper, the effects likewise appear strong in Henderson et al.'s Figure 6 where 10 random seeds were split into two groups for the same algorithm. For this paper to make such strong claims about the negligence of the careful scientific study on TRPO and PPO, it would be best if this paper included far more random seeds in its investigation.\\n\\nMy second concern is with the discussion and conclusions drawn from Tables 1 and 2. It appears that the inclusion of clipping plays a strong role in the variance of each algorithm on every domain except Hopper. Specifically, the algorithms that include clipping appear to be much lower variance than the algorithms including clipping. 
Admittedly using only 3 seeds means that investigating the variance appropriately is near impossible (see the above paragraph), however variance should be considered and discussed in a conversation about the effects of the core contribution of PPO. If clipping leads to more consistent results across runs, even if those results are a little worse, it is still a valid and important contribution.\\n\\nThe paper cites Henderson et al. 2017 in several places. I would point out (perhaps in the introduction) that this paper builds on work already done in Henderson et al. 2017. Specifically, Henderson et al. 2017 investigates the effects of using different codebases for TRPO and shows that these different codebases result in dramatically different performance. The similarity to the investigation in this paper to too close to be unreported. However, I find that the investigation in this paper is much more complete and insightful than that of Henderson et al. 2017 (this paper has a more narrow focus), thus contributes significantly and meaningfully to this ongoing conversation.\\n\\nAdditional Comments (do not affect score)\\n\\nIt might be worthwhile to move the related work section to the beginning of the paper, either merged with the introduction or immediately after. This section is of critical importance to understanding the scope of this paper and for understanding why you are studying what you study. In fact, there is already a bit of duplication between the related works and introduction sections, so the paper could likely gain some additional real-estate by combining these.\\n\\nI disagree with the terminology \\\"code-level optimizations\\\" and I find that it is misleading. This caused a bit of confusion on my first pass reading the paper, as I originally was expected the code differences to be more akin to using Tensorflow vs PyTorch or switching hash table functions, etc. Instead the changes focused on in this paper are changes to the problem specification and algorithm implementation. These are not simply implementation details as \\\"code-level optimizations\\\" suggests, but are rather details that necessarily must be included in peer-reviewed works. I don't have a suggested name to switch to, but felt strongly enough to mention it.\"}"
]
} |
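The value-clipping thread at the top of the record above confirms that the min in the paper's value-loss equation was a typo: both the authors' code and the standard OpenAI implementation take the max of the clipped and unclipped value errors. A minimal PyTorch sketch of the two clipped losses under discussion; the tensor shapes and the shared clipping coefficient eps are illustrative assumptions:

```python
import torch

def ppo_losses(logp_new, logp_old, adv, v_pred, v_old, returns, eps=0.2):
    """Clipped surrogate policy loss and clipped value loss (PPO style)."""
    ratio = torch.exp(logp_new - logp_old)
    # Policy: pessimistic min over clipped/unclipped surrogate objectives.
    policy_loss = -torch.min(
        ratio * adv,
        torch.clamp(ratio, 1 - eps, 1 + eps) * adv,
    ).mean()
    # Value: max of clipped and unclipped squared errors
    # (per the thread above, max -- not min -- is what implementations use).
    v_clipped = v_old + torch.clamp(v_pred - v_old, -eps, eps)
    value_loss = torch.max(
        (v_pred - returns) ** 2,
        (v_clipped - returns) ** 2,
    ).mean()
    return policy_loss, value_loss

# Toy usage with random tensors.
n = 8
p, v = ppo_losses(torch.randn(n), torch.randn(n), torch.randn(n),
                  torch.randn(n), torch.randn(n), torch.randn(n))
print(p.item(), v.item())
```

Value clipping is one of the "code-level optimizations" the paper ablates: it appears only in implementations, yet the ablation suggests such details can matter more than the clipped policy objective itself.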
ryxdEkHtPS | A Closer Look at Deep Policy Gradients | [
"Andrew Ilyas",
"Logan Engstrom",
"Shibani Santurkar",
"Dimitris Tsipras",
"Firdaus Janoos",
"Larry Rudolph",
"Aleksander Madry"
] | We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development. To this end, we propose a fine-grained analysis of state-of-the-art methods based on key elements of this framework: gradient estimation, value prediction, and optimization landscapes. Our results show that the behavior of deep policy gradient algorithms often deviates from what their motivating framework would predict: surrogate rewards do not match the true reward landscape, learned value estimators fail to fit the true value function, and gradient estimates poorly correlate with the "true" gradient. The mismatch between predicted and empirical behavior we uncover highlights our poor understanding of current methods, and indicates the need to move beyond current benchmark-centric evaluation methods. | [
"deep policy gradient methods",
"deep reinforcement learning",
"trpo",
"ppo"
] | Accept (Talk) | https://openreview.net/pdf?id=ryxdEkHtPS | https://openreview.net/forum?id=ryxdEkHtPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"LPQbyXfA8ru",
"EYZztuKXx",
"rygtBhTOor",
"BylmMh6djH",
"B1lOJhaOir",
"rkgkefNAYr",
"Sye656FTFr",
"r1g2J6P6tr"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1585856872835,
1576798729122,
1573604417467,
1573604362957,
1573604320175,
1571860966712,
1571818901026,
1571810531944
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1660/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1660/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1660/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1660/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1660/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1660/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1660/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Camera Ready Uploaded\", \"comment\": [\"We have updated the paper to the camera-ready version now. While updating it, we saw a bug in our implementation of KL divergence calculation---we have ensured that the results remain accurate by re-running all of the experiments in our paper (the graphs have been updated). Below is a list of the minor edits we made in the camera-ready version:\", \"While we were rerunning everything, we used many more agents to ensure the trends held up. All of the trends have been verified on 24 random agents.\", \"We performed a much finer grid search to find the best agent parameters\", \"We removed the \\\"PPO-M\\\" line from the graphs, as we did not introduce PPO-M in this paper and it was an artifact from an earlier revision.\", \"We removed the 2-3 sentences about the high-sample regime, since we were unable to reproduce it reliably with the new code/old parameters (and lacked the compute to do another full grid in the high-sample regime)\", \"We updated the value baseline results to use 5 million state-action pairs instead of 500K\", \"We give finer detail about the hyperparameters used in Appendix A\"]}",
"{\"decision\": \"Accept (Talk)\", \"comment\": \"The paper empirically studies the behaviour of deep policy gradient algorithms, and reveals several unexpected observations that are not explained by the current theory. All three reviewers are excited about this work and recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments.\", \"responding_to_your_questions\": \"The step direction is the direction of the update computed for the current agent, and the points do indeed correspond to linearly combined mixtures of a random (Gaussian) vector and the step direction. Your conclusion that following a random direction would be more beneficial than following the step in Figure 8 is correct --- this misalignment between true reward and surrogate objective is a core finding of our work.\\n\\nWe have fixed the citation, thank you!\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions. We have addressed your concerns below:\\n\\nIn Figure 4 (b), the returns MRE hovers around 1/2, which is where we obtain the \\u201c50% off of the true value function\\u201d conclusion. \\n\\nIn (13), the $\\\\pi$ refers to the current policy, and $\\\\pi_\\\\theta$ refers to the policy we are optimizing to solve the maximization problem (which will become the agent\\u2019s new policy).\\n\\nWe have added references to previous work --- Schulman et al 2015 [0] and Sutton and Barto 2018 [1] --- that support the assertion that learned baselines result in significant improvements in agent performance.\\n\\nIn Figure 6 and 7, the changing factor of sample size and objective do not change the way that the agent was run. We ablate these factors to indicate the objective mismatch, and the noisiness of the reward landscape around agents.\\n\\nThank you for the catch, the $V_{\\\\theta_{t-1}}$ in Eq (4) is indeed a function of state and we have corrected this notation accordingly.\\n\\n[0] https://arxiv.org/abs/1506.02438\\n[1] http://incompleteideas.net/book/the-book.html\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your feedback, and we are happy that you enjoyed the paper.\\n\\nSurrogate objectives/\\u201dSurrogate rewards\\u201d terminology: we indeed refer to the surrogate objective when we refer to the surrogate reward --- we have corrected this in the revision by replacing all instances of surrogate reward with surrogate objective. The terminology of \\u201csurrogate reward\\u201d simply refers to the fact that instead of optimizing over the true rewards, agents optimize over a surrogate function. To address your point of concretely defining the surrogate objective, we have placed a reference in the main text to the surrogate objective\\u2019s definition (which can be found in the Appendix).\\n\\nWith respect to our experiments/comparisons, our experiments use the surrogate objective or the true reward information depending on the section. We measure steps optimizing the surrogate objective in our gradient estimation quality experiments, and plot both the true reward and the surrogate objective in our landscape experiments.\\n\\nWe agree that it would be an interesting line of work to investigate how the misalignment of the surrogate reward impacts value learning.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This is an interesting and important paper, it emphasizes and analyzes how policy gradient methods modify their objective functions and how this leads to training differences (and often errors w.r.t. the true objective). I have some minor comments on terminology used that I would like to see properly defined within the paper, but otherwise believe this should be accepted for its useful insights.\", \"assorted_comments\": [\"Maybe I simply have a difference of opinion or have misunderstood, but I am hesitant to agree that the work is comparing the surrogate *reward* function, but rather the surrogate objective. You'll notice that in the TRPO paper, it is called a surrogate objective not a surrogate reward: https://arxiv.org/pdf/1502.05477.pdf .\", \"I think better specification of what exactly is being plotted (pointing to an equation) or defining very concretely what is a surrogate reward or true reward (which I suspect is the objective) will make this paper much clearer.\", \"In fact, it was a bit unclear whether the comparisons were of the sampled/observed reward function R(s,a) (provided by the environment and sampling regime) or the objective function often the advantage A(s,a) (or the surrogate objective, GAE, etc.) I assume it should be the latter, but the wording of the paper makes this a bit unclear. I suggest discussing things in terms of objectives not rewards -- unless in fact the paper does approximate reward functions in which case this should be specified in much more detail.\", \"Also, in a lot of places it seems like there's a mixup between rewards and returns. I think typically in the literature reward = r_t and return = V_t (sum of reward). Perhaps, in places the paper truly speaks of rewards, but from the context it seems as though it mainly refers to returns. Examples: \\\" Evidently (since the agent attains a high reward) these estimates are sufficient to consistently improve reward\\\" \\\" This is in spite of the fact that our agents continually improve throughout training, and attain nowhere near the maximum reward possible on each task\\\"\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"[Summary]\\nThis paper empirically studies the behavior of deep policy gradient algorithms during the optimization. The conclusion is that, while these methods generally improve the policy, their behavior does not comply with the underlying theoretical framework. First, sample gradients obtained with a reasonable batch size have little correlation with each other and with the true gradient. Second, a larger batch size requires a smaller step-size. Third, the value baseline is far from true values and only marginally reduces variance, yet it considerably helps with optimization. Finally, the optimization landscape highly varies with the choice of objective function and the number of samples used to estimate it.\\n\\n[Decision]\\nI vote for acceptance. To the best of my knowledge, the findings of this paper are new and not predictable by the current theory. These negative results have some merit as they call for theory that explains the behavior of these algorithms, or an algorithm whose behavior is predictable by the current theory. The paper is well-written, with a few small issues in presentation that should to be addressed in the final revision.\\n\\n[Comments]\\nIn Fig. 4 (b) it does not look like that the value error is high. It is said that \\\"the learned value function is off by about 50% w.r.t. the underlying true value function.\\\" This sentence should be clarified or visualized.\\n\\nWhat is \\\\pi in Eq (13) in A1? If it is the agent's current policy, how is it different than \\\\pi_\\\\theta? If \\\\pi corresponds to the distribution of state-action pairs in the replay buffer, how can one obtain a policy \\\\pi that has led to this distribution of states in order to construct the importance sampling ratio?\\n\\nIn 2.2, the claim that a learned value baseline results in significant improvement in performance should be supported by results or reference to previous work.\\n\\nFigs. 6 and 7 compare the loss surface with different objectives and sample regimes. Do these factors (objective and sample size) affect the part of the parameter space that is visualized (by changing the origin and the update direction), or are they only used to evaluate the values on the z-axis for the same area in the parameter space? Observing a different landscape in a different part of the parameter space is not surprising.\\n\\n[Minor comments]\\n- Is V_\\\\theta_{t-1} in Eq (4) a function of state? If so, a (s_t) is missing before the plus sign.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper explores a critical divergence between theory and practice, emphasizing that while deep policy gradient algorithms seem to work in certain cases, they don't seem to be working foor the reasons underlying their derivations. It particularly looks at how closely the sample-based approximation of the objective's gradient aligns with the true gradient of the objective, how accurately learned values match the true expected returns, and how well the optimization landscapes of surrogate objectives line up with the objective of maximizing the return.\\n\\nI propose accepting this paper, as it reveals a key gap in our understanding of why policy gradient methods work. Such emphasis can suggest why deep RL results tend to be inconsistent and irreplicable, and spark future work on closing the gap between theory and practice. Further, the paper is overall well written.\", \"i_primarily_would_like_clarification_on_the_optimization_landscape_visualizations\": \"1) Is the step direction the direction of the update actually performed at that time step?\\n\\n2) Would moving diagonally in this space correspond to a mixture of following the update direction and a normally-distributed random direction? Concretely, in the true reward plot at Step 0 for few state-action pairs in Figure 8, does this suggest that mixing a random direction with the update direction would be better than moving cmopletely in the step direction?\", \"minor\": \"Typo in citation \\\"...policy improvement theorem of Kakade and Langford Kakade & Langford (2002)\\\"\"}"
]
} |
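Review #3 in the record above summarizes the finding that sample gradients obtained with a reasonable batch size have little correlation with each other or with the true gradient. A minimal sketch of how such a pairwise-similarity check can be run on any stochastic gradient estimator; the quadratic objective and Gaussian noise model below are illustrative stand-ins for a sampled policy-gradient estimate, not the paper's actual setup:

```python
import torch

def avg_pairwise_cosine(grads):
    """Mean cosine similarity over all ordered pairs of gradient estimates."""
    g = torch.stack(grads)
    g = g / g.norm(dim=1, keepdim=True)
    sim = g @ g.t()
    n = len(grads)
    off_diag = sim.sum() - sim.diag().sum()
    return (off_diag / (n * (n - 1))).item()

def noisy_grad(theta, batch_size, noise=50.0):
    """Stand-in for a sampled gradient estimate of the loss ||theta||^2:
    true gradient plus noise whose scale shrinks with the batch size."""
    g = 2 * theta
    return g + noise * torch.randn_like(g) / batch_size ** 0.5

theta = torch.randn(100)
for batch_size in (16, 256, 4096):
    grads = [noisy_grad(theta, batch_size) for _ in range(10)]
    print(batch_size, round(avg_pairwise_cosine(grads), 3))
# Small batches give near-zero pairwise similarity; only much larger batches
# recover the true gradient direction, mirroring the qualitative finding.
```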
H1edEyBKDS | Plug and Play Language Models: A Simple Approach to Controlled Text Generation | [
"Sumanth Dathathri",
"Andrea Madotto",
"Janice Lan",
"Jane Hung",
"Eric Frank",
"Piero Molino",
"Jason Yosinski",
"Rosanne Liu"
] | Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper. | [
"controlled text generation",
"generative models",
"conditional generative models",
"language modeling",
"transformer"
] | Accept (Poster) | https://openreview.net/pdf?id=H1edEyBKDS | https://openreview.net/forum?id=H1edEyBKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"yMNRSl3cei",
"7bAtaBWJGZ",
"FmVXgU2BYJ",
"B1xorXFnjS",
"BJeY0tInor",
"Bkxgl5Toir",
"BJgpGJ7ooH",
"H1lsWy7ssS",
"Sye1kV9qjB",
"B1ljnf5csH",
"rklLgxopKB",
"HkegjQxaFH",
"H1x4jhziFS",
"BklhMpxp_r",
"SJxBXUyT_B",
"r1emraXVOr",
"ByeOsKdGOr",
"BJgScu_zdr",
"SklON_OGdr",
"HJlzgOuG_B",
"BklATnxydS",
"H1lCPj1pwH",
"SJlN9SciDS",
"SyebpLFjPB",
"rJxpps_swS",
"B1eC5MPoDH",
"SkezXrHiPr"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment"
],
"note_created": [
1581453486162,
1581437231682,
1576798729092,
1573847875072,
1573837265368,
1573800424422,
1573756692904,
1573756674570,
1573721047495,
1573720755446,
1571823597576,
1571779479992,
1571658908107,
1570733331812,
1570727453077,
1570155834996,
1570044319737,
1570044045483,
1570043952171,
1570043882371,
1569815749693,
1569680229617,
1569592715740,
1569588920570,
1569586116583,
1569579669673,
1569572121771
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"~Zhiyu_Lin1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1659/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1659/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"~Jason_Brett1"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1659/Authors"
],
[
"~Eric_Wallace1"
],
[
"~Karl_William_McMara1"
],
[
"~Jason_Brett1"
],
[
"~Karl_William_McMara1"
],
[
"~Jason_Brett1"
],
[
"~Karl_William_McMara1"
],
[
"~Jason_Brett1"
]
],
"structured_content_str": [
"{\"title\": \"PPLM vs Fine-tuning\", \"comment\": \"Hi,\\n\\nThank you for the insightful questions. We clarify below. \\n\\n1. K and V are activations not weights of the model. Usually, with backprop you compute the gradient with respect to the weights of the model because you want to update. We don\\u2019t update the weights but we update the activations, which are dynamically determined at each step by encoding the input.\\n\\n2. Fine-tuning implies having a pre-trained model that you update. Here the pre-trained LM is untouched, and the updated components are initialized at 0 (no pre-training). \\n\\n3. For conventional fine-tuning, you increase the likelihood of a given set of sequences by updating your model weights. PPLM does not update the model or increase the likelihood of a set of sequences directly.\"}",
"{\"title\": \"What is the difference of this method vs. back-propagation?\", \"comment\": \"I'm wondering since K and V of the transformer network is the target and \\\"shift the history H_t in the direction of the sum of two gradients\\\", what is the difference between actually back-propagating this gradient in the backward phase (step 2 in the illustration) and the method described? By backprop into the model, isn't that fine-tuning?\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a simple plug-and-play language model approach to the problem of controlled language generation. The problem is important and timely, and the approach is simple yet effective. Reviewers had some discussions whether 1) there is enough novelty, 2) evaluation task really shows effectiveness, and 3) this paper will inspire future research directions.\\n\\nAfter discussions of the above points, reviewers are leaning more positive, and I reflect their positive sentiment by recommending it to be accepted. I look forward to seeing this work presented at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the quick response!\", \"comment\": \"Thanks for your quick response!\\n\\nWe agree with you that controlling for consistency in context is very interesting and in general, the lack of metrics for measuring content shift. \\n\\nThe current objective of this piece of work is controlled generation as studied in [1,2, 3, 4], as opposed to style transfer where retaining content is more of a concern. We do hope to extend PPLM into style transfer in the future, and also to applications such as NMT ( e.g. this could allow transforming a German to English translation model and a Twitter vs. Wikipedia classifier to translate German phrases into their English Twitter equivalents) where retaining content is extremely important.\\n\\nHowever, in the context of this paper, the factors we are evaluating for 1) can we steer text generation towards desired attributes, and 2) can we do this without degrading language fluency and diversity. We have used 2 types of human annotation (with thousands of labels collected), 4 kinds of automated measures, and evaluations from an additional, separately trained classifier. We believe this presents the most comprehensive evaluation we have seen so far in the literature for the task of controlled generation. We have also included controlled generation baselines as mentioned in our general response.\"}",
"{\"title\": \"Thank you for your rebuttal\", \"comment\": \"Thank you very much for your feedback on my reviews; really appreciate that.\\n\\nRegarding \\\"If you have any suggestions for other automatic evaluation metrics, we would be happy to consider including them.\\\", unfortunately, I don't have them, but I do think that measuring the content shifting among the generated texts could be useful. In this sense, we would be able to control the consistency of the targeted context or conversation.\"}",
"{\"title\": \"General Response\", \"comment\": \"Dear Reviewers, \\n\\nThank you for your comments helping us improve the paper! We appreciate the time/effort you all have taken in reviewing our paper carefully and giving insightful feedback. We have updated a great deal of our paper thanks to your feedback and to address the concerns raised. To summarize, the main changes are:\\na) Added strong baselines: CTRL (a conditional language model with 1.6B parameters; https://arxiv.org/abs/1909.05858), and GPT2-FT-RL (a pre-trained 774M GPT-2 model fine-tuned for positivity; https://arxiv.org/abs/1909.08593), as well as Weighted Decoding, a more direct conditioning approach suggested by Reviewer #3. \\nb) Performed full evaluations on all those baselines, including over 2000 additional human annotations. \\nc) Found that PPLM outperforms GPT2-FT-RL and performs comparably with CTRL, even though both of them are trained specifically for conditioned generation, and are over 4 times and twice of our model size (and 5 orders of magnitude larger than our attribute models). Further, PPLM outperforms Weighted Decoding significantly on both topic and sentiment control. \\nd) Extended related work section where neural style transfer, weighted decoding, and most recent work on controlled generation are included. We thank you for all the suggested references!\\ne) Extended analysis for settings in which PPLM succeeds and fails. Thanks to Reviewer #2 for the suggestions!\"}",
"{\"title\": \"Response 2/2\", \"comment\": \">> 4. Can the authors add analysis on how much the model respects the control variables? This is quite common in existing controlled generation papers. If the model is updated to have the control variables and then is not provided with one at test time, what happens?\\n\\nOur human/automatic evaluation in Tables 4, 6 (and Tables S8, S9 in Supplementary Information) reflect attribute relevance. We find that it is comparable with other state-of-the-art approaches that have been trained for the task of controlled generation. In Tables 4, 6, the entries corresponding to the method \\u2018B\\u2019 are those where samples are generated with an uncontrolled language model (original GPT-2) -- comparing \\u2018B\\u2019 with \\u2018BR\\u2019, \\u2018BC\\u2019 and \\u2018BCR\\u2019 provides an understanding of the extent of language control provided by our proposed method.\\n\\n\\n>>Can you also control very easy to measure attributes, such as length?\\n\\nIt would definitely be possible to optimize for easily measurable attributes such as a length of sentence by building an attribute model that can predict the probability of attribute p(a|x). Beyond that it would be a direct application of the methods developed in our paper.\\n\\n>> why it is better than other control methods or control baselines\\n\\nThe biggest difference is that PPLM do not train LM at all, in comparison to fine-tuning/training an LM with desired control attributes. The amount of compute is hence negligible. Therefore, we do not need domain specific annotated data -- for instance, CTRL uses data from specific subreddits to train a controlled language model. To finetune an LM to be positive, [5] had to first get human annotations on generated samples, and then fine-tune to the LM. In contrast, we can take a simple out of domain dataset such as a SST (which is about movie reviews) or a bag of words, and generate positive- or negative-controlled passages about arbitrary topics such as a chicken, potatoes, lakes, horses or paintings.\\n\\nLastly, both [4, 5] do not provide the fine-grained control PPLM does (See Table S17 for an illustration), or the flexibility (that PPLM can always turn the control to zero to fully recover the original LM).\\n\\n>> where the proposed control mechanism is not effective\\n\\nWe can gain some insight into this question from Tables S8, S9, and Figure S3, S4. We look at controllability for different topics (Table S8) and we see that the controllability varies quite a bit by topic. It is significantly easier to control for commonly occurring topics in the training data for GPT-2 such as \\u2018religion\\u2019, \\u2018politics\\u2019, \\u2018science\\u2019 as opposed to rarer topics such as a \\u2018legal\\u2019 or \\u2018space\\u2019. Based on human annotations, by default, GPT-2 generates most sentences on \\u2018politics\\u2019 and \\u2018science\\u2019 in comparison to \\u2018space\\u2019 or \\u2018legal\\u2019. During our experiments, we found that rarer topics such as \\u2018Fantasy\\u2019 are far more difficult to control while retaining fluency as opposed to topics such as \\u2018science\\u2019. We have included this discussion (Section 4.2). See Tables S8 and S9 for details.\\n\\n\\n>> 5. Missing citations\\n\\nThank you for these references. 
\\n\\nWe have restructured the paper to include the extended related work in the main text, and have updated the references to include the ones you suggested.\\n\\n>> Please also cite:\\n- which dataset was used for story generation \\u2026. missing\\nWe use an improv sketch as the skeleton for story generation. Beyond that we do not use any dataset to train the model -- the underlying model is GPT-2 and the attribute models are the same as the ones in the other sections of the paper.\\n\\n- top-k sampling\\n We added the citation (Section 4). \\n\\n[1] Hafez: an Interactive Poetry Generation System, Ghazvininejad et al., ACL\\u20192017\\n[2] Multiple-Attribute Text Rewriting, Lample et al., ICLR\\u201919\\n[3] What makes a good conversation? How controllable attributes affect human judgments, See et al., NAACL\\u201919\\n[4] CTRL: A Conditional Transformer Language Model for Controllable Generation, Keskar et al., 2019\\n[5] Fine-Tuning Language Models from Human Preferences, Ziegler et al., 2019\\n[6] Hierarchical Neural Story Generation, Fan et al., ACL\\u201918\\n[7] Towards controlled text generation, Hu et al., ICML\\u201817\\n[8] Controlling Linguistic Style Aspects in Neural Language Generation, Ficler et al., 2017\\n[9] Style Transfer from Non-Parallel Text by Cross-Alignment, Shen et al., NeurIPS\\u201917\"}",
"{\"title\": \"Response 1/2\", \"comment\": \">>In contrast to existing work, which often trains conditioned upon the control element, the authors emphasize that their method does not require re-training the initial LM. This is exciting and a great research direction.\\n\\nWe are glad you like the research direction!\\n\\n>> 1. The authors \\u2026. evaluation with any existing work that performs controlled text generation.\\n\\nThank you for the suggestions. We have included the following baselines: i) Weighted Decoding [1], ii) CTRL (a conditional language model trained for controlled text generation), and iii) a fine-tuned GPT-2 language model. Despite, CTRL being trained for the task (and with over 4 times as many parameters) and GPT-2 being fine-tuned for the task (and with over twice as many parameters), we perform comparably with CTRL and outperform the fine-tuned GPT-2 based on human-evaluation/automated evaluation. We also clearly outperform the more direct approach of weighted decoding proposed [1] (also, used in [3]). See Tables 4, 5 for updated results and Section S7 for baseline details.\\n\\n>> 2. Can\\u2026 discuss the relationship of this work to neural style transfer? Compared to unsupervised style transfer approaches \\u2026 what are the benefits of the proposed approach and how would it compare?\\n\\nWe have moved the discussion of neural style transfer from supplementary information section to Section 2 Related Work. Thanks for your suggestion.\", \"benefits_of_pplm_over_style_transfer\": \"-- Most style transfer approaches [2] require training a seq2seq model from scratch and it is also not possible to plug in new attributes that were not considered during training.\\n\\n\\n-- Further, there are many domains outside style transfer where it is useful to control style -- for example, story writing [6], dialogue systems [3], where approaches from current work on style transfer are not directly applicable. We believe PPLM would be directly applicable to any transformer based generative model in all these domains. \\n\\n-- In contrast to current approaches for unsupervised neural style transfer [2, 9], our approach allows for fine-grained control (e.g. How positive do we want our LM to be?).\\n\\n-- We also note that controlled/stylized generation itself is a well studied problem [4, 5, 6, 7, 8], and there are merits to generating text in a controlled manner outside of the style transfer setting.\\n\\n>> 3. Can the authors discuss the effectiveness of their control mechanism for less logical control settings? .. \\\"religion\\\" for \\\"the potato\\\" prompt? .. still respect these settings, or no? \\n\\nThis is a great idea! We\\u2019ve added examples of how PPLM responds to the following odd or illogical topic-prefix combinations. The experiment is described in Section S9 and we list samples from various combinations in Tables S10-S16. The conclusion of this experiment is that PPLM can handle those odd settings as well. For example, a sample from \\u201cThe potato\\u201d + \\u201cReligion\\u201d is as follows: \\n\\n=== Sample 1 ====\\nThe potato, an ancient food, is considered a sacred plant by many Hindus. However, some Hindus believe that the potatoes are the seed of a demon. ...\\n\\n\\\"In India we have the Hindu god Vishnu, Vish, the God. He has come to the world,\\\" said a woman in Mumbai.\\n\\n\\n\\\"He came to the world because of God. God came to the world to save people from the curse of the devil God. 
God came to save us from the curse of the devil,\\\"\\n=== end of Sample 1 ====\\n\\n=== Sample 2 ====\\nThe potato salad that I have recently been making for our family is so good, I wanted to share it with you guys. This was my first attempt at a Potato Salad recipe, and I love it. It also reminds me why I love cooking. I love\\n how good it tastes and how it reminds you why you love Cooking with God. I love how it is a great way to celebrate Thanksgiving and Christmas. It also reminds me why I am a Christian. I love how it reminds me why I love to\\n=== end of Sample 2 ====\"}",
"{\"title\": \"Response\", \"comment\": \">> The proposed method is simple and makes sense to me... is very neat here. However, I have two main concerns, as follows.\\n\\nWe thank you for your comments helping us improve the paper. We address your comments below.\\n\\n>> \\\"1. The main focuses of the generated text seem to be dramatically changed in an unpredictable way while tailoring the control attributes. In this sense, how useful these kinds of text generation techniques are not clear to me. .... Is there an automatic evaluation metric to subjectively evaluate the change of the focuses/ideas of two pieces of text?\\\"\\n\\nThis is certainly the case. In our work, two samples from either an LM distribution p(x) or a controlled LM p(x|a) are independent. The task being studied is controlled generation as opposed to style transfer, the latter scenario in which one aims to retain content but adjust style. Our goal is not to control the language so that the idea being conveyed is retained.\\n\\nAlthough it would be great if our model could accomplish both feats, we would like to note that controlled generation on its own is an actively studied problem in the language community. Recently several approaches have been proposed towards solving the problem of open-ended controlled generation where the goal is to only generate language with specific attributes without controlling for context, for example, the following papers: [1], [2], [3], [4], [5]. Another paper, [6], showed the benefits of language control (without directly controlling the idea being conveyed) on human judgement of the quality of engagement during interaction with a dialogue agent. We also note that the PPGN model in the paper inspiring this work does not control for deviation in context, but rather only controls for the generated image having the desired attribute (i.e. PPGN and PPLM both perform \\u201ccontrolled generation\\u201d but not \\u201cstyle transfer\\u201d).\\n\\nFor the open-ended controlled generation task (such as studied in [1,2,3,4]), we consider several possible automatic and human evaluation metrics, including perplexity, dist scores, human fluency and attribute relevance scores. If you have any suggestions for other automatic evaluation metrics, we would be happy to consider including them.\\n\\n>> \\\"2. The model is a straightforward adaptation of the Plug and Play Generative Networks from the vision community.\\\"\\n\\nWe respectfully disagree that the adaptation was straightforward. While we would have been happy to apply the PPGN approach directly to the language domain, the adaptation actually required several modifications, summarized as follows:\", \"ppgn\": \"-- A graphical model depiction of the network looks like this: h -> x -> y, where h is a latent code, x is an image, and y is a class or attribute.\\n-- A single h generates an entire, single image x.\\nh and x are both continuous, and the gradient w.r.t. 
y passes through x to h.\\n-- The Markov chain is run in h space, with a separate p(h) model being trained and used to ensure h does not drift too far from high probability regions.\\n-- Multiple steps are taken in h space, corresponding to multiple entire images.\\n-- Noise is added in h space to obtain the correct diversity of images.\", \"pplm\": \"-- A graphical model depiction of the network looks like this: [x1 -> (h1, x2) -> (h2, x3), \\u2026 ] -> y, h_t and x_t are the latents and byte-pairs at time t and y is an attribute.\\n-- A single h generates a distribution over sentences x.\\nh is continuous and x is discrete, and gradient w.r.t. y passes directly to h, with discrete x skipped, except in the distribution propagation approach in Sec 4.3, which propagates through the single word x_t+1 (\\u201cInstead, as in -- Dai et al. (2019a), we use the distribution\\u2026\\u201d).\\n-- A complete Markov chain is not run, as this would require multiple full forward and backward passes through the transformer. Instead, we update only a sliding window of the recent past of h and sample only one word at a time. This is a compromise between speed and quality of the samples. The particular dependency structure of the transformer allows us to update only the past (key, value) pairs, which also allows for efficient sampling.\\n-- Multiple steps are taken in h space as the sentence is constructed word by word. Multiple entire sentences are never produced.\\n-- Noise is added via the sampling of each word in x space to obtain the correct diversity of sentences.\\n\\n[1] CTRL: A Conditional Transformer Language Model for Controllable Generation, Keskar et al., https://arxiv.org/abs/1909.05858\\n[2] Fine-Tuning Language Models from Human Preferences, Ziegler et al., https://arxiv.org/abs/1909.08593\\n[3] Towards controlled text generation, Hu et al., https://arxiv.org/abs/1703.00955\\n[4] Controlling Linguistic Style Aspects in Neural Language Generation, Ficler et al., https://arxiv.org/abs/1707.02633\\n[5] Towards Controllable Story Generation, Peng et al.\\n[6] What makes a good conversation? How controllable attributes affect human judgments,See et al., NAACL\\u201919\"}",
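A minimal, hedged sketch of the PPLM-style latent-perturbation step contrasted with PPGN above (PyTorch-style Python): it nudges a window of recent latents with gradients from an attribute model, with an L2 penalty standing in for the KL/regularization terms the authors describe. All names (perturb_past, attr_model) are illustrative assumptions, not the authors' released implementation.

import torch

def perturb_past(H, attr_model, target, steps=3, alpha=0.02, reg=1e-3):
    # H: (window, dim) recent latents; attr_model maps a pooled latent to
    # attribute logits; target: desired attribute id. The L2 term keeps the
    # perturbed latents near the originals (a stand-in for the KL penalty).
    delta = torch.zeros_like(H, requires_grad=True)
    for _ in range(steps):
        logits = attr_model((H + delta).mean(dim=0))        # ~ p(a | h)
        loss = torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), torch.tensor([target]))
        loss = loss + reg * delta.pow(2).sum()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad / (grad.norm() + 1e-10)   # normalized step
    return (H + delta).detach()

attr = torch.nn.Linear(16, 2)   # toy stand-in attribute model
H_new = perturb_past(torch.randn(4, 16), attr, target=1)
print(H_new.shape)              # torch.Size([4, 16])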
"{\"title\": \"Response to review\", \"comment\": \"Thank you for your comments helping us improve the paper!\\n\\n>> \\u201cfollowing the framework known in NLP as noisy-channel modeling \\u2026. this connection (they should!).\\u201d\\n\\nThanks for pointing out references to noisy-channel modeling. We have added a mention of the noisy channel modeling approach to our related work section and have discussed how that approach compares to PPLM (Section 2).\\n\\n>> \\u201cI find this approach interesting and like the paper overall.\\u201d\\nThanks!\\n\\n>> \\u201cHowever \\u2026 do not compare to more direct ways of integrating the conditional ... expect the proposed approach to work better (or at least differently) but it would be interesting to see it confirmed. .... will be no increase in the probability of generating relevant words before the first seed word is generated.\\u201d\\n\\nThank you for these great suggestions. We\\u2019ve updated the paper to include this approach both for PPLM-BoW and PPLM-Discrim models:\", \"for_pplm_bow\": \"this corresponds to an existing approach referred to in literature as \\u201cWeighted Decoding\\u201d (https://www.aclweb.org/anthology/P17-4008/, See et al., NAACL\\u201919).\", \"for_pplm_discrim\": \"for each token in the vocabulary we compute p(y=desired sentiment | x) and sample from the distribution p(x)*p(y=sentiment|x) with top-k=5. While the forward passes over the vocabulary are extremely expensive (e.g. 50000x), we get a sense of how well PPLM compares with a direct integration of the condition. We have included both sets of results in the paper (Table 3 and Table 6, row \\u201cWD\\u201d), where we find, as you presumed, that it does not work quite as well as PPLM.\\n\\nPPLM works better than directly integrating the conditioning into the decoding procedure. For the bag of words, we can further confirm the observation that the probability of generating relevant words before the first seed word from the bag does not increase. Another key difference is that the semantics of the bag of words are not captured, rather the decoder chooses to pick one of the words that fits context. For instance, a generated sample when conditioned on the prefix \\u201cOnce upon a time\\u201d with the \\u201cSpace\\u201d bag of words is \\u201cI used to have a pretty good idea what a starfish was. I was a starfish biologist.\\u201d. See Section 4.2 for details.\\n\\nFor the sentiment control task, we find that this results in a lot of adversarial samples. Sequences often have a high attribute likelihood under the discriminator used during decoding but do not possess the attribute under human evaluation/external classifier evaluation. \\n\\n>> \\u201cAnother limitation is the lack of comparison to standard controlled generation.... fine-tuning off-the-shelf pretrained decoders.\\u201d\\n\\nThanks for the suggestion! We have updated the paper to include comparisons with a recent conditional LM (CTRL, https://arxiv.org/abs/1909.05858) and a GPT-2 LM fine-tuned for positivity with RL and human preferences (https://arxiv.org/abs/1909.08593). The details of the set-up can be found in the Section S7, particuarly, S7.1 and S7.2. We find PPLM performs comparably with CTRL on sentiment control (Table 6) and (perhaps surprisingly) outperforms CTRL on topic control (Table 3). PPLM also significantly outperforms the fine-tuned GPT-2 model on the sentiment task. In all of the above cases, PPLM is at least as fluent or more fluent than the baselines (CTRL & fine-tuned GPT-2). 
This is impressive considering that the fine-tuned GPT-2 model has over twice as many parameters, the CTRL conditional language model has over 4 times as many parameters, and both are specifically tuned/trained for controlled generation.\\n\\n>>\\u201dThere .. interesting relation to the NIPS 2019 paper \\u2026 'steerability' rather ... controlled-generation model.\\u201d\\n\\nThanks for the interesting connection! We have included this (Section 2, Page 3).\\n\\n>> \\u201cGiven that style-controlled \\u2026 can push this approach ... pretrained conditional LMs?\\u201d\\n\\nWe believe the PPLM approach should scale well to any method with a Transformer-based decoder, including potentially the application you describe, as well as NMT or dialogue systems, where See et al.\\u201919 showed the utility of being able to control the response. Just as PPLM allows one to combine a p(x) model and a p(a|x) model to generate samples from p(x|a), in the NMT scenario one could combine a base translation model p(x_target | x_source) with a p(a | x_target, x_source) model to generate samples from p(x_target | a, x_source). E.g., this could allow transforming a German-to-English translation model and a Twitter vs. Wikipedia classifier to translate German phrases into their English Twitter equivalents!\\n\\nThese are some of the immediate next steps we plan on exploring!\\n\\n>>\\u201dMinor: ... Something seems off here.\\u201d\\n\\nIndeed this was an error -- we had posted a comment mentioning a correction on OpenReview. Thanks for pointing it out; we have fixed this in the revision now.\"}",
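A minimal sketch of the weighted-decoding baseline described in the response above: rescore every candidate next token by p(token | prefix) * p(attribute | prefix + token) and sample from the top-k of the combined scores. Here score_attr is a hypothetical stand-in for the discriminator, not the paper's code; the one-call-per-vocabulary-item loop is exactly the expensive O(V) forward-pass cost (e.g., ~50000x for GPT-2's vocabulary) that the authors mention.

import numpy as np

def weighted_decode_step(p_next, score_attr, prefix, vocab, k=5, rng=None):
    # p_next: (V,) LM probabilities for the next token given `prefix`;
    # score_attr(tokens) -> estimated p(attribute | tokens).
    rng = rng or np.random.default_rng(0)
    combined = np.array([p_next[i] * score_attr(prefix + [w])
                         for i, w in enumerate(vocab)])
    top = np.argsort(combined)[-k:]              # indices of the k best candidates
    probs = combined[top] / combined[top].sum()  # renormalize over the top-k
    return vocab[int(rng.choice(top, p=probs))]

# Toy usage with a bag-of-words "attribute model".
vocab = ["the", "war", "peace", "army"]
bag = {"war", "army"}
score = lambda toks: 0.9 if toks[-1] in bag else 0.1
print(weighted_decode_step(np.array([0.4, 0.3, 0.2, 0.1]), score, [], vocab, k=2))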
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper introduces an approach to the conditional generation of text, relying on pre-trained decoders, without fine-tuning and, in certain cases, without any training at all. The approach they introduce is following the framework known in NLP as noisy-channel modeling, previously standard in machine translation (in its SMT days), but undergoing certain revival recently (https://arxiv.org/abs/1611.02554, https://arxiv.org/abs/1910.00553,https://arxiv.org/abs/1908.05731,https://arxiv.org/abs/1907.06616). The authors do not mention this connection (they should!).\\nVery differently from these previous approaches attempting to integrate the two factors in the search process (e.g., using reranking), the authors instead rely on gradient descent in the latent space of their model (Transformer), similarly to plug-n-play generative networks in image generation. \\n\\nI find this approach interesting and like the paper overall. However, I do not see why authors do not compare to more direct ways of integrating the conditional component into the model. This would have been tricky in the NMT papers mentioned above, as the entire source sentences need to be reconstructred, however, it should be quite straightforward in this work, with conditioning on single categorical control variables (or maybe a couple in the additional experiments in sect 4.4). Especially, given that the authors already make the predictions of the control variable independently per prediction (e.g., see eq. (5) in section 4.2) / greedily per prefix (bottom lines, page 7). I would actually expect the proposed approach to work better (or at least differently) but it would be interesting to see it confirmed. E.g., for the experiments defining topics as sets of seed words (section 4.2), when integrating factors directly (unlike the proposed approach, Table 3), there will be no increase in the probability of generating relevant words before the first seed word is generated. \\n\\nAnother limitation is the lack of comparison to standard controlled generation work, i.e. those requiring training a model or/and fine-tuning pretrained decoder. I understand that the proposed approach falls in a different category and, of course, do not expect it to beat a fine-tuned model, but I'd like to get some feel for how much one loses by using this simpler method. There has been a lot of work on controlled generation in recent ~3 years, and they can also be combined with intializing and fine-tuning off-the-shelf pretrained decoders.\", \"there_is_an_interesting_relation_to_the_nips_2019_paper\": \"https://arxiv.org/abs/1907.04944 They also rely on gradient descent to steer a pretrained language model. Their goal is to assess the degree of 'steerability' rather than building a controlled-generation model.\\n\\nGiven that style-controlled but otherwise unconditional generation may not have that many applications, I am curious how far you can push this approach. E.g., can you make it scale to more complicated data-to-text generation tasks (https://www.aclweb.org/anthology/D17-1239/)? 
Or, will the only application in this context be integrating new conditioning variables into pretrained conditional LMs?\\n\\nMinor: I am confused by the notation in the \\\"Post-norm Geometric Mean Fusion\\\" section. It says that softmax is applied to the product of probabilities. Maybe to a linear interpolation of log-probs? Or maybe that's not softmax at all? Something seems off here.\"}",
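On the fusion notation questioned in the 'Minor' point: per the authors' posted correction later in this thread (divide by a normalizing factor rather than apply softmax), a plausible reading is a weighted geometric mean of the two next-token distributions, renormalized. A small illustrative sketch under that assumption, with gamma_gm named as in the paper's Table S1:

import numpy as np

def geometric_mean_fusion(p_modified, p_original, gamma_gm):
    # Weighted geometric mean of two distributions, then divide by the
    # normalizer so the result is a valid distribution (no softmax involved).
    fused = (p_modified ** gamma_gm) * (p_original ** (1.0 - gamma_gm))
    return fused / fused.sum()

p_mod = np.array([0.7, 0.2, 0.1])
p_orig = np.array([0.3, 0.4, 0.3])
print(geometric_mean_fusion(p_mod, p_orig, gamma_gm=0.95))
# gamma_gm = 0 recovers p_orig exactly, i.e., the unmodified LM.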
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a Plug and Play LM model for controlled natural language generation. Similar to the idea of the Plug and Play Generative Networks for vision, the model plugs in a discriminator, which is either a bag-of-words model or a single layer classifier. The added simple discriminator is then coupled with a pre-trained generative language model such as GPT-2, to obtain a conditional probability for generating controllable text. The authors evaluate the proposed model using human evaluation studies and quantitative perplexity metrics, aiming at measuring the relevance and fluency of the generated text. Their experimental results show that the text generated is fluent and aligned with the desired attributes.\\n\\nThe proposed method is simple and makes sense to me. The idea of how one can make good use of large, pre-trained generative language models is very neat here. However, I have two main concerns, as follows.\\n\\n1. The main focuses of the generated text seem to be dramatically changed in an unpredictable way while tailoring the control attributes. In this sense, how useful these kinds of text generation techniques are not clear to me. For example, the first two rows in Table 3 contain two paragraphs with very different main ideas to be conveyed. Similarly for sentences in Table 1. It seems that those sentences talk about very different topics/things to me, although they may reflect the desired control attributes. Is there an automatic evaluation metric to subjectively evaluate the change of the focuses/ideas of two pieces of text?\\n\\n2. The model is a straightforward adaption of the Plug and Play Generative Networks from the vision community. \\n\\nIn short, the idea in the paper is simple and seems effective. On the other hand, the lack of a good evaluation metric makes me a bit uncertain about the contribution of the paper. I am willing to increase my evaluation score if I will be convinced by other reviews and comments.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors describe a method for training plug and play language models, a way to incorporate control elements into pre-trained LMs. In contrast to existing work, which often trains conditioned upon the control element, the authors emphasize that their method does not require re-training the initial LM. This is exciting and a great research direction. It is evaluated in a number of different settings.\\n\\n1. The authors claim that this method is a baseline for controlled text generation (see e.g. the title). However, there does not appear to be any evaluation with any existing work that performs controlled text generation. I don't see how this can be proposed as a baseline for controlled text generation is there is no comparison to other methods. I imagine the authors will emphasize that that's not fair - because their method doesn't require retraining the language model - but it is relevant to demonstrate if there is a gap in performance or not. As is, there is only one baseline- unconditional language model - and to me this is mostly a way to calibrate the evaluators and not a way to compare their model against other models. \\n\\n2. Can the authors make a point or discuss the relationship of this work to neural style transfer? Compared to unsupervised style transfer approaches, which also use lists of words or attributes to learn to dis-entangle content and style, what are the benefits of the proposed approach and how would it compare?\\n\\n3. Can the authors discuss the effectiveness of their control mechanism for less logical control settings? For example, what if there was \\\"religion\\\" for \\\"the potato\\\" prompt? Does the model still respect these settings, or no? \\n\\n4. Can the authors add analysis on how much the model respects the control variables? This is quite common in existing controlled generation papers. If the model is updated to have the control variables and then is not provided with one at test time, what happens? Can you also control very easy to measure attributes, such as length?\\n\\nThis question ties in with a general point I am ambivalent to in this paper- that it is very long, but there is very little analysis done on what makes the method work, why it is better than other control methods or control baselines, where the proposed control mechanism is not effective, how the model scales if there are large quantities of topics rather than just a few of them, if the BoW and discriminator attribute models work well together or if certain attributes are easier to learn than others, so the model focuses more on those when there are conflicts, etc\\n\\n5. Missing citations: \\n\\nPrevious work has investigated controlling various attributes of text generation. Several of these works have also controlled multiple attributes simultaneously. For example, here's a list of a few of the works that were missed:\\n\\nKikuchi et al 2016\\nFicler and Goldberg, 2017\\nWang et al, 2017\\nFan et al, 2018\\nBaheti et al, 2018\\nSee et al, 2019\\nMartin et al, 2019\\n\\nThe related work section only focuses on very recent work, e.g. only one paper is discussed amongst a large body of existing work. 
I feel this is not an accurate reflection of how much previous work has investigated these techniques and analyzed how models deal with control variables.\\n\\nPlease also cite:\\n- which dataset was used for story generation, appears to be missing\\n- top-k sampling\\n\\nI have read the author response. Thanks for the details and additional analysis in the paper.\"}",
"{\"comment\": \"Hi Eric,\\nThe code is now in the dropbox folder. You should be able to check-out the implementation.\", \"title\": \"Code available\"}",
"{\"comment\": \"We would like to point the readers to 2 minor corrections in the paper -- we will fix this during revision:\\n1. Instead of the Softmax normalization in Section 3.3, page 5, paragraph 3, we actually divide by a normalizing factor such that it forms a valid distribution.\\n2. Table S1, Row corresponding to POSITIVE, NEGATIVE --> gamma_gm = 0.9 is incorrect, and should be gamma_gm = 0.95.\", \"title\": \"Minor Corrections\"}",
"{\"comment\": \"Thanks for your reply! The response is sound and makes sense in every way. I do hope to see this paper get accepted.\\n\\nGood luck,\\nJason\", \"title\": \"Reply to Authors\"}",
"{\"comment\": \"Hi Eric,\\n\\nThanks for the interest in our paper! We are actively working on institute approval, we hope to have it available in the next few days (~ 1-2 weeks). We will keep you updated about this!\", \"title\": \"Code Release\"}",
"{\"comment\": \">>(6) A question about the setting: I wonder whether the setting for sentiment-conditioned generation is too elaborate? Why choose the SST-5 dataset and only use \\\"Very Positive\\\" and \\\"Very Negative\\\" instead of SST-2 dataset with a binary classification or SST-5 with a three-fold classification? The setting in the paper clearly exaggerates the real performance of the proposed method.\\n\\nIt was a simple design choice to use SST-5 over SST-2. We do not measure with SST-2 (or SST-5) and we use humans and an external dataset (IMDB reviews) to measure sentiment controllability. In this regard, the results are meaningful and fair. \\n\\n>>Nitpick:\\nThe Dist-1 metric in Table S4 is labeled as smaller better, which is in conflict with Table 4.\\n\\nThanks for the catch on the Dist-1 \\u201clower is better\\u201d typo! We have corrected this and it will appear in the next posted draft.\\n\\n\\nWe once again thank you both for your interest, and we\\u2019d be happy to continue the discussion.\\n\\n[1] Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer\", \"https\": \"//arxiv.org/abs/1804.06437\\n[2] Multiple-Attribute Text Rewriting, https://openreview.net/forum?id=H1g2NhC5KQ\\n[3] What makes a good conversation? How controllable attributes affect human judgements. https://www.aclweb.org/anthology/N19-1170/\\n[4] http://www.abigailsee.com/2019/08/13/what-makes-a-good-conversation.html;\", \"title\": \"Response to Karl and Jason's discussion 3/3\"}",
"{\"comment\": \">>(3) To what end is the generated text conditional? The results in Table 4 seem to be significantly lower than prior methods (e.g., CTRL-GEN).\\n\\nBoth human evaluation and evaluation via automated methods can be very useful, but unfortunately the percentages reported in each case are not comparable for several reasons:\\na) Human judgements are often quite different from automated measurements of an attribute. For example, a recent work on controlled generation in the style transfer setting MTR [2] is able to achieve 85%+ with automated evaluation (Table 4, [2]) but in the human evaluation setting the number drops to 69.6% (Table 5, [2]). Further, in DAR [1] -- which also focuses on style transfer -- the human evaluation accuracy is 64% (Table 5, [2]) while the automated evaluation accuracy for DAR in [1] is over 90%. We hope this is sufficient evidence that automated evaluation and human evaluation numbers are not directly comparable. Note that both DAR/MTR are more recent works that CTRL-GEN.\\n\\nHuman judges (both in our paper and [2]) are given the option of saying a sentence is neutral: neither positive/negative. Evaluation with an SST-2 discriminator will classify neutral sentences as positive or negative (binary classifier), resulting in an increased count for both positive/negative accuracies. This in turn results in exaggerated numbers for sentiment accuracy. Please note that human annotation in our paper (as in [2]) is not \\u201cbinary\\u201d but rather \\u201cternary\\u201d, as the human can mark if a sentence is positive or negative or neutral (if the human marks a sentence as not positive and not negative).\\n\\nb) Additionally, CTRL-GEN evaluated sequences of length<15. In contrast, our numbers are reported on sequences of length 80 (for topic) and sequences of length 50 (for sentiment). This difference further prohibits direct comparison. \\n\\n\\nc) While, the premise for PPLM is entirely different from DAR/MTR -- PPLM is generation from an unconditional model, note that we perform comparably to the human evaluation numbers reported in [2] for both DAR/MTR (more recent works than CTRL-GEN). Further, [2] reports the perplexity for CTRL-GEN (Hu et al., 2017) as 232.0 (significantly higher than other competing approaches). Also, note that conditioning can be increased at the cost of fluency (See Table S5, Supplementary Information) -- in that setting the topic/sentiment accuracy numbers would go up at the cost of fluency. In this regard, it makes sense to compare both accuracy and fluency between different approaches as done in [2].\\n\\n\\n\\n\\n>>(4) Will the fluency degenerate along with the conditionality? How to balance that? I didn't find an answer in the article.\\n\\nYes, fluency does degenerate with the strength of condition as shown in the paper. You can control that by decreasing the step-size alpha, increasing the coefficient for the kl-loss and also decreasing lambda_gm (in the limit, as lambda_gm goes to 0, you recover the original LM). We found a good set of hyper-parameters that work well in practice (Table S1 in Supplementary Information). We will make this clearer during revision.\\n\\n\\n>>(5) The BoW strategy seems to be confusing. If we simply multiply the probability of the keywords by a constant larger than one, will it also do the trick? For example, we manually multiply the probability of {best, excellent, wonderful, ...} by 2, can GPT-2 generate positive samples as well? 
If we measure the cosine similarity between the pre-trained word embeddings of predicted words and the given keywords, can we even get better results? (e.g., it will decrease the probability of words like \\\"terrible\\\", \\\"infamous\\\" and increase the probability of \\\"terrific\\\", \\\"gorgeous\\\"). It seems like the work in this paper is an over-sophisticated version of what I just mentioned.\\n\\nThe approach you describe of increasing the probabilities of the bag-of-words tokens (referred to in prior work as \\u2018weighted decoding\\u2019 [3]) will work to some extent, but it will not produce samples with the subtle topic shifts that come from promoting words *related* to the bag rather than only words directly from the bag. We provide more discussion of weighted decoding [3] in the extended related work, Section S1 in the Supplementary Information. For example, consider the sentences generated with topics \\u2018Politics\\u2019, \\u2018Science\\u2019 and \\u2018Fantasy\\u2019 in Table 3, and topic \\u2018Fantasy\\u2019 in Table 7 -- here words closely related to the topic appear before words from the bag.\", \"title\": \"Response to Karl and Jason's comments: 2/3\"}",
"{\"comment\": \"Dear Jason and Karl,\\n\\nThank you for your interest in our paper, including posting questions and comments that will help the community better understand and contextualize our work. You both make several valid points, and we thank you both for this engaging discussion. In addition to the points made by Karl (who we are in no way associated with) -- we want to emphasize that our proposed methodology is Plug-and-Play, thus no re-training or fine-tuning of the language model is needed. We address Jason's comments below, although several of them have already been directly addressed by Karl\\u2019s responses. \\n\\n>>(1) This paper does not provide any comparison with other works. For example, for a publicly available conditional language model, CTRL is only mentioned in related work but not compared as a baseline. As a researcher myself, I completely understand that CTRL is recently released and the authors do not have sufficient time to run CTRL. But I wish to see the results as supplemental material later on OpenReview. On the other hand, there are already several conditional generation model, e.g., S-VAE, CTRL-GEN. The authors should at least compare with these works.\\n\\nBoth CTRL and CTRL-GEN train *conditional language models* -- in contrast, we demonstrate controlled generation with a *pre-trained unconditional language model*. We do not believe a meaningful comparison can be made with a \\u2018plug-and-play\\u2019 method and training a conditional LM (as done in S-GAN, CTRL-GEN and CTRL). That said, we could\\u2019ve made the distinction more clear in the paper; thanks for the suggestions! \\n\\nIn the limit of infinite annotated data and compute for training, approaches such as CTRL that directly train p(x|a) will outperform our approach. The main objective of our work is to provide a simple inexpensive alternate approach to directly trained conditional models. We\\u2019ve added a note to fully clarify this point in the next draft of the paper.\\n\\n\\n\\n>>(2) The comparison with the vanilla GPT-2 seems to be unreasonable. The GPT-2 baseline is not provided with any condition-specific information, which surely cannot generate conditional text. For example, if providing the GPT-2 with the CTRL-style prompt (e.g., Rating 5.0/5.0, or Topic: Military), I'd like to see how GPT-2 performs with this setting.\\n\\nOur main objective with considering unconditioned GPT-2 text (\\u201cB\\u201d) in the ablation study is to obtain a baseline for human judges. More precisely, it provides information about the bias induced by asking a human if a sentence is positive, or if it is about the military: do humans squint and try to interpret everything in a manner the question suggests? The data shows that they do, and it was important to measure exactly how much, e.g. see the high 13.0% of unconditioned GPT-2 sentences judged to be about any particular topic!\\n\\nA second key objective is to illustrate that we retain most of the fluency from vanilla GPT-2 generation.\\n\\nHowever, your suggestion of an additional baseline is a great one. We quickly tried it out, and preliminary results show that:\\n-- Merely adding the \\u201cTopic: xyz\\u201d prompt doesn't work too well in general. 
We did obtain some topical relevance for some topics when using short prefixes, but for longer prefixes and more complex topics, we found it not to work well.\\n\\n\\nFor example, with the prefix \\u201cThe little girl lived in the woods, and her father was a carpenter\\u201d, adding \\u201cTopic: Military\\u201d does not reliably generate military-related samples. In contrast, PPLM generates military-related samples very reliably with the same prefix.\\n\\n\\n-- For sentiment control, we found that simply adding \\u201cRating x/y\\u201d does not work reliably. Whether we added a \\u201cRating 1.0/5.0\\u201d, \\u201cRating 0.0/5.0\\u201d or \\u201cRating 5.0/5.0\\u201d prompt before the actual prefix, we found that this always biased GPT-2 towards producing positive passages.\\n\\n\\n-- For more complex tasks such as detoxification, it might not be simple to find an easy \\u201cextra prefix\\u201d that works.\\nWe\\u2019ve added running a more rigorous evaluation of this approach to our TODO list and hope to complete it and add the results to the paper before the end of the review period. Thanks very much for the idea.\\n\\nFinally, we also note that the two approaches are complementary -- our conditioning can simply be used as an add-on to other forms of conditioning (such as used in the CTRL paper).\", \"title\": \"Response to Karl and Jason's discussion: 1/3\"}",
"{\"comment\": \"Hi, thanks for the paper! I was interested to take a closer look at the implementation of your method. I see you are working on getting approval to release your code, do you have any estimate on when it will be available? If its a while (I know getting company approval can be slow/annoying), is it possible to share it privately?\", \"title\": \"Any Estimated Time of Code Release?\"}",
"{\"comment\": \"I actually hope reviewers are not influenced by this discussion, as some of the requests in the original comment and considerations in it are incorrect and may mislead judgment, but hopefully the authors can actually draw something useful from this.\\n\\n1) I believe this paper is the best conditioned language generation we have seen so far that doesn't require retraining or finetuning, both really expensive to perform given the size of the original GPT-2 model, so, in this specific setting, I would consider it SOTA.\\n\\n2) different annotations of the same inputs make for different datasets. The difference could even be 50%, but if the two numbers are obtained in totally different settings like in this case (human evaluation vs automatic evaluation with a specific classifier trained on a specific dataset) they are absolutely non comparable. You say you don't believe the difference is only because of the evaluation, I'm saying the human judges have a completely different standard than the automatic system, thus the difference could have been smaller or bigger or reversed in sign, it would have still meant nothing at all.\\n\\n6) We agree on being curious to see the same automatic evaluation in CTRL-GEN applied here, to actually have a realistic reference. That said, I do believe that the human evaluation provided in the paper is much stronger evidence of the quality of the results than an automatic evaluation would be.\", \"title\": \"Reply\"}",
"{\"comment\": \"Hi Karl,\\n\\nThanks for the discussion. I hope our discussion is beneficial for both the authors and reviewers!\\n\\n1) Yes! Of course, CTRL will have much better result, but it does not compromise this paper. We all agree that the method is a weak baseline method instead of a competitive SOTA. However, a fair comparison with CTRL, the possible SOTA will benefit the NLG community. This part is missing in CTRL due to there was no comparable controlled language model. But here comes the chance. \\n\\n3) I'd like to kindly remind you that SST-2 and SST-5 are actually the same dataset. They only differ on labels. Thus, I don't think training discriminator on the other SST is fairer. However, I believe you'd also agree that using only \\\"Very positive\\\" and \\\"Very negetive\\\" samples will make both automatic and human evaluation higher than using SST-2 (which only has \\\"Pos\\\" and \\\"Neg\\\"). It is a pity that these two papers use different evaluation, which indeed cannot be compared directly. I also acknowledged that in my second comment, but adding that I do not believe -20% is only because of the different evaluation. On the other hand, if the authors provide comparison under the same settings, I guess it will be more convincing.\\n\\n6) My point is if both CTRL-GEN and this paper use binary generation, they should be at least somehow comparable. That's to say, I won't compare a binary result to a ternary one. On the other hand, as I mentioned above, the different evaluation methods are not directly comparable but that's the only thing I can do since I really want to know to what end can GPT-2 be conditional with this proposed method. A direct comparison will surely help answer that. I sincerely hope to see the results provided by the authors.\", \"title\": \"Reply #2\"}",
"{\"comment\": \"Hi Jason,\\n\\nI'm not an author of the paper, I guess authors answering will still be anonymous and still marked as authors, like previous years.\\n\\nI confused CTRL and CTRL-GEN (of which I was not aware of), sorry about that! My fault.\\n\\n1) I still think they are techniques with very different premises, in particular one is trained and the other is not. A comparison, although being a reasonable request, would be akin to comparing the machine translation capabilities in the GPT-2 paper with a fully supervise MT system, not super informative.\\n\\n3) My bad here for confusing the papers. Looking at CTRL-GEN I guess you are referring to Table 1 and Figure 3. If that is the case, I believe the topicality evaluated by human judges and the automatic accuracy obtained by a pretrained classifier are not comparable at all. But I think that type of evaluation could be requested to the authors of this paper, using the same independently trained classifier. Moreover, being their classifier being trained on a different SST, it would be an even more fair comparison than using the same classifier as discriminator too.\\n\\n5) My guess is that a reweighting and renormalization could have a similar effect of increasing the probability of the terms, although I believe if would not increase the probability of related words that are not in the bow, the words highlighted in dark red in the samples. But I agree with you, it would be interesting to see if that is actually the case.\\n\\n6) That accuracy 0.851 automatically computed score is not comparable with the 0.696 obtained from human evaluation. That said, I think most of the discriminator used in this paper are binary, so using a ternary alternative would be interesting too, although I'm not sure it would add a lot to the paper.\", \"title\": \"answering the clarification\"}",
"{\"comment\": \"Hi Karl,\\n\\nThanks for your attempt to try to explain these to me. I believe you are one of the authors? If so, I think replying in the role of authors may be more appropriate. However, I'd like to point out something in addition to my comment above. \\n\\nFirst, I never neglect the interesting and smart dynamic conditioning method proposed in the paper. However, as a normal reader instead of a reviewer, I thought it is not mandatory to also compliment the strengths in the paper.\\nSecond, I need to say the CTRL (proposed by Salesforce) and CTRL-GEN ( https://arxiv.org/pdf/1703.00955.pdf ) mentioned in my comment are not the same at all. I should have made it more clear. My bad.\\nThen, I'd like to provide some more explanation for my comment and response to your questions. \\n\\n1) As I already said in my comment, it was completely fine not to provide a comparison with CTRL. I totally understand that. However, what I kindly ask is if the authors can provide the following experiment results on OpenReview, and this is the spirit of an open review, right?\\n\\n2) Yes! It is a weak but meaningful baseline. I would certainly add this work as a baseline if I have future work on controlled generation.\\n\\n3) In Table 1 of CTRL-GEN (NOT CTRL!), the result of CTRL-GEN and S-VAE on SST are both higher than the result shown in the second column of Table 6 in this paper. I know there may be some nuance on experimental settings but it cannot result in an absolute 20% performance drop, right? On the other hand, Table 1,3,5,7 and appendix S2, S5 and S6 which you mentioned are all qualitative results (which may be cherrypicked, more or less). However, I was talking about a quantitive result instead.\\n\\n4) It is indeed a pro of this paper! I may have missed something in the appendix.\\n\\n5) Yes. I just wonder if the method proposed in this paper is better than my proposed over-simplified version? Also, I'm not suggesting the authors need to compare my naive simplification. It is just proposed for discussion.\\n\\n6) However, in the CTRL-GEN paper (https://arxiv.org/pdf/1703.00955.pdf , NOT CTRL!), the setting is more reasonable (as I mentioned in the comment, using SST-2 instead of SST-5) and it yields even better performance (0.851) than Table 6 of this paper (0.696). I believe the experiment in this paper is also a binary generation (+, -) instead of a ternary one (+, 0, -). If it is not, please point out. I am not very sure about that.\", \"title\": \"Some issues to be clear\"}",
"{\"comment\": \"This paper is really smart in how it performs conditioning of the generated text.\\nUnlike conditional generation and finetuning, which is a huge win, as the authors explain in the text, this method allows plugging any classifier and any bag of words as a way to condition the text, and also to change the intensity and the type of the conditioning along the way and also add additional conditioning aspects after the original LM model is trained, something that conditional methods cannot do.\", \"for_this_reason_i_believe_this_work_is_in_a_different_class_with_respect_to_other_works_and_some_of_your_comments_are_unfair\": \"1) CTRL was released publically less than a week before the ICLR deadline, which means it was entirely impossible for the authors to add anything about it other than citing it. Moreover, because this PPLM approach can do things that CTRL (and other conditional models) can't do and doesn't require any training, they are on different planes. Comparing them would be like comparing an unsupervised model with a supervised model. In the CTRL paper, moreover, they don't compare with anything neither they run a user study, while in this paper there's both a user study and an automatic evaluation, that together make results really convincing.\\n\\n2) the comparison with vanilla looks to me like a.way to provide a baseline for fluency, and it's clearly a useful one as the PPLM slightly decreases the fluency in the human evaluation, which is something the reader would want to know. Moreover the authors also have the BC and BR ablations to compare against, which makes the results totally fair.\\n\\n3) the extent of the conditioning is clearly shown in table 1, 3, 5, 7 and in the appendix S2, S5 and S6. I guess you are referring to table 4 for the first column, the topical one, where PPLM performs better than vanilla (obviously) and the ablations. In that case, the CTRL paper does not report any such human evaluation numbers, so I'm not sure how can you say from those numbers they they seem significantly lower (lower to what?).\\n\\n4) in the article they both talk about a couple hyperparameters ( the amount of gradient and the number of gradient updates) and they report some parameters that work well in practice, plus they also show in S2 and S5 how much both parameters influence the generation topicality and the fluency degeneration.\\n\\n5) they KL approach is more principled than the hack you are proposing and could be expanded in the future to non bag of words scenarios, like topic obtained from LDA or other methods that return distributions of words per each topic. I wouldn't call their oversophisticated, I would call yours oversimplified.\\n\\n6) I guess that was intended to show the strength of the conditioning. I don't believe it exaggerates the real performance of the model at all, it's just a specific choice about what to condition on to obtain a specific result, I don't see anything bad or unfair about it.\", \"title\": \"Great paper with really smart dynamic conditioning\"}",
"{\"comment\": \"The idea behind this paper is simple but interesting. However, I personally have some questions about this work and hope to get replies from the authors.\\n\\n(1) This paper does not provide any comparison with other works. For example, for a publicly available conditional language model, CTRL is only mentioned in related work but not compared as a baseline. As a researcher myself, I completely understand that CTRL is recently released and the authors do not have sufficient time to run CTRL. But I wish to see the results as supplemental material later on OpenReview. On the other hand, there are already several conditional generation model, e.g., S-VAE, CTRL-GEN. The authors should at least compare with these works.\\n\\n(2) The comparison with the vanilla GPT-2 seems to be unreasonable. The GPT-2 baseline is not provided with any condition-specific information, which surely cannot generate conditional text. For example, if providing the GPT-2 with the CTRL-style prompt (e.g., Rating 5.0/5.0, or Topic: Military), I'd like to see how GPT-2 performs with this setting.\\n\\n(3) To what end is the generated text conditional? The results in Table 4 seem to be significantly lower than prior methods (e.g., CTRL-GEN).\\n\\n(4) Will the fluency degenerate along with the increase of conditionality? How to balance that? I didn't find an answer in the article.\\n\\n(5) The BoW strategy seems to be confusing. If we simply multiply the probability of the keywords by a constant larger than one, will it also do the trick? For example, we manually multiply the probability of {best, excellent, wonderful, ...} by 2, can GPT-2 generate positive samples as well? If we measure the cosine similarity between the pretrained word embeddings of predicted words and the given keywords, can we even get better results? (e.g., it will decrease the probability of words like \\\"terrible\\\", \\\"infamous\\\" and increase the probability of \\\"terrific\\\", \\\"gorgeous\\\"). It seems like the work in this paper is an over-sophisticated version of what I just mentioned.\\n\\n(6) A question about the setting: I wonder whether the setting for sentiment-conditioned generation is too elaborate? Why choose the SST-5 dataset and only use \\\"Very Positive\\\" and \\\"Very Negative\\\" instead of SST-2 dataset with a binary classification or SST-5 with a three-fold classification (i.e., Positive, Neutral, Negative)? The setting in the paper clearly exaggerates the real performance of the proposed method.\", \"nitpick\": \"The Dist-1 metric in Table S4 is labeled as smaller better, which is in conflict with Table 4.\", \"title\": \"Neat tricks but may have some critical defects\"}"
]
} |
HkewNJStDr | Efficient High-Dimensional Data Representation Learning via Semi-Stochastic Block Coordinate Descent Methods | [
"Bingkun Wei",
"Yangyang Li",
"Fanhua Shang",
"Yuanyuan Liu",
"Hongying Liu",
"Shengmei Shen"
] | With the increase of data volume and data dimension, sparse representation learning attracts more and more attention. For high-dimensional data, randomized block coordinate descent methods perform well because they do not need to calculate the gradient along the whole dimension. Existing hard thresholding algorithms evaluate gradients followed by a hard thresholding operation to update the model parameter, which leads to slow convergence. To address this issue, we propose a novel hard thresholding algorithm, called Semi-stochastic Block Coordinate Descent Hard Thresholding Pursuit (SBCD-HTP). Moreover, we present its sparse and asynchronous parallel variants. We theoretically analyze the convergence properties of our algorithms, which show that they have a significantly lower hard thresholding complexity than existing algorithms. Our empirical evaluations on real-world datasets and face recognition tasks demonstrate the superior performance of our algorithms for sparsity-constrained optimization problems. | [
"Sparse learning",
"Hard thresholding",
"High-dimensional regression"
] | Reject | https://openreview.net/pdf?id=HkewNJStDr | https://openreview.net/forum?id=HkewNJStDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0KkqKZSgC",
"HyeEITxojr",
"S1lJepgjsr",
"HkeVs3gisB",
"BJgRgt7j5r",
"BkeUyj9z5r",
"Hyg-tqxRtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729063,
1573748044172,
1573747942976,
1573747867683,
1572710645626,
1572149982449,
1571846777130
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1658/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1658/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1658/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1658/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1658/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1658/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All the reviewers reach a consensus to reject the current submission.\\n\\nIn addition, there are two assumptions in the proof which seemed never included in Theorem conditions or verified in typical cases. \\n\\n1) Between Eq (16) and (17), the authors assumed the 'extended restricted strong convexity\\u2019 given by the un-numbered equation. \\n\\n2) In Eq. (25), the authors assume the existence of \\\\sigma making the inequality true.\\n\\nHowever those assumptions are neither explicitly stated in theorem conditions, nor verified for typical cases in applications, e.g. even the square or logistic loss. The authors need to address these assumptions explicitly rather than using them from nowhere.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for positive comments. We address your concerns as follows.\\n\\nQ1\\uff1aThe test functions are simple (i.e. logistic and linear regression), and the data sets are simple as well. I'd like to see more experiments using more complex problems with more realistic data.\", \"r1\": \"We have added more experimental results for solving the linear support vector machine (SVM) problem in the revised manuscript (please see Figure 11 in the revised manuscript). W\\n\\ne have also conductedWe make many experiments on five real-world (realistic) datasets including rcv1-test, rcv1-train, real-sim, news20, and , E2006-TFIDF. In addition, Many many experimental results (e.g., on face recognition tasks) are have been provided in Appendix (see Section F in Appendix). And all the models and datasets we use include the models and datasets have been used by almost all the state-of-the-art papers.\\n\\nQ2\\uff1aThere are two stronger assumptions in the paper, but they are never verified in the experiments. What else functions are following such functions? Any example?\", \"r2\": \"In fact, the two strong assumptions (i.e., Restricted Strong Convexity (RSC) and Restricted Strong Smoothness (RSS)) are commonly used in the literature of hard thresholding algorithms (such as Zhou et al. [2018], Yuan et al. [2014] and so on), which is are also stated in Section 3 in our main paper. Beside the linear / logistic regression functions, the multi-classes of softmax regression and linear SVM functions are also included in this field.\\n\\nPan Zhou, Xiao-Tong Yuan, Jiashi Feng. Efficient Stochastic Gradient Hard Thresholding, NeurIPS, 2018.\\n\\nXiao-Tong Yuan, Ping Li, Tong Zhang. Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization, ICML, 2014\", \"q3\": \"There are no validation experiments for Thm. 1. Are all the assumptions mild in practice? Is it possible to plot the upper bound in Eq. 4?\\n\\nR3\\uff1aFor the convergence rate of Theorem 1, we can see that the proposed algorithms achieve the best-known theoretical convergence result for the sparsity-constrained non-convex problem, and all the experimental results in our paper show that our algorithms have better performance than the state-of-the-art methods in terms of both effective pass and CPU time. To address your concern, we have added the comparison between the theoretical upper bound and the actual performance in the Appendix in the revised manuscript (please see Figure 10). All the experimental results show the correctness of our algorithms and theoretical analysis.\", \"q4\": \"I do not understand the usage of Def. 1.\", \"r4\": \"We use Definition 1 is used to define a hard thresholding complexity, as in Zhou et al [2018]. Based on this definition, we can evaluate the complexity of hard thresholding operations in theory, as in few several existing work such as ((Zhou et al., 2018)).\\n\\nPan Zhou, Xiao-Tong Yuan, Jiashi Feng, Efficient Stochastic Gradient Hard Thresholding. NeurIPS, 2018.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your positive comments. We address your concerns as follows.\", \"q1\": \"The algorithm is strongly based on previous optimization algorithms as the main difference with them is applying the hard thresholding less frequently. This sounds incremental and I think the paper could emphasize much more why this idea is a significant advancement over previous works.\", \"r1\": \"In our theoretical analysis, we can see that the hard thresholding operation used in each inner- loop, which is very time-consuming, usually O(dlog(d)). However, we reduce the hard thresholding complexity from linear dependence on kappa_shat (i.e., the restricted condition number) to an independent result (i.e., O(kappa_shatlog(\\\\frac{1}{\\\\epslion})) vs, . O(log(\\\\frac{1}{\\\\epslion})) ). This is a significant improvement for reducing the whole complexity of hard thresholding algorithms. Moreover, we have improved the required value of sparsity levels from the quadratic dependence on kappa_shat to a linear dependence result. It means that we can choose more flexible in the sparsity level to obtain an approximate solution of the sparsity-constrained non-convex problem.\", \"q2\": \"The paper could make a better job motivating the sparsity constraints, as it cites papers from 10 years ago which are not representative of state-of-the-art.\", \"r2\": \"Thanks for your suggestion. In the previous manuscript, we have cited some state-of-the-art work for solving the sparsity-constrained problem, such as (Zhou et al, 2018) and (Chen et al., 2017). To address your concern, we have added more references, e.g., the state-of-the-art papers 10 years ago in the revised manuscript, such as (Yang et al., 2010), (Zhang et al., 2011).\\n\\nZhou P, Yuan X, Feng J. Efficient stochastic gradient hard thresholding. NeurIPS, 2018.\\nChen J, Gu Q. Fast newton hard thresholding pursuit for sparsity constrained nonconvex optimization.SIGKDD, 2017.\\nYang J, Wright J, Huang T S, et al. Image super-resolution via sparse representation. IEEE transactions on image processing, 2010, 19(11): 2861-2873.\\nZhang L, Yang M, Feng X. Sparse representation or collaborative representation: Which helps face recognition ICCV, 2011: 471-478.\", \"q3\": \"The paper focuses on optimization time but not on generalization. Note that converging faster to the objective does not imply better generalization. I would have expected at least a discussion about this in the paper. Does the algorithm lead to better generalization accuracy or worse?\", \"r3\": \"In fact, we have conducted some experiments for the generalization ability in the Appendix on the face recognition task in the Appendix. All the results show that our SBCD-HT algorithm has a higher testing accuracy than the state-of-the-art methods (e.g., SVRG-HT) on the Extended Yale B dataset (please see Figure 5)same test set.\", \"q4\": \"The proof contains an assumption that it is unclear that has been explained in the paper. In the supplementary material, after eq. 25, the factor sigma is being introduced and the paper says \\u201cwe assume that there exists a constant factor sigma making this inequality true\\u201d\", \"r4\": \"Actually, in Eq. (25), we can see that the right-hand side of Eq. (25) is less than F(w^r)-F(w^m), since the L2-norm is always positive. Thus, the factor sigma could be used to describe the tight bound of this equation. 
This is not a strong assumption in the analysis and it can be satisfied by using a small sigma.\\n\\nQ5\\uff1aDefinition 1 does not really define hard thresholding.\", \"r5\": \"In fact, Definition 1 is to define not a hard thresholding operation but the hard thresholding complexity, not a hard thresholding operation. Definition 1 is the same as the definition of the hard thresholding complexity as in (Zhou et al., 2018).\\n\\nPan Zhou, Xiao-Tong Yuan, Jiashi Feng. Efficient Stochastic Gradient Hard Thresholding, NeurIPS, 2018.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your positive comments. We address your concerns as follows.\", \"q1\": \"In Corollary 1, how is the gradient oracle complexity defined or computed? And more specifically, how does one compare fairly the cost of doing a gradient update in Algorithm 1 on the *bigger set* S = Gtilde U G_jt vs. just G_jt for the Chen & Gu ASBCD algorithm? Is this accounted in the computation?\", \"r1\": \"In the computation of gradient oracle complexity, we omit the smaller set Gtilde since we cannot guarantee there is not overlap between Gtilde and G_jt and taking the dimension d into consideration, and thus the influence of Gtilde maybe actually small. We have shown the worst condition of this gradient complexity in the revised manuscript, and also made some discussions about this issue in the rRemark 1 in the revised manuscript.\", \"q2\": \"In Figure 1, which \\\"minimum\\\" is referred to and how is it found? I suspect it is not F(w*) (as it could be higher than F(w_t)), i.e. it is *not* the minimum of (1) with s*. One natural guess is that it might be min_w F(w) s.t. ||w||_0 <= s, though I do not see any guarantee in the main paper that running the algorithm would make F(w_t) converge to such a value (i.e. all we know from Thm 1 is that F(w_t) might be within O(||nabla_Itilde F(w*)||^2) of F(w*) ultimately. Please explain and clarify!\", \"r2\": \"As in (Zhou et al., 2018) , we run all the algorithms (e.g., FG-HT, SG-HT, SVRG-HT, ASBCDHT, and SBCD-HTP) sufficiently long until \\\\|w^t-w^(t-1)\\\\|/\\\\|w^t\\\\|<10e-6, since there is no ground truth on the real-world datasets. Then we regard the minimum of all the results to the optimal F(w^*). To address your concern, we have clarified this issue at the beginning of Section 6 in the revised manuscript.\\n\\nPan Zhou, Xiao-Tong Yuan, Jiashi Feng. Efficient Stochastic Gradient Hard Thresholding, NeurIPS, 2018.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"===== Update after author response\\n\\nThanks for the clarifications and edits in the paper.\\n\\nI recommend acceptance of the paper.\", \"other_comments\": \"Definition 1 in the updated version is still too vague (\\\"difference of what?\\\" -- function values? distance in norm between iterates?) -- this should be clarified.\\n\\n========\\n\\nThis paper considers the problem of sparsity-constrained ERM and asks whether one can design a variant of the stochastic hard thresholding approaches where the hard-thresholding complexity does not depend on a (sparsity dependent) condition number, unlike all previous approaches (Table 1). It proposes a method which combines SVRG-type variance reduction, with block-coordinate updates, leaving the hard thresholding operation outside the inner loop, to accomplish this goal. It provides a convergence analysis which significantly improves the previous best rates (by having both the sparsity level shat which is significantly lower (kappa_shat vs. kappa_stilde^2) as well as a condition number independent hard thresholding complexity (Table 1). An asynchronous and sparse (in the features) variant is also proposed, with even better complexity. Some standard experiments on sparse linear regression and sparse logistic regression is presented showing an improvement in both number of iterations as well as CPU time.\\n\\nI think the clarity of the paper should be quite improved (see detailed comment), hence why I think the paper is borderline, but I am leaning towards an accept given the significant theoretical improvements over the past literature (and positive empirical results), even though the algorithmic suggestion is somewhat incremental.\\n\\nThe proposed Algorithm 1 seems very close to the one of Chen & Gu (2016), the paper should be more clear about this. There seems to be mainly two changes: a) extending the support projection of the gradient to the union of the sampled block with the one of the support of the reference parameter wtilde (vs. just the sampled block in Chen & Gu (2016) and b) moving the hard-thresholding iteration outside of the SVRG-inner loop. These small tweaks to the algorithm yield a significant theoretical improvement, though.\\n\\n== Detailed comments ==\", \"clarity\": \"the number of long abbreviations with only one letter change make it hard to follow the different algorithms; perhaps a better more differentiating naming scheme could be used. Moreover, I think more background on the sparse optimization setup should be provided in the introduction or at least in the preliminaries, as I do not think the wider ICLR community is very familiar with it (in particular, no cited paper was at ICLR). For example, define early the separation in optimization error and statistical error; and point out that F(w_t) might even be lower than F(w*) as the sparsity threshold s might be much higher than s*. 
This will make Table 1 more concrete and less abstract for people who not are not yet experts on this particular analysis framework.\\n\\n- Table 1: I would suggest to put the rate for S2BCD-HTP instead on the last row and mention instead that the rate for ASBCD is similar under conditions on the delay; as it is interesting to already have a better gradient complexity for S2BCD vs. SBCD.\\n\\n** Questions: **\\n1) In Corollary 1, how is the gradient oracle complexity defined or computed? And more specifically, how does one compare fairly the cost of doing a gradient update in Algorithm 1 on the *bigger set* S = Gtilde U G_jt vs. just G_jt for the Chen & Gu ASBCD algorithm? Is this accounted in the computation?\\n\\n2) In Figure 1, which \\\"minimum\\\" is referred to and how is it found? I suspect it is not F(w*) (as it could be higher than F(w_t)), i.e. it is *not* the minimum of (1) with s*. One natural guess is that it might be min_w F(w) s.t. ||w||_0 <= s, though I do not see any guarantee in the main paper that running the algorithm would make F(w_t) converge to such a value (i.e. all we know from Thm 1 is that F(w_t) might be within O(||nabla_Itilde F(w*)||^2) of F(w*) ultimately. Please explain and clarify!\\n\\n== Potential improvement ==\\n\\nThe current result in Theorem 1, which is building on a similar proof technique as the original SVRG paper, has the annoying property of requiring the knowledge of the condition number in setting the size of the inner loop iteration. I suspect that this is an artifact of using an outdated version of the SVRG algorithm. This has been solved since then by considering a \\\"loopless\\\" version of SVRG which implicitly defines the size of the inner loop in a random manner using a quantity *which does not depend on the condition number*. This was proposed first by Hofmann et al. [2015], and then re-used by Lei & Jordan [2016] and more recently by Kovalev et al. [2019] e.g. Note that Leblond et al. (2017) that you cited profusely also used this variant of SVRG. I suspect that this technique could be re-used in your case to obtain a similar result with a loopless variant (which also gives cleaner complexity results). (Though I only skimmed through your proof.)\", \"caveat\": [\"the sensibility of the theory in the main paper seems reasonable, but I did not check the proofs in the appendix.\", \"= References:\", \"Hofmann et al. [2015]: Variance Reduced Stochastic Gradient Descent with Neighbors, Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien and Brian McWilliams, NeurIPS 2015\", \"Lei & Jordan [2016]: Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method,\\u00a0Lihua Lei\\u00a0and Michael I. Jordan, AISTATS 2016\", \"Kovalev et al. [2019]: Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop, Dmitry Kovalev, Samuel Horvath and Peter Richtarik, arXiv 2019\"]}",
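Since the review's suggested improvement hinges on the loopless SVRG trick, a minimal sketch of that variant (per Hofmann et al., 2015, and Kovalev et al., 2019) follows; the gradient oracles are hypothetical placeholders, and no hard thresholding is included here.

```python
import numpy as np

def loopless_svrg(grad_i, full_grad, w0, n, lr=0.1, p=None, steps=10_000,
                  rng=np.random.default_rng(0)):
    """Loopless SVRG: the variance-reduced gradient is
    g = grad_i(i, w) - grad_i(i, w_ref) + full_grad(w_ref), and the
    reference point w_ref is refreshed with probability p each step,
    so the 'inner loop' length is implicit and condition-number free."""
    p = 1.0 / n if p is None else p
    w, w_ref = w0.copy(), w0.copy()
    mu = full_grad(w_ref)  # cached full gradient at the reference point
    for _ in range(steps):
        i = rng.integers(n)
        g = grad_i(i, w) - grad_i(i, w_ref) + mu
        w = w - lr * g
        if rng.random() < p:  # random, schedule-free reference refresh
            w_ref, mu = w.copy(), full_grad(w)
    return w
```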
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a new optimization algorithm for convex functions with sparsity constraints, based on hard thresholding to enforce sparsity. It is argued that applying hard thresholding in every step is the computational bottleneck of previous works based on hard thresholding, and the paper proposes a variant that alleviates previous works\\u2019 convergence problems. This is achieved by having an inner and outer loop in which the hard thresholding is applied only in the outer loop, and in the inner loop block coordinate descent is used (ie. applying updates to only a subset of the solution\\u2019s variables). The paper shows that the convergence bounds of the algorithms compares favourably to previous hard thresholding algorithms convergence bounds, and empirical results show the effectiveness of the algorithm compared to previous ones.\\n\\nIt is always exciting to see improvements in optimization algorithms as they can lead to improvements in many different problems. Yet, I have the following concerns:\\n-The algorithm is strongly based on previous optimization algorithms as the main difference with them is applying the hard thresholding less frequently. This sounds incremental and I think the paper could emphasize much more why this idea is a significant advancement over previous works.\\n-The paper could make a better job motivating the sparsity constraints, as it cites papers from 10 years ago which are not representative of state-of-the-art.\\n-The paper focuses on optimization time but not on generalization. Note that converging faster to the objective does not imply better generalization. I would have expected at least a discussion about this in the paper. Does the algorithm lead to better generalization accuracy or worse?\\n-The proof contains an assumption that it is unclear that has been explained in the paper. In the supplementary material, after eq. 25, the factor sigma is being introduced and the paper says \\u201cwe assume that there exists a constant factor sigma making this inequality true\\u201d. \\n-Definition 1 does not really define hard thresholding.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a semi-stochastic block coordinate descent hard thresholding pursuit (SBCD-HTP) algorithm for solving l0 sparsity-constrained minimization with decomposable convex objective. The key idea is to introduce BCD into SVRG type of algorithm to speed up the convergence as well as the hard thresholding operation. The paper is well written and easy to follow. The theoretical analysis is strong, I think. However, my main concern of the paper is the experimental part.\\n\\n1. The test functions are simple (i.e. logistic and linear regression), and the data sets are simple as well. I'd like to see more experiments using more complex problems with more realistic data. \\n\\n2. There are two stronger assumptions in the paper, but they are never verified in the experiments. What else functions are following such functions? Any example?\\n\\n3. There are no validation experiments for Thm. 1. Are all the assumptions mild in practice? Is it possible to plot the upper bound in Eq. 4?\\n\\n4. I do not understand the usage of Def. 1.\"}"
]
} |
H1gDNyrKDS | Understanding and Robustifying Differentiable Architecture Search | [
"Arber Zela",
"Thomas Elsken",
"Tonmoy Saikia",
"Yassine Marrakchi",
"Thomas Brox",
"Frank Hutter"
] | Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem. However, DARTS does not work robustly for new problems: we identify a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance. We study this failure mode and show that, while DARTS successfully minimizes validation loss, the found solutions generalize poorly when they coincide with high validation loss curvature in the architecture space. We show that by adding one of various types of regularization we can robustify DARTS to find solutions with less curvature and better generalization properties. Based on these observations, we propose several simple variations of DARTS that perform substantially more robustly in practice. Our observations are robust across five search spaces on three image classification tasks and also hold for the very different domains of disparity estimation (a dense regression task) and language modelling. | [
"Neural Architecture Search",
"AutoML",
"AutoDL",
"Deep Learning",
"Computer Vision"
] | Accept (Talk) | https://openreview.net/pdf?id=H1gDNyrKDS | https://openreview.net/forum?id=H1gDNyrKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9abhtGXJdR",
"SyeYBOShoH",
"B1eoXuH3iB",
"HJgy-_ShjB",
"S1gi6wS2oS",
"rJg2Lwr3ir",
"HylLUgPU5S",
"HJgGSAKk9H",
"SylDRJcpYr",
"HkeEK8eZdS",
"HkgLJuJRvS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798729031,
1573832769325,
1573832738941,
1573832694870,
1573832643259,
1573832532419,
1572397134364,
1571950137779,
1571819470835,
1569945211994,
1569744862160
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1657/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1657/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1657/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1657/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1657/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1657/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1657/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1657/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1657/Authors"
],
[
"~James_Smith10"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper studies the properties of Differentiable Architecture Search, and in particular when it fails, and then proposes modifications that improve its performance for several tasks. The reviews were all very supportive with three Accept opinions, and authors have addressed their comments and suggestions. Given the unanimous reviews, this appears to be a clear Accept.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author response to official blind review #4\", \"comment\": \"Many thanks for your very positive feedback and for the acceptance score! We fixed the two typos you pointed out.\"}",
"{\"title\": \"Author response to official blind review #1\", \"comment\": \"Many thanks for your very positive feedback and acceptance score!\"}",
"{\"title\": \"Author response 3/3 to official blind review #3\", \"comment\": \"Q8: Figure 6, C10 S2\", \"a8\": \"We believe this behaviour comes from the inherent noise present in the eigenvalues of the Hessian when computing it on a validation mini-batch. Due to this noise, the stopping criterion would work less well in rare cases. E.g., when the architectural parameters do not overfit on the validation data, but the noise in the Hessian nevertheless triggers the early stopping mechanism too early, and therefore the resulting architecture is worse than the one DARTS would have found if it did not early stop. This case is more likely when running DARTS with a higher regularization value than its default values. In general, as shown in Figure 6 and Figure 7, the gain when early stopping with regularization values different from the default, is smaller (or negative as in the example you mentioned: C10 S2 with drop prob=0.6) compared the gain when early stopping DARTS with its default regularization factor.\", \"q9\": \"ScheduledDropPath in section 5.1\", \"a9\": \"Thanks, we agree. We nevertheless listed it in the augmentation section because whenever we enable ScheduledDropPath during search we also increase the Cutout probability linearly from 0 to 1 throughout the search alongside the ScheduledDropPath. We now clarified this in the paper.\", \"q10\": \"S1 S2... in Table 3\", \"a10\": \"S{1, 2, 3, 4} in Table 3 indeed refers to the different search spaces as described in Section 3.\", \"q11\": \"one-shot v.s. weight sharing model\", \"a11\": \"We agree with the reviewer that these two nomenclatures may have conceptually different meanings. So far, we have been using \\u201cone-shot\\u201d and \\u201cweight-sharing\\u201d interchangeably, but taking into account that \\u201cone-shot\\u201d can be seen as subset of \\u201cweight-sharing\\u201d methods, we changed this in our paper and now refer to the search model as weight-sharing model. But we are open to suggestions concerning nomenclature.\\n\\n- Typos \\nThank you for reading our paper in detail. We fixed this typo.\\n\\n-- References --\\n[1] Hanxiao Liu, Karen Simonyan, Yiming Yang, DARTS: Differentiable Architecture Search, ICLR 2019\\n[2] Liam Li, Ameet Talwalkar. Random Search and Reproducibility for Neural Architecture Search, UAI 2019\"}",
"{\"title\": \"Author response 2/3 to official blind review #3\", \"comment\": \"Q4: \\u201cR-DARTS failed to out-perform DARTS in the original space on CIFAR-10\\nThis is confusing, will this suggest, if tuning well, DARTS will surpass R-DARTS(L2) in other cases as well? Since this is the only setting that DARTS is built upon.\\u201d\", \"a4\": \"We agree with the reviewer that it indeed seems that the default DARTS hyperparameters are well-tuned for CIFAR-10, the dataset used during the development of DARTS. As Table 3 indicates, only changing the dataset leads to sub-optimal behaviour of DARTS. Of course one could tune DARTS\\u2019 hyperparameters on each new benchmark, but much care would have to be taken to avoid this getting too expensive in a practical setting. Furthermore, tuning weight-sharing NAS algorithms is not that straightforward, because (as we show) the search model validation performance does not correlate with the generalization of the architectures that DARTS finds. Therefore, one needs to optimize the error of stand-alone architectures (retrained from scratch) evaluated on a separate validation set (different from the subset used for updating the architectural parameters during the search), which is one of the most computationally expensive parts. R-DARTS solves this problem by only altering one hyperparameter internally and still has the same computational costs as running 1 DARTS run with the protocol 1 above as suggested in the DARTS paper..\", \"q5\": \"Using test data during search?\", \"a5\": \"We *never* use the test data when conducting any of the proposed heuristics. The test data is only used to indicate the generalization of the found architectures retrained from scratch in the very end, *after the search has finished*. It is in fact precisely the advantage of our early stopping mechanism based on curvature information that we do *not* need a separate dataset for the early stopping.\", \"q6\": \"L2 stabilizes max eigenvalue [Could the author try larger coefficients to determine when this trend will stop?]\", \"a6\": \"Thank you for raising this question. We would expect the generalization error of the found architectures to drop when increasing above a certain value. This is confirmed by the plots we added to Appendix H. For those we used two of our benchmarks, S1-C10 and S3-SVHN and conducted the DARTS search 3 independent times for each L2 value and for each DropPath max. probability value. The results show the mean test error when retraining from scratch using the same settings as in Fig. 6 and Fig. 7 of these 3 architectures.\", \"q7\": \"Question about section 4.2\", \"a7\": \"The x axis in Figure 5 shows the dominant eigenvalue after the search has finished, while the y axis shows the absolute value of the difference: \\u201csearch model validation accuracy\\u201d - \\u201cbinarized model (according to the argmax of \\\\alpha) validation accuracy, evaluated using the search model weights, not retrained from scratch\\u201d. Note that the settings for each point in the plot are different: the plot includes all our runs across all our search spaces (S1-S4) and datasets (C10, C100, SVHN). The points in Figure 4 on the other hand show the dominant eigenvalue on the x axis and the test error of the found architectures by DARTS when retrained from scratch. 
In this case we only use the C10 S1 settings (in total 24 points in the plot: 3 seeds x (4 runs with dp \\\\in {0, 0.2, 0.4, 0.6} + 4 runs with L_2 \\\\in {0.0009, 0.0027, 0.0081, 0.0243}).\\n\\nNote that what we consider a \\u201clow\\u201d curvature depends on the dataset and search space, since, as, e.g., Figure 12 shows, the scale of the dominant eigenvalue differs across these: e.g., for S2-SVHN the blue line goes up to a value of 0.275, while for S2-C100 it goes up to 1.3. This means that a \\u2018high\\u2019 or \\u2018low\\u2019 eigenvalue actually is relative to the benchmark we evaluate our algorithm.\"}",
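For concreteness, the dominant eigenvalue discussed throughout this thread can be estimated without ever forming the Hessian, via power iteration on Hessian-vector products; a minimal PyTorch sketch follows (the paper's exact batching and iteration counts are not restated here).

```python
import torch

def dominant_eigenvalue(valid_loss, alphas, iters=20):
    """Estimate lambda_max of the Hessian of the validation loss w.r.t.
    the architecture parameters, via power iteration on Hessian-vector
    products (no explicit Hessian is ever materialized)."""
    grads = torch.autograd.grad(valid_loss, alphas, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)

    def hvp(vec):
        hv = torch.autograd.grad(flat_grad @ vec, alphas, retain_graph=True)
        return torch.cat([h.reshape(-1) for h in hv])

    for _ in range(iters):
        hv = hvp(v)
        v = hv / (hv.norm() + 1e-12)
    return (v @ hvp(v)).item()  # Rayleigh quotient estimate of lambda_max
```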
"{\"title\": \"Author response 1/3 to official blind review #3\", \"comment\": \"Many thanks for your very detailed and very useful review, your positive feedback, and for the acceptance score. We have updated the paper and now reply to all your questions in detail.\", \"q1\": \"Problem of DARTS as a motivation [Using the largest eigenvalue does not seem enough as an indicator of the local shape?]\", \"a1\": \"We agree with the reviewer, the largest eigenvalue by itself is not enough. We did indeed also compute the full Hessian eigenspectrum on a randomly sampled validation mini-batch (please see Figure 14 and Figure 15 in Appendix D.2), and as one can see, not only the dominant eigenvalue is larger when comparing a low regularization factor vs. a high regularization factor, but also the other eigenvalues throughout the spectrum. This indicates that the curvature is higher not only towards one principal axis, but towards all the principal axes. The distribution of the eigenvalues in the eigenspectrum show clearly that for lower regularization factors the tail of the distribution becomes larger. We also thank the reviewer for pointing us to the very interesting paper on loss-landscape visualization; we are currently working on integrating this into our code.\", \"q2\": \"Questions about Figure 3 experiments\", \"a2\": \"All test errors reported in the paper are indeed computed on the full test set, using the final stand-alone model (the single architecture we find in the end), obtained via applying the argmax to the optimized architectural weights; for computing test errors, we always train these models from scratch. The word \\u201cfinal architecture\\u201d in Section 4.1 refers to this final stand-alone model -- thanks, we updated the paper to make this clearer. The super-net parameters are only used for the results in Figure 5 in order to compute the correlation between the accuracy drop after binarization (we call this discretization in our paper) and the dominant eigenvalues of the Hessian.\", \"q3\": \"More independent runs of experiments.\", \"a3\": \"We actually do 4 search runs for each R-DARTS run in Table 4. We use the following procedure (Protocol 1) introduced in the DARTS paper [1] to select the architecture that will be trained from scratch using the evaluation settings:\\nDo 4 independent DARTS (R-DARTS) search runs with the same (4 different) regularization factor(s).\\nRetrain from scratch the 4 found architectures for 100 epochs and select the best according to the validation accuracy.\\nRetrain from scratch the selected architecture (with more initial filters, stacked cells, etc.) for 600 epochs and compute the test error on the full test data.\\nFor the image classification datasets in Table 4 we repeated this protocol 5 times and report the mean +/- std test error of the 5 architectures returned from step 3. For the results in Table 3 and PTB in Table 4 we only repeated this protocol once due to the large computational costs of doing this for a lot of cases, however each of these entries is already based on 4 independent DARTS runs, and the best selected model is always retrained from scratch.\\n\\nThe results in Figure 3 and Table 1 report the results when using the following simpler procedure (Protocol 2):\\nDo 3 independent DARTS search runs with the same regularization factor.\\nRetrain from scratch the 3 found architectures using the full evaluation pipeline (more initial filters, more stacked cells, etc.) 
and compute the test error on the full test data.\\nReport the mean +/- std test error of the 3 architectures.\\n\\nFor completeness, we added Table 6 (we had these results already before the rebuttal phase) in the Appendix, which reports the results when running Protocol 2 using Random Search with Weight Sharing [2], DARTS [1], DARTS-ES and DARTS-ADA for all the settings in Table 3. Note that the 3 architectures evaluated in Table 6 are a subset of the 4 architectures used in Protocol 1. We also added in Appendix G (Figures 27 and 28) the cells found when running the experiments in Figure 1 with 2 other random seeds; the qualitative results indeed remain the same.\"}",
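A compact sketch of Protocol 1 as restated in A3; `search` and `retrain` are placeholders for the actual training code, not functions from the paper's repository.

```python
def protocol_1(regularization_factors, search, retrain):
    """Protocol 1 (DARTS paper, as restated in A3): 4 independent
    searches, model selection by 100-epoch validation accuracy,
    then a full 600-epoch retraining of the selected architecture."""
    archs = [search(reg) for reg in regularization_factors]      # step 1
    val_accs = [retrain(a, epochs=100).val_acc for a in archs]   # step 2
    best = archs[val_accs.index(max(val_accs))]
    return retrain(best, epochs=600, full_eval=True).test_error  # step 3
```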
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies the causes of why DARTS often results in models that do not generalize to the test set. The paper finds that DARTS models do not generalize due to sharp minima, partially caused by the discretizing step in DARTS. The paper presents many experiments and studies on multiple search spaces, showing that this problem is general. To address this problem, the paper proposes several different ways to address this, e.g., an early stopping criteria and regularization methods.\\n\\nOverall, the paper is well written, thorough experiments (various tasks and search spaces) show the benefit of the approach. The final experiments using the full DARTS space also show an improvement over standard DARTS.\\n\\nThe final method is fairly simple, running the search with different regularization parameters and keeping the best model, which suggests it could be widely used for DARTS-based approaches. \\n\\n Two (very) minor comments:\\n- There's a missing space before \\\"Similarly\\\" in section 2.1\\n- Extra \\\")\\\" in last paragraph of section 2.2 (after the argmin eq).\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper seeks to understand why Differential Architecture Search (DAS) might fail to find neural net architectures that perform well. The authors perform a series of experiments using different kinds of search spaces and datasets, and concluded that a major culprit is the discretization/pruning step at the end of DARTS. To avoid this, the authors propose early stopping based on measuring the eigenvalue of the Hessian of the validation loss. The results look promising (though as someone who is not familiar with the datasets, I don't have a sense of the significance of improvements.)\\n\\nIn general, this is a strong paper. I enjoyed reading it. It describes the problem clearly and performs a set of convincing experiments to support the claims. I especially like how different constrained search spaces are investigated, as this makes the results easier to interpret. I think the analysis in this paper will benefit researchers who work on similar problems.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"----- Updated after rebuttal period ---\\n\\nThe author's detailed response effectively addressed my concerns. I am moving my score to Accept. This paper proposes an interesting systematic study of differentiable approach in NAS.\\n\\n------ Original Review ----\\n\\nSummary \\n\\nThis paper presents a systematic evaluation on top of differentiable architecture search (DARTS) algorithm and shows it usually searched an architecture with all skip-connection. It empirically reveals that the largest eigenvalue of the Hessian matrix (\\\\lambda) of loss w.r.t. architecture parameters has a strong correlation with the generalization ability (via loss of test dataset), and shows this \\\\lambda will first decrease but then drastically increase after a certain epoch number on 4 different search spaces. It then proposes an early-stop scheme (DARTS-ES) to stop the search before this phenomenon occurs. In addition, it proposes to use data-augmentation, path-dropping and tuning L2 regularization during the search, namely Robust-DARTS(R-DARTS), and yield constantly better results over original DARTS on 3 datasets. \\n\\n\\nOverall, the observation that the largest eigenvalue of the Hessian matrix is novel and intriguing, and the experiments are extensive and meaningful. The idea to use more search spaces for comparison is fair and performance increase demonstrates the proposed R-DARTS and DARTS-ES are effective. Although I still have some questions regarding the detail settings, I think this paper provides a novel angle to understand the search phase of DARTS, and proposed simple but effective regularization can be beneficial to the research community using DARTS.\", \"main_concerns\": \"- Problem of DARTS as a motivation \\nThe claims of local smoothness/sharpness and generalization are related to network generalization is quite intriguing, however, only using largest eigenvalue of Hessian matrix as an indicator of this local shape does not seem to be enough. A recent paper on loss-landscape visualization [1] provides means to examine this hypothesis directly, and could the author try to provide additional visualization to support their claim? Otherwise, the paper's claim does not generalize to the local shape of the loss function, and should stays with the largest eigenvalue. It is totally okay in my perspective, but just indicates some revision to the main text and analysis of Section 4.2.\\n\\n- Questions about Figure 3 experiments\\nHow is test error computed? Is it on a batch of test-split, or the entire one? Also, which architecture is used to compute this test error? Paper mentioned, in Section 4.1, the word \\\"final architecture\\\", but does this refer to the super-net (the one-shot model in paper's definition), or the stand-alone model obtained via binarized architecture alphas? If latter, is this generalization error obtained from training from scratch? Or simply using the super-net parameters during the search? 
Since the conclusion of this plot serves as the foundation of designing R-DARTS and DARTS-ES, if the experiments are only conducted over a small set of images or the binarized model with super-net parameters, it undercuts the credibility of the conclusion, largest architectural eigenvalue, and generalization ability.\\n\\n\\n- More independent runs of experiments.\\nIn Figure 3, validation of original DARTS, and Table 1, DARTS vs DARTS-ES, paper runs the experiment for 3 times and take the average, but for the proposed R-DARTS, it is not. Is there a reason why not scaling the experiments? I suggest the author provide results over 5 runs, like the one in Table 4 for PTB and show if the R-DARTS truly surpasses DARTS constantly. This question also applies to Figure 1, when paper claims the original DARTS found poor cell type, could this be repetitive over multiple runs?\\n\\nI guess for the experiments in Table 3, it is already done since paper mentioned the reported results are the best model of searching 4 times. \\n\\n- R-DARTS failed to out-perform DARTS in the original space on CIFAR-10\\nThis is confusing, will this suggest, if tuning well, DARTS will surpass R-DARTS(L2) in other cases as well? Since this is the only setting that DARTS is built upon. \\n\\n\\nMinor comments and questions\\n\\n- Using test data during search\\nAfter showing the strong correlation between the largest eigenvalue of Hessian and the network generalization error, the early-stop is natural, however, does this mean the model selection is using the test data? Or the actual test-data is never seen during the search phase of Section 4.3.\\n\\n- L2 stabilizes max eigenvalue\\nPaper uses L2 coefficient up to 0.0243, showing constant improvement of test error while validation error drops in CIFAR-10 of Figure 11. Could the author try larger coefficients to determine when this trend will stop? \\n\\n- Question about section 4.2\\nPerformance drop due to the binarized operation (pruning step) in DARTS analysis is very interesting, I am curious how many architectures does the paper evaluate in Figure 5, when the dominant eigenvalue is smaller than 0.25? Since the conclusion is \\\"low curvature never led to large performance drops\\\" if the number of points is too few, it is not that convincing, especially from the plot, we see at eigenvalue = 0.5, there exists 2 architecture with >20% drop. In addition, what does each point in Figure 5 refer to? The best model (and the binarized one according to the argmax of \\\\alpha) of one independent DARTS run or some binarized models sampled from a distribution on top of the same DARTS run (meaning only one super-net)? Is this experiment follows the setting in Figure 4? \\n\\n- Figure 6, C10 S2, DARTS-ES is worse than DARTS when Drop probability = 0.6, whereas all other cases, DARTS-ES outperforms DARTS, why does this happen? Could the author comment on it?\\n\\n- ScheduledDropPath in section 5.1\\nDoes Drop-path belongs to data-augmentation techniques? It is more like a regularization in my perspective and should be grouped with 5.2.\\n\\n- S1 S2... in Table 3\\nDoes this refer to the search space? Or different random seed (mentioned )\\n\\n- one-shot v.s. weight sharing model\\nOne-shot in NAS domain is firstly introduced by Bender et al., while Pham et al. use parameter sharing. The reason to use one-shot is that all the sub-paths will have a fair chance to be trained. \\n\\n- Typos\\n1. 
In section 2.1, line 4 \\\"better.Similarly\\\" should have space.\\n\\n--- Reference ---\\n[1] Li et al., Visualizing the Loss Landscape of Neural Nets, arxiv'17.\"}",
"{\"comment\": \"Hi James,\\nthank you for your comment. Indeed, the independent work you mention also proposes an early stopping for DARTS to avoid instabilities. Their observation seems - to some extent - to be consistent with ours. E.g., they also observe an increased prevalence of skip connections over the course of search, resulting in poorly performing architectures and motivating an early stopping criterion.\\n\\nHowever, there are some fundamental differences in both works: \\n1) The DARTS+ authors hypothesize that the problem in DARTS is solving the bi-level optimization problem. In contrast, we show that the bi-level optimization problem is actually solved fine and that the problem instead lies in the discretization and generalization. E.g., in Figure 3 in our paper, you can see (left plot), that the validation error (= objective to be minimized in the bi-level optimization problem) steadily decreases, i.e., the alternating optimization of DARTS works fine in these cases. Rather, only the test error (middle plot) increases. Therefore, we argue that the problem lies in the generalization capability. Unfortunately, it is not clear if the DARTS+ authors plot the train-, validation-, or test error in their Figure 2 (showing the collapse), but based on our finding we\\u2019d guess it is test error, which would then be consistent with our experiments. \\n2) The DARTS+ authors hypothesize that changing hyperparameters (such as regularization parameters) will likely not solve the problem. However, in our experiments, with the fixed search time of 50 epochs (which is the vanilla DARTS default), we observed that increasing regularization strength *did* help to prevent poor generalization performance (on average). We refer to Figures 6 and 7 in our paper, which show that increasing drop path probabilities and L2 regularization, respectively, usually helps to find better architectures -- also without early stopping, i.e., with vanilla DARTS (solid lines in all plots). See also Table 2 as well as Figures 10 and 11 in the appendix. \\n3) Early stopping is only a minor contribution in our work that resulted as a byproduct from one of our core scientific contributions: analysing the eigenvalues of the Hessian of the validation loss w.r.t. the architectural parameters and relating them to generalization performance.\\n\\nHaving said this, the proposed stopping criterion in DARTS+ based on the ranking of architectural parameters seems interesting; we will look into this in the future and investigate whether, e.g., the ranking also correlates with large eigenvalues.\\n\\nWe did not run our models on ImageNet yet, as the focus of our work is on gaining insights why DARTS fails and how we can prevent these failures. Therefore, we rather went the opposite direction: to *small* cases that ought to be easy for NAS methods and which are computationally relatively cheap, so that we can do multiple repeats and really gain fundamental scientific insights about the performance of different NAS methods, rather than engineering gains on ImageNet. In that vein, we introduce (and make available code for) 12 new, relatively cheap, benchmarks where DARTS fails that ought to help provide a solid foundation for empirical work in studying this problem henceforth. 
\\nIn contrast, the DARTS+ paper is definitely very strong on the engineering side, achieving new state-of-the-art performance for CIFAR by using a final training pipeline that is modified from the original DARTS paper (e.g., they train for 2000 epochs rather than 600 on CIFAR, use AutoAugment, and emphasize their use of various further tricks). We commend the DARTS+ authors for their great work on improving the final training pipeline -- it is amazing to see how much they can push the SOTA by doing so. On the other hand, these changes are orthogonal to the architecture search and would likely similarly improve the performance of vanilla DARTS. This makes it hard to assess how much the DARTS+ search actually improves over the DARTS search and how much improvement is due to the different final training pipeline. We refer to the recent NAS best practice paper ( https://arxiv.org/abs/1909.02453 ), which argues that for a fair comparison between NAS methods, the final training pipeline (optimizer, hyperparameters, data augmentation, number of epochs, ...) should be identical. For this reason, to ensure a fair comparison, we did not change this final training pipeline at all in our work (but only changed the architecture search process). If the DARTS+ authors release the code for their final training pipeline for CIFAR and ImageNet we could also run DARTS and our extensions of it on that.\\n\\nThank you for your interest, and we\\u2019ll be happy to answer further questions.\\nThe authors\", \"title\": \"Comments on other work with early stopping\"}",
"{\"comment\": \"I remember that there is another paper doing the similar thing recently (DARTS+: Improved Differentiable Architecture Search with Early Stopping, https://arxiv.org/abs/1909.06035).\\n\\nThey also point out the instability of DARTS and try to use the early stopping trick to improve it. However, they early stop DARTS when the ranking of conv operations becomes stable which is quite different from yours. I wonder if there is any underlying connection between the two works, and which one is more fundamental.\\n\\nThey claim that they achieve the state-of-the-art results of classification on several datasets, including CIFAR-10, CIFAR-100 and ImageNet. Do you have evaluated your algorithms on ImageNet? Thanks!\", \"title\": \"Another work which early stops DARTS to stabilize the performance\"}"
]
} |
B1g8VkHFPH | Rethinking the Hyperparameters for Fine-tuning | [
"Hao Li",
"Pratik Chaudhari",
"Hao Yang",
"Michael Lam",
"Avinash Ravichandran",
"Rahul Bhotika",
"Stefano Soatto"
] | Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyperparameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. (1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that the value of momentum also affects fine-tuning performance and connect it with previous theoretical findings. (2) Optimal hyperparameters for fine-tuning, in particular, the effective learning rate, are not only dataset dependent but also sensitive to the similarity between the source domain and target domain. This is in contrast to hyperparameters for training from scratch. (3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for "dissimilar" datasets. Our findings challenge common practices of fine-tuning and encourages deep learning practitioners to rethink the hyperparameters for fine-tuning. | [
"fine-tuning",
"hyperparameter search",
"transfer learning"
] | Accept (Poster) | https://openreview.net/pdf?id=B1g8VkHFPH | https://openreview.net/forum?id=B1g8VkHFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JGqNbI89wN",
"Hkxx5ys3jH",
"Bkl_x05niH",
"ByeXaaqhoS",
"B1e2K9q3ir",
"BJgIS_c3ir",
"BklYgHvGsH",
"BkxO9K7JcH",
"B1xkAOgitB",
"SJev0nZKYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798729000,
1573855112481,
1573854703695,
1573854651322,
1573853828506,
1573853246411,
1573184752574,
1571924367678,
1571649735500,
1571523790898
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1656/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1656/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1656/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1656/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1656/Authors"
],
[
"~Simon_Kornblith1"
],
[
"ICLR.cc/2020/Conference/Paper1656/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1656/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1656/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a guide for setting hyperparameters when fine-tuning from one domain to another. This is an important problem as many practical deep learning applications repurpose an existing model to a new setting through fine-tuning. All reviewers were positive saying that this work provides new experimental insights, especially related to setting momentum parameters. Though other works may have previously discussed the effect of momentum during fine-tuning, this work presented new experiments which contributes to the overall understanding. Reviewer 3 had some concern about the generalization of the findings to other backbone architectures, but this concern was resolved during the discussion phase. The authors have provided detailed clarifications during the rebuttal and we encourage them to incorporate any remaining discussion or any new clarifications into the final draft.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"our response\", \"comment\": \"Thanks for the comment, Simon! Yes, as we stated in section 3.3 \\u201cit is effective learning rate that matters for the best performance. It explains why the common practice of changing only learning rate generally works, though changing momentum may result in the same effect\\u201d. We agree that simply emphasize the importance of momentum or any single hyperparameter without context may be miss leading. As we believe learning rate is a \\u201ccritical\\u201d hyperparameter, the aim of the first part is to show momentum is also an equivalent \\u201ccritical\\u201d hyperparameter if other hyperparameter is fixed (e.g., lr=0.01) not well tuned, rather than common belief that 0.9 is the best value or no effect when momentum is tuned.\\n\\nIn fact, as long as the initial learning rate is smaller enough, we can always search for the optimal momentum as momentum is an amplifier, making the ELR larger by a factor of 1/(1-m). Therefore, momentum does determine the search ranges of learning rate. We would like to provide an alternative view, similar as [Smith et al, 2018] where increasing batch size during training has the same effect as decreasing learning rate.\\n\\n[1] Smith et al, Don\\u2019t decay the learning rate, increase the batch size, ICLR 2018\"}",
"{\"title\": \"our response 1/2\", \"comment\": \"Thanks for your detailed and constructive review! We have made changes and adding additional experiments in the latest revision for answering the following questions.\", \"q1\": \"\\u201cMore experiments on other backbones are necessary.\\u201d\", \"a\": \"In Appendix C of revised version, we calculate the domain similarity with ImageNet for Caltech256 and MIT-Indoor with our ImageNet pre-trained ResNet-101. Their similarities are 0.8916 and 0.8563, which are still larger than Cars (0.8452), Aircrafts (0.8404) and Flowers (0.8444). In Table 2, momentum 0 is better than momentum 0.9 when the learning rate is 0.01 and 0.005, which indicates the ELR with momentum 0.9 is too large and momentum 0 makes the ELR 10x smaller and is more close to the optimal value.\", \"q2\": \"\\u201cthis submission claims that the regularization methods such as L2-SP may not work on networks with Batch Normalization module. But there is no comparison on networks without BN.\\u201d\", \"q3\": \"\\u201cProviding a complete hyperparameter selecting strategy for fine-tuning could be an important contribution of this submission. I suggest authors to think about it.\\u201d\", \"q4\": \"\\u201cthis submission does not propose a proper method for measure the similarity or provide detailed experiments on previous measurements\\u201d\", \"q5\": \"\\u201cIt seems that the MITIndoors Dataset is not similar with ImageNet from the semantic view. This submission does not provide similarity measurement between these datasets. Why the optimal momentum is 0? \\u201d\"}",
"{\"title\": \"our response 2/2\", \"comment\": \"Q6: \\u201cThe effective learning rate and \\u2018effective\\u2019 weight decay are not first given in this submission. This makes the novelty of this submission relatively weak.\\u201d\", \"a\": \"To clarify, we do not change the default momentum in Batch Normalization in our end-to-end fine-tuning experiments. As the reviewer can appreciate, in principle all the hyper-parameters could be tuned, we choose to focus on the most important ones.\\n\\nWe add a discussion about BN momentum in Appendix D. Kornblith et al., 2018 found it is critical to decrease the batch normalization momentum parameter from its ImageNet value to max(1 \\u2212 10/s, 0.9) where s is the number of steps per epoch. This will change the default BN momentum value (0.9) when s is larger than 100, which means the original dataset size has to be larger than 100*256 = 25.6K. The maximum data size used in our experiments is Caltech-256, which is 15K, so it is not applicable to our experiments.\\n\\nWe do add experiments for exploring the effect of BN momentum by performing similar study as to ELR. We try to identify whether there is an optimal BN momentum for each dataset. For each dataset, we fine-tune the pre-trained model using previously obtained best hyperparameters and only change the BN momentum before fine-tuning. In addition to the default value 0.9, we searched 0.0, 0.95 and 0.99. We believe if BN momentum is critical, we may expect noticeable performance differences. The results are shown in Appendix D. We observe 0.99 slightly improves the performance for some datasets, however, we did not see the significant performance difference among values greater than 0.9. We suspect that when fine-tuning steps is long enough, the dataset statistics will eventually be adapted to the target dataset. \\n\\n[1] Cui et al. 2018, Large scale fine-grained categorization and domain-specific transfer learning. In CVPR 2018\\n[2] Smith et al, Don't decay learning rate, increasing the batch size, ICLR 2018\\n[3] Kornblith et al, Do Better ImageNet Models Transfer Better? CVPR 2019\", \"q7\": \"\\u201cIt seems that merely searching for learning rate and weight decay hyperparameters (as Kornblith et al. (2018) did) on a fixed momentum is Ok if there is a most effective relationship between learning rate and momentum.\\u201c\", \"q8\": \"\\u201cThis submission omits that Kornblith et al. (2018) also referred to the fact that the momentum parameter of BN is essential for fine-tuning and provided a strategy in section A.5. Discussion about this strategy will make this submission more complete.\\u201d\"}",
"{\"title\": \"our response\", \"comment\": \"Thanks for your positive review! We have updated our paper and added Appendix C regarding the concerns on similarity calculation.\", \"q1\": \"\\u201cfor the five datasets provided, the similarity of them are really close, making this claim less convincing...the authors may need a more reliable method to compare the similarity between datasets\\u201d\", \"a\": \"The original Earth Mover\\u2019s Distance is much larger, e.g., the raw EMD value for Birds and Cars are 15.08 and 16.82. We follow the method of Cui et al, 2018 by using the scaled exponential value as similarity, and their similarity scores after processing are 0.8600 and 0.8452. We introduce this process in Appendix C of the revised version and made further comparison with the original approach.\\nDifferent with Cui et al, 2018 that calculated the similarity with ResNet101 pre-trained on JFT, which is not public available. We use the source model as the feature extractor. We find the scale of similarity is a bit different with the original paper. But the relative similarities to ImageNet is almost the same, such as Dogs is still more similar to ImageNet than Birds or Cars and it is consistent across different architectures. \\nWe find that the similarities correlates with the scale of optimal ELR pretty well, though it is not a strict correspondence. This is already useful for reducing the hyperparameter search range in a dataset dependent way. We provide a heuristic approach for exploiting this correlation. We believe more reliable and accurate similarity calculation method will be very useful for efficient optimal hyperparameters prediction for fine-tuning and our findings are one step towards it.\"}",
"{\"title\": \"our response\", \"comment\": \"Thanks for your positive review! In our revised version, we have fixed the typos and made the figures easier to read and adjusted our narratives. Below are our response to your questions:\", \"q1\": \"\\u201cMy main concern about this paper relates to the importance of momentum\\u201d\", \"a\": \"We agree. We use the test data of these datasets for validation and report the validation error. We have corrected the axes in Figure 4 and make them more consistent. As we noted in section 3.1, our goal is not getting STOA performance on these datasets. For each trial of hyperparameter configuration, we report the performance of the last epoch after training rather than selecting the best one during training. The curves are only used for monitoring and comparison.\\n\\n[1] Smith et al, Don\\u2019t decay the learning rate, increase the batch size, ICLR 2018\", \"q2\": \"\\u201cThe EMD values of Birds, Cars and Aircrafts are within 0.7 points of each other...I find it somewhat hard to believe that these small differences explain the error differences on Table 2\\u201d\", \"q3\": \"\\u201cit would be better to report in this kind of research validation rather than test results\\u201d\"}",
"{\"title\": \"Tuning momentum is not important if learning rate is tuned\", \"comment\": \"While it contains several interesting findings, I do not believe this paper should be accepted as long as it states that \\\"picking the right value for momentum is critical for fine-tuning performance.\\\" Section 3.3 of the paper shows that this is false, and what is critical is performing a sufficiently expansive grid search for learning rate. If this is done, there is no need to separately tune momentum. All effects attributed to momentum can be obtained by adjusting learning rate. Both reviewers 2 and 3 recognize this below, but I'd like to provide a signal boost.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This submission studies the problem of transfer learning and fine tuning. This submission proposes four insights: Momentum hyperparameters are essential for fine-tuning; When the hyperparameters satisfy some certain relationships, the results of fine-tuning are optimal; The similarity between source and target datasets influences the optimal choice of the hyperparameters; Existing regularization methods for DNN is not effective when the datasets are dissimilar. This submission provides multiple experiments to support their opinion.\", \"pros\": [\"This submission provides interesting facts that are omitted in previous research works.\", \"This submission examines the previous theoretical results in empirical setting and finds some optimal hyperparameter selection strategies.\", \"This submission provides many experiment results of fine-tuning along with its choice of hyperparameters that could be taken as baselines in future researches.\"], \"cons\": [\"All experiments results are based on same backbone, which makes all discoveries much less reliable. More experiments on other backbones are necessary. Furthermore, this submission claims that the regularization methods such as L2-SP may not work on networks with Batch Normalization module. But there is no comparison on networks without BN.\", \"Providing a complete hyperparameter selecting strategy for fine-tuning could be an important contribution of this submission. I suggest authors to think about it.\", \"This submission claim that the choice of hyperparameters should depend on similarity of different domains. But this submission does not propose a proper method for measure the similarity or provide detailed experiments on previous measurements.\", \"It seems that the MITIndoors Dataset is not similar with ImageNet from the semantic view. This submission does not provide similarity measurement between these datasets. Why the optimal momentum is 0?\", \"The effective learning rate and \\u2018effective\\u2019 weight decay are not first given in this submission. This makes the novelty of this submission relatively weak. Authors only test these strategies in fine-tuning setting and find that they also work with a different initialization.\", \"It seems that merely searching for learning rate and weight decay hyperparameters (as Kornblith et al. (2018) did) on a fixed momentum is Ok if there is a most effective relationship between learning rate and momentum. So the discoveries in the first part that a 0 momentum can be better is based on a careless search of learning rates?\", \"This submission omits that Kornblith et al. (2018) also referred to the fact that the momentum parameter of BN is essential for fine-tuning and provided a strategy in section A.5. Discussion about this strategy will make this submission more complete.\", \"This submission gives important discoveries about the hyperparameter choice in the fine-tuning setting. But there are several flaws in this submission. I vote for rejecting this submission now but I expect authors to improve the submission in the future version.\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper provides extensive experimental results to investigate the influence of hyper-parameters on fine-tuning and challenges several commonly-held beliefs. The hyper-parameters of training from scratch does not always perform well when applied to fine-tuning. Furthermore, current L_2-SP regularization is not necessarily helpful when the domain discrepancy is large.\\nThe authors discover that the optimal momentum value is closely related to domain similarity. For similar target datasets, 0 momentum is a better choice than 0.9, since it potentially allows better convergence. Similar to training from scratch, the actual effect at play is the effective learning rate and \\u2018effective\\u2019weight decay. This further involves the coupling of hyper-parameters.\\nDifferent from the commonly-held belief, the L_2-SP regularization does not always perform better than L_2. When domain discrepancy is large, the regularization effect will be worsened. \\nThis paper is well-written and makes several interesting discoveries. My question for the authors is as follows:\\nIn the momentum section, the authors postulate that for more similar target datasets, smaller momentum performs better. Here, the similarity is quantified by EM distance defined in the feature space. However, for the five datasets provided, the similarity of them are really close, making this claim less convincing. The conclusion is reasonable, but the authors may need a more reliable method to compare the similarity between datasets.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the role of different hyperparameters in finetuning image recognition models on new target tasks. The authors run a large set of experiments and show that, perhaps non-surprisingly, hyperparameters matter. In particular, they show that momentum, which is typically ignored in finetuning, is quite important, and that the momentum values that work well depend on the similarity between the source and target datasets. They also show important correlations between momentum, learning rate, and weight decay.\\nOverall, despite some issues detailed below, the paper is clearly written, presents a coherent story, and its conclusions will be useful to the community.\", \"comments\": \"1. My main concern about this paper relates to the importance of momentum. The authors argue that this hyperparameter is \\\"critical for fine-tuning performance\\\". However, they later show that in fact what matters is the ratio between the learning rate (LR) and the momentum. In this case, it might be justified to fix the momentum value and only modify the LR, as often done. \\n\\n2. The EMD values of Birds, Cars and Aircrafts are within 0.7 points of each other (while Dogs is much higher and Flowers is quite lower). Although I am not too familiar with this method, I find it somewhat hard to believe that these small differences explain the error differences on Table 2.\\n\\n3. While the paper is fairly clear in writing, the figures (e.g., fig. 3 and 4) are extremely hard to read on print, and thus hard to draw conclusions from. Figure 4 is confusing also on screen.\\n\\n4. To promote reproducibility, it would be better to report in this kind of research validation rather than test results. There is some confusion in Figure 4, the axes say validation error, while the caption says test error, but in the other figures test results are reported.\", \"minor\": \"1. The authors say in the intro \\\"Even when there is enough training data, fine-tuning is still preferred as it often reduces training time significantly (He et al., 2019).\\\", but later make a somewhat contradictory claim: \\\"He et al. (2019) questioned whether ImageNet pre-training is necessary for training object detectors. They find the solution of training from scratch is no worse than the fine-tuning counterpart as long as the target dataset is large enough.\\\".\\n\\n2. A couple of typos around the paper:\\n- section 2: \\\"However, most of these advances on hyperparameter tuning are designed *from* training from scratch\\\" (should be \\\"for\\\")\\n- The first sentence of 3.3 is ungrammatical\"}"
]
} |
S1eL4kBYwr | UNITER: Learning UNiversal Image-TExt Representations | [
"Yen-Chun Chen",
"Linjie Li",
"Licheng Yu",
"Ahmed El Kholy",
"Faisal Ahmed",
"Zhe Gan",
"Yu Cheng",
"Jingjing Liu"
] | Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are jointly processed for visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design three pre-training tasks: Masked Language Modeling (MLM), Image-Text Matching (ITM), and Masked Region Modeling (MRM, with three variants). Different from concurrent work on multimodal pre-training that applies joint random masking to both modalities, we use Conditioned Masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). Comprehensive analysis shows that conditioned masking yields better performance than unconditioned masking. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks for UNITER. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks over nine datasets, including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2. | [
"Self-supervised Representation Learning",
"Large-scale Pre-training",
"Vision and Language"
] | Reject | https://openreview.net/pdf?id=S1eL4kBYwr | https://openreview.net/forum?id=S1eL4kBYwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"suA_qhEthA",
"B1glRlZniH",
"BkllNzs7sr",
"r1xcLWimsH",
"HJlSBT5QjH",
"H1gyAjcmsr",
"H1xtPjqmiS",
"Hyethc9QoS",
"rJl-cFcmsS",
"SJemXYqmiS",
"B1xg8d57jS",
"SyxP8v97sS",
"S1l5DIqXiB",
"BkeMz3WRFB",
"SygtFAYaKB",
"S1e6rHd6YH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728970,
1573814472219,
1573265960320,
1573265746285,
1573264700895,
1573264327469,
1573264225012,
1573264049058,
1573263752765,
1573263643079,
1573263431768,
1573263183045,
1573262946313,
1571851274260,
1571819136578,
1571812676968
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1655/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1655/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1655/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This submission proposes an approach to pre-train general-purpose image and text representations that can be effective on target tasks requiring embeddings for both modes. The authors propose several pre-training tasks beyond masked language modelling that are more suitable for the cross-modal context being addressed, and also investigate which dataset/pretraining task combinations are effective for given target tasks.\\n\\nAll reviewers agree that the empirical results that were achieved were impressive.\", \"shared_points_of_concern_were\": [\"the novelty of the proposed pre-training schemes.\", \"the lack of insight into the results that were obtained.\", \"These concerns were insufficiently addressed after the discussion period, particularly the limited novelty. Given the remaining concerns and the number of strong submissions to ICLR, this submission, while promising, does not meet the bar for acceptance.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"General Response to All Reviewers\", \"comment\": \"We thank all reviewers for your reviews. We have updated the paper and the changes are in blue for easier reference. To summarize, we have added:\\n 1) visualization of attention and qualitative examples;\\n 2) additional analysis on conditional masking vs. joint random masking;\\n 3) more recent SOTA on VCR and NLVR2;\\n 4) additional experiments on Conceptual-Caption-only pre-training;\\n 5) some revisions suggested by the reviewers.\"}",
"{\"title\": \"Author Response to Reviewer #1: Visualization and Others\", \"comment\": \"--------------------original question---------------------------\\n(2) Some visualization of attention weights would be helpful. \\n-----------------------------------------------------------------------\\n\\nThank you for your suggestion. We already find some interesting patterns on the attention weights. We are working on the visualization and will update the paper before the discussion period ends.\\n\\n--------------------original question---------------------------\\nMinor \\u2022 In \\u201cm \\\\e N^M\\u201d (equation 1), what is N and M? \\n-----------------------------------------------------------------------\\n\\n\\\\mathbb{N} stands for natural numbers (non-negative integers), M is the number of masked tokens, and \\\\mathbf{m} is the set of masked indices. We will add a footnote to make this clearer.\"}",
"{\"title\": \"Author Response to Reviewer #1: single-stream vs two-stream\", \"comment\": \"--------------------original question---------------------------\", \"i_have_some_questions_for_the_authors\": \"(1) What are the advantages of using single-stream transformer over two-stream transformer (page 2). I guess it leads to fewer parameters, but I don\\u2019t think this is a big problem. \\n-----------------------------------------------------------------------\\n\\nOur argument is not about \\u201csingle-stream being strictly better than two-stream\\u201d. In fact, we started with a SOTA two-stream model (MCAN, Yu et al., 2019) in our preliminary experiment but found that it did not work as well as single-stream model. We therefore continued with the single-stream architecture, and focused on finding the most effective pretraining strategy, which is our main contribution. Note that both LXMERT and ViLBERT promoted two-stream models, yet we showed that single-stream model is sufficient to learn contextual representations across two modalities.\\n\\nFewer parameters could potentially be an advantage, e.g., we are able to stack deeper/larger transformer layers under the same memory constraint.\"}",
"{\"title\": \"Author Response to Reviewer #1: Comparison with DFAF\", \"comment\": \"--------------------original question---------------------------\\nThe paper modifies an existing pre-training procedure by conditional masking (Section 2). I agree this is well-motivated, but it has little novelty and a similar idea is there in VQA (See \\u201cDynamic fusion with intra and inter-modality attention flow for visual question answering\\u201d). MLM and MRM are not new training procedure either, they are basically extending the BERT\\u2019s training procedure with the consideration of multiple modalities.\\n-----------------------------------------------------------------------\\n\\nThank you for referring us a related work. After a thorough check of the paper, we agree with the reviewer this is also a relevant work. Nevertheless, Gao et al., (2019) may focus on novel model architecture, while we proposed a generic V+L representation via pretraining. We will cite the paper and discuss it in the related work.\\nSecondly, we argue that UNITER is not trivially derived from BERT. Even for BERT, language modeling has been around for years (CBOW, Mikolov et al., 2013).\\nIntuitively, mask-then-reconstruct is helpful for learning contextualization, but the key is what exact pretraining tasks are effective, especially for learning \\u201calignment\\u201d across vision and language in our case.\\nThat\\u2019s why we proposed ITM, MLM, MRM (3 variants), enumerate their combinations on large-scale pretraining, and then study a diverse set of V+L downstream tasks to derive the best pretraining strategy (Table 3).\\nTo explain the superior performance of UNITER, we believe the conditioned MLM/MRM better learns the local alignment (token/region level) across modalities and ITM enhances the global alignment.\"}",
"{\"title\": \"Author Response to Reviewer #2: Experiments\", \"comment\": \"--------------------original question---------------------------\\n# 4. Experimentation The main advantage of this paper relies on the extensive experimental analysis done on many challenging datasets reaching the state of the art on several downstream tasks. The evaluation on both pre-training tasks and downstream tasks show that the method is working well in practice.\\n----------------------------------------------------------------------\\n\\nWe appreciate the reviewer for recognizing our effort of achieving SOTA on 9 V+L tasks under 13 settings. Making so many things work well involved a fair amount of effort, but it deserves as we show UNITER has outstanding generalization ability, regardless how different these 9 downstream tasks are. We are looking forward to seeing our pre-trained model could serve as fundamental representations for future V+L research in this community.\\n\\nAfter the submission, we made another two new SOTA on VCR and NLVR2. We ensembled our VCR model and see a 4% absolute gain (66.8 vs 62.8). Note that our single large model already outperforms all other ensembled models by a large margin (62.8 vs 59.8). Besides, our NLVR2 model outperforms the others on Test-U (80.4 vs 76.2 for accuracy; 50.8 vs 42.1 for consistency). Both results will be added in the final version.\"}",
"{\"title\": \"Author Response to Reviewer #2: Difference between UNITER and other concurrent works\", \"comment\": \"--------------------original question---------------------------\\n# 3. Novelty The novelty of the paper is quite limited since it is an extension of BERT to the visual domain. The authors propose an empirical analysis of different ways to mask the visual input, however this might not be a substantial extension of previous work. In fact, recently there are many other papers (ViLBERT, VisualBERT, LXBERT, ...) working on similar topic with small differences. What it is missing in this paper is an understanding and intuition on the reasons why the conditioned masking idea should be better than the other visual masking ideas proposed in previous work. \\n----------------------------------------------------------------------\\n\\nWe think these peer works are concurrent to our UNITER. We did feel much pressure when our peer works went public, but we decided to complete all 9 downstream tasks with 13 settings (covering nearly all popular V+L tasks) to show UNITER\\u2019s better generalization ability, rather than publishing a premature work. With such extensive experiments, our work is much more convincing.\\n\\nIt is true that all concurrent works used visual masking and language masking. However, it is not clear what the exact visual tasks are helpful for V+L self-supervised learning. First, until recently, no one knows whether MLM can be applied to image-conditioned text modeling. Second, we propose 3 variants of Masked Region Modeling (MRM) and suggest the community the most effective combination (Table 3). \\n\\n\\nAs for why our conditional masking works better than others (joint masking), we hypothesize that UNITER learns better latent alignment of entities (regions and words) across two modalities. For example, given a sentence \\u201cman with his dog on a couch\\u201d and a corresponding image as in Figure 1. For our conditional masking, when the region of \\u201cdog\\u201d is masked, our model should be able to infer that region is \\u201cdog\\u201d based on the context of both the surrounding regions and the full sentence, and vice versa. For the joint masking implementation, it could happen when both the region of \\u201cdog\\u201d and the word of \\u201cdog\\u201d are masked, then the model has to predict blindly. Therefore, joint masking might lead to miss-alignment. We show in Table 3 (row 10&11) that our conditional masking performs better than joint masking.\\n\\nTo further demonstrate our idea, we are working on a more direct comparison with ViLBERT and other concurrent works which were trained on Conceptual Captions (CC) only. However, we would like to emphasize that large-scale data is essential for self-supervised learning. So far, only we have succeeded in pretraining on these 4 largest public datasets (10 days x 16 V100 GPUs). Also, we are working hard to resolve legal/license issue, and we will release our best pretrained model to help future V+L research.\"}",
"{\"title\": \"Author Response to Reviewer #2: How to combine tasks\", \"comment\": \"--------------------original question---------------------------\\n* Combination of tasks (MLM + ITM + MRC-kl + MRFR) -> it is not clear how this is done in practice. Is the loss function composed (summed)? Within the mini-batch, the method randomly chooses which operation to do (e.g., MLM) for each sample? This should be clarified in the main text of the paper. \\n----------------------------------------------------------------------\\n\\n\\nThank you for the suggestion. We will update the paper accordingly to make this clearer. In our implementation, we randomly sample a pretraining task for each mini-batch and train on only 1 objective per SGD update following MT-DNN (Liu et al, 2019).\\n\\nIt is also worth noting that existing implementations (LXMERT, ViLBERT) applied MLM, MRM on negatively sampled ITM pairs and summed the losses, which means 50% of the training is not correctly conditioned across modalities.\"}",
"{\"title\": \"Author Response to Reviewer #2: MRC vs MRC-kl\", \"comment\": \"--------------------original question---------------------------\\n* MRFM and MRC are clear, however the intuition of MRC-kl is missing. Why is this needed? What does it mean in practice to minimize such divergence (provide practical example)? \\n----------------------------------------------------------------------\\n\\nIn MRC, we assume the object class with the highest score to be the ground-truth label for a detected region. However, it may not be true, since no ground-truth label is provided for a detected region. In MRC-kl, we avoid making such an assumption by using a soft label instead of a hard one. This can be understood as distilling the knowledge (Hinton et al., 2015) from a pretrained object detection model to our UNITER model. Further, this hypothesis is empirically verified in our experiments (Table 1, row7&9).\"}",
"{\"title\": \"Author Response to Reviewer #2: scoring function s\", \"comment\": \"--------------------original question---------------------------\\n* \\\"The scoring function is denoted as s\\\" -> please indicate in the main text what function you used \\n----------------------------------------------------------------------\\n\\nWe used sigmoid function. We will make it clear in the paper. Thanks for the suggestion.\"}",
"{\"title\": \"Author Response to Reviewer #2: [CLS] in ITM\", \"comment\": \"--------------------original question---------------------------\\n* End of Sec. 3.1 (and paragraph in Sec. 3.2): not clear how the model is training for ITM. What's the input and output? Why do you need a new symbol [CLS]? \\n\\n* Sec. 3.2 ITM: \\\"an additional special token [CLS] is fed into our model, which indicates the fused representation of both modalities\\\" - This is not clear. Why this special token is needed? Why is not needed in the MLM and MRM? \\n----------------------------------------------------------------------\\n\\n\\nThank you for the questions, we will update the paper to make it clearer. \\n\\nFor ITM, the input is a sentence and a set of image regions and the output is a binary label (0 for negative match, and 1 for positive match). During training, we sample negative examples for each positive example by replacing the sentence/image. We extract the representation of [CLS] token as the joint representation of the input text and image, then fed into a fully connected layer to predict a score between 0 and 1. The ITM supervision is over the [CLS] token.\\n\\nIn practice, the [CLS] token is fed into the model for all other pretraining tasks and the downstream finetuning tasks as well. However, in MLM/MRM, the goal is to reconstruct the masked token/region. Therefore, the MLM/MRM supervision is over the representation of the masked token/region. \\n\\nThe supervision over the [CLS] token in pretraining also alleviates the input mismatch between pretraining tasks and downstream finetuning tasks, since most of the downstream tasks also regard the representation of [CLS] token as the joint representation.\"}",
"{\"title\": \"Author Response to Reviewer #2: Comparison with LXMERT and ViLBERT\", \"comment\": \"--------------------original question---------------------------\\n* \\\"Compared with LXMERT (Tan & Bansal, 2019) and ViLBERT (Lu et al., 2019) that use two streams (one Transformer for each modality), our UNITER model can learn joint contextualized ...\\\", why is this an advantage? Using two streams might also lead to learning context? Maybe an example can clarify my question. \\n-----------------------------------------------------------------------\\n\\nOur argument is not about \\u201csingle-stream being strictly better than two-stream\\u201d. In fact, we have tried a SOTA two-stream model (MCAN, Yu et al., 2019) in our preliminary experiment before the aforementioned related works are published, and found that it did not work as well as single-stream model. We therefore continued with the single-stream architecture, and focused on finding the most effective pretraining strategy, which is our main contribution. Note that both LXMERT and ViLBERT promoted two-stream models, yet we showed that single-stream model is sufficient to learn contextual representations across two modalities.\"}",
"{\"title\": \"Author Response to Reviewer #3\", \"comment\": \"Thank you for your insightful comments. Our assumption is that UNITER learns contextualized joint representation of both modalities. In Section 3.1, we proposed conditional masking that allows the model to learn informative representation of one modality conditioned on the other. Note that the conditional masking happens in both modalities. Therefore, the representation is aware of both visual and textual information. The reconstruction of masked tokens/regions can be viewed as learning local alignment across modalities. Furthermore, when combined with ITM pretraining, the global alignment between both modalities is encouraged. We also show that every pretraining task contributes to the final performance gain. To better address the reviewers\\u2019 concern, we are working on attention visualization, and will update the paper before the discussion period ends.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This is an impressive paper. LIke BERT, it proposes a tranformer based approach to derive a pre-trained network for representing images and texts. The resulting pre-trained network, used in 9 different tasks, advances the SOTA on all the tasks.\\nThe major limitation of this paper is why. Why does it happen? How this results can be achieved? What is exactly represented in this pre-trained network. Why the tasks used for pre-training build a network that is so informative?\\nThis is really the major obscure point of this impressive paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"# 1. Summary\\nThe authors introduce a new pre-training procedure for image-text representations. The idea is to train the model on a huge collection of different image-text datasets and the use the model for downstream tasks. The difference between the proposal wrt the concurrent work is that conditioned masking is used: (i) Masked Language Modeling (MLM) conditioned on image; (ii) Masked Region Modeling (MRM) conditioned on text; and (iii) joint Image-Text Matching (ITM).\\n\\nI am on the fence for this paper given the balance between strengths and weaknesses listed below. I am conservative here and decide for weak reject; but I am open for discussion, if the authors answer to my concerns detailed below.\", \"strengths\": [\"State-of-the-art results on several downstream vision-language tasks\", \"Empirical work to investigate different ways to perform conditioned masking\"], \"weaknesses\": [\"Some parts of the method needs clarification (see point 2 below) to better understand the details and practical advantages of the method.\", \"Limited novelty: the paper is an extension of BERT to the visual domain (see point 3 below)\", \"# 2. Clarity and Motivation\", \"The paper reads quite well, although some points need to be improved:\", \"\\\"Compared with LXMERT (Tan & Bansal, 2019) and ViLBERT (Lu et al., 2019) that use two streams (one Transformer for each modality), our UNITER model can learn joint contextualized ...\\\", why is this an advantage? Using two streams might also lead to learning context? Maybe an example can clarify my question.\", \"End of Sec. 3.1 (and paragraph in Sec. 3.2): not clear how the model is training for ITM. What's the input and output? Why do you need a new symbol [CLS]?\", \"Sec. 3.2 ITM: \\\"an additional special token [CLS] is fed into our model, which indicates the fused representation of both modalities\\\" - This is not clear. Why this special token is needed? Why is not needed in the MLM and MRM?\", \"\\\"The scoring function is denoted as s\\\" -> please indicate in the main text what function you used\", \"MRFM and MRC are clear, however the intuition of MRC-kl is missing. Why is this needed? What does it mean in practice to minimize such divergence (provide practical example)?\", \"Combination of tasks (MLM + ITM + MRC-kl + MRFR) -> it is not clear how this is done in practice. Is the loss function composed (summed)? Within the mini-batch, the method randomly chooses which operation to do (e.g., MLM) for each sample? This should be clarified in the main text of the paper.\", \"# 3. Novelty\", \"The novelty of the paper is quite limited since it is an extension of BERT to the visual domain. The authors propose an empirical analysis of different ways to mask the visual input, however this might not be a substantial extension of previous work. In fact, recently there are many other papers (ViLBERT, VisualBERT, LXBERT, ...) working on similar topic with small differences. What it is missing in this paper is an understanding and intuition on the reasons why the conditioned masking idea should be better than the other visual masking ideas proposed in previous work.\", \"# 4. 
Experimentation\", \"The main advantage of this paper relies on the extensive experimental analysis done on many challenging datasets reaching the state of the art on several downstream tasks.\", \"The evaluation on both pre-training tasks and downstream tasks show that the method is working well in practice.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a novel method for image-text representations called UNITER. The proposed method has been subsequently tested in many downstream tasks. A detailed ablation study helps to understand the role of each pretrained task in the proposed model.\\n\\nAlthough the empirical results are nice, performing the intensive set of experiments on many different tasks is definitely time-consuming and needs a lot of engineering efforts, the technical contribution does not seem significant to me. The paper modifies an existing pre-training procedure by conditional masking (Section 2). I agree this is well-motivated but it has little novelty and a similar idea is there in VQA (See \\u201cDynamic fusion with intra and inter-modality attention flow for visual question answering\\u201d). MLM and MRM are not new training procedure either, they are basically extending the BERT\\u2019s training procedure with the consideration of multiple modalities.\", \"i_have_some_questions_for_the_authors\": \"(1) What are the advantages of using single-stream transformer over two-stream transformer (page 2). I guess it leads to fewer parameters but I don\\u2019t think this is a big problem.\\n(2) Some visualization of attention weights would be helpful. \\nMinor\\n\\u2022\\tIn \\u201cm \\\\e N^M\\u201d (equation 1), what is N and M?\"}"
]
} |
Skl8EkSFDr | Self-Supervised GAN Compression | [
"Chong Yu",
"Jeff Pool"
] | Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them with minimized latency, throughput, and storage requirements. Some model compression methods have been successfully applied on image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework has a compelling performance to high degrees of sparsity, generalizes well to new tasks and models, and enables meaningful comparisons between different pruning granularities. | [
"compression",
"pruning",
"generative adversarial networks",
"GAN"
] | Reject | https://openreview.net/pdf?id=Skl8EkSFDr | https://openreview.net/forum?id=Skl8EkSFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"pDjlsNktRv",
"r1x6iL19iH",
"SkeoHSycjH",
"B1g3omy5jB",
"BkxJ87yqjH",
"SJx-uwtJjH",
"HkxGS3W9FH",
"Sklg5QfVFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728936,
1573676709117,
1573676355109,
1573675940461,
1573675847282,
1572996968822,
1571589177810,
1571197832407
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1654/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1654/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1654/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1654/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1654/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1654/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1654/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper develops a new method for pruning generators of GANs. It has received a mixed set of reviews. Basically, the reviewers agree that the problem is interesting and appreciate that the authors have tried some baseline approaches and verified/demonstrated that they do not work.\\n\\nWhere the reviewers diverge is on whether the authors have been successful with the new method. In the opinion of the first reviewer, there is little value in achieving low levels (e.g. 50%) of fine-grained sparsity, while the authors have not managed to achieve good performance with filter-level sparsity (as evidenced by Figure 7, Table 3 as well as figures in the appendices). The authors admit that the sparsity levels achieved with their approach cannot be turned into speed improvement without future work.\\n\\nFurthermore, as pointed out by the first reviewer, the comparison with prior art, in particular with LIT method, which has been reported to successfully compress the same GAN, is missing and the results of LIT have been misrepresented. While the authors argue that their pruning is an \\\"orthogonal technique\\\", and can be applied on top of LIT, this is not verified in any way. In practice, combination of different compression techniques is known to be non-trivial, since they aim to explain the same types of redundancies.\\n\\nOverall, while this paper comes close, the problems highlighted by the first reviewer have not been resolved convincingly enough for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for your feedback and comments - these suggestions will make our submission stronger. Some responses to particular points follow:\\n\\n>> ... why put a GAN generator on a mobile device?\\n\\nAny real-time service using GANs, on a mobile device or otherwise, can benefit from model compression. General examples include mobile applications that perform style transfer, or video players that perform super-resolution on the client to save broadcast bandwidth. In the future, visual artists may rely on inpainting or other texture-generation techniques to save on asset storage space or interactive video generation to save rendering time, and musical artists may want a backing track to generate novel accompaniment that responds in real-time; all these tasks can be approached with GANs and may not work well with the latency associated with server-side execution. GANs have been used to augment training data, so, even in data center scenarios, having a more efficient generator can leave more resources available to training what may be a much more complex network.\\n\\n>> ... particular compression method ... pruning limits contribution\\n\\nCorrect, we only present results for network pruning. However, given the much better results with our method, we believe it may help other techniques achieve more aggressive compression rates. (We leave this as future work in our conclusion.) Further, we have shown that network pruning fails spectacularly on generators in the absence of our technique, which is a surprising result we have not seen reported before.\\n\\n>> \\\"Generalizes\\\" --> \\\"Applies to\\\"\\n\\nWe've addressed this to avoid making too broad a claim.\\n\\n>> In Section 4 the authors write: \\u201cOur main insight is found,\\u201d but then they describe the GAN method. What is the actual insight there?\\n\\nOur insight is, in fact, the next paragraph \\u2013 we've fixed this to make it clear that reviewing the GAN method (the paragraph in question) leads to the insight (the next paragraph) of how to make fine-tuning a compressed model more stable and successful by using the (pre-trained) discriminator.\\n\\n>> Scores from Table 2 also support the claims, but the table itself is not referenced anywhere in the text.\\n\\nWhat an oversight! We've fixed this.\\n\\n>> The analysis in Section 6 seems out of context with the rest of the paper. It is not clear how it relates to the \\u201cself-supervised\\u201d method.\\n\\nWeight pruning can be used to remove entire filters, not just individual elements (as noted by Reviewer #4), and 50% compression is somewhat modest (as noted by Reviewer #2). So, we included results for both filter pruning and pruning (elements and filters) more aggressively, to show that our method is successful, at least in more aggressive sparsity. 
While filter pruning was not as successful, this is yet more empirical evidence that pruning generators is not as straightforward a task as pruning classification or detection networks.\\n\\n>> Missing related work\\n\\nThanks for pointing these out - we've added this extra background.\\n\\n>> It would be good to first refer to Table 1.\\n\\nThank you for suggestion; this helps make the notation clear.\\n\\n>> Table 1: why is there a \\u201c?\\u201d only on the \\u201cFixed\\u201d column?\\n\\nWe've removed the \\u2018?\\u2019 from the \\u201cCompressed\\u201d and \\u201cFixed\\u201d columns.\\n\\nFinally, we'll work on fixing the font size for Figure 2 and finding a better solution to the sizable appendix.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your thorough review. We'd like to respond to some particular comments:\\n\\n>> Only 50% compression\\n\\nWe chose to focus on 50% compression in early sections to highlight how easily existing methods fail at pruning generators and, in contrast, that ours succeeds. We share results for higher compression rates (up to 90%) using our method in Section 6, though we stress that we haven't spent any effort trying to find fancy fine-tuning schedules for each task, which may result in even higher compression rates.\\n\\n>> Generative loss\\n\\nEach task already has a generative loss; this is not a term that we have added. Rather, we modify the existing loss function to take into account the new (compressed) generator. Here's what the TensorFlow DCGAN (image synthesis) tutorial has to say about the generator's loss term:\\n\\\"The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminators decisions on the generated images to an array of 1s.\\n\\ndef generator_loss(fake_output):\\n return cross_entropy(tf.ones_like(fake_output), fake_output)\\\"\\n(https://www.tensorflow.org/tutorials/generative/dcgan)\\nWe simply add a new loss term, identical to this one, for the compressed generator. Since each model and task will have its own unique loss terms, we feel the best place to see these particular loss functions is in the baseline implementations.\\n\\n>> If I understood the framework properly, here, the compressed generator is trained from a random initialization.\\n\\nYour understanding is slightly incorrect \\u2013 the compressed generator is initialized from the uncompressed generator (\\u201cInit Scheme\\u201d column in Table 1), which is trivial when the applied compression is something like pruning or quantization. In this way, the compressed sample distribution starts out close to the dense generator\\u2019s. We've added some clarification to Section 4. We have used different random seeds in some experiments with stable results.\\n\\n>> Figures 2, 9, 42 and 43 are unreadable when printed with a regular color office printer.\\n\\nWe're working on Figure 2! We've made it larger in the current revision. The others, all in the appendix, may simply be best viewed in a digital format that supports zooming. (As suggested by another reviewer, we may move the appendix material to a website.)\\n\\n>> Extra epochs\\n10% is simply what we found to be the maximum needed for good results; others took only an extra 1% of the original training epochs. This is a hyperparameter that one may choose to tune for more aggressive sparsities.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thank you for your time and feedback. Please see our responses, below.\\n\\n>> Considering the success of large generative models (like BigGAN) one may wonder if these models can be compressed after being trained to improve practical applicability.\\n\\nThis is a great question - we look forward to extending our technique to the state-of-the-art GANs available today. Looking at other domains, pruning has still been successful on larger, more capable networks.\\n\\n>> The message of the article is misleading. The whole goal of pruning a neural network is to remove filters from it\\n\\nWe respectfully disagree with the assertion that the goal of pruning a network is to remove filters. This is one common approach used, but fine-grained (per-weight) pruning is not without merit - we show that our technique makes fine-grained pruning of generators possible; without this step, pruning filters would have no hope. Further, there are architectures (e.g. Cnvlutin2[1], SCNN[2]) that accelerate fine-grained pruning, as well as software approaches that achieve higher performance on more traditional architectures ([3],[4]). While our presented results for filter pruning show that it does not perform as well as on other tasks, we see this as an exciting area for future research. As with those other tasks for which filter pruning succeeded, work demonstrating the success of fine-grained pruning preceded filter pruning.\\n\\n[1] https://arxiv.org/abs/1705.00125\\n[2] https://arxiv.org/abs/1708.04485\\n[3] https://arxiv.org/abs/1802.10280\\n[4] https://arxiv.org/abs/1804.10223 (not directly applicable to convolution-based GANs, but it shows that fine-grained pruning can offer a measurable benefit to real problems)\\n\\n>> The comparison in Figure 1 is arguably misleading as well. For example, one of the methods that were mentioned (LIT) does achieve a factor of 1.8 model compression, yet the comparison was not carried out directly with that method, but a modification proposed by the authors of this paper.\\n\\nWe agree \\u2013 LIT, as originally reported, does achieve a 1.8x compression rate (noted at the bottom of our original page 3). The results in Table 1 and Figure 1 are with past approaches to compressed training or fine-tuning applied to model pruning. Put another way: using distillation on intermediate representations (LIT) and removing layers and removing layers achieves a reported compression rate of 1.8x. When performing distillation on intermediate representations (LIT) and pruning the generator, the quality at a compression rate of 2x is quantitatively and qualitatively worse than the original generator. We've tried to make the context for the results we report in Figure 1 clear in the caption, as well as in the supporting text.\\n\\n>> FLOPs or inference time\\n\\nPerformance will vary wildly based on particular architecture and hardware selected and is unfortunately out of the scope of this submission, which is hardware and architecture agnostic. 
Spending time aggressively pruning, optimizing for performance, and measuring against other baselines may be future work.\\n\\n>> Weights pruning is simply one of the approaches for model compression, so you cannot ignore the alternatives.\\n\\nWe agree that weight pruning is one of many approaches to compression, but we also point out that many approaches are orthogonal: taking the example of LIT, from above, one could apply weight pruning on top of the shallower network that results from the original LIT application. Quantization, also successful at compressing GANs in the past, has seen success when applied with pruning in other domains. This large cross product of comparisons is also out of the scope of this submission. Finally, without our submission, there would be no possibility of combining pruning with other techniques, as pruning has not been shown to succeed on generators prior to our work.\\n\\nHad our investigation into pruning generators been straightforward (\\\"We tried it, and it works\\\"), then performance would have been the a primary focus. Instead, we focused on explaining why naive pruning fails and devising a robust method that is able to prune some tasks up to 90% sparsity.\\n\\n>> Section 4 unorthodox notation\\n\\nWe added more general forms of equations 1,2,4, and 5 - thanks for the suggestions! Does this ease the understanding of section 4?\"}",
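For reference, a minimal sketch of the fine-grained (per-weight) magnitude pruning defended here; this is the generic technique, not the paper's exact procedure:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    # Fine-grained pruning: zero out the smallest-magnitude weights until
    # the requested fraction of weights is removed.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

w = np.random.randn(4, 4).astype(np.float32)
pruned, mask = magnitude_prune(w, sparsity=0.5)
print(1.0 - mask.mean())  # achieved sparsity, ~0.5
```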
"{\"title\": \"Updated submission\", \"comment\": \"We've updated the submission to address questions and take advantage of suggestions offered by the reviewers.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The problem tackled in the paper is the compression of generators in adversarially trained models. Considering the success of large generative models (like BigGAN) one may wonder if these models can be compressed after being trained to improve practical applicability.\\n\\nThis paper is focused on the compression of image to image translation models and uses distillation on discriminator's outputs to achieve better results.\\n\\nMy decision is weak reject.\\n\\nThe message of the article is misleading. The whole goal of pruning a neural network is to remove filters from it, therefore, reducing the computation or, at least, storage space for the parameters. The attempt to remove filters was presented in the last figure, and it does not work as good as all other results presented in the paper.\\n\\nThe comparison in Figure 1 is arguably misleading as well. For example, one of the methods that were mentioned (LIT) does achieve a factor of 1.8 model compression, yet the comparison was not carried out directly with that method, but a modification proposed by the authors of this paper.\\n\\nI would like to see more comparisons in terms of FLOPs or inference time between the baselines, SotA methods, and your proposed method. Weights pruning is simply one of the approaches for model compression, so you cannot ignore the alternatives.\\n\\nAlso, section 4 probably has to be rewritten, since some unorthodox notation is used. The authors should consider using some reference paper for the notations, like CycleGAN, that was mentioned in the paper. That will improve the clarity and readability of the used objectives.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a method to compress GANs. The motivation is that the current compression methods work for other kinds of neural networks (classification, detection), but perform poorly in the GAN scenario. The authors present intuitive reasons for why this is the case.\\n\\nHowever, the motivation why we would like to compress GANs is unclear to us. The intro mentions: reducing memory requirements and improving their performance. Sure, compressing networks for object detection and classification on mobile devices is really useful. But GANs are mainly used for unsupervised density estimation, why put a GAN generator on a mobile device? But maybe we are missing something here. \\n\\nTheir \\u201cself-supervised\\u201d method works by using the pre-trained discriminator network, while compressing only the generator. They show both qualitative and quantitative gains.\\n\\nThe paper is clear and well-written. It presents a way of pruning GAN generator network and although of limited novelty, it might be an interesting read as it provides extensive and convincing experiments in a clear manner. It does have several parts though which require additional clarification.\\n\\nThe idea of using the pre-trained discriminator network seems reasonable, but I am missing what the compression method for the generator network actually is (Section 4). From Table 2 I would assume it is pruning, in which case the paper\\u2019s contribution is very limited.\\n\\nThe authors claim that the \\u201cself-supervised\\u201d method generalizes well to new tasks and models. \\\"Generalizes\\\" seems a strong word here, since the procedure compresses only the generator network. A more appropriate way of putting it might be \\u2018can be applied to other tasks and models.'\", \"in_section_4_the_authors_write\": \"\\u201cOur main insight is found,\\u201d but then they describe the GAN method. What is the actual insight there?\\n\\nThe qualitative results in Figure 1 suggest that their \\u201cself-supervised\\u201d method is better than the other baselines. \\n\\nScores from Table 2 also support the claims, but the table itself is not referenced anywhere in the text.\\n\\nThe analysis in Section 6 seems out of context with the rest of the paper. It is not clear how it relates to the \\u201cself-supervised\\u201d method.\", \"missing_related_work\": \"1st paragraph: compressing or distilling one network into another is much older than 2015, dating back to 1991 - see references in section 2 of the overview http://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html\\nThe GAN principle itself is also much older (1990) - see references in section 5 of the link above.\", \"general_remarks\": \"In the first read of Section 3 it is not clear what [a], [b], [c] are.\\n\\nIt would be good to first refer to Table 1.\", \"table_1\": \"why is there a \\u201c?\\u201d only on the \\u201cFixed\\u201d column?\\n\\nIt would be good to have a larger font size in Figure 2, at least the size of the main text font.\\n\\nIn its current form, the pdf file has 100MBs (8MBs the main paper and the rest is the appendix). 
One could instead move the images from the appendix to a website and provide a link.\\n\\nWe might improve our rating provided the comments above were addressed in a satisfactory way in the rebuttal.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors tackle the task of compressing a network. While there\\nare many effective solutions so far regular computer vision tasks, as they demonstrate,\\nthey fail catastrophically when applied to generative adversarial networks(GANs).\\nThey propose a modification to the classic distillation method, where a\\n\\\"student\\\" network tries to imitate the uncompressed one under the supervision of\\na fully converged discriminator network. They perform evaluation on multiple\\ntasks from image synthesis to super-resolution. They also study the influence\\nof the compression factor on the quality of the generated images.\\n\\nThe task is well motivated and situated in the related literature. The first\\nsection is very thorough and extremely efficient at describing the failure modes\\nof existing methods. On one side, the results demonstrated in the evaluation are\\ncompelling, on the other side, the compression factor is only 50%, which is much\\nlower than seen in related work. However, as it is shown in section 3 the task\\nmay be much harder for GANS than regular models so I still consider it a \\nsizeable contribution.\\n\\nThere are a couple of points that require clarification. I personally found the\\ndescription of the method (Section 4) rather confusing. It is clear\\nwhat \\\"discriminative loss\\\" is as it is the one used in every GAN.\\nUnfortunately, I could not understand what \\\"generative loss\\\" means in the general\\ncase. An example is given for StarGAN in equation (7) and I have a rough idea of\\nwhat to choose for Style Transfer, Domain Translation, Super Resolution and\\nImage translation. Though, it is unclear to me what to use in the case of\\nimage synthesis. The experiments clearly show that it is possible so I think\\nit is necessary to show how this framework is concretely applied to each task at\\nhand.\\n\\nDuring training, the discriminator only ever saw pictures from the true distribution\\nand the distribution generated by the generator (at each of its training steps).\\nIf I understood the framework properly, here, the compressed generator is trained\\nfrom a random initialization. The distribution it outputs is therefore\\ncompletely unknown and potentially non overlapping with either of the true or\\nthe generator ones. In that case it is hard to predict what the discriminator\\nwould do on completely out of distribution samples. I seems reasonable to\\nconjecture that it might consider them \\\"true\\\" because it was never trained on\\nthem. Could you provide an explanation of why it is not a problem in practice?\\nDo you have to try multiple initializations? Is the generative\\nloss enough to force the compressed discriminator to match the support\\nof the distribution of the dense generator?\\n\\nI think this paper is novel, tackles a hard task and presents compelling results\\n(albeit using very mild compression ratios). It should be accepted if some\\nclarifications are made in section 3.\", \"minor_remarks\": [\"Figures 2, 9, 42 and 43 are unreadable when printed with a regular color office\", \"printer.\", \"It is unclear what it would take an extra 10% of the original number of epochs to train the compressed network. 
Why couldn't it be faster, or much longer?\"]}"
]
} |
BylB4kBtwB | Retrieving Signals in the Frequency Domain with Deep Complex Extractors | [
"Chiheb Trabelsi",
"Olexa Bilaniuk",
"Ousmane Dia",
"Ying Zhang",
"Mirco Ravanelli",
"Jonathan Binas",
"Negar Rostamzadeh",
"Christopher J Pal"
] | Recent advances have made it possible to create deep complex-valued neural networks. Despite this progress, the potential power of fully complex intermediate computations and representations has not yet been explored for many challenging learning problems. Building on recent advances, we propose a novel mechanism for extracting signals in the frequency domain. As a case study, we perform audio source separation in the Fourier domain. Our extraction mechanism could be regarded as a local ensembling method that combines a complex-valued convolutional version of Feature-Wise Linear Modulation (FiLM) and a signal averaging operation. We also introduce a new explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram. Using the Wall Street Journal Dataset, we compare our phase-aware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and loss. When operating in the complex-valued frequency domain, our deep complex-valued network substantially outperforms its real-valued counterparts even with half the depth and a third of the parameters. Our proposed mechanism significantly improves the performance of deep complex-valued networks, and we demonstrate the usefulness of its regularizing effect. | [
"Deep Complex Networks",
"Signal Extraction"
] | Reject | https://openreview.net/pdf?id=BylB4kBtwB | https://openreview.net/forum?id=BylB4kBtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"6O4GcPJXOT",
"JvylP8JlZ9",
"S1g39b0OiH",
"S1letkAuoH",
"HyeTfJ0djB",
"SkeBp6aOoS",
"H1eiOTauiB",
"r1lh86adoS",
"BklD_3adoH",
"rygaxccJcB",
"B1gkXWp6tr",
"HygJE9ZXYr"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1584118622810,
1576798728907,
1573605780465,
1573605240224,
1573605140730,
1573604796531,
1573604722592,
1573604692240,
1573604462951,
1571953140635,
1571832086714,
1571129895032
],
"note_signatures": [
[
"~Manuel_Pariente1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1653/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1653/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1653/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1653/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1653/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1653/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1653/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1653/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1653/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1653/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"The data used in (Luo & Mesgarani, 2018) and (Shi et al, 2019) is standard\", \"comment\": \"I'd like to point out that the \\\"accusation\\\" that both (Luo & Mesgarani, 2018) and (Shi et al, 2019) use non-standard data preparation is wrong.\\n\\nIf noise-free 2-speaker mixtures are considered, \\\"SNRs between 0dB and 5dB\\\" is strictly equivalent to \\\"SNRs between -5dB and 5dB\\\". \\nIf speaker S1 has SNR of +XdB, S2 will have -XdB, that's in the definition. \\n\\nIf we go look at the mix_2_spk_tr.txt file that can be downloaded from the official data prepatation recipe (https://www.merl.com/demos/deep-clustering/create-speaker-mixtures.zip), the mixing weight associated to s1 is always positive. Effectively, s1 is always mixed between 0dB and 5dB, hence s2 is always mixed between 0dB and -5dB\\n\\nMany implementation were able to reproduce the results of ConvTasnet, with official data preparation recipes. And the authors of the original paper also used the official mixing scripts.\\n\\nAccusing other papers to cheat on their data preparation to increase their performances is pretty strong. Please be sure that you are right about it next time.\\nPlease correct the paper accordingly.\\n\\nBest,\\nManuel Pariente\"}",
"{\"decision\": \"Reject\", \"comment\": \"The paper discusses audio source separation with complex NNs. The approach is good and may increase an area of research. But the experimental section is very weak and needs to be improved to merit publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3 (part 3)\", \"comment\": \"Reviewer 1: \\\"9- What is the importance of the first paragraph in 2.1? I was not aware of the holographic reduced representations, but I don't understand it more now, and I don't see why explaining that for 15 lines.\\\"\\n\\nHRR is intimately related to our extraction mechanism. As we stated in section 2.1, HRRs are a form of key-value storage. Information is stored in the memory by entangling the key and the value together with circular convolution and then summing this bound representation into a storage vector. To retrieve a value by its associated key, it is enough to perform the same circular convolution with the inverse of that key. The result is, in expectation, the value plus some Gaussian noise (so the retrieved representation quite noisy). A different version of this was implemented in the associative LSTM paper, Danihelka et al. (2016), in order to augment the memory of the RNN. According to the convolution theorem, performing element-wise multiplication in the frequency domain is dual to performing the circular convolution in the time domain. Element-wise multiplication between the inverse of the key and the storage vector would allow to retrieve the representation in question only when both representations are encoder or converted to the frequency domain. Danihelka et al. (2016) however, have not relied on FFTs in order to convert the temporal signals to the frequency domain. They have assumed that element-wise multiplication is by itself is enough in order to retrieve the desired representation. The authors have also relied on summing random permutations of the bound representations by creating randomly permuted copies of the storage vector in order to decorrelated the noises during the retrieval. This is akin to our retrieval mechanism in the frequency domain as it is based on element-wise multiplication in the frequency domain and on decorrelating noises so that their mean is 0.\"}",
"{\"title\": \"Response to Reviewer 1 (part 2)\", \"comment\": \"Reviewer 1: \\\"6- have you tested with a higher lambda_imag (as the larger is now the best)?\\\":\\n\\nWe have tested higher and lower lambdas and they have yielded instabilities during training. We agree that a hyperparameter search between these lower and bigger valued might of course yield better results.\", \"reviewer_1\": \"\\\"8- In the results, table 1, in the last 4 lines: it looks from 1st and 2nd line that the new loss CSimLoss is not very different from the L2 (9.88 compared to 9.87). The best result, in the 4th line, cannot be compared to the 3rd line as both the loss and the number of transforms are different. I then found those values in the appendices, but it would be best to show fewer parameters varying in the main paper, but show some results that can be easily compared.\\\"\\n\\nWe agree with the reviewer that the parameters are varying but these varying parameters are in purpose contained in Table 1 in order to show a very important conclusion. Our conclusion is that our extraction mechanism is particularly well suited for being paired with the CSimLoss objective. It can be observed from the Table that when the proposed extraction mechanism is not used (No multiple transformations and no dropout) the CSimLoss performs comparably to the L2freq loss (around 10.90db for both). However, when the extraction mechanism is introduced (FiLM with multiple transformations, signal averaging and dropout), the CSimLoss significantly outperforms the L2freq loss and yields the best results. L2Freq is unable to yield better performance when the extraction mechanism is introduced (10.93 dB; unchanged) whereas CSimLoss does (11.34 dB).\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the useful feedback and appreciate the encouraging comments. We have considered the comments and tried to address them below.\", \"reviewer_1\": \"\\\"5- In the CSimLoss, why (i) is the real part negative and the imaginary part positive; (ii) is the imaginary part squared?\\\":\\n\\nWe have shown in section A.1 in the appendix and in section 4 that:\\na) When the real part of the normalized inner product is maximized, the match in amplitude between the estimate and the target is maximized.\\nb) When the imaginary part of the normalized inner product is 0, the match in phase between the estimate and the target is maximized.\\n\\nDeep learning frameworks are built around minimization of some objective function. We formulate the CSimLoss so that its minimization maximizes the match between estimate and target. To do this, we minimize the negation of its real part and minimize the square of its imaginary part. The minimum of the real part\\u2019s negation is achieved at -1, and the minimum of the imaginary part\\u2019s square is achieved at $0^2 = 0$, which corresponds to our needs.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the useful feedback and appreciate the encouraging comments.\\n\\nTo address quickly existing state of the art methods, we have added a Table 4 to our paper, summarizing models in the literature and including ConvTasNet.\\n\\nOne important clarification that Table 4 provides is that the various methods use different data preparations and parameters. In particular, ConvTasNet stands out for its use of a non-standard preprocessing of the data, unlike that of the other methods and likely to improve its headline SDR score. Our ConvTasNet reproduction with the standard data preprocessing protocol yielded a significantly lower score.\\n\\nConvTasNet also stands out for its use of smaller windows and hop lengths, which favour it relative to other papers (Yu et al, 2017). We have used window sizes and hop lengths more typical of the other papers, facilitating comparisons.\"}",
"{\"title\": \"Response to Reviewer 3 (part 3)\", \"comment\": \"Reviewer 3: \\\"4-b. This work shall also make a comparison with some of the existing methods on speech separation as described in Section 2.2.\\\":\\nPlease take a look at our revised manuscript where we have added a Table 4 in section A.6 in the appendix listing some of the existing methods on speech separation and their corresponding performance. We would like to draw your attention to the fact that all of the methods listed in Table 4 are fundamentally different from our approach in important ways:\\na-They do not operate in the frequency domain as they discard the phase information and operate in the log scale of the magnitudes of the fourier transform or in the time domain as it is the case for Tasnet and ConvTasnet.\\nb- All of the models listed in Table 4 take into account short and long term temporal dependencies by adding BLSTMs to their models or by using TCN (Temporal Convolutional Networks (Bai et al, 2018)) such is the case of ConvTasnet. In our case, we use a U-Net which does not take into account long-term temporal dependencies. We are not interested in performance gain induced by external models such as temporal ones, but rather by the improvement caused by our extraction framework (extraction mechanism + loss) that could be incorporated into any type of model.\", \"reviewer_3\": \"\\\"7. What if the mechanism of mask generation is also applied to Real U-Net? How much improvement can this bring?\\\":\\n\\nAs can be observed in Table 1, the complex U-Net outperforms by a very significant margin the real-valued U-Net. Neither of these baselines incorporate our extraction mechanism. The proposed extraction mechanism operates on complex-valued representation. The output of the U-Net should therefore be complex so that the extraction mechanism can process it. (Trabelsi et al, 2017) have shown that hybrid models (ones combining both real and complex representation) do not perform as well as models that either operate fully on complex representations or fully on real-valued representations. Given the performance of complex-valued U-Nets compared to the real-valued ones and given that hybrid models do not perform as well as fully-real/-complex models, it is unlikely that a hybrid model combining a real U-Net and our extraction mechanism would achieve performance comparable to our fully-complex pipeline.\"}",
"{\"title\": \"Response to Reviewer 3 (part 2)\", \"comment\": \"This is explained in detail at the end of section A.5 in the appendix where we state that we can also observe from Table 3 that, for all wider models and when multiple transformations and dropout rates were introduced, the CSimLoss consistently outperformed the L2Freq loss (It could also be observed in Figure 4 that the best CSimLoss models outperforms the best L2Freq models). More precisely, when wider models (with wider feature maps) are introduced in Table 3, when using the L2Freq loss, the SDR score did not cross the threshold of 10.91 dB while it reached 10.93 dB for narrower models (the ones reported in Table 2 that have less feature maps and so less parameters). When it comes to the CSimLoss, while it performed almost as well as L2Freq for narrower models, as it reached 10.91 dB in Table 2, jumps in SDR were recorded for the CSimLoss in Table 3 as it reached 11.05 dB when no dropout was introduced and then a jump to 11.35 db in terms of SDR was recorded when a dropout rate of 0.1 was used. This demonstrates that:\\na- The CSimLoss is implementing inherently a regularization mechanism allowing to avoid overfitting when wider models are used;\\nb- The CSimLoss is operating better when our extraction mechanism (multiple transformations and dropout rate) is introduced. And there, the best SDR of 11.35 db is reported. This is why, as we stated in section 7, the CSimLoss objective is therefore particularly well suited for being paired with the CSimLoss objective.\", \"reviewer_3\": \"\\\"4-a. The experimental comparison in Table 1 is limited, although some improvements have been demonstrated.\\\"\\n\\nWe have already answered that in 2 (see our answer in 2- where we mentioned that table 1 contains a summary of the most important results and that The complete results and the extended empirical analysis can be found in the appendix in section A.5 where the list of all the experiments is contained in big tables (Tables 2 and 3)).\"}",
"{\"title\": \"Response to Reviewer 3 (Part 1)\", \"comment\": \"We thank the reviewer for the useful feedback. We will attempt here to highlight those aspects of the paper we believe are novel, and defend our experimental study\\u2019s setup.\", \"reviewer_3\": \"\\\"2. The proposed CSimLoss is interesting. However, its effectiveness seems to be limited as demonstrated in Table 1. It can be found that the CSimLoss in some cases is only comparable (or even inferior) to the L2freq loss; You have has also mentioned in your 4th remark : 4. The experimental comparison in Table 1 is limited, although some improvements have been demonstrated.\\\":\\n\\nTable 1 provides a synopsis of the most important results obtained for the task of speech separation. A more exhaustive set of experiments are provided in the appendix. As mentioned in the beginning of section 7, the complete results and the extended empirical analysis can be found in the appendix in section A.5 where the list of all the experiments is contained in big tables (Tables 2 and 3).\\n\\nAlso, as mentioned in the analysis in section 7, and as it can be observed from Tables 2 and 3, our extraction mechanism is particularly well suited for being paired with the CSimLoss objective. We draw this conclusion because when the proposed extraction mechanism is not used the CSimLoss performs comparably to the L2freq loss. However, when the extraction mechanism is introduced (FiLm with multiple transformation, signal averaging and dropout), the CSimLoss significantly outperforms the L2freq loss and yields the best results.\", \"we_provide_the_following_explanation_to_help_understand_the_method\": \"Equation (1) gives the expression of the Fourier transform of a noisy signal y. By leveraging the linearity of the Fourier Transform and Convolution Theorem we get equation (1). Equation (2) is just the expression of the clean signal in terms of the Fourier transforms of the noisy signal, the impulse response and the noise component. In equation (2), [1 / F(r)] and -[F(epsilon) / F(r)] are respectively scaling and shifting representations of F(y). Now, Film (Perez et al., 2017) is a mechanism that allows to infer a scaling (\\u0393) and a shifting representation (B) given an input representation. This is why we use FiLM as it allows to infer \\u0393 = [1 / F(r)] and B = -[F(epsilon) / F(r)] given the noisy mix and the output of the U-Net. Equations (1) and (2) allow us to express F(s) in terms of \\u0393 and B. Equation (3) is just the application of equation (2) in the context of speech separation where multiple speakers are involved in the mix and we have to retrieve the clean speech of each of the distinct speakers. We assume then, that, for each speaker, there exists an impulse response r_{i} and an additive noise epsilon_{i} such that they allow to reconstruct the original constant mix. This leads to the expression of Equation (3).\\n\\nNow, indeed the text between Equation (3) and Equation (4) is related to a very well-known result in signal processing which explains how signal averaging increases the signal to noise ratio of a given signal by a factor of N. This text provides to readers unfamiliar with signal averaging and signal processing a clear motivation for using signal averaging in the context of signal retrieval in the frequency domain. Equation (4) then gives the expression of F(s) when such an operation is implemented.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose complex valued neural networks to perform audio source separation in the Fourier domain. The adapt a well known U-Net architecture to the task by introducing a complex-valued FiLM layer and a new complex similarity loss that explicitly takes magnitude and phase into account. They motivate the use of complex values well and demonstrate performance and parameter efficiency improvements over real-valued baselines. Importantly, they do not need to perform spetrogram inversion because their network works natively in the complex domain. Despite the quantitative improvements over spectral models, they still slightly underperform the ConvTasNet baseline that operates directly in the waveform domain (which was slightly misleading to not include in the table). The authors perform an extensive hyperparameter search to tune the model and provide sufficient detail to reproduce their experiments. While the results did not improve upon the best baseline, they do provide further evidence to the value of using complex-valued neural networks to handle complex-valued data (where phase and synchronicity matter), which I believe will be of value to the ICLR community, and thus I lean slightly in favor of acceptance.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work researches the deep complex-valued neural networks. Specifically, it proposes a new signal extraction mechanism that operates in frequency domain and applies to address the speech separation issue. Also, a function is proposed to explicitly consider both the magnitude and phase information of a signal. Related work on learning representation in frequency domain and speech separation is well introduced. Theoretical analysis is conducted to show the motivation and connection to signal processing. The architecture of the deep neural networks is presented in details, with the elaboration of the complex mask generation. Experimental study is conducted on a benchmark dataset to compare the proposed complex networks with those using real-part values only to demonstrate the improvement.\", \"the_rating_is_3\": \"Weak Reject considering that the novelty is limited and the experimental study is weak.\\n\\n1. The significance of the theoretical analysis in Eq.(1) to Eq.(4) needs to be better explained. Currently, they seem to be some straightforward results in the field of signal processing;\\n2. The proposed CSimLoss is interesting. However, its effectiveness seems to be limited as demonstrated in Table 1. It can be found that the CSimLoss in some cases is only comparable (or even inferior) to the L2freq loss;\\n3. The mask generation proposed in Section 6 conceptually is largely an attention mechanism that has been widely applied in deep networks; \\n4. The experimental comparison in Table 1 is limited, although some improvements have been demonstrated. This work shall also make a comparison with some of the existing methods on speech separation as described in Section 2.2.\\n5. It was mentioned in the last paragraph of Section 7 that (Shi et al. 2019) uses different data preparation than this paper. Can this paper use the same data preparation as (Shi et al. 2019) and perform some comparisons?\\n6. Why did (Shi et al. 2019) achieve better SDR (12.1 vs. 11.3) than the proposed method using standard setup?\\n7. What if the mechanism of mask generation is also applied to Real U-Net? How much improvement can this bring?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new method for source separation, by using deep learning UNets, complex-valued representations and the Fourier domain. Concretely, their contribution is : i) a complex-valued convolutional version of the Feature-Wise Linear Modulation, able to optimise the parameters needed to create multiple separated candidates for each of signal sources that are then combined using signal averaging; ii) the design of a loss that takes into account magnitude and phase while being scale and time invariant. It was then tested and compared with real-valued versions, and also some state-of-the-art methods.\\n\\nOverall, I think this paper is of good quality and proposes an interesting method for this crucial task of source separation. However, I found the paper too dense and difficult to read (even if well written), and it looks like a re-submission from a journal paper of more than 8 pages. I would suggest the authors to shrink the paper so it *really* fits into the 8-pages (without important figures or important implementation details in the appendices), maybe at the cost of leaving some parts (such as old related works) out of the paper. The experiments are important here, and it is too bad that the comparison with state-of-the-art is just in the last paragraph, while the results do not seem to show any improvements compared to the other methods. The computational time might be very important here, as the claim is that FFT reduces time computation, but I did not had time to go through all the appendices.\", \"positive_aspects\": [\"The work is well documented and motivated, and I found that the reflexion leading to the method is of good quality.\", \"Concretely, I found interesting the use of the FiLM, originally designed for another application, for minimizing the SNR of the signal sources. The motivation/proof is quite clear too.\", \"Equally, the motivation for the design of the new loss is clear and interesting.\", \"I also found important the experiments, that shows in the same table the difference between the method without the complex-valued part and with different parameter values.\"], \"questions_and_remarks\": [\"I have to recall that I am not an expert on source separation and complex-valued deep learning. Yet, I have had difficulties in understanding the structure of the method, even if the different parts were clearly explained. The figure 1 is very useful, but I found it not clear enough and too small. I went to find some informations in the appendices, but there are too much crucial information there and I did not have time to go through all of it.\", \"The use of the U-net architecture is not explained (just some citations are given). What is supposed to be the output of it?\", \"When you say 'to be more rigorous, we can assume in the cse of speech separation that, for each speaker, there exists an impulse response such that when it is convolved with the clean speech of the speaker, it allows to reconstruct the mix' : why can we be sure that it is always possible, and why is it more rigourous?\", \"Why are the additive noises epsilon_i supposed to have the same E(|epsilon_i|^2|) ? 
Even if they are uncorrelated, what is the hypothesis behind that?\", \"In the CSimLoss, why (i) is the real part negative and the imaginary part positive; (ii) is the imaginary part squared?\", \"have you tested with a higher lambda_imag (as the larger is now the best)?\", \"It looks from the end of the paper that the method is still not achieving better results than the state-of-the-art. I agree with the authors as it might not be the scope of the paper, but then what is it? If it's time computation, it is not shown in the paper. If it is just a methodology, what would be required in the future to beat the best method?\", \"In the results, table 1, in the last 4 lines: it looks from 1st and 2nd line that the new loss CSimLoss is not very different from the L2 (9.88 compared to 9.87). The best result, in the 4th line, cannot be compared to the 3rd line as both the loss and the number of transforms are different. I then found those values in the appendices, but it would be best to show fewer parameters varying in the main paper, but show some results that can be easily compared.\", \"What is the importance of the first paragraph in 2.1? I was not aware of the holographic reduced representations, but I don't understand it more now, and I don't see why explaining that for 15 lines.\"], \"small_remarks\": [\"'deep complex valued models have *just* started to gain momentum'... with citations beginning in 2014, I would not say 'just'.\", \"'in the frequncy domain is then, ...' --> frequency + no coma\", \"Figure 2 is in the appendix, while in the text it is not said so. I was lost. This figure should not be in the appendix as the appendix should not have key elements, but just details that are not important for the understanding of the paper.\"]}"
]
} |
BJgr4kSFDS | Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings | [
"Hongyu Ren*",
"Weihua Hu*",
"Jure Leskovec"
] | Answering complex logical queries on large-scale incomplete knowledge graphs (KGs) is a fundamental yet challenging task. Recently, a promising approach to this problem has been to embed KG entities as well as the query into a vector space such that entities that answer the query are embedded close to the query. However, prior work models queries as single points in the vector space, which is problematic because a complex query represents a potentially large set of its answer entities, but it is unclear how such a set can be represented as a single point. Furthermore, prior work can only handle queries that use conjunctions ($\wedge$) and existential quantifiers ($\exists$). Handling queries with logical disjunctions ($\vee$) remains an open problem. Here we propose query2box, an embedding-based framework for reasoning over arbitrary queries with $\wedge$, $\vee$, and $\exists$ operators in massive and incomplete KGs. Our main insight is that queries can be embedded as boxes (i.e., hyper-rectangles), where a set of points inside the box corresponds to a set of answer entities of the query. We show that conjunctions can be naturally represented as intersections of boxes and also prove a negative result that handling disjunctions would require embedding with dimension proportional to the number of KG entities. However, we show that by transforming queries into a Disjunctive Normal Form, query2box is capable of handling arbitrary logical queries with $\wedge$, $\vee$, $\exists$ in a scalable manner. We demonstrate the effectiveness of query2box on two large KGs and show that query2box achieves up to 25% relative improvement over the state of the art.
| [
"knowledge graph embeddings",
"logical reasoning",
"query answering"
] | Accept (Poster) | https://openreview.net/pdf?id=BJgr4kSFDS | https://openreview.net/forum?id=BJgr4kSFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DPwg04vDFgi",
"wdj9g4Nvs",
"SyxzjxvjsH",
"S1xjDxPssS",
"B1gHTJvjsB",
"ryxU5kPssS",
"rJxqx3Fy9r",
"Hklbu1aRKH",
"S1geJGyitH"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1588495194354,
1576798728869,
1573773466081,
1573773410956,
1573773245293,
1573773198412,
1571949553541,
1571897193227,
1571643864176
],
"note_signatures": [
[
"~Ishika_Singh1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1652/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1652/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1652/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1652/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1652/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1652/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1652/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"More Questions\", \"comment\": \"1. I didn't a find a mention on how do you convert natural language queries to your usable form, i.e., DNF. Do you use semantic parsing, and other resolution methods to convert using an algorithm, or is it done manually?\\n\\n2. How do you find your final answer after finding a set of answers? Do you perform some query-answer matching mechanism? Could you elaborate more on range search?\\n\\n3. The statistics are pretty amazing! Could you show more actual examples on the answered queries under the 9 categories that you define?\\n\\n4. Could this model answer complex questions (how many, what all, as in counting all possible answers under a categorie and giving all possible outputs) as defined in Leveraging Domain Context for Question Answering Over Knowledge Graph (Tong et al., 2019) (https://link.springer.com/article/10.1007/s41019-019-00109-w)? It intuitively seems to me that it does, but I wanted see some actual examples on this?\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a new method to answering queries using incomplete knowledge bases. The approach relies on learning embeddings of the vertices of the knowledge graph. The reviewers unanimously found that the method was well motivated and found the method convincingly outperforms previous work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Official Blind Review #2 (1/2)\", \"comment\": \"We thank the reviewer for thorough and constructive comments. Based on reviewer\\u2019s valuable feedback we conducted a number of additional experiments and provided additional dataset statistics, which further validate the efficacy of our query2box framework. We believe these together further strengthen our work.\\n\\nBelow please find responses to individual comments/questions:\", \"re\": \"How can intersection operator model zero offset for non-overlapping boxes?\\n\\nThis is a good question. Our current model learns the box offset in a purely data-driven manner; thus, we do not explicitly ensure that the intersection of non-overlapping boxes has zero offset but the model can definitely learn it. In our preliminary experiments, we found our rich learnable parametrization of the box intersection consistently gives better results than strict intersection of raw boxes. This is possibly because our richer learnable parameterization of the intersection operator is more expressive and also robust to noise in the knowledge graphs. Furthermore, in principle, we can explicitly train our model on intersection queries whose answers are empty. Our deepsets (with sigmoid output activation) will then learn to output (almost) zero offset on those empty queries. \\n\\nIn our experiments, at training time, we ensure that all intersection queries have answers, so our current model is not explicitly trained to handle empty sets. Nevertheless, we found box sizes (measured by the L1 norm of box offset) of intersection queries still offer strong insight on the number of entities they represent. Specifically, on FB15k, we randomly generate 10k queries of two types: (1) intersection queries with more than 5 answers and (2) intersection queries with empty answer (note that we have never trained on the type (2) of intersection queries). We found that the average box size is 0.36 for type (1) queries, while 0.7 for type (2) queries. Furthermore, type (2) queries are much more likely to have smaller boxes than type (1) queries (91% ROC-AUC score). This indicates that the empty intersection queries have much smaller box sizes than queries with non-zero answers.\\n\\nWe will further clarify these points in the paper.\"}",
"{\"title\": \"Re: Official Blind Review #2 (2/2)\", \"comment\": \"RE: Please include the deepset network here instead of in Sec 4.3.\\n\\nThanks for the suggestion. We included our deepsets architecture in Section 3.2 of our paper as suggested.\", \"re\": \"What type of relations have higher offset?\\n\\nThanks for the insightful comments. Such an analysis is indeed helpful. Below, we demonstrate on FB15k that (1) box offset significantly varies across different relations, (2) one-to-many relations tend to have larger offset embeddings. We will include our insight in the final version of our paper.\\n\\nFor each relation, we consider (1) the L1 norm of box offset, (2) the average number of answers of 1p queries using the relation. \\n\\nTop 10 relations with the smallest box size\", \"simple_query_1p\": \"8.5,\", \"complex_queries\": \"2p: 131.4 / 3p: 215.3 / 2i: 69.0 / 3i: 48.9 / ip: 593.8 / pi: 257.7 / 2u: 35.6 / up: 127.7\", \"nell\": \"\", \"complex_queries_2p\": \"56.6 / 3p: 65.3 / 2i: 30.3 / 3i: 15.9 / ip: 310.0 / pi: 144.9 / 2u: 14.4 / up: 62.5\\n\\nWe observe 2i / 3i queries have more answers than 1p queries despite the fact that intersection is taken. This is because during the query generation process, we ensured non-empty intersection in 2i / 3i queries, which biases the used relations to be one-to-many relations.\", \"relation\": \"/user/tsegaran/random/subject_taxonomy/entry./user/tsegaran/random/taxonomy_entry/subject\\n#Avg_answers: 495.0, Box size: 104.2\\n\\nWe observe a clear trend that the size of the box has a strong correlation with the average number of answer entities of 1p queries using the relation.\"}",
"{\"title\": \"Re: Official Blind Review #3\", \"comment\": \"We thank the reviewer for the positive and detailed feedback. We are glad that the reviewer likes our approach. Below, we address the reviewer\\u2019s concern on experimental comparisons.\\n\\nThe GQE that we primarily compared with is published in NeurIPS 2018 and is the most recent and state-of-the-art approach in handling complex multi-hop logical queries. GQE also generalizes Guu et al. 2015 (which can only handle path queries) and includes it as a special case. Das et al. 2017 built on top of Guu et al. 2015 and used LSTM to model path queries, which could potentially be integrated into our query2box framework. We leave it for future work as it is non-trivial to extend LSTM to operate on tree-structured DAG to handle conjunctive/EPFO queries.\"}",
"{\"title\": \"Re: Official Blind Review #1\", \"comment\": \"Thank you for your positive and detailed review and constructive feedback . We are glad that the reviewer like our approach.\\nThank you for catching the typo, which we fixed.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a method to answer complex logical queries in large incomplete knowledge bases (KB). Specifically it considers the class of existential first-order logical queries (EPFO) which includes the logical and, or and existential operator. The key contribution of this paper is to represent sets of entities via regions, more specifically as boxes or hyper-rectangles. This is well motivated because such logical queries often involves working over sets of entities at once and involves applying set based operators. Previous work which represented queries as a point in vector space are not well suited for these queries.\\nInstead, this paper models sets of entities as boxes or axis-aligned hyper-rectangles which is parameterized by two vectors \\\\in \\\\mathbb{R}^{n] denoting the center and the offset respectively. Boxes can also be understood to represent all the points in it (measure by element-wise comparison with the min and max coordinate). Handling the queries require projection and intersection operation. They are defined by simple addition operation (which guarantees the boxes grow in size, due to positive offset values) and a shrinkage function to denote intersection that guarantees the output area is smaller and is inside the set of boxes.\\nFor handling disjunctive queries, they make the clever trick of converting queries to DNF form so that the union operation is at the end of the computation graph which effectively reduces to taking the union of sets at the end. \\nExperiments are run on standard datasets (FB15k and FB15k-237) \\u2014 however, they generate their own query patterns. Specifically, they train on 5 patterns involving projection and intersection operation and test on 4 unseen ones. For baselines, they only compare to previous work of Hamilton et al., 2018 that maps queries to vectors.\", \"strengths\": \"1. Most knowledge base comprehension benchmark tests on link prediction problems which are queries of kind (e1, r, ?). However, semantic parsers of natural language produce queries that are much richer in shape. This paper (and Hamiltion et al., 2018 before) considers answering complex logical queries (although the shape of query is pre-defined and not arbitrarily complex). \\n2. Modeling logical queries into regions in vector space is an interesting idea, and it would be nice to see followup work in this direction.\\n3. The paper is nicely written and ablation experiments were helpful.\\n4. Compared to the baseline they used, the paper does a better job of modeling complex logical queries.\\n\\nWeaknesses / Questions:\\n1. I understand that the papers have considered various pre-determined shapes of queries, but the simple 1p query is similar to the usual benchmarks, I don\\u2019t understand why results for 1p were not compared with existing benchmark results. Without that comparison, I don\\u2019t have a good sense if region-based method actually are effective for \\u201c1p\\u201d kind of queries. \\n2. Even though the model was tested on two variants of freebase datasets, FB15k is well-known to have a lot of issues (Toutanova and Chen, 2015). 
Why weren't other standard datasets such as Nell-995, WN18RR and so many other biomedical KBs not considered for experiments, especially because the query generation process is very simple and it's easy to run experiments\\n3. It was not clear to me how the intersection operator would give zero offset for a set of non-overlapping boxes as input. Is the zero value coming from the deep-set model?, If so, how do you ensure that? Minor: Please include the deep-set network here instead of in Sec 4.3. I was confused about what the deepest model is until this point.\\n4. Regarding the results, is there any particular reason the MRR metric was pushed to the appendix and only results of Hits@3 was shown in the main section of the paper. I believe MRR is a better metric for your case because you are modeling sets of entities as answers and hence a ranking metric that ranks all entities is better, Hits@k is 1 if any of the answers in the set is present in top-k and hence quite a loose metric.\\n5. Why is the result of 3i is better than 2i. I am not sure why the model would do a better job in handling 3 intersections better than it does 2 intersections. \\n6. How many answers are there on an average for each question. This will better help me understand how hard the dataset actually is. \\n7. It is nice to see, that the model prefers boxes of different width. Do you have a sense of which type of entities (or relations) have higher offsets. This analysis would be nice to have for readers in the appendix section\"}",
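For readers unfamiliar with the box parametrization summarized in the review above (a center vector plus a non-negative offset vector), the sketch below gives one plausible form of the entity-to-box distance. It is our paraphrase of the paper's description, not the authors' code; the value of `alpha`, which downweights the distance inside the box, is an assumption:

```python
import numpy as np

def box_distance(v, center, offset, alpha=0.2):
    """Distance from an entity embedding v to a box (center, offset >= 0).
    Points inside the box are downweighted by alpha, so everything inside
    counts as 'close', while points outside are penalized linearly (L1)."""
    q_min, q_max = center - offset, center + offset
    outside = np.maximum(v - q_max, 0) + np.maximum(q_min - v, 0)
    inside = center - np.minimum(q_max, np.maximum(q_min, v))
    return np.abs(outside).sum() + alpha * np.abs(inside).sum()

# A point inside the box incurs only the downweighted inside term.
c, o = np.zeros(4), np.ones(4)
print(box_distance(np.full(4, 0.5), c, o))  # 0.2 * 2.0 = 0.4
print(box_distance(np.full(4, 1.5), c, o))  # 2.0 outside + 0.2 * 4.0 inside = 2.8
```

For a disjunctive query rewritten in DNF, the natural aggregation is the minimum of this distance over the boxes of the individual conjuncts.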
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studied the problem of answering complex logical queries on KGs, by proposing an embedding approach that encodes the queries into hyper-rectangles. The authors show that the proposed QUERY2BOX achieves the performance improvement in answering EPFO queries, as well as handling complex queries that is not observed in the training data. Experimental results show the efficacy of the proposed model. In general, I like the paper due to the nice presentation and promising approaches. However, I am not familiar with the context of KGs. I could not find anything wrong with this paper, but also do not have many intelligent questions to ask. My only concern is the comparison experiments. The authors only presented the experiments using one baseline model GQE over two benchmark datasets. The authors may want to conduct more comparison experiments with the recent advances (e.g., Guu et al., 2015; Das et al., 2017) mentioned in the paper.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper introduces an approach to answering queries on knowledge graphs, called Query2Box. The idea is to work with the embeddings of the vertices of the knowledge graph as if they were kind of sets. In this way, from a set, called box, of entities embeddings it is possible to project them to find other boxes using the relations specified by the query (these boxes contain the embeddings of the entities linked with those of the previous box by the relation specified in the query), or to intersect them to find the common entities.\\n\\nMoreover, following this similarity, the approach is extended to solve queries containing disjunctions as well. The idea is to transform a query into its disjunctive normal form, solve each conjunction on its own (allowing the process to be parallelized) and finally answer with the set of entities in the boxes given by each conjunction.\\n\\nA distance measure is used to check the belonging of an entity to a box.\\n\\nThis extension to disjunction could in principle be applied to other existing methods. In fact, the presented system is compared with GQE appropriately extended to handle disjunctions. \\n\\nThe experiments show that Query2Box can achieve better results than GQE. Moreover, an ablation study was conducted.\\n\\nWhile I am not an expert on the subject of reasoning on knowledge graph using embeddings, the proposal seems to me to achieve interesting results, and thus to be worthy of attention in the community. Comparing Query2Box with a state-of-art system like GQE positions the new approach as a good alternative to the state of the art.\\n\\nThe paper is well written, there are no problems in the use of English and in the organization of the paper.\\n\\nThe only error I have found is on page 5, equation 4, where v'_i is used while in the text below v_i is used, so I think the two notations should be aligned.\\n\\nAs a final score, I would not assign a high score because I am not experienced enough in the area to be sure about the validity of the approach, which, however, seems to be good and mature.\"}"
]
} |
Byx4NkrtDS | Implementing Inductive bias for different navigation tasks through diverse RNN attractors | [
"Tie XU",
"Omri Barak"
] | Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks. Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment. To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q learning stage that controls the network's output. We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state. These structures induce bias onto the Q-Learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities. Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes -- which can be shaped by pre-training and analyzed using dynamical systems methods. Furthermore, we demonstrate that non-metric representations are useful for navigation tasks. | [
"navigation",
"Recurrent Neural Networks",
"dynamics",
"inductive bias",
"pre-training",
"reinforcement learning"
] | Accept (Poster) | https://openreview.net/pdf?id=Byx4NkrtDS | https://openreview.net/forum?id=Byx4NkrtDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"y4BBlXf4L",
"HyetoPNhsr",
"BkxSZ8x3jH",
"rklL5zxssr",
"rkl90g-mor",
"Bkxi8gb7ir",
"SkxCR1-mjS",
"rkl_IG0e9S",
"HJg9M2dCFH",
"r1ldu7WAYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728841,
1573828512646,
1573811709240,
1573745294403,
1573224657875,
1573224530630,
1573224406453,
1572033104202,
1571879953786,
1571849071736
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1651/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1651/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1651/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1651/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1651/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1651/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1651/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1651/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1651/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Navigation is learned in a two-stage process, where the (recurrent) network is first pre-trained in a task-agnostic stage and then fine-tuned using Q-learning. The analysis of the learned network confirms that what has been learned in the task-agnostic pre-training stage takes the form of attractors.\\n\\nThe reviewers generally liked this work, but complained about lack of comparison studies / baselines. The authors then carried out such studies and did a major update of the paper.\\n\\nGiven that the extensive update of the paper seems to have addressed the reviewers' complaints, I think this paper can be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revised version uploaded: New results, improved clarity.\", \"comment\": \"We thank all reviewers for the discussion and believe that our paper has improved through this process.\\nWe now uploaded a revised version of the paper, that we hope answers all the concerns raised by the reviewers. In particular both points #1 and #2 address the novelty concerns raised by reviewer #3.\", \"the_main_changes_are\": \"1) Following reviewer 2's suggestion, we added a modular system composed of PosNet, MemNet and an action selection module. This modular system was able to perform well on both topological and metric tasks, surpassing the performance of any individual network we trained. The success of modular system roots in understanding of tradeoff postulated in most part of paper. \\n\\n2) We compared our two-stage learning to a direct end-to-end protocol, in which we modify the entire connectivity to perform the various tasks. The main conclusions from this comparison are: (A) Two-stage learning is faster than end-to-end. (B) Good performance on topological tasks could only be achieved with two-stage learning.(C) Performance on metric tasks(except implicit context) is similar. (D) Generalization (transfer) to new tasks is better for two-stage learning. This implies potential to use our methods in other multiple tasks learning(meta learning) problems. \\n\\n3) We repeated our dynamics analysis for LSTM units, revealing qualitatively similar behavior.\\n\\n4) We improved the clarity of text and figures:\\n4.1) Example trajectories are provided in figure 2 and in the appendix.\\n4.2) Figure 4 was remade, and the slow point concept illustrated with a cartoon.\\n4.3) Pre-training description was improved.\\n\\n5) We added in the discussion part: possible reason for trade-off between different dynamics according to request of both reviewer 1 and 2.\\n\\n6) Note that the new experiments (end-to-end, modular, LSTM) were only done for a small number of networks due to the time constraint, and we will gather more statistics for the final version.\\n\\nWe hope these changes strengthen the paper in the eyes of the reviewers. \\nWe will be happy to answer any further questions.\\n\\nSincerely,\\nThe authors.\"}",
"{\"title\": \"Speculation of reason for trade-off\", \"comment\": \"Thanks for asking. Following reviewer 2's suggestion, we added a modular system composed of PosNet, MemNet and an action selection module. This modular system was able to perform well on both topological and metric tasks, surpassing the performance of any individual network we trained.\\n\\nAs for possible reasons for the tradeoff - the hidden states could represent the current position, stimulus memory, and other \\\"irrelevant\\\" features. Accurate path integration implies an ability to marginalize over stimulus memory and other features to obtain a clean version of the position. In principle, accurate decoding of the position could be done even if the hidden state contains all the other information. However, perhaps a more \\\"natural\\\" way is to allow the dynamics to suppress these aspects of the representation, and thus facilitate readout of the desired position information.\\n\\nFurthermore, the attractor landscape picture provides a dynamical reason for the tradeoff. Position requires a continuous attractor, whereas stimulus memory requires discrete attractors. It is possible to have four separated plane attractors, but perhaps easier to converge to one or the other.\"}",
"{\"title\": \"Response to authors.\", \"comment\": \"Thanks for your comments.\\n\\nAs pointed out by Reviewer 2, it is interesting that no networks seem to be good at both tasks. It may be worthwhile to expand the discussion of this phenomenon. Is this just a consequence of limited representational capacity of the network, or is there a more specific computational reason why accurate path integration and accurate landmark memory are mutually exclusive?\"}",
"{\"title\": \"Thanks for the feedback. Additional details provided.\", \"comment\": \"We thank the reviewer for the interest in our work, and for the thoughtful comments.\\n\\nQ1) Please describe how the networks 1, 13, 20 used for Figure 4 were selected. Were they selected at random or selected according to some criteria?\\nA1) The figures were chosen based on above-average performance, without a specific criterion. Visual inspection of other networks showed qualitatively similar results.\\n\\nQ2) It may be interesting to study the effect of more modern RNNs, e.g. LSTMs or GRUs, on the dynamics.\\nA2) This will be done. See also answer to Q5 of reviewer #2, repeated here:\\n\\nR2.A5) We will train the gated architectures on the task and analyze them. Initially, we did use GRU for the task, and found that a vanilla RNN with the average effective timescale performs as well. We thus chose the \\u201csimpler\\u201d model. It is true that fixed point analysis can be equally applied to the more sophisticated architectures, and we will do this. Preliminary results show that pre-training LSTM networks for PosNet results in similar topology to the vanilla case.\"}",
"{\"title\": \"Thanks for the feedback. Additional details provided.\", \"comment\": \"We thank the reviewer for appreciating the novelty of our work, and for the thoughtful comments.\\n\\nQ1) I think the presentation the pretraining objective (eq 3) could be clearer. Is eq 3 what is minimized during pre-training? How are \\\\alpha, \\\\beta, and \\\\gamma chosen? \\\\alpha is used to separate the two types of networks (MemNet from PosNet), which is the critical difference studied in the paper, so it would helpful to go into more detail about what \\\\alpha controls and how it was chosen.\\n\\nA1) This part will be rewritten for clarity. Yes - Eq. 3 is minimized during pre-training. The ratio between alpha and beta controls the relative strength/tradeoff of memory and position. We found that, in general, with $\\\\alpha$ larger than 0 , the position representation dominates over the memory one. The exact values were chosen manually through trial and error.\\n\\n\\nQ2) For the first task, I am surprised that the agent is able to navigate the environment using only the eight neighboring locations. What is the size of the arena? What fraction of the states are simply surrounded on all sides by empty space? It would be informative to show some trajectories of agents solving the basic task.\\nA2) We were initially also surprised by this. The size is 15 (will be written clearly), which means that 161 (8 reward + 56 border) out of 225 are surrounded by empty space (71.5 %). The agent is able to perform the task because it collects information into its recurrent dynamics. The information is gathered continuously in the trajectory, and not only from wall encounters. For instance, in many trials the agent reaches the reward before touching the wall twice \\u2013 that is, without full metric information. Regarding the size of arena, we were able to train networks on the topological tasks for large arenas (50x50, 92% empty). We will include several example trajectories in the supplementary material, illustrating the various ways in which the agent solves the task.\\n\\nQ3) For Fig 3A and 3B, it would be nice to show the other network's performance (i.e. show the PosNet on the scaling task in 3A, and the MemNet on the bar task in 3B). \\nA4) We will add this.\\n\\nQ4) How come there are no networks that are able to solve both sets of tasks? That is, how come there are no networks in the upper right region of Fig 3C? Does this suggest that an agent needs to combine two separate RNNs to solve the whole suite of tasks?\\nA4) We think this is indeed the case (as we mention in the discussion). We tried to carefully select networks (post-hoc) and could get a partial improvement towards the upper right, but still there was a tradeoff.\\nWe will also train a modular network to perform the task, but are not sure whether the results will be ready in one week.\\n\\nQ5) What happens if you train recurrent networks with more sophisticated cell architectures (e.g. a GRU or an LSTM)? These are typically easier to train (and using automatic differentiation techniques are also amenable to fixed point analysis).\\nA5) We will train the gated architectures on the task and analyze them. Initially, we did use GRU for the task, and found that a vanilla RNN with the average effective timescale performs as well. We thus chose the \\u201csimpler\\u201d model. It is true that fixed point analysis can be equally applied to the more sophisticated architectures, and we will do this. 
Preliminary results show that pre-training LSTM networks for PosNet results in similar topology to the vanilla case.\\n\\nQ6) ## Minor comments\\n- In eq. (1), use `\\\\left(` and `\\\\right)` to make the first set of parentheses have an appropriate height.\\n- Typo on the first line after eq. (6) (matrices)\\n- Relevant reference on comparing networks using dynamics around approximate fixed points: \\nA6) Will be fixed.\"}",
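To make the alpha/beta tradeoff described in A1 concrete, here is a minimal sketch of a weighted multi-task pre-training objective of the general shape Eq. 3 appears to have. The decomposition into an MSE position term, a cross-entropy landmark-memory term, and an activity regularizer is an assumption made purely for illustration, since Eq. 3 itself is not reproduced in this thread.

```python
import torch.nn.functional as F

def pretraining_loss(pos_pred, pos_target, mem_logits, mem_target,
                     hidden, alpha, beta, gamma):
    # Hypothetical decomposition of the paper's Eq. 3 (not quoted here):
    # alpha weights the position (path-integration) term, beta the
    # landmark-memory term, and gamma a regularizer on recurrent activity.
    position_loss = F.mse_loss(pos_pred, pos_target)
    memory_loss = F.cross_entropy(mem_logits, mem_target)
    reg_loss = hidden.pow(2).mean()
    return alpha * position_loss + beta * memory_loss + gamma * reg_loss
```

Under this reading, setting alpha large relative to beta yields a PosNet-like representation and vice versa, matching the statement in A1 that the alpha/beta ratio controls the memory/position tradeoff.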
"{\"title\": \"Clarification of novelty\", \"comment\": \"We thank the reviewer for the thoughtful comments on the paper.\\nBelow, we explain what we view as the novelty in the paper and describe some new results that we hope strengthen the argument.\\n\\nQ1) While the idea of using a representation inspired by cognitive maps is interesting, the paper does not offer much technical novelty. (e.g no technical novelty in eq. 1 and eq. 2)\\n\\nA1) These equations are indeed not novel \\u2013 they are the dynamics of a simple RNN and appear in the \\u201cTask Definition\\u201d section where we define the framework. The novel aspects, which appear later, are: (1) Using a neuroscience inspired pre-training protocol. (2) Showing how pretraining biases subsequent RL performance on different tasks. (3) Uncovering the mechanistic reason for this bias. Namely, different arrangements of slow points in the phase space of the recurrent network. \\n\\nQ2) The experimental results are weak and only a simple domain is tested. \\nA2) We chose a simple domain to increase our analysis power. In particular, this choice enabled a more systematic study of many tasks and networks than would be feasible for a more complex setting. Our choice of a simple domain also allowed for an easier interpretation of the slow point analysis which revealed that the underlying mechanism behind the effect of pre-training is the dynamical objects. The correlation between slow points and task features is easier to see in our setting.\\nPretraining is also used in other domains such as NLP or image recognition (Devlin et al. 2018, You et al. 2015). We believe that obtaining deeper understanding in a simple setting could pave the way for implementations in other domains in the future.\\n\\n\\nAs for the weakness of the experimental results \\u2013 we are not sure we completely understand this comment and would welcome clarification. Figures 2 and 3 clearly show a tradeoff between the two pretraining protocols, and figure 4 shows different dynamical objects (the clarity of this figure will be improved). These are the two main results, and we believe the data support them. \\n\\nQ3) It is not clear how efficient the method would be compared to other approaches.\\nA3) We performed additional experiments that address this point. We trained the network using end-to-end learning, without pretraining. This was done both by extending our Q-learning to start from random networks, and by an adaptation of a different method (DDPG, Heess et al. ). These experiments show that: (1) Training in our pre-training protocol is faster. (2) Performance in our pre-training protocol is much better for the topological tasks (end-to-end was not able to learn it), and it is comparable in most metric tasks. \\nWe believe these new results strengthen the case for the importance of inductive bias in the form of discrete fixed points to learn the topological tasks.\\n\\nQ4) Visualizations can be improved. As an example, Fig. 4 is not quite self-representative.\\nA4) We thank the reviewer for this comment. The visualizations will be improved. In particular, we will better illustrate the main points of figure 4.\", \"references\": \"Heess, N., Hunt, J. J., Lillicrap, T. P., & Silver, D. (2015). Memory-based control with recurrent neural networks. arXiv preprint arXiv:1512.04455.\\n\\nDevlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805.\\n\\nYou, Q., Luo, J., Jin, H., & Yang, J. (2015, February). Robust image sentiment analysis using progressively trained and domain transferred deep networks. In Twenty-ninth AAAI conference on artificial intelligence.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a method for the navigation task. The proposed method is inspired by the concept of cognitive map in human and animal.\\nA recurrent neural network is incorporated and training is divided in two steps of (1) task-agnostic pre-training and (2) task-speci\\ufb01c Q learning.\\n\\nThe paper is well-written and clear.\\n\\nWhile the idea of using a representation inspired by cognitive maps is interesting, the paper does not offer much technical novelty. (e.g no technical novelty in eq. 1 and eq. 2)\\n\\nThe experimental results are weak and only a simple domain is tested. \\n\\nIt is not clear how efficient the method would be compared to other approaches.\\n\\nVisualizations can be improved. As an example, Fig. 4 is not quite self-representative.\\n\\nI see the paper has a large room for improvement and the current manuscript is not convincing for publication.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"## Overview\", \"This paper explores how pre-training a recurrent network on different navigational objectives confers different benefits when it comes to solving downstream tasks. First, networks are pretrained on an objective that either emphasizes position (path integration) or landmark memory (identity of the last wall encountered). This pretraining generates recurrent networks of two classes, called PosNets and MemNets (in addition to no pre-training, called RandNets). Surprisingly, the authors found that pre-training confers different benefits that manifests as differential performance of PosNets and MemNets across the suite. Some evidence is provided that this difference has to do with the requirements of the task. Moreover, the authors show how the different pretraining manifests as different dynamical structures (measured using fixed point analyses) present in the networks after pre-training. In particular, the PosNets contained a 2D plane attractor (used to readout position), whereas the MemNets contained clusters of fixed points (corresponding to the previously encountered landmark).\", \"Overall, I thought this was a very interesting paper--it is one of the first papers I have seen that demonstrates how different pre-training requirements both change network dynamics (as measured by fixed points), and how those differences can yield different benefits on downstream navigational tasks.\", \"## Major comments/concerns\", \"I think the presentation the pretraining objective (eq 3) could be clearer. Is eq 3 what is minimized during pre-training? How are \\\\alpha, \\\\beta, and \\\\gamma chosen? \\\\alpha is used to separate the two types of networks (MemNet from PosNet), which is the critical difference studied in the paper, so it would helpful to go into more detail about what \\\\alpha controls and how it was chosen.\", \"For the first task, I am surprised that the agent is able to navigate the environment using only the eight neighboring locations. What is the size of the arena? What fraction of the states are simply surrounded on all sides by empty space? It would be informative to show some trajectories of agents solving the basic task.\", \"For Fig 3A and 3B, it would be nice to show the other network's performance (i.e. show the PosNet on the scaling task in 3A, and the MemNet on the bar task in 3B).\", \"How come there are no networks that are able to solve both sets of tasks? That is, how come there are no networks in the upper right region of Fig 3C? Does this suggest that an agent needs to combine two separate RNNs to solve the whole suite of tasks?\", \"What happens if you train recurrent networks with more sophisticated cell architectures (e.g. a GRU or an LSTM)? These are typically easier to train (and using automatic differentiation techniques are also amenable to fixed point analysis).\", \"## Minor comments\", \"In eq. (1), use `\\\\left(` and `\\\\right)` to make the first set of parentheses have an appropriate height.\", \"Typo on the first line after eq. (6) (matrices)\", \"Relevant reference on comparing networks using dynamics around approximate fixed points: https://arxiv.org/abs/1907.08549.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the internal representations of recurrent neural networks trained on navigation tasks. By varying the weight of different terms in an objective used for supervised pre-training, RNNs are created that either use path integration or landmark memory for navigation. The paper shows that the pretraining method leads to differential performance when the readout layer of these networks networks is trained using Q-learning on different variants of a navigation task. The main result of the paper is obtained by finding the slow points of the dynamics of the trained RNNs. The paper finds that the RNNs pre-trained to use path integration contain 2D continuous attractors, allowing position memory. On the other hand, the RNNs pre-trained for landmark memory contain discrete attractors corresponding to the different landmarks.\\n\\nAn interesting implication of the findings for neuroscience is that the same underlying network architecture can learn different dynamics, explaining diverse types of navigation-related signals found in the mammalian brain (place cells, border cells etc.).\\n\\nI am not entirely sure about the novelty or impact of the presented results. However, the exposition and the results are clear and it is interesting how pre-training can shape the dynamics of a network. I therefore recommend acceptance.\", \"minor_comments\": [\"Please describe how the networks 1, 13, 20 used for Figure 4 were selected. Were they selected at random or selected according to some criteria?\", \"It may be interesting to study the effect of more modern RNNs, e.g. LSTMs or GRUs, on the dynamics.\"]}"
]
} |
BJe4V1HFPr | Disentangling Style and Content in Anime Illustrations | [
"Sitao Xiang",
"Hao Li"
] | Existing methods for AI-generated artworks still struggle with generating high-quality stylized content, where high-level semantics are preserved, or separating fine-grained styles from various artists. We propose a novel Generative Adversarial Disentanglement Network which can disentangle two complementary factors of variation when only one of them is labelled in general, and fully decompose complex anime illustrations into style and content in particular. Training such a model is challenging, since given a style, various content data may exist but not the other way round. Our approach is divided into two stages, one that encodes an input image into a style-independent content representation, and one based on a dual-conditional generator. We demonstrate the ability to generate high-fidelity anime portraits with a fixed content and a large variety of styles from over a thousand artists, and vice versa, using a single end-to-end network and with applications in style transfer. We show this unique capability as well as superior output compared to the current state-of-the-art. | [
"Adversarial Training",
"Generative Models",
"Style Transfer",
"Anime"
] | Reject | https://openreview.net/pdf?id=BJe4V1HFPr | https://openreview.net/forum?id=BJe4V1HFPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0WaOgwe3N_",
"HylCthV3sH",
"HJgsz9V2or",
"rygSqFE2oH",
"rkgoormqiB",
"Hkl2BrX9ir",
"HygXXr75iH",
"rklNwrXSqB",
"r1g81xakcH",
"SkeldOzRtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728811,
1573829765665,
1573829138556,
1573829004845,
1573692835078,
1573692740034,
1573692698950,
1572316507927,
1571962845916,
1571854439669
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1650/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1650/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1650/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1650/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1650/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1650/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1650/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1650/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1650/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a two-stage adversarial training approach for learning a disentangled representation of style and content of anime images. Unlike the previous style transfer work, here style is defined as the identity of a particular anime artist, rather than a set of uninterpretable style features. This allows the trained network to generate new anime images which have a particular content and are drawn in the style of a particular artist. While the approach works well, the reviewers voiced concerns about the method (overly complicated and somewhat incremental) and the quality of the experimental section (lack of good baselines and quantitative comparisons at least in terms of the disentanglement quality). It was also mentioned that releasing the code and the dataset would strengthen the appeal of the paper. While the authors have addressed some of the reviewers\\u2019 concerns, unfortunately it was not enough to persuade the reviewers to change their marks. Hence, I have to recommend a rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision to the submission\", \"comment\": \"We have added some quantitative evaluation in our updated submission. Please find the details in the official comments. Thanks.\"}",
"{\"title\": \"Revision to the submission\", \"comment\": \"We have added some quantitative evaluation in our updated submission. Please find the details in the official comments. Thanks.\"}",
"{\"title\": \"Revision to the submission\", \"comment\": \"We have updated our submission, mainly to address reviewer 2 and 3's concern about the lack of quantitative evaluations. In particular, we augmented the experiments on the NIST dataset in appendix B with evaluations of the effectiveness of the disentangling encoder, in the new section B.4, exploiting the fact that in the NIST dataset both variations of interest are labelled. The numbers are computed from the exact same networks used for generating the visualizations and no additional training was done.\\n\\nBesides this, we added reference to the new experiments in the main text, adjusted the caption of some figures for better clarity, and removed some text to keep the length within limits.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"We thank for the enthusiasm about our work and for pointing out at many strengths and appreciate the feedback.\\n\\nThe problem of figures being unclear can be fixed within the discussion period.\\n\\nFor the reviewer's other major concerns, it is not our own words that \\\"the focus is on disentanglement\\\", and the comment that \\\"this result can be expected from a class-conditional GAN\\\" is not made in regard to disentangling either. While disentanglement is a major goal, our conditional image generation part is also considerably different from existing methods and those differences are of equal importance to the success of the method.\\n\\nBy \\\"this result can be expected from a class-conditional GAN\\\" we are referring to the fact that if a class-conditional GAN is trained with the label being the artist, then it provides the functionality of controlling the style of the generated image by altering the input artist label, which we also provide. What a typical class-conditional GAN cannot do is to guarantee that when the input artist label is altered the content remains unchanged. In our method, this is guaranteed by requiring that the generated image can be encoded back to the input content code. While this would benefit from a better disentangled encoder, it is a contribution in its own right.\\n\\nOur classifier in stage 2 is also different from a typical class-conditional GAN in that it is adversarial and is thus able to capture the most subtle aspects of an artist's style. The effectiveness of this improvement is also one place where we do prove qualitative evaluations.\\n\\nSo the high-fidelity results we were able to obtain is due to the combined strength of many improvements. Should the reviewer demand more evaluations of the effectiveness of the disentangling step, we will run these additional experiments and update our results.\\n\\nWe have prepared the code and training dataset for this work and we will release our code to the public upon acceptance of this work. We hope the score can be improved based on this rebuttal. We will improve the paper with these feedback and please let us know if there are any further questions.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"It is correct that in stage 1 the classifier $C$ tries to disregard $S(a')$ and tries to find information about the true author $a$ from $E(x)$, and $E$, $G$, and $S$ jointly tries to purge style information of $a$ from $E(x)$.\\n\\nIt is true that $G$ and $S$ are changing in stage 2 while $E$ is fixed, but note that $G$ and $S$ don't change arbitrarily, they are still cooperating with $E$, so they should not make changes that cause the output of $E$ to be unsuitable. Furthermore, it is necessary to fix $E$: if we allow $E$ to change and at the same time minimize $||E(x)-E(G(E(x), S(a)))||$, then the most obvious way to do so is for $E$ to give degenerate output (e.g. encode everything to 0) which renders $E$ useless.\\n\\nIn stage 2 $G$ and $S$ are initialized from the trained $G$ and $S$ in stage 1, but this is optional. It might take longer if $G$ and $S$ are trained from scratch in stage 2 but the end result should be comparable.\", \"regarding_the_necessary_of_the_many_loss_terms_in_stage_2\": \"for a class-conditional GAN where the class condition is enforced by an auxiliary classifier in the discriminator, five terms is the bare minimum: discriminator's loss on real and generated samples, classifier's loss on real samples, and generator's loss against the discriminator and the classifier.\\n\\nAmong the additional terms we added, $L_{cont}$ is necessary. It is not used to guarantee the validity of $E$. Instead, it is used ensure that the generator does actually use the content input in the intended manner, that is, it does generate images that has the content represented by the content code. The effect of dropping it is shown in section C.2 in the appendix.\\n\\nThe classifier's loss on generated samples is not absolutely necessary, but it changes the behavior of the classifier towards the generated samples from passive to adversarial, which is a qualitative difference from prior works and is one of our contributions. The effect of dropping it is discussed in detail in section C.2.\\n\\nThe last term, the KL-divergence loss on the output of $S$, is largely optional. It tries to constrain the distribution of style code and supposedly could benefit such things as interpolating between two styles. If this is not a concern, this term can be dropped.\\n\\nIn short, seven of the eight terms are necessary.\\n\\nWe have prepared the code and training dataset for this work and we are willing to release them if this work can be published.\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"It may seem that our major contribution is tackling the scarcity of artworks reliably labelled by style, by using artist label instead. This is in fact a practical design choice based on the availability of training data and is of minor importance.\", \"the_major_contributions_are\": \"1. Improving the disentangling of two factors of variation in stage 1 by observing that the instability of the output distribution of the encoder $E$ for the unlabelled factor (content) enables it to also encode information about the labelled factor (style) but avoid being correctly classified by the adversarial classifier $C$, and proposing a modification of data flow and training procedure to eliminate this instability, thus achieving a cleaner disentanglement.\\n\\n2. In stage 2, on top of a class-conditional GAN, explicitly adding a loss term to condition on the input content code, so that the generator is guaranteed to generate images with the same content from the same content code in combination with different style codes.\\n\\n3. In stage 2, adding a loss term to train the classifier to not classify generated images into the correct class, thus making the classifier adversarial against the generator, in contrast to previous works where the classifier is either cooperative or passive towards the generator. This has the effect of letting the classifier learn every aspect of the style of each artist, beyond what is enough to tell different artists apart.\\n\\nThe impact of each of these changes is studied in a separate ablation study in appendix C.\\n\\nBesides these technical contributions, we made conceptual arguments on why statistics-based representations of style and domain-independent definitions of style does are problematic for general style transfer problems, thus providing a different viewpoint for these problems.\\n\\nFor the experiments on NIST, we also demonstrated that our method works equally well when the labelled factor is the digit and the unlabelled factor is writer identity, giving an example where the labelled factor does not conceptually correspond to \\\"style\\\".\\n\\nPlease let us know if there are any further concerns.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work introduces a Generative Adversarial Disentangling Network based on two stages training the first aims at learning a style independent content encoder and then content and style conditional GANs is used for synthesis.\\nAt stage 1 training to prevent the encoder from encoding authors introduce a gan style training in which an adversarial classifier that tries to predict the corresponding artist from the encoded image.\\nAt stage 2 is training a style/content conditional gan. To condition on the style (artist) authors introduce an extra adversarial classifier so the generator tries to generate samples that would be classified as the artist that it is conditioned\\non. While to condition on the input content another loss is ensuring that the generated image is encoded back to its content input.\\n\\nAuthors compare the proposed method against the original neural style transfer and StarGAN over various styles within the context of anime illustrations and the NIST Dataset where styles are being represented by artist name. \\n\\nWhile the work tackles some of the problems by conditioning only on artist names other than style features that might be hard to have annotations for. The proposed modifications are quite incremental. Additionally, the experiments section is quite weak, evaluation is only done quantitatively over some cherry-picked examples, although some extra ablation study in the appendix is provided.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Contributions:\\n1. This paper proposes a method to learn disentangled style (artist) and content representations.\\n2. By carefully designing two-stage training objectives, the method learns a style-independent content-encoding E at the first stage and the style encoder S and generator G both from the first and the second stage.\\n3. Empirical results justify the validity of the method.\\n\\nI think this paper makes a good contribution to disentangle style and content in anime. My main concern is the complicated learning procedure design may affect the reproducibility of this method. Moreover, I will suggest several points to the authors to clarify in the main text.\\n\\n1. I encourage the authors to release their code when published.\\n\\n2. In stage 1 (Style Independent Content-Encoding), the purpose of the classifier C, to my understanding, is to try to classify the generated example G(E(x), S(a')) as the \\\"ground-truth\\\" style (a). That is, the classifier C tries to disregard the S(a') when making a decision. As an adversarial player, E, G, S will try to fool C by making E(x) to be non-informative regarding the style. However, since you are still optimizing G and S, how do you make sure that it is safe to hold E fixed while still changing G and S in the second stage? Or more specifically, how do you make sure the style encoding network S preserves a good one in the second stage? Aside from that, are you using the trained G, S from the first stage to initialize G, S in the second stage?\\n\\n3. There are eight different terms in stage 2, so it worth checking the necessity for those terms. E.g. what happens if you drop the L_cont term? The term L_cont seems to guarantee the validity of E, but E is fixed in step 2.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an image generation method with the focus on generating anime faces from various artists. The proposed method which is a combination of conditional GANs and conditional VAEs manages to generate high fidelity anime images with various styles.\\n\\nThe paper is well-written and easy to follow and understand. The goals are clearly stated and the background (which is more of history), as well as related work, is comprehensive. The decision behind every design decision has been mentioned in detail which makes the paper stronger. The main writing flaw of the paper is in the figures where more annotation and caption is required to make them easy to understand. For example:\\n- The significance of colors in Figure 1 (architecture of the model) is not annotated at all and the letters are not clear either (although they are described in the text itself a figure should be comprehensive by itself).\\n- Figure 2 and Figure 4 are really hard to understand with very limited annotation and caption. I had to read the text multiple time to Figure out what is what in these figures which is not a good sign for clarity.\\n\\nIn terms of experiments, I think where the paper suffers the most is in comparison with other conditional methods. In Section 5 it has been clearly mentioned that \\\"this result can be expected from a class-conditional GAN and the focus in on Disentanglement\\\" however very little evidence has been provided for superior disentanglement. More experiments are required to demonstrate the capabilities of the model compared to other conditional methods (which is currently only limited to StarGAN) as well as its capability of disentanglement. I agree with the authors that quantitative evaluation of generated anime faces is not easy (although it is possible with a carefully designed human study), however, the disentanglement (which is the focus of the paper) is easy to evaluate quantitatively. This demands for more experiments on disentanglement datasets with known generative factors. Although the current ablation study in the Appendix provides more details for architectural decisions, a more qualitative and quantitative comprehensive ablation study (by actually ablating the final model) can help to demonstrate these decisions.\\n\\nIn conclusion, the paper has great results. We all know a big part of writing this kind of paper is to make the model \\\"work\\\" and authors truly demonstrate that they worked hard. However, the impact of the paper (in the current form) is not clear. With the focus on disentanglement, little evidence has been provided to justify the capability of the proposed method. I believe by addressing my comments on the experiments the paper can be easily pushed above the acceptance bar. Also releasing the code dataset should increase the impact of the paper.\"}"
]
} |
H1lXVJStwB | Dynamic Instance Hardness | [
"Tianyi Zhou",
"Shengjie Wang",
"Jeff A. Bilmes"
] | We introduce dynamic instance hardness (DIH) to facilitate the training of machine learning models. DIH is a property of each training sample and is computed as the running mean of the sample's instantaneous hardness as measured over the training history. We use DIH to evaluate how well a model retains knowledge about each training sample over time. We find that for deep neural nets (DNNs), the DIH of a sample in relatively early training stages reflects its DIH in later stages and as a result, DIH can be effectively used to reduce the set of training samples in future epochs. Specifically, during each epoch, only samples with high DIH are trained (since they are historically hard) while samples with low DIH can be safely ignored. DIH is updated each epoch only for the selected samples, so it does not require additional computation. Hence, using DIH during training leads to an appreciable speedup. Also, since the model is focused on the historically more challenging samples, resultant models are more accurate. The above, when formulated as an algorithm, can be seen as a form of curriculum learning, so we call our framework DIH curriculum learning (or DIHCL). The advantages of DIHCL, compared to other curriculum learning approaches, are: (1) DIHCL does not require additional inference steps over the data not selected by DIHCL in each epoch, (2) the dynamic instance hardness, compared to static instance hardness (e.g., instantaneous loss), is more stable as it integrates information over the entire training history up to the present time. Making certain mathematical assumptions, we formulate the problem of DIHCL as finding a curriculum that maximizes a multi-set function $f(\cdot)$, and derive an approximation bound for a DIH-produced curriculum relative to the optimal curriculum. Empirically, DIHCL-trained DNNs significantly outperform random mini-batch SGD and other recently developed curriculum learning methods in terms of efficiency, early-stage convergence, and final performance, and this is shown in training several state-of-the-art DNNs on 11 modern datasets. | [
"training dynamics",
"instance hardness",
"curriculum learning",
"neural nets memorization"
] | Reject | https://openreview.net/pdf?id=H1lXVJStwB | https://openreview.net/forum?id=H1lXVJStwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"HZ7RrZy6Bh",
"rJeU91sssS",
"S1xASJsjsS",
"BkgYi09siB",
"Bkx48v0Jqr",
"Syg2KlC0Fr",
"ryeC-rzhYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728784,
1573789581819,
1573789510481,
1573789345340,
1571968843828,
1571901572457,
1571722502172
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1649/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1649/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1649/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1649/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1649/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1649/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All three reviewers, even after the rebuttal, agreed that the paper did not meet with bar for acceptance. A common complaint was lack of clarity being a major problem. Unfortunately, the paper cannot be accepted in its current form. The authors are encouraged to improve the presentation of their approach and resubmit to a new venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for your comments! A new version of our paper has been uploaded.\\n\\n1. \\\"For example, when defining the curriculum learning problem in eq.2 and eq.3, are the f's the same? If so, why do they have different input arguments?\\\"\\n\\n- It's the same f in eq.2 and eq.3. For eq.3, we are using a shorthand notation (f(i|S)) to express the gain of an element conditioning on the selected elements for function f, which is clearly defined in the 4th line on the second paragraph of page 5 (5 lines above eq.3).\\n\\n2. \\\"'In step t, after the model gets trained on $S_t$, the feedback $a_t(i)$ for $i \\\\in S_t$ is already available': I don't get this.\\\"\\n\\n- After the model gets trained on $S_t$, say a forward and backward pass on a minibatch of data points when training a deep model, the losses ($a_t(i)$) on the data points in minibatch $S_t$ are a byproduct in the forward pass so we don't need to pay extra computational costs.\\n\\n3. \\\"I am not sure what Theorem 1 tries to tell. If one chooses k large enough, the inequality satisfies trivially.\\\"\\n\\n- Theorem 1 shows the solution of algorithm 1 is around a $1/k$ factor of the optimal solution for optimizing objective defined in eq.3. Note that $k$ is an input argument to the optimization objective defined in eq.3 and the bound in Theorem 1 holds for any input k (e.g., k=1 or k=n).\\n\\nIn the new version of the paper, we improve the bound to the factor of $\\\\max\\\\{((1-e^{-1})/k, k/2n\\\\}$. The previous bound with factor $1/k$ dominates when k is relatively small compared to n ($n > k^2$), and the $k/n$ factor dominates otherwise (when $k$ is large). We give hard examples in the first \\\"remarks\\\" on page 15 in the appendix showing the new factors are tight up to constant factors.\\n\\n4. \\\"BTW, what is $A_{1:T}$?\\\"\\n\\n- As clearly stated in the last line of Theorem 1, $A_{1:m}$ is the argument to the \\\"min\\\" operator ($min_{A_{1:m}}$), which means that $A_{1:m}$ is a multiset that achieves the smallest evaluation on $C_{f,m}$. Since the bound holds for the worst $A_{1:m}$, you can also think that the bound holds for any choice of $A_{1:m}$.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thanks for your comments! A new version of our paper has been uploaded.\\n\\n1. \\\"The role of the function 'f' that is being maximized in section 3.2 is not clear. To elaborate, it is not obvious what this function is or why should one care about it?\\\"\\n\\n- As stated in the 5th paragraph on page 2 and formally defined in section 2.1, $f(\\\\cdot)$ is a function that takes in a curriculum as a multiset, and returns the quality of the curriculum as a real value. Therefore, in section 3.2, we aim to find the best curriculum by maximizing $f(\\\\cdot)$. Note that $f(\\\\cdot)$ is unknown and inaccessible in practice as it may involve complicated dynamics of deep models and needs to produce the quality scores of all possible sequences (in the exponential number of training data). We only make assumptions about the properties of $f(\\\\cdot)$ (the diminishing return property) and only observe the function gains as $r_t(i)$. \\n\\nThe main purpose of formulating the problem based on $f(\\\\cdot)$ and making these assumptions is to make a theoretical analysis of our algorithm. Under the assumptions, the function can be arbitrary and even adversarial to the curriculum selection process. As an analogy, you can think $f(\\\\cdot)$ as the cumulation of rewards over time in the online learning setting, i.e., $f(S_1:T) = \\\\sum_t f(S_t | S_{1:t-1})$. The reward function in online learning does not have a specific form and can even be adversarial. Under the online learning setting, we can think every data point as a bandit arm, and at every time step, we get to choose a subset of bandit arms to pull, and observe the reward of each bandit as $r_t(i) = f(i | S_{1:t})$.\\n\\n2. \\\"The greedy algorithm has a bound in equation 7 that appears to be quite loose as the value of $k$ is as high as the order of the size of the entire training set (in the experiments, $0.2n\\\\leq k < n$). Am I misinterpreting it?\\\"\\n\\n- In the new version of the paper, we improve the bound to the factor of $\\\\max\\\\{((1-e^{-1})/k, k/2n\\\\}$. The previous bound with factor 1/k dominates when k is relatively small compared to $n$ (when $n > k^2$), and the $k/n$ factor dominates otherwise (when $k$ is large). We give hard examples in the first \\\"remarks\\\" on page 15 in the appendix showing that the new factors are tight up to constant factors. Since this is a worst-case bound, we found that the empirical performance can be much better than what the bound indicates.\\n\\n3. \\\"Furthermore, the greedy algorithm -- while being focused on the core of the technical contributions -- is not used in experimental comparisons and instead all the results presented use random sampling. At least the result of DIH-greedy should be presented.\\\"\\n\\n- In the first paragraph of section 3.3, we explained why randomness is essentially helpful to early exploration and accurate estimation of DIH. Note only the selected samples' DIH are updated in order to avoid extra computation. Hence, if using DIH-greedy, we will not get an accurate estimate of DIH for samples with small DIH in the first few epochs since their DIH are rarely updated.\\n\\n4. \\\"This is more of a suggestion: one of the claimed advantages of the algorithm (over non-curriculum learning and MCL) is that it requires less training examples to train. 
Given this, the authors should present training time improvements over large datasets.\\\"\\n\\n- In the appendix from page 19-22 (page 21-24 of the new version), we reported the wall clock time for DIHCL and all the baselines. It shows that DIHCL is much more efficient than other methods.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thanks for your comments! A new version of our paper has been uploaded.\\n\\n1. \\\"I find the arguments section 2.1 quite difficult to follow. In particular, under the assumption stated in the paper that $r_t(i)=f(i|S_{1:t\\u22121})=f(e_i+S_{1:t\\u22121})\\u2212f(S_{1:t\\u22121})$, why does it follow that $r_t(i)$ can be used instead of $f(\\\\cdot)$ in the minimization problem (2)?\\\"\\n\\n- We cannot directly optimize $f(\\\\cdot)$: it is an unknown function since it measures the quality of all the possible training sequences (in exponential number) so it is intractable to be estimated. Under this assumption, $r_t(i)$ is the only observation about $f(\\\\cdot)$, so our online optimization of $f(\\\\cdot)$ can be only based on $r_t(i)$. We proved in Corollary 1 that only using $r_t(i)$ to optimize $f(\\\\cdot)$ can achieve an approximation bound to the global optimum of the intractable optimization in Eq.2.\\n\\n2. \\\"It would be good to see an analysis of how sensitive the results are to the choice of $k_t$.\\\"\\n\\n- In our experiments over the 11 datasets, we use the same scheduling of $k_t$, i.e., we start from $k_0=n$ and exponentially reduce it to $k_t=0.2n$ by a factor $\\\\gamma_k=0.85$, and it works well on all datasets. We will add sensitivity analysis of the scheduling parameters in future version.\\n\\n3. \\\"In Figure 1, it is not clear whether the figure on the right shows the actual loss, or the smooth loss using Eq.1 with instantaneous instance (A). If it is the former, then if the loss is so smooth, why do we need DIH? If it is the latter, then what does the instantaneous loss look like? This actually raises the question of how important the smoothing component is -- could we achieve the same results with an instantaneous loss (i.e. set gamma to 1 in Eq.1)?\\\"\\n\\n- It shows the actual loss, i.e., the former case. However, each curve is the average loss over a group of samples, which makes it look smooth: for each group $V_j$ in Figure 1, Figure 1 shows how $\\\\frac{1}{|V_j|}\\\\sum_{i\\\\in V_j}a_t(i)$ changes over time $t$. In Figure 7-8 of the updated version, we visualize the actual loss $a_i(t)$ and $r_i(t)$ for individual samples, which show that $a_i(t)$ is much less smooth than $r_i(t)$ on individual samples shown in Figure 2. Moreover, if we instead use $a_i(t)$ for the partition at epoch 40 as in Figure 1, we cannot see the difference between the groups in future epochs.\\n\\n4. \\\"How do you choose $T_0$, $\\\\gamma$, $\\\\gamma_k$, and schedules in Appendix C (page 17)?\\\"\\n\\n- We tried several choices of the three parameters in $T_0\\\\in\\\\{5,10\\\\}$ and $\\\\gamma, \\\\gamma_k\\\\in\\\\{0.85, 0.9, 0.95\\\\}$ on validation sets and report the one with good performance on all the validation sets ($T_0=5, \\\\gamma=0.95, \\\\gamma_k=0.85$). We use the same hyperparameters for all the 11 datasets. The schedules are set based on our experience and have not been tuned.\\n\\n5. \\\"It would be interesting to see how SPL does if you use DIH as a metric (just smoothing the loss over time), but their approach of scheduling samples (easy to hard, and not the opposite and in DIHCL). It would be good to report standard deviations in Table 1.\\\"\\n\\n- We will report these results in our future version.\\n\\n6. 
\\\"I was surprised to see that, despite the extra computations, DIHCL is comparable to random mini-batches.\\\"\\n\\n- The training set of DIHCL in each epoch is a small subset of the one used in random mini-batches. Moreover, the extra computation of DIHCL is very cheap and only requires sorting ($O(n\\\\log n)$) and basic arithmetic operations ($O(n)$) on one array.\\n\\n7. \\\"In Figure 1, the axes are barely readable. The authors oftentimes reverse the use of \\\\citet and \\\\citep.\\\"\\n\\nThanks for catching this! We addressed these issues in the new version.\\n\\n8. \\\"It would be interesting to make a connection between the DIH and what other papers have discovered about example forgetting (e.g. Toneva et. al, that was mentioned in the paper).\\\"\\n\\n- We discussed Toneva's work in the 4th paragraph of page 2 and its difference to ours at the end of Section 1.1. We will add more discussion.\\n\\n10. \\\"On average, the dynamics on the hard samples is more consistent with the learning rate schedule, which implies that doing well on these samples can only be achieved at a shared sharp local minima.\\u201d -> can you please explain why this is so?\\\"\\n\\n- If it is a flat minima, the losses will not change when the model parameters slightly deviate from the minima (blue curves in Figure 1), so the losses will not change linearly with the learning rate, which however is the case for the hard samples (red curves) in Figure 1.\\n\\n11. \\\"In Table 1, on some datasets, the authors apply lazier-than-lazy-greedy, and on some not. Why, and how does one decide this for a new dataset?\\\"\\n\\n- Lazier-than-lazy-greedy (LTLG) is applied when we need to further reduce the selected training set in Line 7 of Alg.1 by Eq.5. For the datasets we did not use LTLG, DIHCL already outperforms other baselines on both final accuracy and efficiency, so we did not apply the further reduction.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"*Revision after author response*\\n\\nI thank the authors for the comments on my questions. \\n\\nUnfortunately, I do not feel that these comments addressed my main concerns. For all my experimental analysis questions, the authors promised some analyses for future versions, but I was hoping to see at least a minor preliminary analysis at this point, to see if indeed my concerns are valid or not. \\n\\nMoreover, for my question number 1 about the optimization problem, the authors referred me to Corollary 1 from the paper, but that didn't really help me because, as the other reviewers also point out, the writing is quite hard to follow.\\n\\nBecause of all these, I have decided to revise my score to a weak reject. While I believe the paper has merit, it requires revisions at many points in order for a reader to truly understand the method and trust the experimental results.\\n\\n--------------------------------------------------------------------------------------------------------------\\nThe paper proposes a curriculum learning approach that relies on a new metric, the dynamic instance hardness (DIH). DIH is used to measure the difficulty of each sample while training (in an online fashion), and to decide which samples to train on next. The authors provide extensive experiments on 11 datasets as well as some theoretical motivation for the use of this approach.\\n\\n---- Overall opinion ----\\nOverall I believe this paper is an interesting take on curriculum learning that is able to achieve good results. I believe this approach is a combination of core ideas from multiple sources, such as boosting, self-paced learning, continual learning and other curriculum learning approaches, but overall it seems different enough from each one of them individually. Because of the resemblance with these many different methods, the method itself does not surprise through the novelty of a new idea, but the authors seemed to have found something that was missing from these methods and that leads to very good results. The experimental results look great, but I believe the paper is missing some ablation studies to assess the importance of certain components (see details below). I also had some trouble understanding certain arguments, which I hope the authors can clarify. \\n\\n---- Major issues ----\\n1. I find the arguments section 2.1 quite difficult to follow. In particular, under the assumption stated in the paper that r_t(i) = f(i|S_{1:t\\u22121}) = f(e_i + S_{1:t\\u22121}) \\u2212 f(S_{1:t\\u22121}) , why does it follow that r_t(i) can be used instead of f in the minimization problem (2). \\n\\n2. Based on the method itself, it seems to me that the parameter k_t could would have a lot of influence on how well the method doing. The authors mention in the experimental section what values they use, but there is no indication on how one would choose this value. Moreover, it would be good to see an analysis of how sensitive the results are to this choice.\\n\\n3. In Figure 1, it is not clear whether the figure on the right shows the actual loss, or the smooth loss using Equation (1) with instantaneous instance (A). If it is the former, then if the loss is so smooth, why do we need DIH? 
If it is the latter, then what does the instantaneous loss look like? This actually raises the question of how important the smoothing component is -- could we achieve the same results with an instantaneous loss (i.e. set gamma to 1 in Eq. 1)?\\n\\n---- Minor issues ----\\n1. How do you choose T0, gamma, and gamma_k?\\n\\n2. In the conclusions, the authors state that \\u201cThe reason [why MCL and SPL are less stable] is that, compared to the methods that use DIH, both MCL and SPL deploy instantaneous instance hardness (i.e., current loss) as the score to select sample\\u201d. Since there are so many other differences in the way training progresses, I think we don\\u2019t have enough evidence to attribute this to merely the \\u201cinstantaneousness\\u201d of the loss. In fact, it would be interesting to see how SPL does if you use DIH as a metric (just smoothing the loss over time), but their approach of scheduling samples (easy to hard, and not the opposite as in DIHCL).\\n\\n3. Appendix C shows some interesting results regarding the wall-time comparison. I was surprised to see that, despite the extra computations, DIHCL is comparable to random mini-batches. This makes me wonder what the stopping criterion was, because when you stop matters a lot for run-time comparisons. It would also be interesting to see a more ample discussion of this in the main text.\\n\\n4. In Figure 1, the axes are barely readable.\\n\\n5. The authors oftentimes reverse the use of \\\\citet and \\\\citep, for example \\u201chas been called the \\u201cinstance hardness\\u201d Smith et al. (2014) corresponding to\\u201d should have a bracket, whereas \\u201cOur paper is also related to (Zhang et al., 2017)\\u201d should not have brackets.\\n\\n6. This is not an issue, but I just wanted to say I appreciated Appendix B.\\n\\n---- Suggestions ----\\n1. It would be interesting to make a connection between the DIH and what other papers have discovered about example forgetting (e.g. Toneva et al., which was mentioned in the paper).\\n\\n2. Major issues 3 -> a study on the effect of k and how to choose it.\\n\\n3. While I understand that the models chosen in the experiments are expensive to train, it would be good to report standard deviations in Table 1.\\n\\n4. Based on Table 1 and Figure 3, there is no concrete winner among the DIHCL methods. It would be good to include some recommendations in your conclusion on which one to choose and when.\\n\\n---- Questions ----\\n1. \\u201cOn average, the dynamics on the hard samples is more consistent with the learning rate schedule, which implies that doing well on these samples can only be achieved at a shared sharp local minima.\\u201d -> can you please explain why this is so?\\n\\n2. See Major issues 3.\\n\\n3. In Table 1, on some datasets, the authors apply lazier-than-lazy-greedy, and on some not. Why, and how does one decide this for a new dataset?\\n\\n4. How did you choose T0, gamma, and gamma_k, as well as the schedules in Appendix C (page 17)?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a technique for curriculum learning using Dynamic Instance Hardness (DIH). DIH is defined as value for each training instance that characterizes how hard that instance is and is updated throughout the training process. The DIH value is used to select the best set of instances to learn. It is shown that instances with high (low) DIH values maintain the high (low) value throughout the training process.\\n\\nThe main contributions and pros of the paper are\\n1. A notion of instance hardness that is persisted and updated throughout the training procedure.\\n2. An objective function for characterizing instance hardness as a dynamic subset selection problem. Presenting a greedy algorithm for online maximization of this function.\\n3. Experimental results that show how the property of instance hardness is maintained throughout training and showing how DIH-driven curriculum learning techniques that use random sampling outperforms non-curriculum learning techniques.\\n\\nCons\\n1. The writing of this paper is difficult and in many of the core parts of the paper, the important definitions are not clear. E..g a) the role of the function 'f' that is being maximized in section 3.2 is not clear. To elaborate, it is not obvious what this function is or why should one care about it? \\n2. The greedy algorithm has a bound in equation 7 that appears to be quite loose as the value of k is as high as the order of the size of the entire training set (in the experiments, 0.2n <= k < n). Am I misinterpreting it?\\nFurthermore, the greedy algorithm -- while being focused on the core of the technical contributions -- is not used in experimental comparisons and instead all the results presented use random sampling. At least the result of DIH-greedy should be presented.\\n\\n3. This is more of a suggestion: one of the claimed advantages of the algorithm (over non curriculum learning and MCL) is that it requires less training examples to train. Given this, the authors should present training time improvements over large datasets.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the curriculum learning approach that can more effectively utilize the data to train DNNs. It formulates DIH as a curriculum learning problem, and derives theory on the approximation bound. The method is verified by a set of experiments. There are several concerns raised by the reviewer.\\n\\nI found the presentation of this paper is rather bad. The structure of the paper is quite strange. The Introduction section contains a lot of stuffs that I believe should be moved to the preliminary or method sections. \\n\\nAlso, there are a lot of confusions in the descriptions. For example, when defining the curriculum learning problem in eq.2 and eq.3, are the f's the same? If so, why do they have different input arguments?\\n\\n\\\"In step t, after the model gets trained on S_t, the feedback a_t(i) for i \\\\in S_t is already available\\\": I don't get this.\\n\\nI am not sure what Theorem 1 tries to tell. If one chooses k large enough, the inequality satisfies trivially. BTW, what is A_{1:T}?\\n\\nTo sump up, there are some interesting ideas in this paper. However, with the current stage of writing, I cannot recommend acceptance.\"}"
]
} |
r1l7E1HFPH | Multi-step Greedy Policies in Model-Free Deep Reinforcement Learning | [
"Yonathan Efroni",
"Manan Tomar",
"Mohammad Ghavamzadeh"
] | Multi-step greedy policies have been extensively used in model-based Reinforcement Learning (RL) and in the case when a model of the environment is available (e.g., in the game of Go). In this work, we explore the benefits of multi-step greedy policies in model-free RL when employed in the framework of multi-step Dynamic Programming (DP): multi-step Policy and Value Iteration. These algorithms iteratively solve short-horizon decision problems and converge to the optimal solution of the original one. By using model-free algorithms as solvers of the short-horizon problems, we derive fully model-free algorithms which are instances of the multi-step DP framework. As model-free algorithms are prone to instabilities w.r.t. the decision problem horizon, this simple approach can help in mitigating these instabilities and results in improved model-free algorithms. We test this approach and show results on both discrete and continuous control problems. | [
"Reinforcement Learning",
"Multi-step greedy policies",
"Model free Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=r1l7E1HFPH | https://openreview.net/forum?id=r1l7E1HFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"EnItGAmxLd",
"rJlPyxFhor",
"S1gcu3dnjH",
"B1l5akg2jB",
"r1eXcBqXsr",
"SJeiUS9miB",
"SJguBE9QsH",
"BylDpXqQoH",
"HJe7nfqXiH",
"H1geTaI6KS",
"BJeblVM6tr",
"BklUIEo6_r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728756,
1573847007448,
1573846130464,
1573810114487,
1573262730921,
1573262674672,
1573262400391,
1573262271216,
1573261995058,
1571806647832,
1571787753220,
1570776141826
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1648/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1648/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1648/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1648/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1648/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1648/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1648/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1648/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1648/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1648/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1648/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper extends recent multi-step dynamic programming algorithms to reinforcement learning with function approximation. In particular, the paper extends h-step optimal Bellman operators (and associated k-PI and k-VI algorithms) to deep reinforcement learning. The paper describes new extensions to DQN and TRPO algorithms. This approach is claimed to reduce the instability of model-free algorithms, and the approach is tested on Atari and Mujoco domains.\\n\\nThe reviewers noticed several limitations of the work. The reviewers found little theoretical contribution in this work and they were unsatisfied with the empirical contributions. The reviewers were unconvinced of the strength and clarity of the empirical results with the Atari and Mujoco domains along with the deep learning network architectures. The reviewers suggested that simpler domains with a simpler function approximation scheme could enable more through experiments and more conclusive results. The claim in the abstract of addressing the instabilities was also not adequately studied in the paper.\\n\\nThis paper is not ready for publication. The primary contribution of this work is the empirical evaluation, and the evaluation is not sufficiently clear for the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updated review (2/2)\", \"comment\": \"Finally, about my rating of the paper, I have decided to keep it the same: weak accept. However, there is still some work to do on the paper, so I would not mind if the current iteration of the paper was rejected.\\n\\nAs reviewer #1 mentioned, the empirical evaluations in the main body of the paper are hard to read because of the overlap in the graphs. Moreover, although the results show a clear improvement over DQN and TRPO, this fact is lost in the amount of information presented in each graph. Personally, I liked the results in mountain car and pendulum because they are a lot cleaner than the results in Atari and Mujoco. You could also consider using less configuration of the parameters with a bigger spread to more clearly emphasize the results.\"}",
"{\"title\": \"Updated review (1/2)\", \"comment\": \"Thank you for your reply and for addressing my comments. However, I still have a few concerns.\\n\\n- About the simple domains and architectures\\nMy main concern was less about the big domains and more about the complicated architectures used for k-VI/PI DQN and TRPO. With big deep neural network architectures there are too many hyperparameters to tune and control for if one wants to study the effect of one particular part of the algorithm. Thus, I suggested to use a less complex function approximator such as a tilecoder (see Sutton & Barto 2018), which only depends on a few architectural choices and eliminates the representation learning problem from the task. Instead, the authors provided experiments in simpler domains such as cartpole and mountain car. I apologize for the confusion. Nevertheless, the results do shed some clarity on the performance of the algorithms, so I still consider it an improvement. I still have a few questions about the results in mountain car.\\n\\nFirst, what implementation of mountain car are you using? Often the first few episodes in mountain car implemented as in Sutton and Barto (2018) have a cumulative reward below -100, but the plots indicate that the lowest cumulative reward obtained is more than -50. Additionally, with respect to the y-axis of Figure 19 (top left pane), is iterations the same as episodes? If you're using a different implementation from the one in Sutton and Barto (2018) please cite your sources.\\n\\nSecond, what method was used to smooth the plot? From my experience with this environment, even when averaging over 100 runs, the plots still appear very noisy, so it seems surprising to me that the plots are this smooth with a sample size of 10. \\n\\nAdditionally, it would be useful to also provide results about the behaviour of the algorithms under different values of CFA. I understand that CFA is related to the final accuracy of V_pi. However, since the V_pi is being approximated anyway, perhaps there may be an interesting trade-off between using a lot of samples to compute an accurate estimate or accept a higher error while moving on to the next policy improvement iteration with a slightly better, yet inaccurate, estimate than before.\\n\\n- About the size of the network\\nThe size of the network was less important to me than the rationale behind choosing to deviate further from the original k-PI and k-VI algorithms. In Remark 1, it is mentioned that \\\\tilde{ Q }_\\\\theta, a target network that remains unchanged during the policy improvement step, should be used for \\\\pi_{i-1}; however, because of space complexity Q_\\\\theta', which may have changed during the policy improvement step, is used instead. My concern about this is that it further deviates from the original algorithms and introduces more confounding factors. Since the purpose of the paper is to study the performance of these algorithms when using deep neural networks to estimate V_pi and Q_pi, then I think it would be better if it was as closed to the original algorithms as possible, which seems doable since the sizes of the networks were not exorbitantly big. \\n\\n- Expected Sarsa Update\\nLine 19 resembles more the update for Sarsa more than Expected Sarsa (see Equation 6.7 of Sutton and Barto , 2018 and compare it to Equation 5 from Van Seijen et. al., 2009). Moreover, neither of those two should be called TD(0) since they are different algorithms. 
Finally, it is not clear how this is an off-policy update if there is no correction between the current policy and policy used to sample the action at the time it was stored in the experience replay buffer. The bottom line is that the statement that the update in Line 19 corresponds to off-policy TD(0) is false and should be corrected. \\n\\n- About the contradictory claims\\nMy main point about that comment was that the claim \\\"[Table 1 and Figure 1] ... lead to a clear degradation of final performance,\\\" is an overstatement. In Seaquest, Enduro, Beam Rider, and Qbert the confidence intervals of the performance of k-Pi DQN (k best) and N(k) = T overlap. It is true that with more samples the confidence interval will tend to shrink; however, they will shrink around the true mean, not the observed sample average. That means that if the confidence intervals overlap, there exists a chance that the true mean of N(k) = T will be higher than the k-PI DQN. Since there is a chance that N(k) = T has better performance than k-PI DQN (k best), saying that N(k) = T shows a clear degradation of performance seems exaggerated.\\n\\n== More typos == \\n- Paragraph above Equation (1), 4th line from the bottom, \\\"The algorithms by which an is be solved....\\\" It feels like there is a word missing. \\n\\n== References ==\\n1. Sutton, R. S., Barto, A. G. (2018 ). Reinforcement Learning: An Introduction. The MIT Press.\\n\\n2. Van Seijen, Harm, et al. \\\"A theoretical and empirical analysis of Expected Sarsa.\\\" 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning. IEEE, 2009.\"}",
"{\"title\": \"Update to Appendix C\", \"comment\": \"For completeness, we have added more results (on the Pendulum domain) in the Appendix C section. Moreover, we have tried to summarize our intuition on why $\\\\kappa$-PI and VI work well in different domains at the end of this section. We believe that this addresses the concerns about intuitive understanding using simpler domains, but would welcome additional suggestions for the final version.\"}",
"{\"title\": \"Response to AnonReviewer1 (1/2)\", \"comment\": \"We would like to thank the reviewer for the comments.\\n\\nBefore addressing the reviewer\\u2019s comments, we would like to summarize the contributions of our work.\\n \\n*****Summary*****\\nThe advantage of lowering the discount factor in RL has been investigated in several prior work, e.g., Petrik and Scherrer (2009) and Jiang et al. (2015). However, they have shown that lowering the discount factor introduces bias, and as we empirically demonstrate in Section 5.3, this bias can lead to a deterioration in the performance.\\n \\nIn our work, instead of lowering the discount factor, we follow a different route that was theoretically formulated in Efroni et al. (2018a). They introduced the notion of $\\\\kappa$ greedy policy, from which they derived $\\\\kappa$-Policy Iteration ($\\\\kappa$-PI) and $\\\\kappa$-Value Iteration ($\\\\kappa$-VI) algorithms. In their previous work (Efroni et al., 2018a and 2018b), they only theoretically analyzed these algorithms and empirically evaluated their convergence speed in problems with small number of states (100 states at most) and where the model of the environment is known (planning setting, not learning setting). It is not obvious if their theoretical results apply when complex function approximations, such as deep neural networks, are used to solve the problem in the model-free setting (learning, not planning). In this work, we investigate extending the algorithms proposed by Efroni et al (2018a,b) to model-free and function approximation settings with both discrete and continuous actions, and show that this extension is non-trivial (as also pointed out by Reviewer 3) and care should be taken in deriving the practical versions of these algorithms (e.g., the importance of the C_{FA} parameter). Furthermore, we demonstrate the generality of the framework by showing that popular algorithms, DQN and TRPO, are special cases of our multi-step greedy framework for $\\\\kappa$ = 1. \\n \\nWe show the advantage of using $\\\\kappa$-PI and $\\\\kappa$-VI algorithms over lowering the discount factor for both value (DQN) and policy (TRPO) based algorithms, when neural networks are used as function approximator (as mentioned in the first paragraph). Our results (in Section 5.3) indicate that while the performance of DQN and TRPO degrades with lowering the discount factor, our multi-step greedy algorithms improve over DQN and TRPO.\\n \\nFurthermore, we test some of the consequences of the theory in Efroni et al. (2018a,b) on multi-step greedy dynamic programming. In particular, we show the advantage of using \\u2018hard\\u2019 updates over \\u2018soft\\u2019 updates, which was shown to be problematic (theoretically) by Efroni et al. (2018b). By hard and soft updates (terms used in Efroni et al., 2018b), we refer to fully solving the $\\\\gamma\\\\kappa$ MDP in a model-free manner (hard) versus changing the policy at each iteration (soft). In the \\u2018hard\\u2019 setting, the policy improvement and evaluation steps are separated, while in the \\u2018soft\\u2019 setting, they are concurrent (each policy improvement step is followed by a policy evaluation step).\\n \\nWe believe much more is left to be understood in applying multi-step greedy policies to RL. We consider our work as a first step towards this goal as well as showing such an approach is empirically beneficial.\"}",
"{\"title\": \"Response to AnonReviewer1 (2/2)\", \"comment\": \"*****Response to the Reviewer\\u2019s Comments*****\\nAt the beginning, we would like to bring it to the reviewer\\u2019s attention that $\\\\kappa$ is a parameter in the range [0,1] and cannot go to infinity. $\\\\kappa$=0 corresponds to 1-step greedy and $\\\\kappa$=1 corresponds to solving the entire MDP. This fact makes the resulting algorithms easy to implement, unlike an approach that uses finite lookahead policies. \\n\\nWe now provide a summary of our experiments and the lessons one can learn from them. Hope this helps the reviewer with reading the experiments, and clarifies the messages we would like to deliver in this work. \\n\\nThe goal behind our experiments is to compare against the DQN and TRPO baselines, which are special cases of our algorithm by setting $\\\\kappa$=1. \\n\\nThe first takeaway message is that there are non-trivial $\\\\kappa$ values for which we could observe better performance than DQN and TRPO. These $\\\\kappa$ values are different for different environments. Here are some results revealed through our work:\\n \\n - We can categorize each environment with a certain range of \\u2018ideal\\u2019 $\\\\kappa$ values, e.g., either lower or higher $\\\\kappa$ values.\\n\\n - Our results also show that in TRPO, although previous work, such as GAE, concluded to have a fixed $\\\\lambda$ parameter across all environments, this is certainly not true. A $\\\\kappa$ or a $\\\\lambda$ value that works well for one environment is not guaranteed to be working well for another. Therefore, the natural next step, that we are currently working on, is to build methods that can adapt the value of $\\\\kappa$ based on the problem at hand.\\n\\nSecondly, since our methods have been derived from Policy/Value Iteration schemes, it makes sense to check how well they work when the policy evaluation and improvement steps are separated, i.e., improving for multiple time steps before evaluating the policy. We do this through the \\u2018naive\\u2019 baseline comparison which improves the policy for a single time-step. The results consistently show that doing a multi-step update is better. This is the second takeaway message from our work. \\n\\nThirdly, one can also wonder what effect does lowering the discount factor have on the problem, since the $\\\\kappa$-PI algorithm advocates for solving a more discounted MDP (i.e., the $\\\\gamma\\\\kappa$ MDP, instead of the $\\\\gamma$ MDP, at each time step). Our results show that the comparison is non-trivial, as we achieve consistently better performance with $\\\\kappa$ PI/VI, while lowering the discount factor actually hurts the baseline performance in most cases. This forms the third take-away of our work.\"}",
"{\"title\": \"Response to AnonReviewer2 (1/2)\", \"comment\": \"We would like to thank the reviewer for the comments.\\n\\nBefore addressing the reviewer\\u2019s comments, we would like to summarize the contributions of our work.\\n \\n*****Summary*****\\nThe advantage of lowering the discount factor in RL has been investigated in several prior work, e.g., Petrik and Scherrer (2009) and Jiang et al. (2015). However, they have shown that lowering the discount factor introduces bias, and as we empirically demonstrate in Section 5.3, this bias can lead to a deterioration in the performance.\\n \\nIn our work, instead of lowering the discount factor, we follow a different route that was theoretically formulated in Efroni et al. (2018a). They introduced the notion of $\\\\kappa$ greedy policy, from which they derived $\\\\kappa$-Policy Iteration ($\\\\kappa$-PI) and $\\\\kappa$-Value Iteration ($\\\\kappa$-VI) algorithms. In their previous work (Efroni et al., 2018a and 2018b), they only theoretically analyzed these algorithms and empirically evaluated their convergence speed in problems with small number of states (100 states at most) and where the model of the environment is known (planning setting, not learning setting). It is not obvious if their theoretical results apply when complex function approximations, such as deep neural networks, are used to solve the problem in the model-free setting (learning, not planning). In this work, we investigate extending the algorithms proposed by Efroni et al (2018a,b) to model-free and function approximation settings with both discrete and continuous actions, and show that this extension is non-trivial (as also pointed out by Reviewer 3) and care should be taken in deriving the practical versions of these algorithms (e.g., the importance of the C_{FA} parameter). Furthermore, we demonstrate the generality of the framework by showing that popular algorithms, DQN and TRPO, are special cases of our multi-step greedy framework for $\\\\kappa$ = 1. \\n \\nWe show the advantage of using $\\\\kappa$-PI and $\\\\kappa$-VI algorithms over lowering the discount factor for both value (DQN) and policy (TRPO) based algorithms, when neural networks are used as function approximator (as mentioned in the first paragraph). Our results (in Section 5.3) indicate that while the performance of DQN and TRPO degrades with lowering the discount factor, our multi-step greedy algorithms improve over DQN and TRPO.\\n \\nFurthermore, we test some of the consequences of the theory in Efroni et al. (2018a,b) on multi-step greedy dynamic programming. In particular, we show the advantage of using \\u2018hard\\u2019 updates over \\u2018soft\\u2019 updates, which was shown to be problematic (theoretically) by Efroni et al. (2018b). By hard and soft updates (terms used in Efroni et al., 2018b), we refer to fully solving the $\\\\gamma\\\\kappa$ MDP in a model-free manner (hard) versus changing the policy at each iteration (soft). In the \\u2018hard\\u2019 setting, the policy improvement and evaluation steps are separated, while in the \\u2018soft\\u2019 setting, they are concurrent (each policy improvement step is followed by a policy evaluation step).\\n \\nWe believe much more is left to be understood in applying multi-step greedy policies to RL. We consider our work as a first step towards this goal as well as showing such an approach is empirically beneficial.\\n\\n*****Response to the Reviewer\\u2019s Comments*****\\n\\u201cthis work extends Efroni et al. 
(2018b) from tabular to function approximation\\u201d\\nThis reviewer\\u2019s statement is not completely accurate. The work of Efroni et al (2018a) and (2018b) focused on theoretical analysis of $\\\\kappa$ PI/VI algorithms, and their experiments are in small (tabular) problems, where the model is given (the setting is planning, not learning). Our work is the first one to use multi-step greedy policies with neural networks as function approximator, in model-free RL (learning and not planning setting).\"}",
"{\"title\": \"Response to AnonReviewer2 (2/2)\", \"comment\": \"\\u201cno mention of our TRPO work in the review\\u201d\\nThe review only mentions our DQN work and does not talk about our TRPO algorithms and experiments. We are sorry if we did not present this part of our work clearly enough. We will improve the presentation of this part in the final version of the paper. To clarify, we experimented with both DQN and TRPO extensions of our approach. As stated in the paper, the TRPO extension resembles the practically used GAE (Generalized Advantage Estimation) algorithm, with a crucial difference that in GAE the value and policy are concurrently updated (each policy improvement step is followed by a policy evaluation step), while in our work, we emphasize the need to do multiple step improvement before evaluating the policy. The theoretical results of Efroni et al. (2018b) suggest that the concurrent update approach used by GAE does not necessarily result in an improving algorithm, which hints that using this approach might be problematic. We conjecture that the reason that this issue does not lead to significant performance deterioration in GAE is that most MuJoCo continuous control tasks are inherently of short horizon. In fact, our experiments show that in the Atari domains concurrently learning the policy and value leads to inferior performance.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We would like to thank the reviewer for the detailed review and useful comments.\\n\\n\\u201cusing simpler domains to better explain the algorithms\\u201d\\nWe agree with the reviewer that it would be better to describe very new algorithms using small/simple domains. The reason that we did not initially include our experiments in simple domains is that the difference between the performance of the algorithms is not very clear in such problems. To address the reviewer\\u2019s concern, we have now added a new section to the paper (Appendix C), where we report results on simpler environments, such as CartPole and Mountain Car. For the experiments on CartPole, we focus on the $\\\\kappa$-PI TRPO algorithm solely, since the results for the other versions ($\\\\kappa$-VI TRPO, DQN, and $\\\\kappa$-PI DQN) follow similarly. As pointed out by the reviewer, the purpose of this section is to gather a more intuitive understanding of the algorithms. Please refer to Appendix C in the updated paper for a more detailed discussion.\\n\\n\\u201ca simpler way to emphasize this is to show a plot of the cumulative reward ...\\u201d\\nWe have added a couple of bar plots to address this in Appendix C.4. These correspond to the $\\\\kappa$-PI training plots for HalfCheetah and Ant domains. It is clear from the bar plots that the performance is smooth in $\\\\kappa$. \\n\\n\\u201c1. Why choose 50% confidence intervals?\\u201d\\nThe results reported in Table 1 and 2 are the empirical mean $\\\\pm$ the empirical standard deviation, which for the sample size of 4 or 5 runs is roughly equal to the 95% confidence interval bound. Moreover, in the plots, we show results describing the empirical mean $\\\\pm$ 0.5 * empirical standard deviation, which is, again for the sample size of 4 or 5 runs, roughly equal to a 60% to 70% confidence interval. This is done so that there is less overlap in the graphs and they are more readable. The 50% value actually corresponds to these plots. We apologize for the lack of clarity here and have updated the paper with the correct confidence values. All conclusions are made with respect to the Table data eventually, which remains unchanged and still corresponds to the 95% confidence bound.\\n\\n\\u201c2. How big were the networks that you used for k-PI DQN?\\u201d\\nThe DQN network sizes are the same as used in Mnih et al. [1], i.e., 3 convolutional layers followed by 2 fully connected layers. \\n\\n\\u201c3. Line 19 of Algorithm 5 in Appendix A.1 \\u2026\\u201d\\nThe update in Line 9 resembles the expected SARSA update in Van Seijen et al. [2]. Also, this is the exact update as in DDPG (Equation 5 in Lillicrap et al. [3]).\\n\\n\\u201cContradictory Claims in the Results\\u201d\\nOur claim is essentially saying that the mean values of the best performing $\\\\kappa$ are consistently better than the mean values of the N_kappa = T baseline. Since the data here corresponds to the 95% confidence interval bound for 4-5 sample runs, increasing the number of sample runs would decrease the width of the 95% confidence interval, which essentially would ensure no overlap between the upper confidence limit of the baseline and the lower confidence limit of the best $\\\\kappa$ value. For example, comparing the two versions for 10 sample runs in the Ant domain, results in the lower confidence limit of the best $\\\\kappa$ value to be around 1230, while the upper confidence limit of the baseline to be around 1180, hence ensuring no overlap. 
Please note that due to the inherent variability in final training performance because of random seeding, the mean values for both cases, although relatively consistent, are also slightly changed. However, taking more samples always ensures that any x% confidence bound is narrowed.\\n\\n\\u201cTypo in the last column of Table 1\\u201d\\nThank you for pointing this out. We have fixed this in the updated version.\\n\\n\\u201cLinear convergence of PI and VI\\u201d\\nWe apologize for the confusion here. Many of the works in the optimization literature refer to such an exponential rate as linear in the parameter N, and we borrowed the same definition. We have fixed this in the updated version.\", \"references\": \"1. Mnih, Volodymyr, et al. \\\"Human-level control through deep reinforcement learning.\\\" Nature 518.7540 (2015): 529.\\n2. Van Seijen, Harm, et al. \\\"A theoretical and empirical analysis of Expected Sarsa.\\\" 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning. IEEE, 2009.\\n3. Lillicrap, Timothy P., et al. \\\"Continuous control with deep reinforcement learning.\\\" arXiv preprint arXiv:1509.02971 (2015).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"===== Summary =====\\nThe paper proposes an extension of multi-step dynamic programming algorithms from Efroni, Dalal, Scherrer, and Mannor (2018a, 2018b) to the reinforcement learning setting with function approximation. The multi-step dynamic programming algorithms proposed by Efroni et. al. (2018a) find the solution of the h-step optimal Bellman operator, which applies the maximum over the next h sequence of actions. Moreover, Efroni et. al. (2018a) also showed an equivalence between h-step optimal Bellman operators and k-Policy Iteration (k-PI) and k-Value Iteration (k-VI) algorithms, which, similar to TD( \\ud835\\udf06 ) but for policy improvement, take a geometric average of all future h-step returns weighted by k. The paper extends the work from Efroni et. al. (2018a, 2018b) to the deep reinforcement learning setting by proposing an approximate k-PI and k-VI algorithm based on DQN and TRPO. Finally, the paper provides empirical evaluations of k-PI and k-VI with DQN in several Atari games and of k-PI and k-VI with TRPO in several MuJoCo environments with continuous actions paces.\", \"contributions\": \"1. The paper proposes a non-trivial extension for k-PI and k-VI to use function approximation via the DQN algorithm. \\n2. Similarly, the paper proposes a non-trivial extension for k-PI and k-VI to use function approximation with continuous action spaces via the TRPO algorithm. \\n3. The paper provides empirical evaluations of the four proposed algorithms and, at least for the k-PI algorithm with DQN and TRPO, demonstrates an improvement over the baselines. \\n\\n===== Decision =====\\nThe paper represents a natural next step to the work of Efroni et. al. (2018a, 2018b). The paper extends the applicability of multi-step greedy policies to more complex environments and shows a statistically significant improvement in performance compared to the methods that it builds upon. Additionally, the ideas are presented clearly and incrementally throughout the paper, which makes it flow nicely until the part where k-PI and k-VI DQN and TRPO are introduced. This is my main complaint about the paper, the lack of simple and intuitive understanding about k-PI and k-VI with function approximation due to the complicated architectures associated with DQN and TRPO. For this reason, my rating of the paper is weak accept.\\n\\n===== Detailed Comments about Decision =====\\nAll of these are comments for which I would consider increasing my score if they were addressed. \\n\\n=== Empirical Evaluations ===\\nFirst, my main complaint is the complicated architectures and complex domains used to gain insights about k-PI and k-VI with function approximation. Big demonstrations in Atari and MuJoCo are important, but in the case of very new algorithms such as these ones, I consider it to be more important to gain insight through small domains that allow us to dig deep into the algorithms. Any small domain that would allow for big sample sizes for ablation and parameter studies would be more insightful than big demonstrations with very small sample sizes. 
I do not mean to be dismissive about what has been done in the paper, but it would be a great source of insight and a big improvement to what has already been done if a simple demonstration was presented in the paper. \\n\\nMy suggestion would be to use a simple approximation method, such as Tile Coding with linear function approximation, in small a domain such as mountain car. This would allow for a bigger sample size and a parameter study that could provide more insight about the role of the parameters k and C_{FA} on the performance of k-PI and k-VI. \\n\\nAdditionally, one of the claims in the conclusions was never emphasized in the results: \\u201cimportantly, the performance of the algorithms was shown to be \\u2018smooth\\u2019 in the parameter k.\\u201d This was not completely obvious until I spent some time looking closely at the graph. It eventually became clear, but I think a simpler way to emphasize this is to show a plot of the cumulative reward over the whole training period with the values of k on the x-axis. Based on the top right pane of FIgure 1, this type of plot would show a smooth increase from k=0.99 to k=0.68 followed by a smooth decrease from k=0.68 to k=0. \\n\\nFinally, I have some questions about some of the choices made in the experiments and results sections:\\n\\n1. Why choose 50% confidence intervals? 50% confidence intervals with a sample size of 4 in the case of DQN and 5 in the case of TRPO is equivalent to multiplying the standard error by a factor of approximately 0.7, which is narrower than using the standard error on its own. Thus, it seems that some of the conclusions would change based on using a 95% confidence interval compared to a 50% confidence interval in Tables 1 and 2. I insist in showing the performance in a small domain with a simple form of function approximation. This would complement the Atari and MuJoCo experiments by showing improvements in performance with a higher confidence. \\n\\n2. In remark one, it is pointed out that another target network \\\\tilde Q should be used to obtain \\\\pi_{t-1}, but this was not done to reduce the space complexity of the algorithm. How big were the networks that you used for k-PI DQN? If the network was not prohibitively big, why not implement \\\\tilde Q instead of using an alternative that further deviates from the original k-PI algorithm? \\n\\n3. Line 19 of Algorithm 5 in Appendix A.1 is supposed to be the off-policy TD(0) update. However, it is not clear how this update is off-policy TD(0) since it based on Q and it does not have any importance sampling to correct for the difference in policies. Am I missing something? It seems that it should be off-policy Sarsa(0), but even then it would still be missing an importance sampling term (see Sutton & Barto, 2018, Equation 7.11, or Algorithm 1 of Precup, Sutton, and Singh, 2000, for more information).\\n\\n=== Contradictory Claims in the Results ===\\nThere are a few claims that contradict with what is shown in Table 1 and 2.\\n\\nIn the last paragraph of Section 5.1.1 it says that \\u201c[the table 1] show[s] that setting N(k) = T leads to a clear degradation of the final training performance on all the domains except Enduro.\\u201d This is only true in two out of four games presented in Table 1. In Seaquest the lower confidence bound of the performance of k-PI with k=0.68 is 4643, whereas the upper confidence bound of the performance of k-PI with N(k) = T is 4837; the intervals clearly overlap. 
Similarly, in the game of Enduro, where k-PI with N(k) = T is said to have better performance, the lower confidence bound of k-PI with N(k) =T is 530, whereas for k-PI with k=0.84 the upper confidence bound is 575; again, the confidence intervals overlap. Hence, neither of these two claims are fully justified, and it is certainly not a \\u201cclear degradation of the final training performance.\\u201d\\n\\nSimilarly, in Section 5.2.2, k-PI is said to have a better performance than N(k) = T based on the results of Table 2. However, similar calculations show that this is only true for the Ant domain.\\n\\n===== Minor Comments =====\\n1. I believe there is a typo in the last column of Table 1, it should be a \\\\kappa instead of a \\ud835\\udf06.\\n\\n2. In the second paragraph above Equation 7, the convergence of PI and VI are said to converge to the optimal value with linear rate, but the rate of convergence is O( \\\\gamma^N ), i.e., exponential. Similarly, for the k-PI and k-VI their rate of convergence is O( \\\\ksi ( \\\\kappa )^{N( \\\\kappa )} ), which is also exponential. \\n\\n===== References =====\\nPrecup, Doina; Sutton, Richard S.; and Singh, Satinder, \\\"Eligibility Traces for Off-Policy Policy Evaluation\\\" (2000).ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning. 80.Retrieved fromhttps://scholarworks.umass.edu/cs_faculty_pubs/80\\n\\nR. Sutton and A. Barto. Reinforcement learning: An introduction. 2018.\\n\\nY. Efroni, G. Dalal, B. Scherrer, and S. Mannor. Beyond the one step greedy approach in reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, 2018a.\\n\\nY. Efroni, G. Dalal, B. Scherrer, and S. Mannor. Multiple-step greedy policies in approximate and online reinforcement learning. In Advances in Neural Information Processing Systems, pp. 5238\\u20135247, 2018b.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The main contributions of this paper are k-PI-DQN and k-VI-DQN, which are model-free versions of dynamic programming (DP) methods k-PI and k-VI from another paper (Efroni et al., 2018). The deep architecture of the two algorithms follows that of DQN. Efroni et al. (2018b) already gave a stochastic online (model-free) version of k-PI in the tabular setting. Although this paper is going one step further extending from tabular to function approximation, I feel that the paper just combined known results, the shaped reward from Efroni et al (2018a) and DQN. The extension seems straightforward. Mentioning previous results from Efroni et al (2018a) and (2018b) does not justify the extension would possess the same property or behaviour. The experiments were only comparing their methods with different hyperparameters, with only a brief comparison to DQN.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper focuses on the implementation and some empirical evaluations of a class of algorithms designed to find optimal strategies/values of large MDP.\\n\\nThe basic idea of these algorithm (called \\\\kappa-PI or \\\\kappa-VI) is to combine two type of classical approaches:\\n- policy/value iteration \\n- k-step ahead computation (instead of just 1-ahead, and actually, k should be quite big or even infinite with an auxiliary appropriate discount rate).\\n\\nThe theoretical formulation of \\\\kappa-PI and \\\\kappa-VI involves solving, at each iteration, another auxiliary MDP problem (where the discount rate is of order \\\\kappa\\\\gamma). This is basically what this paper suggests to do, and implements. \\n\\nThe experiments are a bit difficult for me to read, as the baselines (\\\\kappa=0 and =1, say) are compared with \\\"the best \\\\kappa\\\" which seems to be problem dependent, so I do not know if there is a clear message.\"}"
]
} |
BJx7N1SKvB | A Random Matrix Perspective on Mixtures of Nonlinearities in High Dimensions | [
"Ben Adlam",
"Jake Levinson",
"Jeffrey Pennington"
] | One of the distinguishing characteristics of modern deep learning systems is that they typically employ neural network architectures that utilize enormous numbers of parameters, often in the millions and sometimes even in the billions. While this paradigm has inspired significant research on the properties of large networks, relatively little work has been devoted to the fact that these networks are often used to model large complex datasets, which may themselves contain millions or even billions of constraints. In this work, we focus on this high-dimensional regime in which both the dataset size and the number of features tend to infinity. We analyze the performance of a simple regression model trained on the random features $F=f(WX+B)$ for a random weight matrix $W$ and random bias vector $B$, obtaining an exact formula for the asymptotic training error on a noisy autoencoding task. The role of the bias can be understood as parameterizing a distribution over activation functions, and our analysis actually extends to general such distributions, even those not expressible with a traditional additive bias. Intriguingly, we find that a mixture of nonlinearities can outperform the best single nonlinearity on the noisy autoencoding task, suggesting that mixtures of nonlinearities might be useful for approximate kernel methods or neural network architecture design. | [
"nonlinearities",
"mixtures",
"random matrix perspective",
"high dimensions",
"millions",
"characteristics",
"neural network architectures",
"enormous numbers",
"parameters"
] | Reject | https://openreview.net/pdf?id=BJx7N1SKvB | https://openreview.net/forum?id=BJx7N1SKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7kJxOtZ2JW",
"rkxjQeEwiH",
"Byg3S1NPjH",
"BylmyJNwjB",
"rylKZCXDiS",
"HyxOtaQviB",
"SJgdT5JY5S",
"rklsQVQCtS",
"Bkl2KwYTtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728729,
1573498914569,
1573498692308,
1573498587162,
1573498369172,
1573498239663,
1572563648160,
1571857443360,
1571817348203
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1647/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1647/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1647/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1647/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1647/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1647/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1647/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1647/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"In this work, the authors focus on the high-dimensional regime in which both the dataset size and the number of features tend to infinity. They analyze the performance of a simple regression model trained on the random features and revealed several interesting and important observations.\\n\\nUnfortunately, the reviewers could not reach a consensus as to whether this paper had sufficient novelty to merit acceptance at this time. Incorporating their feedback would move the paper closer towards the acceptance threshold.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to comments\", \"comment\": \"2. As we outline in Subsection 1.1, our work generalizes substantially over previous work as we analyze nontrivial data distributions and NNs with biases. In particular, previous formulae for the spectrum would agree poorly on real datasets like MNIST and CIFAR, whereas we get good agreement as evidenced in Fig. 2. Moreover, to achieve these generalizations, we had to introduce a new method of proof rather than the moment method employed in Pennington & Worah (2017).\\n\\n3. While Fig. 4 is indeed important and illustrates a main conclusion of our paper about mixtures of nonlinearities, the other figures are also significant as they demonstrate the validity of the mathematical machinery we use to predict the spectrum on complex datasets. To help provide context on these other figures, we have reworded their captions and made sure that each figure is referenced in the main text.\\n\\n4, 5, and 6. We have rewritten these sentence to clarify our meaning and fix any grammatical errors.\"}",
"{\"title\": \"Response to minor comments\", \"comment\": \"1. The formula for E_train in Eq. (17) is exact, since the training error is a random variable that converges in probability to a deterministic quantity. We have updated the statement of Cor. 1 to clarify this.\\n\\n2. The only assumption on f is that it leads to finite moments up to third order under Gaussian expectation. There is also some discussion of the differentiability of f in Remark 4: while we assume f is differentiable for convenience, it is in fact not a necessary assumption as the derivative is only used in a Gaussian expectation, which inherently smooths the function. This implies in particular that ReLU activations are included in the behavior we identify. See Figs. 1 and 2, for example, where we find excellent agreement when f is ReLU.\\n\\n3. The object G(\\\\gamma) is indeed the same as G defined in Eq. (1); the only difference is that the function is evaluated at z=-\\\\gamma.\"}",
"{\"title\": \"Response to cons.\", \"comment\": \"1. This is a natural question and one we want to respond to in detail. Augmenting X as [X, 1] does not allow us to address both models in a unified manner. In fact, if it did, our theorem would be a special case of earlier work -- in particular, the final spectral distribution would depend on only the two parameters \\\\eta and \\\\zeta. However, the class of spectral distributions we find is nonparametric and thus cannot be expressed in this way.\\n\\nThe issue is that the previous work only applies if the augmented coordinate [--, 1] is transformed by weights of the same order of magnitude as the other features in X, i.e. the bias is of order O(1/\\\\sqrt{n}) per output feature -- an effect that disappears in the large m limit. Instead, our formulation f(WX+b) allows for an O(1) bias term per output feature. One can certainly reformulate this algebraically as f( [W, b] . [X, 1] ), but the scaling assumptions of prior work are (significantly) broken since W and b are on different scales. Addressing these issues is a main contribution of our derivation.\\n\\nWe have included a brief discussion of this point in the new version of the paper.\\n\\n2. In our setting the training error is a way to quantify the capacity of the function class. Indeed the test error is also interesting, but analyzing it requires specifying a model for the joint distribution between the data points and labels, and substantially more analysis. Unfortunately, this analysis is outside of the scope of this paper.\"}",
"{\"title\": \"Response to minor comments\", \"comment\": \"(1) We focused on the linear model because it is the simplest (nontrivial) learning task. Moreover, the analysis follows directly from the paper\\u2019s main theorem, whereas more complex learning tasks require substantially more calculation\\u2014generally beyond the scope of this initial work.\\n\\n(2) We are happy to include experiments on additional datasets in a final version of the paper.\"}",
"{\"title\": \"Thanks to the reviewers and AC\", \"comment\": \"We are grateful to all reviewers for their constructive feedback and for the time they took to review our work. We have uploaded a new version of the paper and added detailed responses to their comments below. In particular, we address some technical questions about our paper, and explain the overall merits of our work. We hope this encourages reviewers 2 and 3 to reconsider their scores.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper analyzed the asymptotic training error of a simple regression model trained on the random features for a noisy autoencoding task and proved that a mixture of nonlinearities can outperform the best single nonlinearity on such tasks.\", \"comments\": \"1.The paper is well written and provides sound derivation for the theories.\\n\\n2. Since this area is out of my expertise, I\\u2019m not sure whether merely extending the work of Pennington & Worah (2017) to non-Gaussian data distributions is significant enough or not.\\n\\n3. Except for Fig 4, the other figures seem out of the context. There is no explanation for the purpose of those figures in the main contents. It is a bit hard for the audience to figure out what to look at in the figures or what the figures try to prove. \\n\\n4. In \\u201c..., and our analysis actually extends to general such distributions, ... \\u201d, \\u201cgeneral\\u201d should be \\u201cgeneralize\\u201d.\\n\\n5. In \\u201cAnd whether these products generate a medical diagnosis or a navigation decision or some other important output, ..\\u201d, \\u201cwhether\\u201d should be \\u201cno matter\\u201d.\\n\\n6. \\u201c..., they may not be large in comparison to the number of constraints they are designed asked satisfy.\\u201d should be \\u201c... they are designed to satisfy\\u201d.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper investigates the asymptotic spectral density of a random feature model F(Wx + B). This is an extension of existing result that analyzed a model without the bias term, i.e., F(WX). This extension requires a modification of the proof technique. In addition to that, it analyzed a mixture of linear and non-linear activation functions, and show that mixture is better than single nonlinearity in terms of expected training error for ridge regression estimators.\", \"pros\": [\"This paper investigates an interesting problem and it successfully extends the existing work. The theoretical curve well matches the simulated curve.\", \"The finding that mixture of nonlinearities gives better expected training error is interesting.\"], \"cons\": [\"The extension to the model with bias seems a bit incremental. In practice, we may consider an input with additional constant feature, X <- [X,1], to deal with both models in a unified manner. There should be more discussion about why this kind of trivial argument cannot be applied in the analysis.\", \"The effect of mixture of activation functions is investigated in the \\\"training error,\\\" but I don't see much significance on investigating the training error thoroughly. Instead, people are interested in the test error. I guess there does not appear such a trade-off for the test error and the linear activation function would be always better because the true function is the linear model. Hence, more expositions about why the training error is investigated should be provided.\"], \"more_minor_comment\": [\"I guess the definition of Etrain (Eq.(17)) requires an expectation with respect to the training data.\", \"Assumptions of the activation function f should be provided; is it just assumed to be differentiable?, ReLU is included?\", \"The definition of G(\\\\gamma) in page 6 had better to be consistent to that in previous pages.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this work, the authors focus on the high-dimensional regime in which both the dataset size and the number of features tend to infinity. They analyze the performance of a simple regression model trained on the random features and revealed several interesting and important observations. I think it is a solid work and vote for acceptance.\", \"pros\": \"(1) This paper has a solid theoretical foundation. Although I have not checked in detail, I think the deduction is clear and the contribution is well-established.\\n(2) It extends some traditional bounds to more general cases. I think it will provide useful guidance to real applications, such as the network design in deep learning.\\n(3) The authors have explained the results in a clear way. Thus, it will benefit the following readers and give deep insights about the related research areas.\", \"minor_comments\": \"(1) I think some assumptions should be explained. For example, why the authors focus only on linear model. Due to the simplicity or the requirement from real applications?\\n(2) More experimental results on large data sets should be added to validate the effectiveness.\"}"
]
} |
ryefE1SYDr | LIA: Latently Invertible Autoencoder with Adversarial Learning | [
"Jiapeng Zhu",
"Deli Zhao",
"Bolei Zhou",
"Bo Zhang"
] | Deep generative models such as Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) play an increasingly important role in machine learning and computer vision. However, there are two fundamental issues hindering their real-world applications: the difficulty of conducting variational inference in VAE and the functional absence of encoding real-world samples in GAN. In this paper, we propose a novel algorithm named Latently Invertible Autoencoder (LIA) to address the above two issues in one framework. An invertible network and its inverse mapping are symmetrically embedded in the latent space of VAE. Thus the partial encoder first transforms the input into feature vectors and then the distribution of these feature vectors is reshaped to fit a prior by the invertible network. The decoder proceeds in the reverse order of the encoder's composite mappings. A two-stage stochasticity-free training scheme is designed to train LIA via adversarial learning, in the sense that the decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from an autoencoder by detaching the invertible network from LIA. Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA for inference and generation. | [
"variational autoencoder",
"generative adversarial network"
] | Reject | https://openreview.net/pdf?id=ryefE1SYDr | https://openreview.net/forum?id=ryefE1SYDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PagJTXqs6",
"BJxZZTx5ir",
"B1l9exHBjH",
"ByxHOkHBir",
"BylZiANHor",
"BkgxA2NrjB",
"ByljTrZ0qB",
"rJxSeEOj5S",
"BkxbdqGb5B",
"SJeLsABqFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728700,
1573682424695,
1573371890047,
1573371756529,
1573371545205,
1573371079941,
1572898243033,
1572729837257,
1572051561327,
1571606174264
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1645/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1645/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1645/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1645/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1645/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1645/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1645/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1645/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1645/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"A nice idea: the latent prior is replaced by a GAN. A general agreement between all four reviewers to reject the submission, based on a not thorough enough description of the approach, and possibly not being novel.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your clarifications\", \"comment\": \"Thank you for your clarifications\\n\\nThese comments have helped clear up my understanding of some important details.\"}",
"{\"title\": \"To Reviewer #4\", \"comment\": \"Q1: \\u201cHowever, the comparison to VAEs is difficult since the latent representation of the data learned by VAEs differs from the one of LIA.\\u201d\", \"a1\": \"Indeed comparing the latent representations of the VAEs and LIA directly is difficult, but we have compared the quality of the reconstruction images by the two methods in the experiment and our method achieved much better reconstruction accuracy.\", \"q2\": \"\\u201cExperiments regarding the learned latent representation are missing in the paper (the interpolation experiment in the appendix might be a starting point)\\u201d\", \"a2\": \"We add more experiments on interpolation and attribute manipulation, as well as the analysis on entanglement of latent representations in the revised version.\", \"q3\": \"\\u201cThe authors use posterior collapse in VAEs as a main argument for introducing LIA. However, it is easy to avoid as stated in e.g. Bowman et al. (2015) or S\\u00f8nderby et al. (2016), and hence this argument doesn't make a strong case for LIA. \\u201d\", \"a4\": \"The x-axis is re-scaled by 10,000. We didn\\u2019t explain it in the caption. We update it in the revised version.\", \"q4\": \"\\u201cIt is difficult to interpret the experiments in Fig. 5: the first 10 iterations might not be very significant.\\u201d\", \"q5\": \"\\u201cExperimental details are missing. I would appreciate to have model architectures in the appendix.\\u201d\", \"a5\": \"More experimental details and model architectures are provided in the revised version, the model architectures are attached in Appendix.9 in the revised version.\", \"q6\": \"\\u201cHow were the accuracies of the generations in Tab. 2 computed?\\u201d\", \"a6\": \"We followed the standard evaluation in StyleGAN paper (Karras et al., CVPR\\u201919) to compute FIDs in Table 2. We randomly generated 50,000 fake images and randomly sampled 50,000 real images. Then we computed FIDs with these two datasets.\"}",
"{\"title\": \"To Reviewer #1 (continued)\", \"comment\": \"Q4: \\u201cCalling the two stage training \\u201cStochasticity free\\u201d is incorrect\\u2014if you\\u2019re training the model as a GAN, then (1) you\\u2019ll be sampling z\\u2019s in the first place so it already has a latent distribution defined and (2) the end result of training will be much more variable than, say, training with a likelihood metric. There is a *ton* of stochasticity in the first stage of training!\\u201d\", \"a4\": \"You misunderstood our motivation and algorithm. \\u201cStochasticity-free\\u201d refers to that there is no any stochasticity involved in the loss and optimization of our algorithm. There is NO any likelihood used in our algorithm.\", \"q5\": \"\\u201cHowever, the metrics used for evaluation are limited\\u2014while at least MSE is presented, I would again stress that reconstruction is an odd metric to use when other factors like compression rates are not considered.\\u201d\", \"a5\": \"We provided three metrics for the evaluation, including MSE. And our work has little relevance with compression.\", \"q6\": \"\\u201cWhat\\u2019s more, the datasets chosen are all single-class datasets with a massive amount of data\\u201d, \\u201cWhile I would of course prefer to see results on something massively multimodal like ImageNet\\u201d\", \"a6\": \"The datasets we used are the same as the ones used by the state-the-art unconditional generative models like ProGAN (Karras et al., ICLR\\u201918) and StyleGAN (Karras et al., CVPR\\u201919). Actually, we follow the experimental setup of StyleGAN.\", \"q7\": \"\\u201cFurther, I found the experiment in 5.3 to be confusing and the results irrelevant to the argument made by the authors.\\u201d, \\u201cI do not see why the increased gradient noise relative to LIA is indicative of the superiority of the method.\\u201d\", \"a7\": \"The experiment is to show the superiority of using the invertible network. We explain this in detail in section 5.3.\", \"q8\": \"\\u201cWhy is it helpful to have an invertible phi in place of the StyleGAN MLP?\\u201d\", \"a8\": \"We explain it in section 2.1, section 3, and section 5.3.\", \"q9\": \"\\u201cwhat does it mean that the \\u201cgradients\\u201d are plotted in the figures relating to this experiment?\\u201d\", \"a9\": \"It is the norms of gradients. We make it clear in the revised version.\"}",
"{\"title\": \"To Reviewer #1\", \"comment\": \"A1: \\u201cI have a litany of concerns with the paper itself, concerning its high similarity with a paper published in May, its motivation, its presentation, its empirical evaluation, and the analysis presented within.\\u201d, \\u201c...titled \\u201cGenerative Latent Flow\\u201d (GLF) proposes an idea which is in essence identical to the one proposed in this paper.\\u201d\", \"q1\": \"First, saying the idea of GLF \\u201cis in essence identical to the one proposed in this paper\\u201d is incorrect. 1) About the principle. To improve the variational inference, the nature of GLF is to replace KL-divergence in VAE with the log-likelihood of normalizing flow. However, we did not use any optimization about normalizing flow. We only use the invertibility of the invertible network to establish the bijective mapping between disentangled features w and the associated latent code z in the latent space. In fact, there is no variational inference involved in our algorithm. 2) About the architecture. The algorithmic architectures between GLF and our LIA are totally different. You may know this by examining Figure 1 in the GLF paper and Figure 1 in our paper.\\n\\nSecond, using normalizing flow to improve the variational inference in VAE was first studied by Diederik Kingma much earlier in 2016 through the following paper.\\n\\nImproving variational inference with inverse autoregressive flow\", \"https\": \"//en.wikipedia.org/wiki/Curse_of_dimensionality\", \"f_vaes\": \"Improve VAEs with Conditional Flows\", \"a2\": \"\\u201cthe stated motivation in this paper is, I think, misguided\\u201d, \\u201cThe authors do not provide any motivation for why reconstruction matters\\u201d, \\u201crather than e.g. to learn a \\u201cgood representation\\u201d.\", \"q2\": \"GANs encounter difficulty when we try to get the latent code z for a given REAL image. This is fundamental to apply GANs for real image manipulation. Many recent work achieves faithful image manipulation for synthetic images (model-generated images rather than real images), such as facial attribute manipulation [1], object adding and removal in scenes [2], and steerable image attribute manipulation [3]. When applying for a real image, a basic requirement for such tasks is that a faithful reconstruction need to be guaranteed when solving the latent code. We include more real image manipulation results in the revised version (Fig.xx) to emphasize the motivation of faithful reconstruction.\\n\\n[1] InterFace GAN:\", \"q3\": \"\\u201cGiven that one can produce a model which achieves arbitrarily high-quality reconstructions by simply increasing the dimensionality of the bottleneck, I do not find reconstruction to be a compelling problem.\\u201d\", \"a3\": \"This viewpoint is incorrect. For the vanilla autoencoder, we can say \\u201chigh-quality reconstructions by simply increasing the dimensionality of the bottleneck\\u201d. For VAE, however, the case is totally different because there is a probability constraint on the latent space. In fact, the variational inference becomes more difficult when the dimensionality of the latent space increases due to the curse of dimensionality. This is one of the underlying reasons why learning a good VAE is so difficult, even though its architecture is simple. Please refer to the following page to understand this.\\n\\nCurse of dimensionality\"}",
"{\"title\": \"To Reviewer #3\", \"comment\": \"Q1: \\u201cHowever, the work in this paper can be seen as adding the losses from GANs and VAEs together to help learn a better generative model, which is not very novel.\\u201d\", \"a1\": \"This is a misunderstanding on our algorithm. We do not use the loss of KL-divergence in VAE. Our algorithm is not the loss combination of GANs and VAEs either.\", \"q2\": \"\\u201cWhile there is lots of math in the paper it is difficult for the reader to follow and often not well motivated why these choices are made.\\u201d\", \"a2\": \"We only used the very basic mathematical expressions to make writing more accurate. In order to help understanding, we plotted Figures 1 and 2 to explain the principle, architecture, and training strategy of our algorithm. We also used equation (2) to show the mapping details of our algorithm.\", \"q3\": \"\\u201cThe conjecture that VAEs produce blurry images needs a reference.\\u201d\", \"a3\": \"The following papers clearly show the blurry images generated by VAEs, we will include them in the revised version. In our experimental section, we have also shown the blurry images generated by VAEs.\\n\\nAutoencoding beyond pixels using a learned similarity metric\", \"https\": \"//gandissect.csail.mit.edu/\", \"q4\": \"\\u201cI am not sure if GANs are limited by the lack of an encoder.\\u201d\", \"a4\": \"Lacking encoder is one of the fundamental problems for GANs. It greatly limits the GANs for real image applications such as facial image manipulation [1] and semantic scene editing [2]. If we want to edit any given real images with GANs, we need to know the corresponding latent code z. This cannot be achieved with the vanilla GAN, thus posing the problem.\\n\\n[1] InterFace GAN\", \"q5\": \"\\u201cThe second claim in the paper (not affected by posterior collapse) should be proved in the paper or at least illustrated in some way.\\u201d\", \"a5\": \"To the best of our knowledge, there is no work that can rigorously prove linear interpolation and vector arithmetic about GANs up to now. We follow the convention of the baseline DCGAN and the state-the-art StyleGAN to illustrate these aspects by experiments. All the papers about GANs on this aspect perform this task as the same as ours.\", \"q6\": \"\\u201cTHe claim that the method will have linear interpolation and vector arithmetic will need to be more rigorously proven.\\u201d\", \"q7\": \"\\u201cthere are a number of hyperparameter values defined yet what these hyper parameters are is not explained. More detail needs to be added in this area for the reader to understand the experiments.\\u201d\", \"a7\": \"We explain these hyperparameters in the main context. We will explain them again in the experiment for easy understanding according to your advice. Thanks for the reminder.\", \"q8\": \"\\u201cThe latent encoding size use is rather large.\\u201d\", \"a8\": \"The 512-dimensional latent codes are usually applied in GAN works such as ProGAN (Karras et al., ICLR\\u201918) and StyleGAN (Karras et al., CVPR\\u201919). We follow the convention.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\n\\nthe authors of this paper propose a two-stage model consisting of a Wasserstein GAN and an autoencoder. The goal is to learn an encoding of data within the GAN framework. \\nThe idea is inspired by the concept of VAEs. However, instead of maximising the ELBO, the authors propose to learn/represent the generative model by a Wasserstein GAN (first stage). Here, the architecture is crucial: an invertible MLP_1 is used to map from a standard normal prior into the feature space; then, a classical MLP_2 maps into the data space\\nIn the second stage, MLP_2 serves as decoder to train an encoder. By combining the latter with MLP_1, data can be encoded into the latent space.\\n\\nThe authors experimentally show that their method leads to improved reconstructions compared to previous GAN-based methods.\\n\\n\\nIn the following, a few concerns:\\n\\n1) The authors motivate their approach with the goal of \\\"encoding real-world samples\\\" without accepting disadvantages of VAEs such as \\\"imprecise inference\\\" or \\\" posterior collapse\\\". However, the comparison to VAEs is difficult since the latent representation of the data learned by VAEs differs from the one of LIA.\\nFore example, in contrast to VAEs, where similar data is clustered in the latent space, this is not necessarily the case for GANs (e.g. Mukherjee et al., 2019). Experiments regarding the learned latent representation are missing in the paper (the interpolation experiment in the appendix might be a starting point). \\n\\n2) The authors use posterior collapse in VAEs as a main argument for introducing LIA. However, it is easy to avoid as stated in e.g. Bowman et al. (2015) or S\\u00f8nderby et al. (2016), and hence this argument doesn't make a strong case for LIA. \\n\\n3) It is difficult to interpret the experiments in Fig. 5: the first 10 iterations might not be very significant.\\n\\n4) Experimental details are missing. I would appreciate to have model architectures in the appendix (even if the authors are going to make the source code publicly available).\\n\\n5) How were the accuracies of the generations in Tab. 2 computed?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The work in the paper is interesting and seems to show some empirical progress on image generation. However, the work in this paper can be seen as adding the losses from GANs and VAEs together to help learn a better generative model, which is not very novel. The invertible part is help for training as well. Still more detail about how the method works would be very helpful. While there is lots of math in the paper it is difficult for the reader to follow and often not well motivated why these choices are made. For example, the optimization is split into two different updates steps because they can't be combined mathematically. Yet, performing two different types of update steps can often favour whichever is easier and the other is noise. More details on how this process was made successful are important to discuss in the paper.\", \"more_detailed_comments\": [\"The conjecture that VAEs produce blurry images needs a reference.\", \"I am not sure if GANs are limited by the lack of an encoder. It could be that the introduction of an encoder is exactly what makes it difficult for VAE to learn complex and detailed image generation.\", \"The second claim in the paper (not affected by posterior collapse) should be proved in the paper or at least illustrated in some way. Currently, this claim is not well backed up in the paper.\", \"THe claim that the method will have linear interpolation and vector arithmetic will need to be more rigorously proven. Right now it seems a little too much like proof by picture.\", \"I do like figure 1 and 2. They help explain the method rather well.\", \"In the begining of the experiment section, there are a number of hyperparameter values defined yet what these hyper parameters are is not explained. More detail needs to be added in this area for the reader to understand the experiments.\", \"The latent encoding size use is rather large.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper develops a new generative model called latently invertible autodecoder (LIA). The major aim is to conduct variational inference for VAE and encoding real-world samples for GAN. My understanding is that the authors tried to achieve this by decomposing the framework into a wasserstein GAN and a standard autoencoder. I believe this paper contains promising idea.\\n\\nThe experiment is very thorough, and the results show that LIA achieves good empirical performance.\\n\\nThe method is not presented in a user-friendly fashion, and the presentation can be improved.\\n\\nI must admit that I do not work on this field, and cannot judge this paper with more details.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"LIA: Latently Invertible Autoencoder Review\\n\\nThis paper proposes a novel generative autoencoder, and a two-stage scheme for training it. A typical VAE is trained with a variational approximation: during training latents are sampled from mu(x) + sigma(x) * N(0,1), mu and sigma are regularized with KL div to match an isotropic normal, and the model minimizes a reconstruction loss. A LIA is instead first trained as a standard GAN, where an invertible model, Phi, (e.g. a normalizing flow) is hooked up to the Generator/Decoder, such that the output is G(Phi^{-1}(z)) with z~p(z), a simple distribution. In the second stage, the Generator/Decoder and Discriminator are frozen, and an Encoder is trained to minimize a reconstruction loss (in this paper, a pre-trained VGG network is used as a feature extractor to produce a perceptual loss) and maximize the \\u201creal\\u201d output of the frozen Discriminator on reconstructed samples.\\n\\nThe key advantage of this method is that during the second stage no stochasticity is injected into the network and Phi is not involved.This means that the encoder does not need to be regularized to produce a specific parametric form of its output (no KL(p || q))), instead implicitly learning to match the latent distribution expected by the generator through the reconstruction losses. Additionally, because the latent space is not e.g. an isotropic Gaussian, it can be more expressive and flexible, only being constrained to an invertible transformation of the distribution p(z) chosen during the first training stage.\\n\\nThe training procedure is evaluated using a StyleGAN architecture on several high-res, single-class datasets (faces and LSUN Cats/Cars/Bedrooms). The quality of the resulting reconstructions is compared against several methods which are also capable of inference (like ALI and post-training an encoder on a standard StyleGAN), and samples and interpolations are presented. There is also an experiment that compares gradients in the encoder when training using LIA against training using a more typical VAE setup.\", \"my_take\": \"The key idea behind this paper is quite promising, and I believe this paper has tremendous potential. I agree with the authors that the usefulness of implicit generative models is limited by their typical lack of an encoder network, and that existing approaches have several design drawbacks, and incorporating invertible networks and advances in flow-based modeling looks like a fruitful avenue of research.\\n\\nHowever, I have a litany of concerns with the paper itself, concerning its high similarity with a paper published in May, its motivation, its presentation, its empirical evaluation, and the analysis presented within. While my concerns and suggestions are extensive, this paper is perhaps unusual in that all of the issues I have are quite fixable; the core idea is good, but its realization and presentation in the paper need a substantial amount of revision. I am currently giving this paper a weak reject as I do not believe it is ready for publication, but I believe that with an overhaul this paper could be a more clear accept.\\n\\nUpdate, post rebuttal:\\n\\nThanks to the authors for their response. 
While I appreciate their insistence that my issues with the paper likely stem from my simply not understanding it (or the underlying topics), I hope they can appreciate that such an appeal is unlikely to allay said concerns. Pointwise:\\n\\n1. While there is of course a difference between the LIA setup and the GLF setup, regardless of the two-stage training or the inclusion of an adversarial or perceptual loss or any other bells and whistles that get attached, the fact remains that the resulting architecture for both LIA and GLF is an autoencoder with an invertible model in the middle. Arguing that the LIA setup is somehow fundamentally different is akin to arguing that a VAE-GAN is an utterly different model from a VAE with a VGG perceptual loss. Yes, they're optimizing slightly different things (distribution-wise differences vs sample-wise differences) , they have different characteristics, etc., but at the end of the day they're still autoencoders with extra bits on the end. The same general principle holds here. As I stated in my original review, I consider the differences relatively minor and maintain my stance that comparison is warranted. \\n\\n1(a). While the authors may have completed the work over a year ago, the fact that the other work was made public multiple months before this work means that it does, in fact, count as prior work. There is plenty of precedent at ICLR for work which appeared on arXiv well before submission date to be considered as prior work. I understand that this can be frustrating if the authors have previously submitted to other conferences and wished to wait until acceptance before making the work public, but that is a personal choice that does not change the nature of the situation.\\n\\n2. If interfacing with image manipulation techniques is the motivation for improving reconstructions, this motivation should be clearly stated in the paper. After rebuttal there is still no mention of this motivation, which suggests to me that the authors expect all readers to consider \\\"reconstruction\\\" (which I again posit is not really a task) to matter intrinsically.\\n\\n3. I once again appreciate that the authors hold that this reviewer is incorrect about basic facts. The forward KL placed on the latent space of a VAE only encourages it to resemble a particular distribution (typically isotropic gaussian) but the information content passed through the bottleneck can indeed grow with the size of the latent space, as one can see experimentally by ablating the latent dimensionality. This general principle should also be reinforced when one considers that flow models with exact inference (i.e. perfect reconstructions) require dz==dx.\\n\\n4. This reviewer maintains that sampling a random input from a distribution during training involves stochasticity.\\n\\n5. This still does not address my concern that reconstruction is not a particularly relevant task.\", \"7_9\": \"Thank you for modifying the caption in this figure, though I still hold that the y-axis should be correctly labeled. 
This still does not address my concern that this experiment does not actually show what the authors claim it does--the magnitude of the gradient noise is not by any measure a viable indication that the inclusion of phi is doing anything meaningful in place of a standard MLP as the comparison is instead made to an entirely different training setup.\\n\\nI maintain my stance of rejection.\", \"original_review\": \"First off, a paper published in May on arXiv titled \\u201cGenerative Latent Flow\\u201d (GLF) proposes an idea which is in essence identical to the one proposed in this paper. In GLF, a VAE is trained, but rather than using the reparameterization trick, a true normalizing flow is trained to model the distribution of the output of the encoder (i.e. with NLL using the change of variables formula common to flow-based models), such that the training of the actual autoencoder is truly deterministic (in the sense that at no point is an epsilon~p(z) sampled like in a normal VAE. The core difference between LIA and GLF is that GLF learns to model the distribution of the encoder outputs to enable sampling, while LIA incorporates an invertible model into a generator which explicitly through sampling, and then fits an encoder post-hoc. There are other differences in implementation and the choice of datasets, but those are (IMO) minor details relative to the core similarity. Given that GLF was published 4 months before the ICLR2020 deadline, this paper absolutely must be cited, compared against, and discussed. I am somewhat inclined to argue that given the similarity, LIA is merely incremental relative to GLF, but for now I think it is sufficient to point out the existence and similarity.\\n\\nSecond, the stated motivation in this paper is, I think, misguided. The authors argue for the need of an inference network, but they explicitly make clear that their goal is to train this network to enable reconstruction of an x given a z, rather than e.g. to learn a \\u201cgood representation\\u201d (bearing in mind that what constitutes a good representation is strongly subject to debate). The authors do not provide any motivation for why reconstruction matters. At no point is an application or downstream task presented or mentioned in which good reconstructions are relevant. One might argue that choosing reconstruction quality is as arbitrary as pursuing improved sample quality (as is in vogue in GAN papers) but there is substantial evidence that improved sample quality correlates with improved representation learning (mode dropping in GANs notwithstanding); the case is more complex for high-quality reconstructions. \\n\\nReconstruction could perhaps be motivated from the point of view of compression, but this paper makes no attempt to examine compression: rate-distortion tradeoffs are not considered, nor are any empirical metrics of compression ratio or likelihood such as bits/dim presented. Given that one can produce a model which achieves arbitrarily high-quality reconstructions by simply increasing the dimensionality of the bottleneck, I do not find reconstruction to be a compelling problem.\\n\\nOne might also argue that improved reconstruction capacity is indicative of better ability to fit the distribution (i.e. 
less mode dropping), but in the LIA setup the generator is trained as a standard StyleGAN with the only modification being the replacement of the MLP with Phi, so there\\u2019s no reason to believe that the implicit model defined by G has been meaningfully affected by the inclusion of the post-hoc trained encoder.\\n\\nIf the authors wish to pursue \\u201creconstruction\\u201d as the primary motivation for learning an encoder, I would suggest they spend more time discussing compression and the latent bottleneck, as well as performing more detailed empirical evaluations (explained below). Basically, *why* does reconstruction matter? Alternatively, the authors could demonstrate the usefulness of their learned encoders for downstream tasks to indicate that the representations they learn are of high quality and useful.\\n\\nThird, the presentation of this paper needs a lot of work. There are typos and confusing statements throughout, as well as several instances of overstatement. \\n\\nThe key insight of this paper appears to be that \\u201chaving an invertible network at the input to the generator makes it more amenable to post-hoc learning an encoder.\\u201d If I understand correctly, the only difference between this method and Encoded StyleGAN is that this paper uses an invertible model in place of the StyleGAN MLP. If this is the case, then the paper needs to (a) make clear the minimality of this difference and (b) devote substantial exposition to exploring the difference and why this is important (see my comments in the experimental section).\\n\\nPhrases like \\u201cthe two-stage training successfully handles the existing issues of generative models\\u201d suggests that this method has solved all of the problems in generative modeling, which the authors have by no means demonstrated to be the case. \\n\\nCalling the two stage training \\u201cStochasticity free\\u201d is incorrect\\u2014if you\\u2019re training the model as a GAN, then (1) you\\u2019ll be sampling z\\u2019s in the first place so it already has a latent distribution defined and (2) the end result of training will be much more variable than, say, training with a likelihood metric. There is a *ton* of stochasticity in the first stage of training!\\n\\nThe paper states several times that the \\u201ccritical limitation\\u201d of adversarial models is their lack of an encoder. While implicit generative models do not generally require an encoder, there are plenty of methods (BiGAN by Donahue and ALI by DuMoulin, along with all the VAE-GAN hybrids) that jointly learn encoders, and much work on training an encoder post-hoc. These methods are acknowledged in the related work, but I think they should be taken into consideration when describing this \\u201ccritical limitation.\\u201d While not having an encoder does indeed hinder or prevent the use of an implicit model for inference, I think stability, mode dropping, and mode collapse are more prominent issues with GANs. I think the authors might do better to say something to indicate that the challenge is to train a model which both has sharp, high-quality samples (as with GANs) which is still capable of inference or explicit likelihood evaluation (VAEs, etc). 
\\n\\nIn general, I found the description of the model itself to be confusing, and needed several thorough read-throughs just to understand what was going on: what was being frozen when, the fact that the model is just a GAN with a post-hoc trained encoder--I felt that there was a lot of obfuscatory language obscuring the actual simplicity of the method (which might arguably be its strength).\\n\\nWhile I would generally like the paper\\u2019s exposition to be improved, I understand that saying \\u201cwrite it better\\u201d is unhelpful so please see my individual notes at the end of this review for additional specific points. \\n\\nFourth, I found the empirical evaluations to be somewhat weak. To be clear, the results appear to be very good-the model retains the sample quality of StyleGAN (at least as far as can be seen from the presented samples) while achieving noticeably higher-quality reconstructions on all the tested datasets. However, the metrics used for evaluation are limited\\u2014while at least MSE is presented, I would again stress that reconstruction is an odd metric to use when other factors like compression rates are not considered. While it is interesting to note that in this exact setup (mostly dim_z=512) LIA outperforms the baselines wrt the chosen metrics, a more thorough evaluation would, for instance, sweep the choice of dim_z, and ideally present NLL results (which I think are possible to evaluate given that LIA has a flow model even if it\\u2019s not trained using NLL, but I\\u2019m not 100% sure on this front and am open to counterarguments on this front).\\n\\nWhat\\u2019s more, the datasets chosen are all single-class datasets with a massive amount of data\\u2014as far as generative modeling is concerned, these are very datasets with a minimal amount of variation. This is critical because the LIA method relies on pre-training a GAN, meaning that it does nothing to deal with problems like mode dropping and mode collapse. While we may not see much mode dropping on these very easy datasets (where there are, essentially, very few modes), this is still a substantial problem in the general case, as can be seen by results on e.g. ImageNet. If your GAN collapses or drops modes then post-training the encoder is not likely to be able to recover them. This is also arguably a weakness of this paper relative to GLF which incorporates the encoder into the training loop of the decoder and is likely to be better at covering modes.\\n\\nAccordingly, I have substantial concerns that this method will not work well on datasets outside of these highly-constrained, nearly-unimodal, single-object, very-high-data datasets. While I would of course prefer to see results on something massively multimodal like ImageNet (training on a 100-class subset @ 64x64 resolution would be about 100,000 images and should be even less hardware intensive than the already performed experiments) I am aware of how clich\\u00e9 it is for reviewers to ask for imagenet results. Auxiliary experiments on CIFAR-100 or something with more than one class would go a long way towards allaying my concerns on this front.\\n\\nNext, no error bars are presented; this is simply inexcusable. 
Given that no hardware requirements are presented it is difficult to judge if expecting multiple runs is unreasonable but unless each run requires weeks of the authors\\u2019 full hardware capacity, there is no reason for the authors not to include error bars or expected variances on the numbers for as many of their experiments as possible.\\n\\nFurther, I found the experiment in 5.3 to be confusing and the results irrelevant to the argument made by the authors. First of all, what does it mean that the \\u201cgradients\\u201d are plotted in the figures relating to this experiment? Are these gradient norms for a layer in the network, and if so, what type? Is the loss in Figure 5c the training loss or the test loss? I also disagree that the VAE \\u201cgradients\\u201d are \\u201cmore unstable\\u201d than the LIA \\u201cgradients,\\u201d they are simply noisier. I do not see why the increased gradient noise relative to LIA is indicative of the superiority of the method, but is instead entirely expected given that noise is explicitly injected into a standard VAE\\u2014I would argue that the change in gradient noise is simply the result of removing the stochasticity, but it says nothing as to whether or not the LIA method is better than the VAE method. Again, I agree that using an invertible network in some capacity is preferable to using the reparameterization trick, but I found this specific experiment to be distracting.\\n\\nI think the paper would do better to explore the importance of the invertible network relative to the exact same procedure but with the invertible network replaced with an arbitrary MLP of similar capacity. This appears to be what the encoded styleGAN model is, but I think it would do more to elucidate the key insights of this paper if the analysis was to focus more on this front. Why is it helpful to have an invertible phi in place of the StyleGAN MLP? What happens as the capacity of this portion of the model is increased or decreased? What is the form of the distribution output by Phi (maybe just some histogram visualizations along different dims?), and how does it compare to that of the typical MLP? What is the form of the distribution output by the encoder, and how does it differ from (a) the analytical latent distribution in the case of encoded styleGAN and (b) the empirical latent distribution of LIA? There\\u2019s quite a bit to explore there but this paper doesn\\u2019t dig very deep on this topic.\\n\\nI recognize that the amount of suggestions and changes I have listed are exceptionally large (more than I\\u2019ve personally ever written before, for sure), and I want to make it clear that I don\\u2019t expect the authors to address them all in the limited timespan of the rebuttal period. While this unfortunately may mean that there is simply not enough time for my concerns to be addressed, if this is the case then I hope these suggestions prove useful for publication in the next conference cycle, where this paper could be very strong. As it is, given the extent of my concerns, this paper is currently sitting at about a 4/10 in my mind.\", \"minor_notes\": \"\\u201cIn the parlance of probability,\\u201d page 2. I liked this alliteration a lot. 
This paragraph as a whole was quite clear and well written.\\n\\n\\u201cBut it requires the dimension dx of the data space to be identical to the dimension dz of the latent space\\u201d Minor nitpick, but I would swap \\u201cdx of the data\\u201d with \\u201cdz of the latent space\\u201d in this sentence, to make it clear that the model\\u2019s latent dimensionality is constrained by the dimensionality of the data. As written it makes it sound like it\\u2019s the other way around.\\n\\n\\u201cThe prior distribution can be exactly fitted from an unfolded feature space.\\u201d While flows have exact inference, saying that you can exactly fit the distribution of the encoder is arguably inaccurate unless you can show perfect generalization. Specifically, if you attain 0 training loss for the flow, do you also have 0 test loss (i.e. the NLL of the flow on the encoder\\u2019s output for test samples is also minimized). \\n\\nFurthermore, the phrasing \\u201cunfolded feature space\\u201d (used elsewhere in the paper) is confusing and not in common usage\\u2014does this mean the output of the encoder, or some sort of Taylor expansion? It\\u2019s not immediately clear, and I would recommend the authors find a different way to express what they mean.\\n\\n\\u201cTherefore the training is deterministic\\u201d Training is not deterministic if the first stage of training involves training a GAN. You are still sampling from a preselected prior in this stage.\\n\\n\\u201cAs shown in Figure 1f, we symmetrically embed an invertible neural network in the latent space of VAE, following the diagram of mapping process as\\u2026\\u201d This sentence is confusing. The term \\u201cembed\\u201d has a specific meaning in the literature: you might use word embeddings, or embed a sample in a space, but to \\u201cembed a [model] in a latent space\\u201d doesn\\u2019t make sense to me. I think the authors would do well to use more standard terminology, and to reconsider their description of the model to be more concise and clear.\\n\\n\\u201cOur primal goal is to faithfully reconstruct real images from the latent code.\\u201d Primal should be primary. I would also like to see this motivated better\\u2014why do you care to exactly reconstruct real images? Is there a downstream task where this is relevant or an intrinsic reason why we should care about being able to attain exact reconstructions?\\n\\n\\u201cindispensable discriminator.\\u201d Indispensible means \\u201csomething you can\\u2019t get rid of,\\u201d whereas it would appear the discriminator is not used after training (and is frozen after the first stage)\\u2014do the authors perhaps mean \\u201cdispensable\\u201d or \\u201cdisposable\\u201d?\\n\\n\\u201cThe interesting phenomenon is that the StyleGAN with encoder only does not succeed in recovering the target faces using the same training strategy as LIA, even though it is capable of generating photo-realistic faces in high quality due to the StyleGAN generator\\u201d This sentence is confusingly written and poorly supported. While I do agree that the LIA reconstructions are superior to the encoded styleGAN reconstructions, exactly what do the authors mean that LIA \\u201crecovers\\u201d the target faces while StyleGAN does not? The LIA reconstructions are not identity preserving\\u2014while most of the semantic features are the same, and the model does do a good job of picking up on unusual details such as hats, the facial identities are definitely not preserved (i.e. 
for every face in row 1 and row 2, I would say that the two faces belong to different people with similar features, but they are still definitely different people) .\\n\\n\\u201cThis indicates that the invertible network plays the crucial role to make the LIA work\\u201d This statement is unsupported. There are a number of differences in training setup, and the authors, in this reviewer\\u2019s opinion, have not presented evidence to indicate that the use of the flow model is specifically responsible for this. Specifically, what would happen if during the decoder training stage, the invertible network was not employed? While I do believe that the inclusion of the invertible network is important, the authors should go to greater lengths to elucidate exactly what it does (see my comments above in the experimental section re. the shape of the distribution and how the encoder ends up matched to the decoder depending on what the actual latent distribution is from the POV of the generator).\\n\\n\\u201cTo further evaluate LIA on the data with large variations\\u201d The choice of three single-category, single-subject datasets for evaluation is strictly at odds with this statement. These are highly constrained, clean datasets with tremendous amounts of data per class, which are substantially less difficult to model than e.g. ImageNet \\n\\n\\u201cThey will be made them available for evaluation.\\u201d -> \\u201cThese subsets will be made available for evaluation\\u201d\\n\\n\\u201cThe experimental results on FFHQ and LSUN databases verify that the symmetric design of the invertible network and the two-stage training successfully handles the existing issues of generative models.\\u201d This statement is far too strong\\u2014saying a method \\u201csuccessfully handles the existing issues of generative models\\u201d suggests that this method is the end-all be-all and has solved the problem of generative modeling entirely. I would suggest the authors dial back the strength of this claim.\\n\\n\\u201cTable 2: FID accuracy of generative results.\\u201d What is FID accuracy? Do the authors just mean FID?\\n\\nSpecify hardware used and training times, at least in the appendix.\"}"
]
} |
BJgWE1SFwS | PCMC-Net: Feature-based Pairwise Choice Markov Chains | [
"Alix Lhéritier"
] | Pairwise Choice Markov Chains (PCMC) have been recently introduced to overcome limitations of choice models based on traditional axioms unable to express empirical observations from modern behavior economics like context effects occurring when a choice between two options is altered by adding a third alternative. The inference approach that estimates the transition rates between each possible pair of alternatives via maximum likelihood suffers when the examples of each alternative are scarce and is inappropriate when new alternatives can be observed at test time. In this work, we propose an amortized inference approach for PCMC by embedding its definition into a neural network that represents transition rates as a function of the alternatives' and individual's features. We apply our construction to the complex case of airline itinerary booking where singletons are common (due to varying prices and individual-specific itineraries), and context effects and behaviors strongly dependent on market segments are observed. Experiments show our network significantly outperforming, in terms of prediction accuracy and logarithmic loss, feature engineered standard and latent class Multinomial Logit models as well as recent machine learning approaches. | [
"choice modeling",
"pairwise choice Markov chains",
"deep learning",
"amortized inference",
"automatic differentiation",
"airline itinerary choice modeling"
] | Accept (Poster) | https://openreview.net/pdf?id=BJgWE1SFwS | https://openreview.net/forum?id=BJgWE1SFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"t2KDDhKNnC",
"BygUPA_hoS",
"BklhlAuniS",
"Sklulsu2or",
"BylOTtd3jH",
"ryesrK_njr",
"H1l2wrZ_cr",
"S1xv9y4k5H",
"S1xfZCnTKB",
"rJxDQbYstS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728671,
1573846621971,
1573846516450,
1573845744077,
1573845439732,
1573845314991,
1572504932188,
1571925902630,
1571831290136,
1571684639247
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1644/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1644/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1644/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1644/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1644/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1644/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1644/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1644/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1644/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This submission proposes to use neural networks in combination with pairwise choice markov chain models for choice modelling. The deep network is used to parametrize the PCMC and in so doing improve generalization and inference.\", \"strengths\": \"The formulation and theoretical justifications are convincing.\\nThe improvements are non-trivial and the approach is novel.\", \"weaknesses\": \"The text was not always easy to follow.\\nThe experimental validation is too limited initially. This was addressed during the discussion by adding an additional experiment.\\n\\nAll reviewers recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions.\", \"regarding_the_technical_terms\": \"framing effect was replaced by context effect which is more standard and is now properly defined in the introduction along with the other choice-theoretic terms. We added an example to motivate the definition of contractibility in Section 2.\\nWe added a new experiment---see new Section 4--- that uses a different dataset that illustrates these context effects and compares to the original PCMC.\\nAlthough it is possible to compare to the original PCMC by resorting to a discretization of the feature space, it quickly becomes impractical as the number of attributes and the number of bins grow, making it inappropriate on a complex dataset such as the airline itinerary choice one, which is the point of proposing the PCMC-Net approach.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions.\\n\\t\\u2022 Regarding the jargon: we added a paragraph explaining the different types of effects and a new experiment on a synthetic dataset illustrating them--see new Section 4---.\\n\\t\\u2022 Amortized inference: the same parameters are used to determine the transition rates between any pair of alternatives (even unseen ones) in contrast to the original PCMC approach that requires one statistical parameter for each possible pair of alternatives of the universe. We added a sentence in the introduction to explicitly define it.\\n\\t\\u2022 The triangle notation is defined in the sentence where it is used first.\\n\\t\\u2022 The original inference approach counts the choices observed for each choice set (see Eq. 4). Additive smooting adds a pseudo-count (e.g. \\\\alpha=0.1) to each counter to avoid numerical instabilities when doing the MLE optimization. The remark was taken from the original paper and refers to performance. The epsilon used in our work is used to enforce the constraint that defines PCMC, ensuring the existence and unicity of the stationary distribution and has no effect on performance.\\n\\t\\u2022 Contractibility guarantees that the model behaves well when alternatives can be partitioned into classes of equivalence. This has practical modeling importance when equivalent alternatives are present in the choice set like in the classical Red Bus/Blue Bus example where the color of the bus is irrelevant to the preference of transportation mode \\\"bus\\\" with respect to the \\\"car\\\" mode. For example, MNL models reduce the probability of choosing the car mode when multiple color variants of buses are added, which does not match empirical behavior. This example is added in Section 2.\\n\\t\\u2022 In the new experiment, we compared against a simple parameterization of Q obtained by the discretizing the features.\\n\\t\\u2022 We moved Table 1 to appendix\\n\\t\\u2022 The new experiment considers a small synthetic dataset where present effects are known in order to gain intuition.\"}",
"{\"title\": \"Re Response\", \"comment\": \"Thanks for adding the experiment. I have raised my score to 'weak accept.'\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions.\\nWe added a new experiment on a synthetic dataset---see new Section 4---to show the ability of PCMC-Net to represent context effects (whose definition was added in the introduction).\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions.\\n\\n2. : we made this explicit in the introduction where we mention the linear approach suggested by the original PCMC authors.\\n3. : we added experiments on a synthetic dataset---see new Section 4.\\n\\nThanks for the CDM suggestion, we will consider it for future work.\", \"regarding_hyper_parameter_tuning_of_the_competitors\": \"we used the same dataset and the same hyper-parameters suggested by the authors of the respective papers who performed numerous experiments.\", \"specific_notes\": \"Thanks for pointing this out, we added the remark at the beginning of the proof.\\n\\nTo make the notation more explicit regarding the dependence on S, we replaced X_i by S_i. We also replaced X_0 by I.\\nWe also added a sentence explicitly saying that for PCMC-Net, S is a set of tuples.\\n\\nTypos/Small concerns:\\n\\t- We fixed the wrong number of parameters\\n\\t- Below equation 3, the indices are with respect to Q_S and range from 1 to |S|, so the wrong part is i\\\\in S. Therefore, we changed the indices to 1 <= i <= |S|, 1 <= j < |S| (the last inequality is a strict one since we want to remove the last column of Q_S. We fixed equation 11 in the same way.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"1.The goal of the paper is to connect flexible choice modeling with a modern approach to ML architecture to make said choice modeling scalable, tractable, and practical.\\n2. The approach of the paper is well motivated intuitively, but could more explicitly show that PCMC-Net is needed to fix inferential problems with PCMC and that e.g. SGD and some regularization + the linear parameterization suggested by the original PCMC authors isn't scalable in itself. \\n3. The approximation theorem is useful and clean, and the empirical results are intriguing. While consideration of more datasets would improve the results, the metrics and baselines considered demonstrate a considerable empirical case for this method. \\n\\nMy \\\"weak accept\\\" decision is closer to \\\"accept\\\" than \\\"weak reject.\\\" (Edit 11/25/19: I raised my score in to accept in conjunction with the author's improvements in the open discussion phase)\\n\\nImprovement areas(all relatively minor):\\n- While I personally enjoy the choice axioms focused on by the PCMC model and this paper, stochastic transitivity, IIA, and regularity are probably more important to emphasize than Contractibility. Because the properties of UE and contractibility were not used, it may be more appropriate to use this space to introduce more of the literature on neural-nets-as-feature-embeddings stuff. \\n- This paper could be improved by generalizing to a few other choice models- in particular the CDM (https://arxiv.org/abs/1902.03266) may be a good candidate for your method. This is more a suggestion for future work if you expand this promising initial result. \\n- Hyper-parameter tuning: I noticed that several of your hyper parameters were set to extremal values for the ranges you considered. If you tuned the other algorithms' hyper parameters the same way, it could be the case that the relative performance is explained by the appropriateness of those ranges. Would be interesting to have a more in-depth treatment of this, but I do understand that it's a lot of work.\", \"specific_notes\": \"Theorem 1 is nice, and the proof is clean, but doesn't explicitly note that a PCMC model jointly specifies a family of distributions \\\\pi_S for each S \\\\in 2^U obtained by subsetting a single rate matrix Q indexed by U. It's clear that PCMC-Net will still approximate under this definition, as \\\\hat q_ij approximates each q_ij because \\\\hat q_ij doesn't depend on S. While the more explicit statement is true with the same logic in the theorem, the notational choice to have \\\"X_i\\\" represent the \\\"i-th\\\" element in S is confusing at first, as e.g. X_1 is a different feature vector for S = {2,3} and S={1,3}. I don't see this issue as disqualifying, but it took me a while to realize that there wasn't more than a notational abuse problem when I returned to the definitions where the indexing depended on the set S under consideration. \\n\\n\\nTypos/small concerns:\\n-Above equation (1), the number of parameters in Q_S is |S|(|S|-1) rather than (|S|-1)^2, as each of the |S| alternatives has a transition rate to the other |S|-1 alternatives. \\n-Below equation (3), I think you mean j \\\\in S rather than 1<= j <= |S|, as S may not be {1,2,...,|S|}. 
Later I noticed that you always index S with {1,\\\\dots,|S|}, but using i \\\\in S in combination with 1<=j<=|S| was a bit confusing. \\n-X_i as the i-th element of S is a bit of an abuse of notation, as it surpasses dependence on S\\n-In Figure 1, you show X_0 in a vector that is referred to as \\\"S.\\\" It is my understanding that X_0 represents user features. As the user is not in the set, this is confusing. The use of a vertical ellipsis to connect \\\\rho(X_0) to \\\\rho(X_1) is also confusion, as \\\\rho(X_1) is input into the Cartesian product while X_0 is input into the direct sum. \\n\\nOverall, nice job! Really enjoyed the paper and approach, good to see connections made between these literatures so that progress in discrete choice can be used at scale.\"}",
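The reviewer's observation that \hat q_ij does not depend on S can be illustrated with a toy stand-in for PCMC-Net (hypothetical layer sizes; a softplus head plus a floor in place of the paper's exact parameterization): because each rate is a function of the two alternatives' features only, the same network produces consistent rates on any subset of alternatives.

```python
import torch
import torch.nn as nn

class RateNet(nn.Module):
    """hat q_ij = g(rho(x_i), rho(x_j)): the rate for a pair depends on the
    two alternatives' features only, never on the choice set S itself."""
    def __init__(self, d_feat, d_emb=32, eps=1e-6):
        super().__init__()
        self.rho = nn.Sequential(nn.Linear(d_feat, d_emb), nn.ReLU())
        self.g = nn.Sequential(nn.Linear(2 * d_emb, 32), nn.ReLU(),
                               nn.Linear(32, 1), nn.Softplus())
        self.eps = eps

    def forward(self, X):              # X: (|S|, d_feat) features of S
        n = X.shape[0]
        e = self.rho(X)
        pairs = torch.cat([e.unsqueeze(1).expand(n, n, -1),
                           e.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return self.g(pairs).squeeze(-1) + self.eps  # (|S|, |S|) rates

net = RateNet(d_feat=8)
X = torch.randn(5, 8)                  # a choice set of 5 alternatives
Q_full = net(X)
Q_sub = net(X[:3])                     # same function applied to a subset
assert torch.allclose(Q_sub, Q_full[:3, :3])  # rates are S-independent
```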
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: This paper enables a feature-based parametrization and amortized inference of Pairwise Choice Markov Chains (PCMCs), a model for decisions in the face of a set of alternative choices (e.g. the rock-paper-scissors game). Previous approaches to fitting PCMCs have leveraged sequential least squares programming, making optimization unstable, the model prone to overfitting, and test-time inference difficult. The authors propose parametrizing PCMCs with neural networks to fix these issues. Relying on universal function approximation results, the authors show that their PCMC-Net can represent arbitrary transition matrices. The experiments report results on a dataset of airline booking behavior, comparing PCMC-Net with four other baselines from the literature.\", \"pros\": \"Although I was previously unfamiliar with the PCMC model, using a neural network parametrization seems novel and well motivated. Moreover, the airline data experiment seems to validate that PCMC-Net is indeed effective, besting the other baselines in all three metrics.\", \"cons\": \"I have two main concerns over the paper\\u2019s experimental rigor\\u2026\\n\\n#1 Lack of Simulation Studies: The paper makes claims about the representation properties of PCMC-Net but fails to validate them with simulation studies. \\n\\n#2 Lack of data sets: Only one experiment on one data set is reported.\", \"final_evaluation\": \"While I found the paper\\u2019s methodology well motivated and sensible, the experiments do not thoroughly validate the method as they contain no simulation studies and only one data set.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a novel approximate inference method, called PCMC-Net, for models from the family of Pairwise Choice Markov Chains (PCMC). The method relies on training a neural network. Consequently, the authors claim that inference is amortized, but its computational complexity is still quadratic in the number of choice alternatives due to separate processing of all pairs of alternatives. PCMC-Net bakes the definition of PCMC into the neural net structure and therefore satisfies the theoretical properties of contractability and uniform expansion, which are desired properties of choice models\\nMoreover, since choice probabilities are a function of choice candidates\\u2019 features (and features of an individual making the choice), this method allows for new (unseen) choice candidates at test time, which was not possible with previously proposed maximum-likelihood (ML) inference. The approach is evaluated on modelling the choice of airline itinerary, on which it outperforms all considered baselines by a significant margin.\\n\\nI recommend REJECTing this paper. This paper tackles the problem of efficient inference and test-time generalization (to unseen choice alternatives) for choice modelling, and the proposed approach is interesting, seems to be theoretically sound, and outperforms evaluated baselines. Experimental evaluation is insufficient, however, with the method assessed only on a single dataset---in which case it is unclear if the method is better than baselines in general, or whether it is a quirk of the considered dataset. Moreover, the authors do not compare to ML inference in PCMC, which seems to be the closest possible baseline; instead, the authors only mention that ML would overfit on this dataset. Finally, the paper is full of complicated terms and cumbersome notation, which makes it difficult to read. Technical terms are often used without definition (e.g. framing effects, Luce\\u2019s axiom, asymmetric dominance), which makes the paper inaccessible to an inexperienced reader like myself.\\n\\nI think that this work could be improved in the following ways. The exposition should be made simpler and easier to follow (especially section 2), and all technical terms should be appropriately defined. Additionally, the method should be evaluated on at least one more dataset and compared to ML inference for PCMC. I am happy to increase my score if (all) the above points are addressed.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents an approach for choice modeling that leverages neural network features in a continuous-time Markov Chain whose stationary distribution represents the choice distribution. Nonlinear features can be computed for both alternatives and individuals, and the resulting model beats all baselines in terms of log-likelihood and accuracy on an airline itinerary choice prediction task.\\n\\nOverall, this paper presented a simple but effective approach for using neural networks in the PCMC class of models. The experimental section is too limited, with results on only one dataset and no comparison of different architectural choices for how to incorporate neural networks into PCMC models, or analysis pointing toward what the features are learning that allows them to improve over earlier approaches. The text was also confusing in a number of places (possibly due to my lack of knowledge in choice modeling), and there\\u2019s no discussion of related work incorporating neural networks into ranking-based models.\", \"comments\": [\"Knowing nothing about choice modeling, I found the introduction hard to follow with lots of jargon that may be inaccessible to the broader ML community. It may be useful to specify the set of desired properties for these models up front, and then highlight how the different existing models do or don\\u2019t satisfy these properties (e.g. uniform expansion, regularity, efficiency, framing effects, etc.)\", \"How is the proposed approach \\u201camortized inference\\u201d?\", \"What\\u2019s the triangle notation in P(a |> c) ?\", \"It\\u2019d be useful to say more as to why contractability/uniform expansion are useful components of a choice model.\", \"\\u201cAdditive smoothing at the cost of some efficacy\\u201d - efficacy in what sense? Expressivity? Or worse performance? The same smoothing technique (minimum of epsilon) seems to be used in this approach.\", \"Theorem 1 and proof do not consider some of the architectural choices of PCMC-Net (e.g. cartesian product layers, does d_a have to go to infinity?)\", \"Contractability: why is this property desirable if you don\\u2019t take advantage of it computationally?\", \"Why is SGD + dropout training stable but the original MLE problem not?\", \"Comparison to baselines using low-rank or other simple parameterizations of Q?\", \"Table 1 should be moved to supplement\", \"\\u201cUsual rule of thumb\\u201d -> cite something here? Not familiar with this\", \"What is the impact of \\\\epsilon and dropout probability on model performance? From proofs I had expected \\\\epsilon tiny (1e-9) but you use 0.5\", \"Would be useful to show how model performs on smaller dataset to gain intuition\"]}"
]
} |
B1gZV1HYvS | Multi-Agent Interactions Modeling with Correlated Policies | [
"Minghuan Liu",
"Ming Zhou",
"Weinan Zhang",
"Yuzheng Zhuang",
"Jun Wang",
"Wulong Liu",
"Yong Yu"
] | In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures.
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \url{https://github.com/apexrl/CoDAIL}. | [
"Multi-agent reinforcement learning",
"Imitation learning"
] | Accept (Poster) | https://openreview.net/pdf?id=B1gZV1HYvS | https://openreview.net/forum?id=B1gZV1HYvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"hlMH9OLx7",
"ryxN9TcbiH",
"HylafpcWir",
"HkgZL35bsS",
"r1ggNn9Zir",
"Hyx3Roc-sB",
"Hkgn995bjr",
"rJezMXKS5r",
"B1gXtGEb5S",
"Hyeeki6nKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728643,
1573133707661,
1573133588666,
1573133384960,
1573133351617,
1573133268212,
1573132948204,
1572340489943,
1572057723200,
1571769047982
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1643/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1643/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1643/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1643/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1643/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1643/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1643/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1643/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1643/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes an extension to the popular Generative Adversarial Imitation Learning framework that considers multi-agent settings with \\\"correlated policies\\\", i.e., where agents' actions influence each other. The proposed approach learns opponent models to consider possible opponent actions during learning. Several questions were raised during the review phase, including clarifying questions about key components of the proposed approach and theoretical contributions, as well as concerns about related work. These were addressed by the authors and the reviewers are satisfied that the resulting paper provides a valuable contribution. I encourage the authors to continue to use the reviewers' feedback to improve the clarity of their manuscript in time for the camera ready submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"We sincerely thank you for your comprehensive comments on our paper and we carefully answer each of your questions as below.\\n\\nQ1. About Proposition 1\", \"response\": \"Yes. This does exist and it also makes us confused when we read MA-GAIL and MA-AIRL papers, which care less about the partially observable property of Particle World environments. However, we show our understanding to interpret this problem as below.\\n\\n(1) First, either they or we do not consider to describe a PO setting because all of us wish to simplify the methodology and concentrate on the imitation learning architecture without caring about the partially observable settings.\\n\\n(2) Second, in most single-agent RL tasks, the normal inputs of agent policies are observations instead of states, e.g. the raw pixels of Atari games. However, in deep reinforcement learning (DRL), we can always map those observations into a low-dimensional latent state representation achieved by low-level layers of deep neural networks to achieve the function of state inference from observations in POMDPs, thus we usually care less about the observation/state in normal DRL tasks.\\n\\n(3) Thus we think that not only the previously mentioned works but many other MARL works who take Particle World as a MARL benchmark all stand at this point since the observations of Particle World contain comprehensive information to infer the latent states.\\n\\n(4) Since the other two works mainly conduct experiments on these Particle environments, at least we need to show the performance against baseline methods on Particle environments.\"}",
"{\"title\": \"Response\", \"comment\": \"We truly appreciate your helpful feedback.\\n\\nQ1. About \\\"$\\\\epsilon$-NE\\\"\", \"response\": \"We are sorry for the unclarity. In fact, the learning process of the joint opponent function $\\\\sigma^{(i)}$ follows a normal way of opponent modeling. \\n\\n(1) Specifically, we construct a function $\\\\sigma^{(i)}(a^{(-i)} | s): \\\\mathcal{S} \\\\times \\\\mathcal{A}^{(1)} \\\\times \\\\cdots \\\\times \\\\mathcal{A}^{(i-1)} \\\\times \\\\mathcal{A}^{(i+1)} \\\\times \\\\cdots \\\\times \\\\mathcal{A}^{(N)}\\\\rightarrow {[0, 1]}^{N-1}$, as the approximation of opponents for each agent $i$. \\n\\n(2) Appendix B shows that in implementation: \\\"Specifically for opponents models, we utilize a multi-head-structure network, where each head predicts each opponent's action separately, and we get the overall opponents joint action $a^{(-i)}$ by concatenating all actions.\\\".\\n\\n(3) As Reviewer #2 says and shown in Eq. (17), opponent models are trained by minimizing either MSE loss (continuous actions) or CE loss (discrete actions).\\n\\nWe have revised this part for clarity in the latest version of our paper. \\n\\nIn sum, we think that at the current stage we have thoroughly answered your proposed questions, and according to your helpful suggestions, we have revised related parts for clarity in the latest version of our paper. Thus we sincerely wish you can re-consider and improve your rating for this work.\"}",
"{\"title\": \"** Response to Experiments (3/3)\", \"comment\": \"Q4. About not \\\"much value of the qualitative evaluation in Figure 1\\\"\", \"response\": \"We agree with your suggestion that it is better to consider more different-level evaluations. However, as shown in response 1.a of Q3, it is hard to straightly extend in [1,2]'s experimental settings for different tasks. And the major difficulty is that we learn the policy directly with no such a module as \\\"policy embeddings\\\" to achieve those downstream tasks.\"}",
"{\"title\": \"** Response to \\\"Missing related work.\\\" (2/3)\", \"comment\": \"\", \"q3\": \"About \\\"missing work in multi-agent interactions\\\"\", \"response\": \"Thanks for the helpful suggestions. We have included some of those works of interactions modeling along with other opponent modeling papers to make it more clear in our latest version. In fact, we've read most of these works, yet we did not include them as they aim to address different problems. \\n\\nAs we have formulated the problem of modeling multi-agent interactions from demonstrations as an imitation learning problem, we pay more attention to multi-agent imitation learning works as our comparable methods and the most related ones.\\n\\nBelow we discuss each paper you mentioned in detail to clarify the differences between them and ours. Such discussions are also added to the related work of our latest version.\\n\\n1 - [1] is the long paper of [2], which is an appealing work for modeling the among-agents interaction relationships as policy representations. Their problem setting has several important different points against us.\\n\\n1.a - First, we focus on different tasks. They aim to learn the **representations function** of agent policies \\\"based on their interactions\\\", that is, to learn a \\\"policies feature abstraction\\\" with the latent relationships among agents rather than imitating their policies from demonstrations to regenerate similar interacted data with correlated policies. Their learned policy embedding function is able to characterize agent behaviors and can be used in kinds downstream tasks, which all take the policy embeddings as a core part, making it tough for us to try those generalization tasks since we only recover agents' policies.\\n\\n1.b - Second, we consider different \\\"comprehension\\\" about interactions among agents. We care about the distribution of the overall interacted data sampled from correlated policies and how we can regenerating similar interacted data instead of analyzing the latent relationships among agents. Specifically, [1,2] regard interactions as the episodes that contain only k (in the paper they use 2 agents), which constructs an agent-interaction graph. That is, they focus on the latent relationships among agents.\\n\\n1.c - Third, in [1,2], imitation learning is just a tool or technique to lean the policy embedding, which, by contrast, is the entire problem that we focus on.\\n\\n1.d - Last but not least, parameter sharing is different from \\\"correlated policy\\\". Parameter sharing treats each agent as an independent individual to generalize the single-agent learning method in a multi-agent setting, which does not, in essence, consider the property of Markov Games and complicated \\\"reasoning\\\" policy. On the contrary, \\\"correlated policy\\\" means that each agent can infer about the others which explicitly considers opponents' policy in their decisionmaking process. See more details in [7,8,9]. In our setting, we want to model interactions considering such correlated policy structures, which is our motivation.\\n\\n2 - The diverse behaviors of single-agent shown in [3] are different from the correlated interactions in a multi-agent setting. 
The main difference is that in a single-agent setting one does not have to reason about the others; thus the generated trajectories are only related to the agent's own policy, whereas in a multi-agent setting they can be influenced by all agents, and that's why the generated trajectories of all agents can be viewed as \\\"interactions\\\".\\n\\n3 - [4] is a good work to model such a reasoning-like policy of agents, but they focus on MARL settings that interact with environments and learn policies with reward signals, instead of an imitation learning setting that learns from pure demonstrations without reward signals (our task). Imitation learning there is also just a technique to make use of the past trajectories of other agents. However, in future work, we can extend our work with their \\\"theory of mind\\\" policy structures to model complicated interactions.\\n\\n4 - [5] cannot exactly solve the importance weight problem. See details in our response to Q2.\", \"references\": \"[7] Probabilistic recursive reasoning for multi-Agent reinforcement learning. Y Wen, Y Yang, R Luo, J Wang, W Pan. ICLR 2019.\\n[8] A regularized opponent model with maximum entropy objective. Z Tian, Y Wen, Z Gong, F Punakkath, S Zou, J Wang. IJCAI 2019.\\n[9] Opponent modeling in multi-agent systems. D Carmel, S Markovitch. IJCAI 1995.\"}",
"{\"title\": \"** Response to Mathematics Details (1/3)\", \"comment\": \"We sincerely thank the reviewer for the constructive comments.\", \"q1\": \"About \\\"non-correlated and correlated eqs in 2nd and 3rd line in eq.8 are not equivalent yet connected via equality.\\\"\", \"response\": \"The main challenge to estimating the weight exactly is to estimate the (s, a) distributions of demonstrators' trajectories. Notice that the demonstrations are always insufficient for a low-variance estimation and it costs much to update such density estimations during training. In fact, we did have tried with KDE (kernel density estimation) to compute an \\\"exact\\\" importance weight but the results were not good. Thus we refer to [6] for a simple solution and in our paper, we have presented that \\\"we fix $\\\\alpha = 1$ in our implementation, and as the experimental results have shown, it has no significant influences on performance. Besides, a similar approach can be found in Kostrikov et al. (2018).\\\" in the paragraph below Eq. (12).\", \"q2\": \"About \\\"importance weight\\\".\", \"reference\": \"[6] Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning. I Kostrikov, KK Agrawal, D Dwibedi, S Levine. ICLR 2019.\"}",
"{\"title\": \"Overall Response - Motivations & Contributions\", \"comment\": \"We thank all reviewers for the valuable comments on improving the quality of this work and we would like to clarify our motivation and contributions:\", \"1___motivation\": \"In the real world, agents make decisions by constantly predicting and reasoning correlated intelligent agents' behaviors. We model such behaviors as correlated policy structure. Like in a driving scenario, a human driver makes decisions based on predicting and inducing the surrounding conditions that consisted of varies traffic participants; in a soccer game, a player would reason the next move of both his teammates and opponents before kick/moving decision.\\nIn this paper, we aim to model the interactions among agents, by which we seek to perform high-fidelity simulation of the multi-agent environment with regenerating similar trajectories by imitating their correlated policies from demonstration data. However, traditional imitation methods such as GAIL, MA-GAIL and MA-AIRL lack the ability to model interactions from demonstrations sampled from these correlated policies.\", \"2___contributions\": \"(1) We consider regenerating interacted trajectory data with recovered correlated policies, which is expected to follow a similar distribution with that from experts. \\n(2) We firstly propose to consider the influence of opponents in multi-agent imitation learning, in result showing the ability to learn from experts with correlated policies. With opponents modeling, our proposed framework CoDAIL gains the properties of decentralized-training and decentralized-execution. \\n(3) We show a different perspective that the multi-agent generative imitation learning problem (or multi-agent inverse reinforcement learning problem) can be seen to converge to an $\\\\epsilon$-NE solution concept. Under our theoretical architecture, we start from the max-entropy inverse reinforcement learning objective of each agent while MA-GAIL paper derives from a NE solution and a corresponding dual problem. In result, MA-GAIL can be regarded as a special case of our CoDAIL when ignoring the policy correlation among agents. \\n\\nAccording to your constructive comments, we have revised the equation symbols and discussions, fixed the typos and added more related works from different areas in our latest version paper, by which we think most confusions have been removed.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes to model interactions in a multi-agent system by considering correlated policies. In order to do so, the work modifies the GAIL framework to derive a learning objective. Similar to GAIL, the discriminator distinguishes between state, action, next state sequences but crucially the actions here are considered for all agents.\\n\\nThe paper is a natural extension of GAIL/MA-GAIL. I have two major points that need to be addressed.\\n\\n1. The exposition and significance of some of the theoretical results is unclear.\\n- The non-correlated and correlated eqns in 2nd and 3rd line in eq. 8 are not equivalent in general, yet connected via an equality.\\n In particular, Proposition 2 considers an importance weighting procedure to reweight state, action, next state triplets. It is unclear how this resolves the shortcomings of pi_E^{-1} being inaccessible. Prop 2 shifts from pi_E^{-1} to pi^{-1} and hence, the expectations in Prop 2 and Eq. 11 are not equivalent. \\n- More importantly, how are the importance weights estimated in Eq. 12? The numerator requires pi_E^{-1}, which is not accessible. If the numerator and denominator are estimated separately, it becomes a chicken-and-egg problem since the denominator is itself intended to be an imitating the expert policy appearing in the numerator?\\n\\n2. Missing related work\\nThere is a huge body of missing work in multi-agent interactions modeling and generative modeling. [1, 2] consider modeling of agent interactions via imitation learning and a principled evaluation framework of generalization in the Markov games setting. By sharing parameters, they are also able to model correlations across agent policies and have strong results on generalization to cooperation/competition with unseen agents with similar policies (which wouldn't have been possible if correlations were not modeled). Similarly, [3, 4] are other similar works which consider modeling of other agent interactions/diverse behaviors via imitation style approaches. Finally, the idea of correcting for the mismatch in state, action, next state triplets in Proposition 2 has been considered for model-based off-policy evaluation in [5]. They proposed a likelihood-free method to estimate importance weights, which seems might be necessary for this task as well (re: qs. on how are importance weights estimated?).\", \"re\": \"experiments. Results look good and convincing for most parts. I don't see much value of the qualitative evaluation in Figure 1. If the KL divergence is low, we can expect the marginals to be better estimated. Trying out various levels of generalization as proposed in [2] would significantly strengthen the paper.\\n\\nTypos\\nsec 2.1 Transition dynamics should have range in R+\\nProof of Prop 2. \\\\mu instead of u\", \"references\": \"[1] Learning Policy Representations in Multiagent Systems. ICML 2018.\\n[2] Evaluating Generalization in Multiagent Systems using Agent-Interaction Graphs. AAMAS 2018.\\n[3] Machine Theory of Mind. ICML 2018.\\n[4] Robust imitation of diverse behaviors. NeurIPS 2017.\\n[5] Bias Correction of Learned Generative Models using Likelihood-free Importance Weighting. NeurIPS 2019.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a decentralized adversarial imitation learning algorithm with correlated policies, which recovers each agent\\u2019s policy through approximating opponents action using opponent modeling. Extensive experimental results showed that the proposed framework, CoDAIL, better fits scenarios with correlated multi-agent policies.\\n\\nGenerally, the paper follows the idea of GAIL and MAGAIL. Differing from the previous works, the paper introduces \\\\epsilon-Nash equilibrium as the solution to multi-agent imitation learning in Markov games. It shows that using the concept of \\\\epsilon-Nash equilibrium as constraints is consistent and equivalent to adding the difference of the causal entropy of the expert policy and the causal entropy of a possible policy in RL procedure. It makes sense. \\n\\nBelow, I have a few concerns to the current status of the paper.\\n\\n1.\\tThe authors propose \\\\epsilon-Nash equilibrium to model the convergent state in multi-agent scenarios, however, in section 3.1 the objective function of MA-RL (Equation 5) is still the discounted causal entropy of policy, the same as that of MA-GAIL paper. It is unclear how the \\\\epsilon-NE is considered in modeling MA-RL problem.\\n\\n2.\\tRather than assuming conditional independence of actions from different agents, the authors considered that the joint policy as a correlated policy conditioned on state and all opponents\\u2019 actions. With the new assumption, the paper re-defines the occupancy measure and introduces an approach to approximate the unobservable opponents\\u2019 policies, in order to access opponents\\u2019 actions. However, in the section 3.2 when discussing the opponents modeling, the paper did not clearly explain how the joint opponent function \\\\sigma^{(i)} is designed. The description \\\\sigma^{(i)} is confusing.\\n\\n3.\\tTypos: in equation 14 \\u201ci\\u201d or \\u201c-i\\u201d; appendix algorithm 1 line 3 \\u201cpi\\u201d or \\u201c\\\\pi\\u201d.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this work, a multi-agent imitation learning algorithm with opponent modeling is proposed, where each agent considers other agents\\u2019 expected actions in advance and uses them to generate their own actions. Assuming each agent can observe other agents\\u2019 actions, which is a reasonable assumption in MARL problems, a decentralized algorithm called CoDAIL is proposed. For each iteration of CoDAIL, (1) each agent trains opponent models (other agents\\u2019 policies) by minimizing either MSE loss (continuous actions) or CE loss (discrete actions), (2) samples actions from those opponent models, (3) updates individual rewards (discriminators) and critics and (4) updates policies with multi-agent extention of ACKTR (which is used in MA-GAIL and MA-AIRL as well).\\n\\nThe experiments in the submission show that there is a significant gain relative to baselines (MA-GAIL and MA-AIRL) in OpenAI Multiagent Particle Environments (MPE) in terms of (true) reward differences and KL divergence between agents\\u2019 and experts\\u2019 state distributions.\\n\\nI think the empirical contribution of this work is clear to be accepted, but I give Weak Accept due to the following comments:\\n\\n- I think there\\u2019s a similarity between Theorem 6 in MA-GAIL paper and Proposition 1 in the submission. I hope the difference between Proposition 1 and Theorem 6 to be clarified. \\n\\n- Proposition 2 seems to me redundant because it\\u2019s neither important for theoretical analysis in 3.3 nor for the experiments. I believe a few sentences are enough to describe why authors choose \\\\alpha=1 (or equivalent explanations).\\n\\n- The authors suppose fully observable Markov Games in the paper, but it makes me confused when I consider the experiments in the submission. For example in Cooperative Navigation, each agent\\u2019s observation includes (1) position vector relative to agents and landmarks and (2) their own velocities (which cannot be observed by other agents directly). Since authors argue CoDAIL is a decentralized algorithm, I think agents are not allowed to use others\\u2019 observation for opponent modeling, but it seems that agents fully utilize others\\u2019 observations. I hope it to be clarified and if that\\u2019s the case, I wonder if we can regard CoDAIL as a decentralized method. \\n\\nI\\u2019m willing to increase my score if my questions are clearly answered.\"}"
]
} |
rJel41BtDH | Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning | [
"Eric Arazo",
"Diego Ortego",
"Paul Albert",
"Noel E. O'Connor",
"Kevin McGuinness"
] | Semi-supervised learning, i.e. jointly learning from labeled an unlabeled samples, is an active research topic due to its key role on relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was supposed in previous work. Code will be made available. | [
"Semi-supervised learning",
"pseudo-labeling",
"deep semi-supervised learning",
"confirmation bias",
"image classification"
] | Reject | https://openreview.net/pdf?id=rJel41BtDH | https://openreview.net/forum?id=rJel41BtDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Jd6y2fOXks",
"SkeyKpaFoS",
"BJgwVTaYoS",
"Hye47n6KjS",
"HJlApoptjr",
"Syeyoi6KsS",
"SJlg3L55tH",
"B1g97ORFtB",
"ByeJgrROYH",
"Syla9DLY_r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1576798728612,
1573670263227,
1573670190848,
1573669916304,
1573669829999,
1573669782703,
1571624616029,
1571575841553,
1571509478863,
1570494357506
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1642/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1642/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1642/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1642/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1642/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1642/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1642/AnonReviewer3"
],
[
"~Bao_Wang1"
],
[
"ICLR.cc/2020/Conference/Paper1642/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper focuses on semi-supervised learning and presents a pseudo labeling-based approach with i) mixup (Zhang et al. 2018); ii) keeping $k$ labelled examples in each minibatch.\\n\\nThe paper is clear and well-written; it presents a simple and empirically effective idea. Reviewers appreciate the nice proof of concept on the two-moons dataset, the fact that the approach is validated with different architectures. Some details would need to be clarified, e.g. about the dropout control.\\n\\nA main contribution of the paper is to show that pseudo-labelling plus the combination of mixup and certainty (keeping $k$ labelled examples in each minibatch) can outperform the state of the art based on consistency regularization methods, while being simpler and computationally much less demanding. \\n\\nWhile the paper does a good job of showing that \\\"it works\\\", the reader however misses some discussion about \\\"why it works\\\". It is most interesting that the performances are not improving with $k$ (Table 1). An in-depth analysis of the trade-off between the uncertainty (through mix-up and the entropy of the pseudo-labels) and certainty, and how it impacts the performance, would be appreciated. You might consider monitoring how this trade-off evolves along learning; I suspect that evolving $k$ along the epochs might make sense; the question is to find a simple way to control online this hyper-parameter. \\n\\nThe area chair encourages the authors to continue this very promising path of research, and dig a little bit deeper, considering the question of optimizing the trade-off between certainty and uncertainty along the training trajectory.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"RE: Full response\", \"comment\": [\"Regarding \\u201cconfirmation bias\\u201d term\"], \"we_adopted_this_term_from_other_papers\": \"Tarvainen & Valpola, 2019 (MT) and Li et al., 2019 (CCL), and note that is also named the noise accumulation problem (Zhang et al., 2016). In psychology it is defined as \\u201cthe tendency to search for, interpret, favor, and recall information in a way that affirms one's prior beliefs or hypotheses\\u201d [1]. In the context of Deep Neural Networks the term can be explained as: \\u201cthe model is prone to confirm the previous predictions and resist new changes\\u201d (CCL). Dealing with confirmation bias has been studied in MT and CCL, where they report a behaviour like the one we encounter, but for consistency regularization approaches. The issue they find is that when increasing the weight of the consistency regularization term, it outweighs the cross-entropy term and prevents the learning of new information (see Figure 1 in MT).\\n\\n[1] Plous, Scott, 1993, The Psychology of Judgment and Decision Making, p. 233.\\n\\n\\n- Regarding a slight change in the framing\\n\\nWe have changed the framing slightly following your suggestion to reflect that we show that pseudo-labeling does not need consistency regularization and prevent possible misunderstandings on previous capabilities already shown by pseudo-labeling when combined with consistency regularization. The following changes were made to the manuscript.\", \"abstract\": \"\\u201cThese results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was supposed in previous work.\\u201d\", \"introduction\": \"\\u201cThis paper explores pseudo-labeling for semi-supervised deep learning from the network predictions and shows that, contrary to previous attempts on pseudo-labeling (Iscen et al., 2019, Oliver et al., 2018, Shi et al., 2018), simple modifications to prevent confirmation bias lead to state-of-the-art performance without adding consistency regularization strategies.\\u201d\\n\\n\\n- Regarding combination of our approach with consistency regularization\\n\\nWe agree in that consistency regularization might further improve our approach as previous evidence shows that pseudo-labeling and consistency regularization encounter benefits when combined (Iscen et al., 2019, Shi et al., 2018). Since pseudo-labeling and consistency regularization represent different forms of leveraging unlabeled data, they might encounter some beneficial complementarity. However, as we have added to the conclusions section, we leave this for future work as want to stress the potential of pseudo-labeling by itself.\\n\\n- Regarding SVHN\\n\\nWe have updated the paper to include results on SVHN dataset in Table 3 and in Appendix A.3 using the 13-CNN network. We have experimented with 250, 500, and 1000 labeled examples and obtained, respectively, 3.66 \\u00b1 0.12, 3.64 \\u00b1 0.04, and 3.55 \\u00b1 0.08 (these are state-of-the-art results on-par with top-performing consistency regularization approaches). 
It is important to highlight that to assure convergence to reasonable performance with few labels we had to perform a longer warm-up period (150 epochs) to improve the quality of pseudo-labels in early training epochs (the same modification in CIFAR-10 using 250 labeled examples achieves a similar performance inside the range of error reported in the paper).\\n\\n- Regarding the use of the validation set\\n\\nWe decided to adopt the criterion of separating a small validation subset from the training data and then returning it to the training set, following the same approach used in the ICLR 2019 paper by Athiwaratkun et al. This ensures that 10K samples of the test subset are never seen during hyperparameter tuning and 50K samples are used for training. All numbers reported in the tables were obtained under the same conditions as were used in our experiments, i.e. using 50K training examples and 10K test examples. The only exception is ICT (Verma et al., 2019), where they use the labeled samples both with labels and without labels, thus slightly increasing the amount of unlabeled examples above 50K.\\n\\n- Regarding the toy examples asymmetry\\n\\nThe asymmetry seen in Fig 1 is due to the fact that the samples that have labels do not form a symmetric pattern. These samples more strongly affect the location of the decision boundary than the unlabelled samples. This observation can be seen as well in figures reported in (Rebuffi et al., 2019) and (Verma et al., 2019).\\n\\n- Regarding MixMatch (MM) with different architectures\\n\\nWe think that MM is a very powerful approach that would not have issues when run with the 13-CNN network. Also, as reported in (Kolesnikov et al. 2019), the network architecture may play a very important role, as shown for self-supervised learning with VGG-type and ResNet-type architectures. We observed something similar for semi-supervised learning with pseudo-labeling. Future work should take into consideration that trying multiple architectures might reveal interesting results.\"}",
"{\"title\": \"RE: Thank you for the review\", \"comment\": \"Thank you for your review and useful feedback. We have corrected the minor typos reported in the updated version of the manuscript.\"}",
"{\"title\": \"RE: Full response\", \"comment\": \"Thank you for your review and useful feedback. We have corrected the typo reported in the updated version of the manuscript.\\n\\n- Regarding the difference with MixMatch (MM)\\n\\nMM is a powerful consistency regularization approach. Here we focus on pseudo-labeling. This is a substantial difference because the type of guidance that these two approaches use is based on different ideas. To highlight this difference we have modified the corresponding paragraph in the introduction (new text in italics): \\u201cRecent approaches in image classification primarily focus on exploiting the consistency in the predictions for the same sample under different perturbations (consistency regularization) (Sajjadi et al., 2016; Li et al., 2019), while other approaches directly generate labels for the unlabeled data to guide the learning process (pseudo-labeling) (Lee, 2013; Iscen et al., 2019). These two alternatives differ importantly in the mechanism they use to exploit unlabeled samples.\\u201d\\n\\nTherefore, yes, both papers use mixup, but they differ importantly how unlabeled samples are used. Our method uses pseudo-labeling, which was thought not to work without combining it with consistency regularization, and we demonstrate that when dealing with confirmation bias (which we tackle mainly with mixup) it achieves state-of-the-art results. We think that modifying previous beliefs is an important contribution that we support with: a toy problem visualization in Figure 1, extensive analysis of different hyperparameters (adding and removing mixup in Table 1, the importance of setting a minimum number of samples per mini-batch in Table 1, dropout and data augmentation importance in Table 2 and newly added hyperparameter studies as suggested by Reviewer#2 in Appendix A.3), and extensive evaluations in CIFAR-10/100, Mini-ImageNet, and (newly added) SVHN (Table 3 in the paper and Table 7 in the Appendix A.3).\\n\\n- Regarding SVHN\\n\\nFollowing your suggestion, we evaluated our approach in the popular SVHN dataset obtaining state-of-the-art results. We use 250, 500, and 1000 labeled examples (uniformly distributed across classes as done in the related work), obtaining errors of 3.66 \\u00b1 0.12, 3.64 \\u00b1 0.04, and 3.55 \\u00b1 0.08 (these are state-of-the-art results on-par with top-performing consistency regularization approaches). We use the 13-CNN network and train 150 epochs (starting with learning rate 0.1 and dividing it by 10 twice in epochs 50 and 100). The modification needed to operate in this dataset was to perform a longer warm up stage to start the pseudo-labeling with good predictions and leading to reliable convergence (the same modification in CIFAR-10 using 250 labeled examples achieves a performance inside the range of error reported in the paper). We include SVHN results for 250 labels in Table 3, while complete results are provided in Appendix A.3. \\n\\n- Regarding pseudo-labels update\\n\\nThank you for noting the confusion. We update the pseudo-labels at the end of every epoch. We have updated the text in between Eq.1 and 2 in Section 3 to read: \\u201cIn particular, we store the softmax predictions h_\\u03b8(x_i) of the network in every mini-batch of an epoch and use them to modify the soft pseudo-label y \\u0303 for the N_u unlabeled samples at the end of every epoch\\u201d. We changed \\u201cat the end of the epoch\\u201d by \\u201cat the end of every epoch\\u201d to make it clear.\"}",
"{\"title\": \"RE: Full response\", \"comment\": \"-Regarding the motivation behind the design of our approach\", \"as_you_have_pointed_out\": \"we address confirmation bias to make pseudo-labeling (without consistency regularization) a suitable approach for semi-supervised learning (SSL). There might be other solutions aside from those proposed in this work, but we find mixup augmentation to be very effective and the minimum number of samples per mini-batch to be key when reducing the labeled examples. It is true that these \\u201ctricks\\u201d are not new, but we believe that the main contribution of the paper is to demonstrate that a conceptually simple pseudo-labeling approach can achieve state-of-the-art results for SSL without being combined with consistency regularization, which is in opposition to previous beliefs.\\n\\nMixup reduces the general confidence of the network (as shown in Thulasidasan et al. 2019) and this calibration effect directly tackles confirmation bias and helps pseudo-labeling on being a successful approach for SSL (as shown in Fig. 2).\\n\\nA minimum number of k samples per mini-batch is a common practice (MixMatch, MT, LP, MA-DNN) that is seldom reported formally. Nevertheless, Tab. 1 shows its importance when reducing the number of labeled samples and Figure 2 (left) shows that in these cases it further reduces confirmation bias. We agree with your observation that we have not sufficiently motivated the use of this parameter, thus we have extended the discussion at the end of Subsec. 3.1 as you suggested.\", \"regarding_your_suggestion_to_study_a_minimum_number_of_classes_per_batch_and_per_class\": \"we agree that this may be useful when having unbalanced data to prevent bias towards predicting certain classes. This is not the case in the datasets studied in this paper (note that the newly added SVHN is unbalanced, but the labeled set is balanced thus hiding the unbalanced nature of the dataset from a practical perspective). In cases where the number of classes increases, however, it becomes infeasible to ensure a minimum number of samples per class due to batch size restrictions (e.g. training a network with CIFAR100 and a batch of 100 samples allows only for a single sample per class and no unlabeled samples). We therefore think that your suggestion points an important issue that should be addressed in future work: are SSL approaches robust in unbalanced scenarios where the unbalanced nature is not known a-priori? We have pointed out this observation in Sec. 5.\\n\\n-Regarding stronger baselines when mixup (and other strategies) are combined with related work approaches\\n\\nWe agree that strong baselines are important. The paper includes ICT (Verma et al., 2019) and MM (Berthelot et al., 2019) in Tab. 3 and 4 for this reason. Both ICT and MM are recent and top-performing consistency regularization approaches that use mixup data augmentation. Regarding a minimum number of samples per batch: MM, MT, and MA-DNN are consistency regularization methods that use it, while LP is a pseudo-labeling approach that also adopt it. We show that our method outperforms these methods (except the consistency regularization method MM for which we are almost on-par). This supports the claim that pseudo-labeling does not require consistency regularization to achieve state-of-the-art results. 
Furthermore, pseudo-labeling approaches (TSSDL and LP) have previously been shown to benefit from their combination with consistency regularization; we further show that cleaner pseudo-labels (without dropout or data augmentation) lead to better performance. We have added to Sec. 5 that future work should explore the synergies that consistency regularization and a strong pseudo-labeling method like the one proposed might result in. \\n\\n-Regarding the study of hyperparameters lambda_A, lambda_B and alpha\\n\\nFollowing your suggestion we have studied these hyperparameters and report the results in Tab. 5 and 6 of Appendix A.2 to show our method\\u2019s behavior with different values of these parameters (note that we were already studying key characteristics such as using mixup or not, the minimum number of labeled examples parameter k, and synergies of mixup and dropout). These experiments confirm that the configuration selected is close to the best results achievable by tuning the hyperparameters. Regarding lambda_A and lambda_B: we use the values suggested in (Tanaka et al., 2018) which are very close to the top performance observed in the new experiment in Tab. 7. We use alpha=1 for the mixup hyperparam. as done in [2]. This value means that the mixing coefficient delta in mixup is uniformly sampled between 0 and 1, enabling a wide variety of data augmentations. Tab. 5 shows that alpha=1 might be surpassed by other alphas (values 4 and 8). However, these improvements are marginal and do not affect the contribution that mixup helps pseudo-labeling in reducing confirmation bias and leads to good performance.\\n\\n[2] V. Verma et al., Manifold Mixup: Better Representations by Interpolating Hidden States. ICML 2019.\"}",
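Since alpha parameterizes the Beta distribution that the mixing coefficient delta is drawn from, the alpha = 1 choice discussed above makes delta uniform on [0, 1]. A short sketch of mixup over soft labels (our own illustration, not the authors' code):

```python
import numpy as np
import torch

def mixup(x, y_soft, alpha=1.0):
    """Mix inputs and their soft (pseudo-)labels; alpha=1 => uniform delta."""
    delta = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mix = delta * x + (1.0 - delta) * x[perm]
    y_mix = delta * y_soft + (1.0 - delta) * y_soft[perm]
    return x_mix, y_mix
```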
"{\"title\": \"RE: Thank you for the review\", \"comment\": \"Thank you for your review and useful feedback. We have corrected the minor typos reported in the updated version of the manuscript.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper focuses on the semi-supervised learning problem, and proposes a way to improve previous pseudo-labeling methods. In pseudo-labeling, there is an issue called confirmation bias, which accumulates the early errors of wrong pseudo labels. By adding some simple tricks such as adding mixup augmentation and setting a minimum number of labeled samples per mini-batch, the confirmation bias is shown to be reduced, leading to an improvement in accuracy. Experiments demonstrate that the additional tricks are meaningful and makes pseudo-labeling better than many baseline methods for semi-superivsed learning, including state-of-the-art consistency regularization methods.\", \"pros\": \"This is an interesting paper with a clear motivation, which is to fix the so-called confirmation bias that appears in pseudo-labeling methods for semi-supervised learning. Although the tricks introduced in the paper (mixup and changing the mini-batch selection rules) themselves are not novel, they make the proposed method simple. It is also shown to be meaningful in reducing the confirmation bias in Table 1 and Figure 2, achieving the original goal of the paper.\", \"cons\": \"The weakness of the paper is that the intuition or the motivation behind the design of the proposed method is not so clear. Using mixup is justified by the reason that mixup gives better confidence calibration. This is important for pseudo-labeling methods, because soft-label output predictions are used as pseudo labels. On the other hand, however, it was not so obvious why a minimum number of labeled samples per mini-batch was considered. Can we consider further extensions such as minimum number of labeled samples per mini-batch & per class? (Perhaps the discussions about mixup and soft labels in the last paragraph of Section 3 should be more emphasized, for example in the last paragraph of the Introduction section.)\\n\\nRelated to the weakness above, it is hard to see how far the regularization effects of adding mixup and mini-batch sampling rules are contributing to add synergy to the pseudo-labeling methods. This is partially answered with Figure 2, but it would make this easier to see if the experiments included stronger baselines, e.g., by adding the same regularization tricks to consistency regularization methods, perhaps in Table 3.\\n\\nFinally, since future work on pseudo labels will follow this paper\\u2019s setup, hyperparameters such as lamba_A, lambda_H, and alpha should be chosen carefully instead of fixing them.\\n\\n\\nOther minor comments (that did not impact the score):\\n\\n- In reference section, \\\"Z. MaXiaoyu Tao\\\" seems to combine two authors.\\n\\n- Table 3 never appears in the text. In Section 4.4, \\\"The table\\\" in the second sentence can be changed to \\\"Table 3\\\".\\n\\n- \\\"architecture plays and important role\\\" --> \\\"architecture plays an important role\\\"\\n\\n- In Table 2, \\\"+\\\" signs make it look like an equation. I suggest using commas instead.\\n\\n- \\\"ResNet arquitectures\\\" --> \\\"ResNet architectures\\\"\\n\\n~~~~~\\nThank you for the response and for the additional discussions that were included in the updated paper.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to combine pseudo-labelling with MixUp to tackle the semi-supervised classification problem. My problem is that \\\"MixMatch: A Holistic Approach to Semi-Supervised Learning\\\" by Berthelot et al. is very similar with just a few differences on the pseudo-labelling part. Could you stress more the difference between your paper and their paper ? Because I might be wrong about it.\", \"pros\": [\"Good results on C10\", \"A clear related work section that divides the existing works in pseudo labelling vs consistency\", \"Interesting results about the effects of using different architectures. I also like the ablation study.\"], \"weaknesses\": [\"Usually, SVHN is also among the tested datasets\", \"The pseudo labelling part is a bit unclear.For example, do you just refresh the pseudo-labels at the end of each epoch ?\", \"minor: a typo with \\\"and important role\\\"\", \"If there was not an existing paper already using MixUp, I would have leaned towards acceptance. You can still motivate the differences with the MixMatch paper.\"]}",
"{\"comment\": \"Hi, interpolation is a very cool idea in deep semi-supervised learning. Here I would like to point out a few papers that might be of interest to you.\\n\\n1. B. Wang, et al. Deep Neural Nets with Interpolating Function as Output Activation, NeurIPS 2018.\\n\\n2. B. Wang, et al. Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization, arXiv:1809.08516 2018\\n\\n3. B. Wang, et al. Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning, arXiv:1907.06800 2019.\\n\\nThanks for your attention.\", \"title\": \"It is a very cool idea\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"OVERALL:\\nI think this paper is worth accepting.\\nAll modern semi-supervised learning techniques use consistency regularization somehow,\\nand this paper shows that you can get away with just using pseudo-labeling combined with some\\nengineering to route around the main issue with pseudo labeling (which is apparently called confirmation bias,\\nthough I hadn't heard that, and I don't like it as a name because it's confusing).\\n\\nNeither MixUp nor the idea of fixing some number of labeled elements in a minibatch is new,\\nbut that's not the point - we thought one thing, and this paper suggests that\\nwe were wrong about that thing - to me this is exactly the sort of paper it's good to have at conferences.\\n\\nI would change the framing slightly.\\nYou're not showing that pseudo-labeling can be useful, because many techniques already incorporate a form of pseudo-labeling.\\nInstead, you're showing you can get away without consistency regularization.\", \"a_potential_improvement\": \"If you add up this techique with some of the most recent SLL techniques based on consistency regularization somehow,\\ndoes it do better, or are they both acting via the same mechanism?\", \"detailed_comments\": \"> , contrary to previous evidences on pseudo-labeling capabilities (Oliver et al., 2018),\\nIt's not really contrary to the findings of that paper, since you've totally changed the\\ntechnique compared to what's evaluated in that paper.\\n\\n> n (Berthelo et al., 2019) \\nIt's Berthelot\\n\\n> and are the mechanisms proposed in Subsection 3.1\\nDoesn't quite parse\\n\\n> Network predictions are, of course, sometimes incorrect.\\nThis is a great line.\\n\\n> We use three image classification datasets...\\nWhy not use SVHN, which is by now super standard for SSL papers?\\n\\n> , we add the 5K samples back to the training set for comparison\\nwith the state-of-the-art in Subsection 4.4,\\nThis is *allowed* from the perspective of reporting a valid test accuracy,\\nbut if other papers don't do that, it kind of mucks up the comparison, no?\\n\\nFig 1 is nice, but why does the effect not seem to be symmetric about the\\nblue and the red blobs?\\n\\n> architecture plays and important role\\n\\n\\n> However, it is already interesting that... and that future work should take this into account.\\nThis sentence doesn't quite make sense\", \"re_table_4\": \"I'm curious how e.g. MixMatch would fare w/ the 13-CNN network.\\nI am surprised that the change from WRN -> 13-CNN matters so much.\"}"
]
} |
HylxE1HKwS | Once-for-All: Train One Network and Specialize it for Efficient Deployment | [
"Han Cai",
"Chuang Gan",
"Tianzhe Wang",
"Zhekai Zhang",
"Song Han"
] | We address the challenging problem of efficient inference across many devices and resource constraints, especially on edge devices. Conventional approaches either manually design or use neural architecture search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally prohibitive (causing $CO_2$ emission as much as 5 cars' lifetime) thus unscalable. In this work, we propose to train a once-for-all (OFA) network that supports diverse architectural settings by decoupling training and search, to reduce the cost. We can quickly get a specialized sub-network by selecting from the OFA network without additional training. To efficiently train OFA networks, we also propose a novel progressive shrinking algorithm, a generalized pruning method that reduces the model size across many more dimensions than pruning (depth, width, kernel size, and resolution). It can obtain a surprisingly large number of sub-networks ($> 10^{19}$) that can fit different hardware platforms and latency constraints while maintaining the same level of accuracy as training independently. On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top1 accuracy improvement over MobileNetV3, or same accuracy but 1.5x faster than MobileNetV3, 2.6x faster than EfficientNet w.r.t measured latency) while reducing many orders of magnitude GPU hours and $CO_2$ emission. In particular, OFA achieves a new SOTA 80.0% ImageNet top-1 accuracy under the mobile setting ($<$600M MACs). OFA is the winning solution for the 3rd Low Power Computer Vision Challenge (LPCVC), DSP classification track and the 4th LPCVC, both classification track and detection track. Code and 50 pre-trained models (for many devices & many latency constraints) are released at https://github.com/mit-han-lab/once-for-all. | [
"Efficient Deep Learning",
"Specialized Neural Network Architecture",
"AutoML"
] | Accept (Poster) | https://openreview.net/pdf?id=HylxE1HKwS | https://openreview.net/forum?id=HylxE1HKwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"rxBB1m4Kk6",
"S1g4HP_njB",
"rkgITIzrjS",
"rJeuZIGHiB",
"Skx8DSMHoH",
"SyefqEGrsr",
"BJgb4Ezrjr",
"r1l72hZWir",
"HygXKK8RKB",
"BklpzCV0FB",
"ByxzZ83IYH",
"Bygt_gUQYr",
"BJe47svjOB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"comment"
],
"note_created": [
1576798728583,
1573844795801,
1573361342159,
1573361151526,
1573360990319,
1573360777970,
1573360681494,
1573096618671,
1571871099049,
1571864085393,
1571370490188,
1571147889197,
1570630427701
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1641/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1641/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1641/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1641/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1641/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1641/Authors"
],
[
"~Jason_Kuen1"
],
[
"ICLR.cc/2020/Conference/Paper1641/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1641/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1641/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1641/AnonReviewer2"
],
[
"~Rudy_Chin1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors propose a new method for neural architecture search, except it's not exactly that because model training is separated from architecture, which is the main point of the paper. Once this network is trained, sub-networks can be distilled from it and used for specific tasks.\\n\\nThe paper as submitted missed certain details, but after this was pointed out by reviewers the details were satisfactorily described by the authors. \\n\\nThe idea of the paper is original and interesting. The paper is correct and, after the revisions by authors, complete. In my view, this is sufficient for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision Uploaded\", \"comment\": \"We sincerely thank all reviewers for their constructive comments. We have revised our paper accordingly with the promised results and implementation details included. Please check out the new version!\", \"our_pre_trained_model_and_training_code_are_available_at\": \"\", \"https\": \"//drive.google.com/open?id=1GrLufnGc_3UYG6l7kBX3JYjUqPr8ZaUQ\\n\\n1. We updated the experiment section (Section 4) with the new results in the MobileNetV3 search space. OFA consistently outperforms MobileNetV3 on various mobile platforms and latency constraints. \\n\\n2. In Appendix A, we included a figure showing the relationship between the performance of the accuracy prediction model and the accuracy of selected sub-networks. \\n\\n3. We updated figure1 and included a new figure (figure 5) that shows the entire trade-off curves of OFA on mobile platforms. \\n\\n4. In Appendix C, we added a table showing the detailed architecture of the full network. \\n\\n5. In Appendix E, we included implementation details of the progressive shrinking algorithm.\\n\\nIf there are any additional comments on the paper or on the code, please don\\u2019t hesitate to let us know.\"}",
"{\"title\": \"Our response to Reviewer #3\", \"comment\": \"Thanks very much for your constructive comments.\\n1. Why training from large to small can prevent interference between sub-networks.\\nTraining large sub-networks can also benefit small sub-networks to learn useful features. For example, after finishing the step of elastic kernel size, the sub-network (D=3, W=6, K=7, R=224) can already achieve 69.1% top-1 accuracy on ImageNet without any fine-tuning. This is consistent with previous observations in network pruning [1,2,3]. By training from large to small, both large sub-networks and small sub-networks can reuse previously learned knowledge (or features). Empirically, we find that it is helpful for the optimization of the shared weights with the goal of supporting large sub-networks and small sub-networks at the same time. \\n\\n2. Why subnetworks with weight sharing could achieve the same, or even better performances compared with those non shared counterparts.\\nWe first want to clarify that we are not targeting at improving the accuracy of a specific sub-network for a **single** scenario; instead, we want to improve the accuracy-efficiency trade-off on **many** hardware platforms while reducing the total training cost. To avoid confusion about the goal of this paper, we will emphasize our main contribution and make it more clear in the revision.\\n\\nWe conjecture the reason for this result is that smaller sub-networks can benefit from getting the knowledge transferred from well-trained large sub-networks through inheriting weights from large sub-networks and knowledge distillation. \\n\\nRegarding separating the benefits of PS and the disadvantage of weight sharing (i.e., interfering), we want to clarify that weight sharing is an essential component of the OFA framework since it is prohibitive to download and store so many networks independently on resource-constrained edge devices. \\n\\n3. Code release.\\nThank you for the suggestion. We definitely hope this work can be a useful tool for application purposes. We are currently cleaning the code. The training code and pre-trained models will be released anonymously in the OpenReview by Nov. 22. \\n\\nWe have also summarized all of our planned updates in our general response above. If there are any additional comments on the paper or on the planned updates, please don\\u2019t hesitate to let us know. \\n\\n[1] Han, Song, et al. \\\"Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding.\\\" in ICLR 2016.\\n[2] Liu, Zhuang, et al. \\\"Learning efficient convolutional networks through network slimming.\\\" in ICCV 2017.\\n[3] He, Yihui, et al. \\\"Channel pruning for accelerating very deep neural networks.\\\" in ICCV 2017.\"}",
"{\"title\": \"Our response to Reviewer #2\", \"comment\": \"Thanks very much for your constructive and detailed comments. We will fix the typos and remove \\u201cOnce for All #25\\u201d from figure 1.\\n1. Training details and code release.\\nFor reproduction, we will add a detailed and clear description of our training details by Nov. 15. We are also working on cleaning the code. The training code and pre-trained models will be posted anonymously in the OpenReview by Nov. 22.\\n\\n2. Sample more sub-networks and produce some Pareto curves.\\nThat\\u2019s a great idea. Thanks for the suggestion. We will update our figures showing the entire trade-off curves rather than a few points by Nov. 15. \\n\\n3. Adding the original model in the appendix.\\nThank you for the suggestion. We will add a figure showing the detailed architecture of the full (original) model in the appendix.\\n\\n4. Why the progressive shrinking goes resolution->kernel->depth->width.\\nThe order is determined based on the difficulty of each task. Intuitively, we hope the model to complete easy tasks first and then handle more difficult tasks, similar to the idea of curriculum learning. \\n\\n5. Why the channel sorting operation preserves the accuracy of larger sub-networks.\\nWhen performing the channel sorting operation on a specific layer, we first sort the input dimension of the layer according to their importance (i.e., L1 norm). Then the output dimension of the previous layer is reorganized accordingly to make sure the functionality of large sub-networks does not change. \\n\\nWe have also summarized all of our planned updates in our general response above. If there are any additional comments on the paper or on the planned updates, please don\\u2019t hesitate to let us know.\"}",
"{\"title\": \"Our response to Reviewer #1\", \"comment\": \"Thanks very much for your constructive comments.\\n1. Performance of the accuracy prediction model and how it influences the final selection.\\nWe will add a figure in the appendix by Nov. 15, showing the relationship between the performance of the accuracy prediction model and the accuracy of selected sub-networks. \\n\\n2. Comparison to MobileNetV3 in Table 2.\\nThanks for the suggestion. We agree that it is essential to compare our model to MobileNetV3 which gives the current SOTA performances on mobile platforms. To have an Apple-to-Apple comparison with it, we will apply our method to the same architecture space as MobileNetV3. The new results will be included by Nov. 15. \\n\\nWe have also summarized all of our planned updates in our general response above. If there are any additional comments on the paper or on the planned updates, please don\\u2019t hesitate to let us know.\"}",
"{\"title\": \"Our general response\", \"comment\": \"We sincerely thank all reviewers for their comments. We summarize our planned updates as follows:\\n\\n1. We will apply our method to the same architecture space as MobileNetV3. The new results will be included by Nov. 15. \\n\\n2. We will add a figure in the appendix by Nov. 15, showing the relationship between the performance of the accuracy prediction model and the accuracy of selected sub-networks. \\n\\n3. We will update our figures showing the entire trade-off curves with many points (rather than a few points) of OFA on different hardware platforms by Nov. 15. \\n\\n4. We will add the detailed architecture of the full model in the appendix. \\n\\n5. For reproduction, we will include a detailed description of our training details by Nov. 15. We are working on cleaning the code. The training code and pre-trained models will be released anonymously in the OpenReview by Nov. 22. \\n\\nIf there are any additional comments on the paper or on the planned updates, please don\\u2019t hesitate to let us know.\"}",
"{\"title\": \"Thanks for suggesting a related paper. We will add a reference to the paper in the revision.\", \"comment\": \"Hi Jason,\\n\\nRegarding your question about distillation, the teacher model does not share weights with the OFA network. Specifically, after training the full network, one copy of the full network weights is used as the teacher model, and another copy of the full network weights is used for further training to support smaller sub-networks. Therefore, training smaller sub-networks does not affect the teacher model. \\n\\nBest,\\nAuthors\"}",
"{\"title\": \"Interesting work and a missing reference\", \"comment\": \"Hi, thanks for the interesting work that advances the progress of efficient deep learning.\\n\\nI especially find the idea of progressive shrinking intriguing. It is said that the smaller sub-networks distill knowledge from larger sub-networks.. Since all sub-networks share the same weights, wouldn't training smaller sub-networks change the prediction behavior of larger sub-networks (making them unreliable for distillation)?\\n\\nThere is a missing reference to a related work that also similarly focuses on multiple efficiency configurations using a single model without retraining. It would be good if the authors could acknowledge it.\\n- Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks, CVPR 2018.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this manuscript, authors propose an OFA NAS framework. They train a supernet first and then finetune the elastic version of the large network. After training, the sub-networks derived from the supernet can be applied for different scenarios directly without retraining. The motivation is clear and interesting. My concerns are as follows.\\n1.\\tWhen sampling sub-networks, a prediction model is applied to predict the accuracy of networks. It is interesting to show the accuracy of the prediction model itself and how it will influence the final selection.\\n2.\\tThe results compared in Table 2 are outdated. Authors should at least add the result of MobileNetV3.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper tries to tackle the problem of searching best architectures for specialized resource constraint deployment scenarios. The authors basically take a two-step approach: First train a large network including all the small networks with weight sharing and some specially designed trick (e.g., progressive shrinking). Second, use prediction based NAS method to learn the performance/inference prediction module, from which the good sub architecture corresponding to a particular scenario is obtained. The experiments show that the proposed method is promising.\", \"pros\": \"1. It is an interesting new paradigm that tries to solve AutoML for different deployment scenarios \\u201conce for all\\u201d. AFAIK there is no prior works thinking in this way.\\n2. It is useful and encouraging to see the proposed method achieves satisfactory performances, on par with the current best method specially designed for different deployment environment, while the computational cost is reduced by a large margin. \\n3. Paper is clearly written and easy to understand.\", \"cons\": \"1. The motivation towards \\u201cprogressive shrinking (PS)\\u201d is not that clear. It seems natural to train a large network, and from it to train sub structures, since overparameterization helps NN training. However, it is hard to imagine that training from large to small could eliminate the \\u201cinterfering\\u201d of subnetworks, let alone \\u201cwhile maintaining the same accuracy as independently trained networks\\u201d. To me it is neither theoretically nor empirically supported (Please note the training of subnetworks definitely affect the learnt weights of the big one through weight sharing). In particular, the subnetworks with weight sharing could achieve the same, or even better performances compared with those non shared counterparts, which seems too good to be true.\\n 1. A possible explanation might be that the overparameterization brings additional gain in the optimization process of each small network, especially with the help of knowledge distillation. If that is true, an additional ablation study should be done to separate the benefits of PS, and the disadvantage of weight sharing (i.e., interfering). \\n2. I see no statements about code release. If a clear, and TIMELY code (for the SEARCH phase, not only for the Eval phase) release could be done, then at least from the perspective of application, the impact of this paper could be further enhanced.\"}",
"{\"comment\": \"Hi Rudy,\\n\\nThanks for your interest. We will release the code after the double-blind review. \\n\\nWe use 150 epochs to train the full (large) network before switching to the elastic version. In training the elastic version, the full (large) network is not trained. We set the learning rate for fine-tuning as 0.04 (1/10 initial learning rate). We choose the hyper-parameters by cross-validation (learning rates around 0.04 give stable results; increasing the number of epochs can usually improve the results).\\n\\nRegarding your second question, we do not claim that OFA produces sub-networks that outperform the individually trained ones. Our main contribution is to reduce the total cost of handling **many** deployment scenarios (hardware platforms and constraints), which is crucial for real-world applications, rather than targeting a **single** scenario. Therefore, the key advantage of OFA is that OFA can efficiently specialize for different deployment scenarios while individually trained models cannot. Even independently training the sub-network with distillation (using the same teacher network as OFA), the accuracy slightly improved from 74.3% to 74.7%, which is still at the same level as OFA produced sub-network (74.8%). \\n\\nBest,\\nAuthors\", \"title\": \"Open Source & Motivation\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"In this papers, the authors learn a Once-for-all net. This starts as a big neural network which is trained normally (albeit with input images of different resolutions). It is then fine-tuned while sampling sub-networks with progressively smaller kernels, then lower depth, then width (while still sampling larger networks occasionally, as it reads). This results in a network from which one can extract sub-networks for various resource constraints (latency, memory etc.) that perform well without a need for retraining.\\n\\nThis paper is well written, and the results are very good. However there are serious problems that need addressing.\\n\\nThe method as described *is not reproducible*. The scheduling of sampling subnetworks is alluded to on page 4, and that's it. It is essential that the authors include their exact subnet sampling schedule e.g. as pseudocode with hyperparameters. There is no point doing good work if other researchers cannot build off it. \\n\\nOn another reproducibility note, as far as I can tell, the original model isn't given. There would be no harm in adding this to the appendix. \\n\\nFigure 1 is misleading, as we don't find out until later in the paper that Once For All #25 means that each of these points was finetuned for a further 25 epochs (which on ImageNet is non-trivial). This defeats the narrative of the paper (once-for-all plus some fine-tuning isn't exactly once-for-all).\\n\\nIs there a reason why the progressive shrinking goes resolution->kernel->depth->width? Was this just the permutation that worked best? I would be curious as to why this is.\\n\\nFor elastic width, I wasn't sure why the \\\"channel sorting operation preserves the accuracy of larger sub-networks\\\". Could you please elaborate?\\n\\nKudos on adding CO2 emissions in Table 2, I hope this gets reported more often.\\n\\nIn the introduction, the authors talk about iPhones and then the hardware considered is Samsung and Google. A minor note, but it seems inconsistent.\\n\\nAnother minor note, in Table 2, (Strubell et al) should be out of the brackets, as it is part of the sentence.\\n\\nGiven that there are 10^19 subnetworks that can be sampled, it would be nice to see more than 3-4 appear on a plot. This makes it seem like they might have been cherry-picked. Sampling a few 100/1000 subnets and producing some Pareto curves would be both interesting and insightful.\\n\\nPros\\n-------\\n- Good results\\n- Well written\\n- Neat idea\\n\\nCons\\n-------\\n- Training details are obfuscated. This paper should not be accepted without them.\\n- Very few subnetworks of the vast quantity that exist are observed.\\n\\nIn conclusion, I am giving this paper a weak reject, as it is currently impossible to reproduce, and as such, is of no use to the community. If the authors remedy this I will gladly raise my score.\"}",
"{\"comment\": \"Dear authors,\\n\\nThis is a really interesting work that one can train a large network such that the sub-networks within the large network still work really well, which appears to be even better than the individual trained ones!\\n\\nI'm trying to implement this idea and would like to know the specific hyper-parameters used in the paper. Specifically, how long do you train the large network before switching to the elastic version. In training the elastic version, do you still train the large network? If so, how are their losses combined? When training the elastic version gradually, what is the specific learning rate since the paper mentioned that you train with small learning rate so that the large network wouldn't deviate too much from the pre-trained weights. Also, can you elaborate on how the number of epochs and learning rate used to fine-tune each of the elastic space chosen and how they affect the results?\\n\\nAnother question I have is that you mentioned using distillation for the large network to distill the sub-networks, do you do the same for the independent trained models? Specifically, in Table 1, do you use knowledge distillation for the independent trained models? If not, I think it is not clear if the proposed OFA network indeed produce sub-networks that outperform the individually trained ones. \\n\\nThanks,\\nRudy\", \"title\": \"Interesting work\"}"
]
} |
H1lxVyStPH | Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition | [
"Jongbin Ryu",
"Gitaek Kwon",
"Ming-Hsuan Yang",
"Jongwoo Lim"
] | When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose generalized convolutional forest networks to learn a feature space that maximizes the strength of individual tree classifiers while minimizing the respective correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling into the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods. | [
"convolutional forest networks",
"domain generalization",
"visual recognition",
"individual tree classifiers",
"feature space",
"data",
"random forests",
"prime importance",
"high accuracy",
"low correlation"
] | Accept (Poster) | https://openreview.net/pdf?id=H1lxVyStPH | https://openreview.net/forum?id=H1lxVyStPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1OapIup5YI",
"H1x-FTj5or",
"SJl4w6jqoB",
"HklZmToqiB",
"HklgYgX-5S",
"HJlFJWh6FS",
"HJxQzU9stS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728554,
1573727608754,
1573727579621,
1573727513080,
1572053112461,
1571827937370,
1571689994916
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1640/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1640/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1640/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1640/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1640/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1640/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors introduce an approach to learn a random forest model and a representation simultaneously. The basic idea is to modify the representation so that subsequent trees in the random forest are less correlated. The authors evaluate the technique empirically and show some modest gains. While the reviews were mixed, the approach is quite different from the usual approaches published at ICLR and so I think it's worth highlighting this work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank Reviewer 1 for the helpful comments. We answer to the comments of Reviewer 1.\\n\\n1. General supervised classification tasks\\nTo demonstrate the generalization ability, we evaluate the proposed GCFN in domain generalization and visual recognition tasks. This is because domain generalization itself evaluates the generalization performance on different domain settings of training and test sets. Visual recognition of various domains evaluates recognition performance in terms of generalization due to multiple test domains.\\n\\nWe will add the experiments with the CIFAR and SVHN data sets to demonstrate the effectiveness of the GCFN. Due to the short rebuttal period, the experiment results will be added in the final version of this paper. \\n\\nThe proposed method is not only suitable for domain generalization and visual recognition of the paper, but can be extended to the general recognition tasks on the CIFAR and SVHN datasets.\\n\\n2. It is correct that the proposed method iteratively learns a generalized feature distribution based on the split results of a random forest. Thus, the learned feature distribution is not independent of each decision tree. However, when creating a new decision tree in a random forest, it does not take into account the splitting results of previously trained decision trees. In other words, the feature learning algorithm includes a process that depends on the partition results of the decision tree, but there is no dependency process for the train decision tree itself. Thus, we use a metric of the upper bound of generalization error for theoretical implications.\\n\\nAs suggested by the reviewer, it is worth studying more because learning methods break the independence of input functions. However, we show that the upper bound of generalization error and actual experimental results are almost the same, and the results are meaningful for showing the generalization ability of the proposed method.\\n\\n3. We apologize for the confusion of the representations in Figure 3 and Table 1. In fact, the x-axis in Figure 3, labeled 'iteration', represents the \\u2018epoch\\u2019 of the network learning procedure, but not the number of trees. Thus, there is no network learning about canonical features, which does not improve the accuracy with regard to the \\u2018epoch\\u2019 for the canonical features. We clarify that the \\u2018iteration\\u2019 in Figure 3 denotes the \\u2018epoch\\u2019 in the revised paper. \\n\\n4. We use the word \\u201cbias\\u201d to indicate the relationship between trees. The term \\u201cless bias\\u201d means \\u201cless correlated to each other trees\\u201d. We have revised the sentence as below.\\n\\nWhile each decision tree may achieve mediocre performance, the aggregated random forest performs significantly better and less correlated if the decision trees are heterogeneous.\\n\\nThe correction can be found on page 1 of the revised paper.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank Reviewer 3 for the helpful comments. We answer to the comments of Reviewer 3.\\n\\n1. Backbone networks\\nWe compare the proposed algorithm with the state-of-the-art methods using the same backbone network such as ResNet-18 (Table 3), AlexNet (Table 4), ResNet-50 (Table 5, 6, 8 and 9), ResNet-152 (Table 7) and DenseNet-161 (Table 9).\\n\\nIn Table 9, we use two backbone networks (ResNet-50 and DenseNet-161) for comparisons. Since pairwise confusion (PC) [1] report results from both backbone networks, we compare the proposed method with PC on both for fair comparisons.\\n\\nWe also revised the paper to clarify the backbone networks on page 6 and abstract.\\n \\n[1] Abhimanyu Dubey, Otkrist Gupta, Pei Guo, Ramesh Raskar, Ryan Farrell, and Nikhil Naik. Pairwise confusion for fine-grained visual classification. In European Conference on Computer Vision, pp. 70\\u201386, 2018.\\n\\n2. Overhead of each iteration\\nWe measured the overhead of training time for random forests.\\n\\nAs stated in answer 3, we use 300 epochs and random forests are trained at every 3 epochs for reducing the overhead. \\n\\nThe time for training a random forest is about 15 seconds (MIT Indoor dataset with ResNet-50), which results in only 15sec X 300 epochs / 3 = 25 minute overhead in the whole learning process.\\n\\n3. The number of trees\\nWe compare the performance of random forests constructed on canonical, strengthened, and generalized feature space with T=1,10 and 50 in Table 1. As shown in [2] there is little change in performance above 64 trees for a random forest, we measure the performance change from 1 to 50 in a way similar to [2]. Table 1 shows the ablation study of the performance with regard to the number of trees. \\n\\n[2] Oshiro Thais Mayumi, Perez Pedro Santoro, Baranauskas Jos\\u00b4e Augusto, How many trees in a random forest?. In International workshop on machine learning and data mining in pattern recognition, pp. 154\\u2013168, 2012\\n\\n4. Some results in Table 3 and 4\\nThere are some cases that the proposed GCFN does not outperform the state-of-the-art methods. However, the GCFN outperforms them in terms of average accuracy for each dataset, which shows the effectiveness of GCFN. It is more important to analyze the overall average performance of all cases to validate the generalization ability rather than one or two cases.\\n\\n5. Validation on small networks\\nWe perform some additional experiments on the AlexNet [1] which is relatively small and fast networks compared to ResNet-50 and DenseNet-161.\\n\\nWe compare the random forests on canonical features as the baseline with the proposed GCFN on the DTD, MIT Indoor and Scene-15 datasets. \\n\\nThe results also confirm that GCFN also performs well with relatively small networks.\\n DTD MIT Indoor Scene-15 \\n T=1 T=10 T= 50 T=1 T=10 T= 50 T=1 T=10 T= 50 \\nBase 44.9 60.4 62.8 39.9 59.2 62.0 79.9 88.5 88.9 \\nGCFN 55.0 63.1 64.2 51.6 62.1 63.1 84.7 88.7 89.5 \\n\\n[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Neural Information Processing Systems, 2012\\n\\n6. Implementation details \\nWe add the implementation details about the learning rate, batch size, number of epochs, and training process in Page 12 of the paper as below.\\n\\nWe use 300 epochs with 5*10^(-3) learning rate, weigh decay with 9*10^(-4) for the network training. Random forests are created at every 3 epochs to reduce the overhead. 
Once we build a random forest, the neural networks are updated for 3 epochs based on the split result of the random forest. The training time overhead due to the random forests is 15 seconds for MIT Indoor dataset with ResNet-50, which results in only 25 minute overhead in the whole learning process. For the fast training, we use 50 trees with the split function of `F' for the network update.\\n\\n7. Page limit \\nWe fix the page limit issue by moving some parts to the appendix.\"}",
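The schedule in point 6 alternates between forest construction and network updates; a schematic sketch of that loop is below, where `extract_features`, `build_forest`, `sample_triplets`, and `triplet_step` are assumed placeholders rather than the authors' code.

```python
TOTAL_EPOCHS = 300
FOREST_EVERY = 3   # rebuild the forest every 3 epochs to limit overhead

forest = None
for epoch in range(TOTAL_EPOCHS):
    if epoch % FOREST_EVERY == 0:
        feats, labels = extract_features(network, train_loader)
        forest = build_forest(feats, labels, n_trees=50)  # ~15 s on MIT Indoor
    for batch in train_loader:
        # Triplets come from the forest's split/leaf statistics:
        # pull same-label examples together, push co-leaf examples apart.
        anchors, pos, neg = sample_triplets(batch, forest)
        triplet_step(network, anchors, pos, neg)
```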
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for the helpful comments. We answer to the comments of Reviewer 2.\\n\\nWe revised the paper to explain the obvious benefits of the proposed method in terms of overfitting, bias and generalization in Page 7.\\n\\nA detailed description of this topic follows.\\n\\nIt is known that large bias and variance cause overfitting problems. The model generalization reduces the overfitting problem by regularizing the learning model. A well-trained random forest is known to have low bias and variance to generalize the classification task. To have low bias of a random forest, a good practice is to learn the decision tree very deeply. For low bias in each tree, the bagging strategy of a random forest reduces the variance for the final prediction. Therefore, in general, random forests can have low bias and variance. \\n\\nHowever, there is still bias and variance dilemma in training decision trees. Making each decision trees very deep increases the variance of the decision tree. Suppose that if tree depth is one, there are only two decisions that the tree can make, which leads to lower variance. However, if the tree depth is ten, the number of decision cases will be about one thousand, resulting in a much higher variance. Therefore, if each decision tree has too high variance due to the depth, a random forest can still have a large variance even after the bagging process. To mitigate this problem, we introduce the GCFN, which constructs very low bias where the variance is not too high. We use the strength and correlation terms to theoretical analyze the overfitting problem due to the bias and variance dilemma. The positive sampling strategy of our method contributes to increasing the strength, and thus each tree has a low bias value. On the other hand, the negative sampling scheme regularizes the rapid increase of the strength, which can cause the overfitting problem while reducing the correlation. We measure the degree of model overfitting in terms of the upper bound of generalization error in the paper. We show that the proposed sampling strategy reduces the upper bound of the generalization error, and therefore, the GCFN can provide the generalized performance of various visual recognition tasks and avoid the overfitting problem even with very low bias. \\n\\nWe also modify the word \\u201cbias\\u201d in the introduction section to better describe our intention. \\nWe mean to use the word \\u201cbias\\u201d to describe that each tree looks similar. The word \\u201cbias\\u201d does not have the same meaning as the above discussion on the bias and variance dilemma. The revised text can be found on page 1 of the paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper aims to improve the random forest performance by iterative constructing & feeding more powerful features that are to be used by the random forest learning processing where a random subset of features are chosen from the current feature pool when making growing/stopping decision at the current split node. The idea is new and interesting, and its usefulness has been empirically shown. On the other hand, it is not clear how this additional procedure would do with the good properties of RFs such as less subjective to overfitting and bias. It would be very helpful if the paper could shred some lights in this regard.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method for generalized image recognition based on random forest, that use directly the features extracted by the backbone, that assign pseudo-labels to the data. The learning is performed with a triplet loss adapted for better generalization.\", \"decision\": \"weak reject\", \"motivation\": \"the method is incremental and presented in general in a clear way and easy to follow, the authors present a simple but interesting trick to make the triplet loss more effective on a random forest in the case of generalization to a new unlabeled dataset. This method looks incremental to me because it is addressing the problem of pseudo-labelling for learning on a new dataset and instead of using confidence measures uses a random forest to assign labels.\\nThe experimental section of the paper is a bit confusing because is not clear if the results presented are with comparable network (e.g. ResNet18) like the cited state-of-the-art papers, from further readings I am confident the autors compared fairly with similar architectures. Authors should perhaps stress they compare with state-of-the-art in fair condition to avoid confusion as in my case. How much is the overhead of building the random forest for each iteration of the learning (algorithm 1), a more detailed analysis on this is useful for understanding the method. Could this method be used to train a network from scratch on an unlabeled data or on data with noisy labels? How did the authors choose the T decision trees, is there any ablation study, general practice or euristics behind the choice of 1,10,50? The comparison with state-of-the-art Tab 3 and Tab 4 shows that for some datasets other techniques are better, did the authors draw some conclusions from that? Comparing Tab 3 and 4 with Tab 5/6/7/8/9 looks like this method can work but only in the case of much bigger network like ResNet50 and DenseNet 161 which can limit its use for high resources (computing power) cases.\", \"replicability\": \"I think with improvements in the experimental section the method results can be replicated. At the moment it lack many details like learning rates, epoch of training and other useful information that are useful.\", \"minor\": \"there are two lines out of the 9 page limit\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a scheme to learn representations and a random\\nforest model in a closely coupled manner, where the representations\\nare learned to simultaneously improve the predictive performance of\\nthe individual trees in the random forest and reduce the correlation\\nbetween these trees. This is achieved via a novel triplet sampling\\nscheme where the positive samples are used to improve the predictive\\nperformance of the individual trees, while the negative samples are\\nused to reduce the correlation between the trees. The empirical\\nresults indicate the improvement of the proposed scheme over random\\nforests learned over the original representations across various\\ndatasets. \\n\\nThe manuscript is well presented and I am leaning towards accept, one\\nmajor issue with this manuscript is the number of pages which crosses\\nthe 8 page recommended limit in the ICLR CfP. Given that higher\\nstandard, I feel that there are multiple minor issues which should be\\naddressed. \\n\\n- It is not clear why the proposed scheme is especially well suited\\n for \\\"domain generalization\\\" and \\\"visual recognition\\\" and not for\\n general supervised classification tasks (like those on CIFAR, SVHN,\\n etc). This needs to be clarified.\\n- The theoretical results for random forest rely on the independence\\n of the individual trees. Given that the trees in the random forest\\n are no longer independent in the proposed scheme, the upper bound in\\n Equation (1) may no longer be valid. While this does not affect the\\n empirical performance, it might be good to discuss the theoretical\\n implications of the proposed scheme.\\n- It is not clear why the curves for the canonical features in Figure\\n 3 not improving with the number of iterations (which corresponds to\\n number of trees to the best of my understanding). The results in\\n Table 1 do indicate improvement in the performance with canonical\\n features, putting Figure 3 and Table 1 at odds.\", \"minor\": \"- The individual trees in a random forest are supposed to have low\\n bias but high variance, and the heterogeneity between the trees is\\n supposed to squash the variance with the bagging. The introduction\\n mentions that the individual trees in the forest are \\\"less\\n biased\\\". Maybe this is because of our different definitions of bias\\n but any clarification here would be helpful.\"}"
]
} |
S1xJ4JHFvS | Acutum: When Generalization Meets Adaptability | [
"Xunpeng Huang",
"Zhengyang Liu",
"Zhe Wang",
"Yue Yu",
"Lei Li"
] | In spite of the slow convergence, stochastic gradient descent (SGD) is still the most practical optimization method due to its outstanding generalization ability and simplicity. On the other hand, adaptive methods have attracted much attention from the optimization and machine learning communities, both for their leverage of life-long information and for their deep and fundamental mathematical theory. Taking the best of both worlds is the most exciting and challenging question in the field of optimization for machine learning. 
In this paper, we take a small step towards such an ultimate goal. We revisit existing adaptive methods from a novel point of view, which reveals a fresh understanding of momentum. Our new intuition empowers us to remove the second moments in Adam without loss of performance. Based on our view, we propose a new method, named acute adaptive momentum (Acutum). To the best of our knowledge, Acutum is the first adaptive gradient method without second moments. Experimentally, we demonstrate that our method has a faster convergence rate than Adam/Amsgrad, and generalizes as well as SGD with momentum. We also provide a convergence analysis of our proposed method to complement our intuition. | [
"optimization",
"momentum",
"adaptive gradient methods"
] | Reject | https://openreview.net/pdf?id=S1xJ4JHFvS | https://openreview.net/forum?id=S1xJ4JHFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"gyg254z4lz",
"S1gjWbZ2iB",
"Byg5DzhcsS",
"BJeulyAMjr",
"SkxJST6zsS",
"r1xK5nTGsr",
"BkgxL3dCtH",
"rJgbavlCYr",
"HJelXDOTFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728526,
1573814531504,
1573728865726,
1573211888297,
1573211447311,
1573211281205,
1571880008083,
1571846072816,
1571813144247
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1639/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1639/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1639/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1639/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1639/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1639/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1639/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1639/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper addresses an important problem of finding a good trade-off between generalization and convergence speed of stochastic gradient methods for training deep nets. However, there is a consensus among the reviewers, even after rebuttals provided by the authors, that the contribution is somewhat limited and the paper may require additional work before it is ready to be published.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your response! Actually, we can only obtain an $O(d\\\\sqrt{T})$ upper bound for the term $\\\\sum_{i=1}^d\\\\left\\\\|g_{1:T,i}\\\\right\\\\|_2$ in all Adam-type optimizers, which is just like Eq.(18) in our paper. With such upper bound, choosing step size as $\\\\alpha/\\\\sqrt{t}$ can indeed balance the order of $d$ among different terms in Eq.(9) (the order of $d$ of the first term and the second term in Eq.(9) are both $O(d)$). However, it also makes the regret admit no advantages over the online gradient descent, even for other Adam-type optimizers. Thus, if we choose the step size as $O(\\\\alpha\\\\sqrt{d/t})$, we can guarantee the same regret $O(\\\\sqrt{dT})$ with SGD under the condition $\\\\sum_{i=1}^d\\\\left\\\\|g_{1:T,i}\\\\right\\\\|_2\\\\le O(\\\\sqrt{T})$. In our opinion, providing the convex regret bound is to qualitatively show that Acutum is convergent. As for the analysis of the convergence rate, we may utilize standard frameworks, say[1] or [2]. As you know, new ideas are required to derive tighter arguments. From a practical perspective, we believe our experiments can clearly show the convergence rate and the potential of Acutum. Besides, we didn't mention that improving the regret bound as our contribution. It's worth to mention that our main contribution is to propose a new insight to explain all the other Adam-type methods, and then demonstrate the improvement by following the novel understanding.\\n\\nFor your second question, we will add the absolute value sign there in our next revision. Thanks a lot for your carefully proof-checking:)\\n\\n[1]Chen, Xiangyi, et al. \\\"On the convergence of a class of adam-type algorithms for non-convex optimization.\\\" arXiv preprint arXiv:1808.02941 (2018).\\n[2]Zaheer, Manzil, et al. \\\"Adaptive methods for nonconvex optimization.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for your response to my comments. Although the factor $\\\\sqrt{d}$ can be removed in front of $\\\\sum_{i=1}^d\\\\|g_{1:T},i\\\\|_2$, the regret bound involves $d\\\\sqrt{T}$ in the first term. In this sense, the regret bound admits no advantages over the online gradient descent.\\n\\nA minor issue in the proof of Lemma A.2: the first inequality generally not holds since $g_t^T\\\\hat{m}_{t-1}$ may be negative. However, this can be simply addressed if the authors adds absolute value there.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your supportive review and useful suggestions.\", \"q\": \"The presentation and structure of the paper should be improved further ...\", \"a\": \"Thanks for your suggestion! Convincing readers is the most important work for research papers. We are carefully considering how to add subsection in Section 3 to make readers follow the main idea of this paper more easily, and we will update the relevant context in our next version:)\\n\\nThe response above has been reflected in our revised paper accordingly. Please let us know if you have any further suggestions.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your valuable suggestions, which is indeed helpful to improve our theoretical analysis. Such improvements can answer your concerns straightforwardly.\\n\\nActually, \\\"let the search direction opposite to the gradient at the current batch of examples and a bit orthogonal to the previous batch of examples\\\" is our observation inspired by the sub-problem of Adagrad. Based on such an observation, the intuition of our Acutum is to find a descent direction that can both make an acute angle with both the current batch-gradient and the approximate descent direction of previous batch loss. In practice, choosing the direction as the angle bisector of $g_t$ and $\\\\hat{m}_t$ on the hyper-plane they form can make the experimental results satisfied.\", \"q\": \"In both eq (14) and (15) ...\", \"a\": \"Eq.(14) and Eq.(15) are different parts of Eq.(11), we divide the summation into two parts for a more clear statement. We have fixed the typos about $\\\\sqrt{\\\\sum_{t=1}^t1/\\\\sqrt{t}}$ in our latest version.\\n\\nLastly, please allow us to emphasize the importance of this article. We are the first to explain the effect of second moments through constructing approximate subproblems and point that \\\"second moments essentially penalize the projection of the current descent direction on the exponential moving average of previous gradients\\\", which is totally different from most of the previous works. We believe such consideration could be instructive to the design of future optimizers. Furthermore, our Acutum only utilizes ONLY the first moments to achieve a comparable efficiency with Adam-type optimizers and even a better generalization, which illustrates that our intuition is effective from a practice perspective. Theoretically, our Acutum enjoys a regret of $\\\\tilde{O}(1/\\\\sqrt{T})$ which is the same as the best of known bound for general convex online learning problems (Adam, AMSGrad, etc).\\n\\nWe hope you find your concerns satisfactorily addressed by our response, which has also been reflected in the revised paper. Please let us know if you have more comments or any other suggestions.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for your helpful suggestions and comments. We address your concerns as follows.\", \"q\": \"decay the lr by 0.1...\", \"a\": \"In the CV community, lr decay is a common trick of getting a better result. To achieve the SOTA testing accuracy, we did such work for all optimizers. Such settings also appear in related work, e.g., Padam.\\n\\nWe have revised our paper accordingly. Almost all existing adaptive methods require second moments. In a novel point of view, we achieved similar results with ONLY first moments. We also proved our conclusion both theoretically and experimentally in our paper. \\nAny further comments on the paper are more than welcome.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Balancing the generalization and convergence speed is an important direction in deep learning. This paper propose a new method to balance them. That is very interesting to me. However, I have several concerns as follows.\\n1. I cannot agree that \\\"fewer hyper-parameters\\\" in Algorithm 1. The authors should provide more materials to support this claim, such as a comparison table of different mathods. \\n2. In Theorem 42, how to set the parameter \\\\epsilon to obtain the conclusion.\\n3. The authors should provide more comparison of the theoretical results of different SGD algorithms (such Adam, RMSProp ...)\", \"some_minor_issues\": \"1. The conditions of formulation of Eq. (6) should be placed together with Eq. (6).\\n2. What is the definition to c_2,0, c_2,1 .... It is not clear to readers.\\n3. The presentation and structure of the paper should be improved further, such as adding several subsection in Sec. 3, so that readers could easily follow the main idea of this paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper attempts to remove the use of the second moment in Adam in order to improve the generalization ability of Adam.\", \"Apriori, it is not clear why removing the second moment is important. Does it improve the generalization or decrease the runtime substantially?\", \"Please clarify how our method compares against Yogi (Adaptive Methods for Nonconvex Optimization, Zaheer' 2018) and the more recent RADAM (ON THE VARIANCE OF THE ADAPTIVE LEARNING\", \"RATE AND BEYOND, Liu 2019) , both theoretically and empirically.\", \"The paper is meandering and confusing. State your update/algorithm and then explain how it compares against the other methods. Besides, there might be technical problems with it.\"], \"see_the_detailed_review_below\": [\"Section 1: \\\"the generalization results (of adaptive methods) cannot be as good as SGD\\\". Please cite the relevant papers, (for example, Wilson. 2017) that show this empirically.\", \"Section 1: \\\"the proposed algorithm outperforms Adam in convergence speed\\\" - Please clarify what \\\"convergence speed\\\" refers to. Is it the number of gradient evaluations, the rate of convergence or the wall-clock time. Please state how did you conclude this.\", \"Section 2: The update rule of Adagrad is incorrect. The step-size is constant \\\\alpha and it is decreased over time because of the v_t term.\", \"Section 3: There is no guarantee on the approximation in Equation 5. Young's inequality and the resulting upper bound can be quite loose.\", \"Section 3: In general, it is not possible to have an update that decreases the loss on the current batch, but does not increase the previous batches loss. It is always possible to construct a counter-example to this.\", \"\\\"In practice, the computational cost of computing\\\" the gradient for all i is expensive. Indeed, this is batch gradient descent. I am not sure how this is relevant to the discussion in the paper.\", \"Cite and compare against the variance reduced methods (Stochastic Average Gradient, Schmidt, 2013; SVRG, Johnson, 2013) as these try to \\\"approximate\\\" the full gradient in order to decrease the variance.\", \"The derivation/formulation of Equation 8 is not clear to me. Why is the \\\\hat{m} normalized?\", \"In algorithm 1, it seems you need to choose the sequence of step-sizes \\\\alpha_t and \\\\beta_t. How is this an adaptive method then? How are these sequences chosen theoretically and practically? Please clarify this.\", \"Section 4: Please compare the resulting regret bound to that of Adam, Adagrad and AMSgrad. Why does \\\\alpha_t = O(1/t)? If we have to decrease the step-size according to a sequence, why should I not use standard SGD?\", \"Section 5: \\\"we decay the learning rate by 0.1 every 50 epochs\\\" This is not aligned with either the theory or the algorithm you proposed.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes new stochastic optimization methods to achieve a fast convergence of adaptive SGD and preserve the generalization ability of SGD. The idea is to let the search direction opposite to the gradient at the current batch of examples and a bit orthogonal to previous batch of examples. The algorithm is easy to implement and backed up with regret bounds. Several experimental results are also reported to verify the effectiveness of the algorithm.\\n\\nThe regret bound in Theorem 4.2. is not quite satisfactory. For example, Adagrad in Duchi et al (2011) can achieve the regret bound $O(\\\\sum_{i=1}^{d}\\\\|g_{1:T,i}\\\\|_2)$. However, there is an additional factor of $\\\\sqrt{d}$ in the regret boun in Theorem 4.2, which is not appealing in high dimensional problems. The regret analysis follows largely from standard arguments.\\n\\nI can not follow the last identity of (10). It should not hold since there is a missing $\\\\epsilon$ there.\\n\\nIn both eq (14) and (15), it is not clear to me why the authors divided the summation into two parts. Also, $\\\\sqrt{\\\\sum_{t=1}^{T}\\\\frac{1}{\\\\sqrt{t}}}$ should be $\\\\sqrt{\\\\sum_{t=1}^{T}\\\\frac{1}{t}}$ there.\\n\\n----------------------\", \"after_rebuttal\": \"I have read the author response. Unlike AdaGrad, the proposed algorithm does not show an advantage over standard OGD. I would like to keep my original score.\"}"
]
} |
BylyV1BtDB | FR-GAN: Fair and Robust Training | [
"Yuji Roh",
"Kangwook Lee",
"Gyeong Jo Hwang",
"Steven Euijong Whang",
"Changho Suh"
] | We consider the problem of fair and robust model training in the presence of data poisoning. Ensuring fairness usually involves a tradeoff against accuracy, so if the data poisoning is mistakenly viewed as additional bias to be fixed, the accuracy will be sacrificed even more. We demonstrate that this phenomenon indeed holds for state-of-the-art model fairness techniques. We then propose FR-GAN, which holistically performs fair and robust model training using generative adversarial networks (GANs). We first use a generator that attempts to classify examples as accurately as possible. In addition, we deploy two discriminators: (1) a fairness discriminator that predicts the sensitive attribute from classification results and (2) a robustness discriminator that distinguishes examples and predictions from a clean validation set. Our framework respects all the prominent fairness measures: disparate impact, equalized odds, and equal opportunity. Also, FR-GAN optimizes fairness without requiring the knowledge of prior statistics of the sensitive attributes. In our experiments, FR-GAN shows almost no decrease in fairness and accuracy in the presence of data poisoning unlike other state-of-the-art fairness methods, which are vulnerable. In addition, FR-GAN can be adjusted using parameters to maintain reasonable accuracy and fairness even if the validation set is too small or unavailable. | [
"generative adversarial networks",
"model fairness",
"model robustness"
] | Reject | https://openreview.net/pdf?id=BylyV1BtDB | https://openreview.net/forum?id=BylyV1BtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oxehWUL-V0",
"HklcxeOYsB",
"r1gaUy_KjS",
"SJlInCvKir",
"rklyVs7S5B",
"SJe-mj1AFH",
"ryg38nhhtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728497,
1573646321898,
1573646164541,
1573645998473,
1572317990538,
1571842840809,
1571765332056
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1638/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1638/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1638/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1638/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1638/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1638/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This manuscript proposes an approach for fair and robust training of predictive modeling -- both of which are implemented using adversarial methods, i.e., an adversarial loss for fairness and an adversarial loss for robustness. The resulting model is evaluated empirically and shown to improve fairness and robustness performance.\\n\\nThe reviewers and AC agree that the problem studied is timely and interesting, as there is limited work on joint fairness and robustness. However, the reviewers were unconvinced about the novelty and clarity of the conceptual and empirical results. In reviews and discussion, the reviewers also noted insufficient motivation for the approach.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for the insightful comments.\\n\\nQ3-1. (1) motivation\\n\\nA3-1. \\nWe believe that given that real data would increasingly become both biased and poisoned (this is what we expect in the big data era - see the next paragraph for details), our main contribution of providing an integrated solution for fair and robust training is likely to become important in the near future. \\n\\nWe contend that supporting robust training is just as critical as fair training. Dataset searching is becoming mainstream as demonstrated by Google Dataset Search (Goods) [1] and its public version for searching scientific datasets [2]. While data lakes within companies may consist of refined datasets, datasets in the public are easy to poison. In our experiments, we could easily poison the Adult and COMPAS datasets using simple label flipping techniques. And anyone can poison public datasets using attacks in the literature and share them. We thus believe that it is essential to address both bias and poisoning as a preventive measure. We reflected these points in our revision (Section 1, highlighted in blue).\\n\\n[1] Goods: Organizing Google's Datasets, ACM SIGMOD 2017.\\n[2] Google Dataset Search: Building a search engine for datasets in an open Web ecosystem, WWW 2019.\\n\\n\\nQ3-2. (2) contributions\\n\\nA3-2. \\nWe agree that the fairness part of FR-GAN is similar to Adversarial Debiasing (AD) [3]. However, we strengthen the theoretical results of AD using information theory, and this motivates us to propose a novel robust training discriminator. We agree that the key insight of using adversarial training to minimize mutual information between the prediction and sensitive attribute is the same for FR-GAN and AD. Although, the end algorithms of FR-GAN and AD are also similar, our paper plays a role in providing systematic methodology not only for various fairness metrics, but also for the design of robust training. As far as we know, no other validation-set-based approach (including Ren et. al. 2018) leverages the idea of adversarial training. We reflected these points in our revision (Section 2, highlighted in blue).\\n\\n[3] Mitigating Unwanted Biases with Adversarial Learning, AAAI 2018.\\n\\n\\nQ3-3. (3) competing methods\\n\\nA3-3. \\nAs Reviewer 2 mentioned, FR-GAN is one of the first works to address both fairness and robustness and that there are few baselines to compare with. Hence, we employed one reasonable baseline, which first sanitizes the poisoned data using a well-known sanitization technique and then performs a fair training (see Tables 1 and 2, rows with +LD). We clarified these points in our revision (Section 5.1, highlighted in blue).\\n\\n\\nQ3-4. (4) writeup\\n\\nA3-4. \\nAll of our results are on a separate test set. We clarified this in our revision (Section 5, highlighted in blue) to avoid any confusion.\\n\\n\\nQ3-5. Lack of convincing tests for robustness\\n\\nA3-5. \\nWe agree that generalizing to all poisoning attacks is important. If we know all possible attacks, we can construct a training set containing these attacks. In the more challenging case where we do not know which attacks even exist, there seems to be a fundamental limitation in protecting against the attacks. Generalization is a universal problem in machine learning where a model trained on one dataset is not guaranteed to perform well in another dataset with a different distribution. 
Although the generalization is a critical issue to address, we think it is beyond the scope of the current work. We reflected these points in our revision (Section 6, highlighted in blue).\\n\\n\\nQ3-6. Minor notes\\n\\nA3-6. \\nWe will display the figures with the same axes throughout in our revision. We added the information about the number of nodes in the hidden layers in our revision (Appendix A.4, highlighted in blue).\"}",
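A minimal sketch of the kind of simple label flipping mentioned in A3-1 above; note that the paper's attack flips the labels that degrade accuracy the most (see A2-4 below), whereas this illustrative version flips labels at random within a budget.

```python
import numpy as np

def flip_labels(y, rate=0.1, seed=0):
    """Poison a binary label vector by flipping a fraction of entries."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip 0 <-> 1
    return y_poisoned

y = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
print(flip_labels(y, rate=0.2))
```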
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thanks for the insightful comments.\\n\\nQ2-1. Novelty and Motivation\\n\\nA2-1. \\nWe agree that the fairness part of FR-GAN is similar to Adversarial Debiasing (AD). However, we believe that given that real data would increasingly become both biased and poisoned (this is what we expect in the big data era - see the next paragraph for details), our main contribution of providing an integrated solution for fair and robust training is likely to become important in the near future. \\n\\nWe contend that supporting robust training is just as critical as fair training. Dataset searching is becoming mainstream as demonstrated by Google Dataset Search (Goods) [1] and its public version for searching scientific datasets [2]. While data lakes within companies may consist of refined datasets, datasets in the public are easy to poison. In our experiments, we could easily poison the Adult and COMPAS datasets using simple label flipping techniques. And anyone can poison public datasets using attacks in the literature and share them. We thus believe that it is essential to address both bias and poisoning as a preventive measure.\\n\\nRegarding technical contributions, we strengthen the theoretical results of AD using information theory, and this motivates us to propose a novel robust training discriminator. We agree that the key insight of using adversarial training to minimize mutual information between the prediction and sensitive attribute is the same for FR-GAN and AD. Although, the end algorithms of FR-GAN and AD are also similar, our paper plays a role in providing systematic methodology not only for various fairness metrics, but also for the design of robust training. As far as we know, no other validation-set-based approach (including Ren et. al. 2018) leverages the idea of adversarial training. \\n\\nWe reflected all of the points above in our revision (Sections 1 and 2, highlighted in blue).\\n\\n[1] Goods: Organizing Google's Datasets, ACM SIGMOD 2017.\\n[2] Google Dataset Search: Building a search engine for datasets in an open Web ecosystem, WWW 2019.\\n\\n\\nQ2-2. Baseline\\n\\nA2-2. \\nAs Reviewer 2 mentioned, FR-GAN is one of the first works to address both fairness and robustness and that there are few baselines to compare with. Hence, we employed one reasonable baseline, which first sanitizes the poisoned data using a well-known sanitization technique and then performs a fair training (see Tables 1 and 2, rows with +LD). We clarified these points in our revision (Section 5.1, highlighted in blue).\\n\\n\\nQ2-3. Motivating real-world example\\n\\nA2-3. \\nUnfortunately, there is no such public poisoned dataset in the context of fairness training. We believe this is because: 1) the fair-&-robust training of our focus is a new topic; and 2) no adversary would explicitly claim that s/he poisoned a dataset. \\n\\nWe agree with Reviewer 1 that FR-GAN can be generalized to a setting where labels are subjective instead of being clean or poisoned. Here the dataset without the undesirable human biases becomes the \\\"clean\\\" validation set. Inspired by this idea, we would like to propose a new method for constructing a real dataset. Suppose that we are indeed performing loan decisions where there is a high chance of human bias (i.e., poisoning) due to the high workload. We can construct a clean validation set by selecting a few loans, assigning more evaluators, and taking a majority vote of the evaluations to minimize the bias. 
While employing more evaluators can be expensive, constructing a small validation set is sufficient for robust training, which makes this method practical. We reflected these points in our revision (Section 6, highlighted in blue).\\n\\n\\nQ2-4. Minor comment\\n\\nA2-4. \\nWe rephrased \\\"so that the model accuracy is reduced the most\\\" in Section 3 to \\\"To generate poisoned data, we measure accuracy degradation for a flipping of each Z. We then choose the top-10% of the Z values with the highest degradation and flip them.\\\" (Section 3, highlighted in blue). Thanks for pointing this out.\"}",
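For concreteness, the label-flipping poisoning quoted in A2-4 can be sketched in a few lines. The helper below is our illustrative per-example variant (the response measures degradation per value of the sensitive attribute Z, which may differ), and the retrain-per-candidate loop is written for clarity rather than efficiency; `poison_by_flipping` and its defaults are our own names, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def poison_by_flipping(X, y, X_val, y_val, frac=0.10):
    # Flip the training labels whose individual flip degrades validation
    # accuracy the most (illustrative per-example variant of A2-4's procedure).
    base_acc = LogisticRegression(max_iter=1000).fit(X, y).score(X_val, y_val)
    degradation = np.empty(len(y))
    for i in range(len(y)):                      # retrain once per candidate flip
        y_try = y.copy()
        y_try[i] = 1 - y_try[i]
        acc = LogisticRegression(max_iter=1000).fit(X, y_try).score(X_val, y_val)
        degradation[i] = base_acc - acc
    worst = np.argsort(-degradation)[: int(frac * len(y))]
    y_poisoned = y.copy()
    y_poisoned[worst] = 1 - y_poisoned[worst]    # flip the top-10% most damaging
    return y_poisoned
```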
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thanks for the insightful comments.\\n\\nQ1-1. Motivation and Contributions\\n\\nA1-1. \\nWe agree that the fairness part of FR-GAN is similar to Adversarial Debiasing (AD). However, we believe that given that real data would increasingly become both biased and poisoned (this is what we expect in the big data era - see the next paragraph for details), our main contribution of providing an integrated solution for fair and robust training is likely to become important in the near future. \\n\\nWe contend that supporting robust training is just as critical as fair training. Dataset searching is becoming mainstream as demonstrated by Google Dataset Search (Goods) [1] and its public version for searching scientific datasets [2]. While data lakes within companies may consist of refined datasets, datasets in the public are easy to poison. In our experiments, we could easily poison the Adult and COMPAS datasets using simple label flipping techniques. And anyone can poison public datasets using attacks in the literature and share them. We thus believe that it is essential to address both bias and poisoning as a preventive measure.\\n\\nRegarding technical contributions, we strengthen the theoretical results of AD using information theory, and this motivates us to propose a novel robust training discriminator. We agree that the key insight of using adversarial training to minimize mutual information between the prediction and sensitive attribute is the same for FR-GAN and AD. Although, the end algorithms of FR-GAN and AD are also similar, our paper plays a role in providing systematic methodology not only for various fairness metrics, but also for the design of robust training. As far as we know, no other validation-set-based approach (including Ren et.al. 2018) leverages the idea of adversarial training. \\n\\nWe reflected all of the points above in our revision (Sections 1 and 2, highlighted in blue).\\n\\n[1] Goods: Organizing Google's Datasets, ACM SIGMOD 2017.\\n[2] Google Dataset Search: Building a search engine for datasets in an open Web ecosystem, WWW 2019.\\n\\n\\nQ1-2. Experiments\\n\\nA1-2. \\nAs per your great comment, we now performed more detailed comparisons with AD. In particular, we generated confusion matrices both for disparate impact and equalized odds. As a result, we find FR-GAN performs better than AD under such measures, perhaps due to our robustness discriminator component (please see Appendix A.5 in the revised version of our paper, highlighted in blue). The robustness discriminator successfully ignores the poisoned distribution in the training data, and FR-GAN's TPR is higher than AD with sanitization. We may further refine our results later.\\n\\n\\nQ1-3. Real-world examples\\n\\nA1-3. \\nUnfortunately, there is no such public poisoned dataset in the context of fairness training. We believe this is because: 1) the fair-&-robust training of our focus is a new topic; and 2) no adversary would explicitly claim that s/he poisoned a dataset. \\n\\nWe fully agree that FR-GAN can be generalized to a setting where labels are subjective instead of being clean or poisoned. Here the dataset without the undesirable human biases becomes the \\\"clean\\\" validation set. Inspired by this idea, we would like to propose a new method for constructing a real dataset. Suppose that we are indeed performing loan decisions where there is a high chance of human bias (i.e., poisoning) due to the high workload. 
We can construct a clean validation set by selecting a few loans, assigning more evaluators, and taking a majority vote of the evaluations to minimize the bias. While employing more evaluators can be expensive, constructing a small validation set is sufficient for robust training, which makes this method practical. \\n\\nIn general, given two datasets with different distributions where one of them is desirable, FR-GAN can train robustly against the other distribution. \\n\\nWe reflected all the points above in our revision (Section 6, highlighted in blue).\\n\\n\\nQ1-4. Related work\\n\\nA1-4. \\nWe thank the reviewers for introducing papers on individual fairness. We were actually aware of this literature, but since FR-GAN is solving a new problem, our first step was to use prominent measures for group fairness, just like the AD paper. We mentioned all the individual fairness measures in our revision (Section 2, highlighted in blue) and will investigate them in a future work.\"}",
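The majority-vote construction of a clean validation set that both responses describe amounts to a small aggregation step. A minimal sketch, assuming a hypothetical mapping from example ids to per-evaluator labels:

```python
from collections import Counter

def clean_validation_set(annotated):
    # Keep only examples whose evaluators reach a strict majority;
    # `annotated` maps example id -> list of independent labels.
    val = {}
    for example_id, votes in annotated.items():
        label, count = Counter(votes).most_common(1)[0]
        if count > len(votes) / 2:
            val[example_id] = label
    return val

# e.g. clean_validation_set({"loan_17": [1, 1, 0], "loan_42": [0, 1]})
# -> {"loan_17": 1}   (loan_42 has no strict majority and is dropped)
```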
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes to use a GAN style approach for training a classifier that is robust to data poisoning and can achieve a pre-specified notion of group fairness.\\n\\nThe contribution of this paper is incremental in the context of prior Adversarial Debasing (AD) approach using essentially same GAN for group fairness and prior work presenting ideas of utilizing clean validation data to defend against data poisoning. This paper is proposing to add an additional discriminator to the AD approach that distinguishes training data and clean validation data. If the training data is poisoned, such distinguishment may be possible and maximizing loss of this discriminator can help to robustify against poisoned samples.\\n\\nExperimental results are insufficient to argue improvement over the AD. There are no AD results in the equalized odds Adult experiment in the supplement. I recommend more detailed comparison against the AD method (including results showing confusion matrices). Also note that AD, as presented in the original paper, is optimizing for demographic parity, but can also be adjusted to other group fairness metrics. Finally, in the context of Adult dataset, it is important to also report performance metrics such as balanced TPR since the labels are quite imbalanced.\\n\\nAre there any real data examples where poisoning does not need to be introduced artificially and the proposed method helps to improve the fairness properties? I think an interesting direction could be to consider data where labels are subjective. For example, a dataset on loan decisions can be naturally \\\"poisoned\\\" with human biases, i.e. information that someone did not receive the loan may be due to an error or bias of a human in charge of the decision making.\\n\\nLastly I think that the discussion of the prior related work on fairness is incomplete. This paper exclusively covers group fairness, which indeed has been shown to have some disadvantages. For example, prior work [1] has shown that some group fairness notions can not be satisfied simultaneously in certain cases. In this regard it is also important to report multiple group fairness metrics simultaneously in the experiments. The other fairness definition, that has not been mentioned in this paper, is individual fairness [2]. It has legal and intuitive interpretations. Multiple recent papers have explored the direction of individual fairness [3,4,5,6].\\n\\n[1] Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores.\\n[2] Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012, January). Fairness through awareness.\\n[3] Garg, S., Perot, V., Limtiaco, N., Taly, A., Chi, E. H., & Beutel, A. (2019, January). Counterfactual fairness in text classification through robustness.\\n[4] Yurochkin, M., Bower, A., & Sun, Y. (2019). Learning fair predictors with Sensitive Subspace Robustness.\\n[5] Kearns, M., Roth, A., & Sharifi-Malvajerdi, S. (2019). Average Individual Fairness: Algorithms, Generalization and Experiments.\\n[6] Jung, C., Kearns, M., Neel, S., Roth, A., Stapleton, L., & Wu, Z. S. (2019). Eliciting and Enforcing Subjective Individual Fairness.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper combines adversarial fair training with adversarial robust training. The basic idea is that a classifier is combined with two adversaries: one tries to predict the sensitive attribute $Z$ from the output of the classifier (essentially the approach by Edwards&Storkey 2015) and the other adversary tries to recognize if a label was predicted or is from a clean hold-out dataset. The latter is intended to harden the classifier against data-poisoning of the training set.\\n\\n---\\n\\nThe paper is clearly written and technically sound.\\n\\nThe fairness aspect of the proposed method is fairly standard and not very novel (going back to 2015). The robustness aspect is an interesting addition and seems novel, but I'm not sure if it's enough to get the paper accepted. If I understand it correctly, FR-GAN without the \\\"R\\\" part should just be equivalent to Adversarial Debiasing (Zhang et al., 2018). And if there is no data-poisoning, then the \\\"R\\\" part doesn't have any effect.\\n\\nThe fact that this is the first fairness-related method that additionally deals with robustness, makes it also difficult to judge the performance of the method. I would wish for a more appropriate baseline; one that makes use of the clean validation set somehow.\\n\\nThere might be synergistic effects where the robustness aspects helps the fairness aspect but this comes at the cost of needing a clean validation set (and it only matters with poisoned data).\\n\\nWhat is also missing is a motivating real-world example. When would you encounter flipped labels in the training set, but also have access to a clean validation set?\", \"minor_comments\": [\"I did not understand what was meant by the phrase \\\"so that the model accuracy is reduced the most.\\\" in the first paragraph of section 3\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a new method for training a classifier that simultaneously optimizes for a fairness criterion and robustness to data poisoning. The method is shown to increase measures of fairness and reduce inaccuracy on poisoned data relative to classifiers that only consider accuracy or fairness. Extensive results are shown for both synthetic and real benchmark data sets.\", \"i_would_lean_to_reject_for_the_following_reasons\": \"1) the problem is not well-motivated. I would like a more clear example of some problem with sensitive attributes in which the data is publicly available and the providers of the data are motivated to falsify it. 2) the contribution is very simple and the individual pieces do not seem to be significant contributions. In particular, the use of GANs for fairness is previously done, and the use of the GAN for robustness here seems too simple to be broadly useful 3) the results are less convincing than they might otherwise be because none of the competing methods tested make use of a clean validation set 4) the paper is somewhat unpolished. I find the results difficult to read, although the arrows are helpful, and it is not clear to me whether these results are on a test set or the training set.\", \"lack_of_convincing_tests_for_robustness\": \"It is disappointing that FR-GAN does not offer any promises to be robust in general. Despite access to a clean validation set, the classifier is trained only to ignore the type of data poisoning that exists in the training set. If the test set were out of distribution in a different way relative to the training set, I see no reason to believe FR-GAN would protect against this. Furthermore, because it is not stated that these are test set results, I am not certain that they are not training set results, in which case some performance may be due to overfitting.\", \"minor_notes\": \"It would be nice for comparison if the charts had the same axes throughout.\\n\\nWhat are the numbers of nodes used in the hidden layers?\"}"
]
} |
Sye0XkBKvS | SNODE: Spectral Discretization of Neural ODEs for System Identification | [
"Alessio Quaglino",
"Marco Gallieri",
"Jonathan Masci",
"Jan Koutník"
] | This paper proposes the use of spectral element methods \citep{canuto_spectral_1988} for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; \citealp{Chen2018NeuralOD}) for system identification. This is achieved by expressing their dynamics as a truncated series of Legendre polynomials. The series coefficients, as well as the network weights, are computed by minimizing the weighted sum of the loss function and the violation of the ODE-Net dynamics. The problem is solved by coordinate descent that alternately minimizes, with respect to the coefficients and the weights, two unconstrained sub-problems using standard backpropagation and gradient methods. The resulting optimization scheme is fully time-parallel and results in a low memory footprint. Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique \citep{Chen2018NeuralOD}, on training surrogate models of small and medium-scale dynamical systems shows that it is at least one order of magnitude faster at reaching a comparable value of the loss function. The corresponding testing MSE is one order of magnitude smaller as well, suggesting generalization capabilities increase. | [
"Recurrent neural networks",
"system identification",
"neural ODEs"
] | Accept (Poster) | https://openreview.net/pdf?id=Sye0XkBKvS | https://openreview.net/forum?id=Sye0XkBKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BKh_GvQpO",
"HkxrIqo9iH",
"Bke4X3zmiH",
"S1lJdRTWiS",
"rJeDrA6-sB",
"rJxNmApWsH",
"rJgELaTbiH",
"BklwF3abjr",
"SkelTPIi5B",
"H1lLFOChKS",
"BJg92wGiYS",
"ryeSS-kjwr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798728468,
1573726797375,
1573231643608,
1573146215261,
1573146175214,
1573146139564,
1573145931658,
1573145727294,
1572722615682,
1571772542070,
1571657650081,
1569546557032
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1637/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1637/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1637/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1637/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1637/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1637/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1637/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1637/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1637/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1637/AnonReviewer3"
],
[
"~Yiping_Lu1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This work proposes using spectral element methods to speed up training of ODE Networks for system identification. The authors utilize truncated series of Legendre polynomials to analyze the dynamics and then conduct experiments that shows their proposed scheme achieves an order of magnitude improvement in training speed compared to baseline methods. Reviewers raised some concerns (e.g. empirical comparison against adjoint methods in the multi-agent example) or asked for clarifications (e.g. details of time sampling of the data). The authors adequately addressed most of these concerns via rebuttal response as well as revising the initial submission. At the end, all reviewers recommended for accept based on contributions of this work on improving training speed of ODE Networks. R4 hopes that some of the additional concerns that are not yet reflected in the current revision, be addressed in the camera ready version.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Final revision uploaded\", \"comment\": \"We have uploaded the final revision, containing the amendments suggested by the Reviewers and the results of the additional experiments using the adjoint method for the multi-agent example. Further details concerning all of the points raised in each review can be found in the comments that we posted last week below.\"}",
"{\"title\": \"Revision 1 uploaded\", \"comment\": \"We have uploaded the first revision, containing most of discussed amendments from the Reviews. We are running few more experiments with the adjoint method baseline on the multi-agent example. Initial experiments are already included in this revision.\"}",
"{\"title\": \"Reply to Review #4 - part 2\", \"comment\": \"1) Among the possible choices presented in ref. [Canuto et al.], we selected the combination of Legendre polynomials and Gauss-Lobatto points for ease of implementation. In particular, the Gauss-Lobatto nodes include the point , where the initial condition is specified. This makes is easier to directly replace the ODE at with the initial condition, rather than adding one equation to the residual. Moreover, Lagrange polynomials provide a simple recursive formula for the computation of derivatives and produce better conditioning of the system matrices compared with Lagrange polynomials. The drawback is that the resulting mass matrix M is non-diagonal. However, for our case M is small and its inverse can be stored directly. We also remark that any other combination of polynomials and quadrature methods is possible in our framework. We will make this clearer in the paper.\\n2) There are two parts of the algorithms where quadrature points are needed: for the computation of the residual and of the loss. For the residual, we always use the Gauss-Lobatto points. For the loss, which requires evaluating the data at the quadrature points, we have compared equally-spaced and Gauss-Lobatto points in the high-data regime. Since the results are very similar, as expected for a smooth integrand, Gauss-Lobatto provide a much more efficient option. In the low/sparse regime, we have used randomly sampled points for our methods and equally-spaced for the baselines. We will make this clearer in the paper.\\n3) See point 1. of the major points above.\\n4) In equation (8), the state variables are v and . The parameter u corresponds to the representation of the input data (e.g. torques trajectories represented through Legendre polynomials computed in the initialization phase). The forward model then solves for v and , given u. In the experiments, however, we use u(t) as per equation (5), with the following differences: (i) we used a truncated Fourier basis instead of Legendre polynomials and (ii) rather than optimizing for the coefficients , we generate them using a random distribution and keep them fixed during training. This is written in appendix A and B, but we will make it clearer in the main text. We use Fourier basis in order to generate the data for the example. Therefore, in this case there was no need to train an additional polynomial for . In the general case, one would train a representation of u given the input data, as written in equation (5).\\n5) We recognize that in the unnumbered equation below (8), the networks are defined with the same notation as for the simulation generating the data (in appendix). To clarify, we train fully-connected networks using and features for (as written in the paper below the unnumbered equation). The output of has dimension 9, which is then reshaped to the 3x3 tensor . The same holds true for the other two tensors., with the only difference being that the bias is deactivated for the networks and . We will make sure that the notation is different for the surrogate model and for the simulation. We propose to use for the surrogate.\\n6) See point 4. above.\\n7) The choice for the examples comes from the need in modeling and control to obtain reliable forecast for both the short- and long-term dynamics. This is a particularly sensitive issue in robotics and automation as errors can lead to instabilities and unsafe decisions.\"}",
"{\"title\": \"Reply to Review #2 - part 2\", \"comment\": \"Concerning the performance of Neural ODEs vs. RNNs on time series forecasting, this is reported in Section 5.1 of [Chen et. al, 2018] (see Table 2). If required by the Reviewer, we will include the citation right after the claim.\"}",
"{\"title\": \"Reply to Review #4 - part 1\", \"comment\": \"We thank the Reviewer for the positive feedback.\", \"major_points\": \"1) Following the Reviewer's comments, we performed further comparisons against the adjoint method. In particular, in the low-data regime for the single-agent example the generalization results and trajectories are identical to backpropagation. We are also running the high-data regime for the multi-agent example. We will add the full results in the table of the final version of the paper. If the Reviewer deems it necessary, we could add the corresponding figures to the appendix, although they are identical to backpropagation. For the hybrid method [Gholami et al, 2019], we recognize it could offer an improvement in stability and accuracy also for these examples with respect to the adjoint and backprop methods. We will make this clearer in Section 2. We have not investigated this method experimentally since its computational cost is the same as that of the adjoint method and our main focus was on speed and parallelization.\\n2) We agree with the Reviewer that the choice of initial trajectory plays a critical role. We note that however this is in general true for all optimization problems. For the proposed algorithm, there are several possible options to do this. A trivial approach would be to repeat the known initial condition $x_0$ over the entire time interval. A second approach would be to integrate once the ODE in time using the initial network weights. This choice would yield a zero residual but may produce weights that are similar to standard backpropagation. A third and preferred approach is to perform a fit of the data according to some user-prescribed criterion. This is the approach that we used. When weights exist such that the ODE can follow this trajectory, the delta method is obtained. The role of the alpha method is therefore to slightly correct the desired trajectory so that it can be approximated by the ODE.\"}",
"{\"title\": \"Reply to Review #2\", \"comment\": \"We thank the Reviewer for the positive feedback. With regards to the main concern raised, namely the regular / irregular time sampling of the data, we apologize that we have not made clear that the results in the low-data / sparse regime were performed by randomly sampling the input data, using a uniform distribution over the entire time interval [0,T] for our methods. For the baselines, we used equally-spaced points. We will make this clearer in the final version.\"}",
"{\"title\": \"Reply to Review #3\", \"comment\": \"We thank the Reviewer for the very positive feedback. With regard to the question concerning X, it is a function space, typically a Sobolev or a Hilbert space, where the solution x(t) is sought. For our method, we would like it to be a Hilbert space H^p, where p is the order of the polynomial, in order to maximize the accuracy of the representation (as also discussed in Section 8). We will make it clearer after equation (2).\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This work extends prior work on Neural ODEs. From what I understand, the Neural ODE approach builds off the idea of representing the sequence of transformations of a hidden state (in residual nets, RNNs, etc.) as an ODE parameterized by a neural network. In the original paper, the network is optimized via gradients calculated by the adjoint sensitivity method. This paper puts forth the following contributions: a compact representation of the state transition function as a combination of Legendre polynomials, and an optimization scheme whose error is tied to the polynomial order and whose structure lends itself easily to parallelization. The authors also demonstrate their model on an experiment on planar vehicle dynamics, in which their model is shown to have improved predictive quality and efficiency.\\n\\nI am inclined to accept this paper, due to its various contributions on speed and performance, with the caveat for some clarifications on how the experiments were conducted and compared. Given these clarifications in an author response, I would be willing to increase the score.\", \"pros_of_the_paper\": \"1) Trajectory predictions on the two planar vehicle dynamics experiments was impressive.\\n\\t2) The proposed representation of the state transition dynamics is indeed more memory-efficient, and its approximation error is modulatable by the hyperparameter order \\\\p.\\n\\t3) Speedup due to parallelization is substantial.\", \"cons_of_the_paper\": \"1) The experiments did not display a proper comparison against the hybrid method mentioned in Section 1. The experiments also did not compare against adjoint methods in the multi-agent example, or in the low-data regime for the single-agent example. Instead, the experiments mostly highlighted the problems with direct backpropagation through the ODE solver, which is already well-known to have issues in robustness and stability. While it is nice to have empirical results that showcase this, a more comprehensive comparison against current adjoint methods would be more interesting, especially in the multi-agent example.\\n\\t2) It slightly detracts from the cleanliness of the story that we must first create an initial trajectory, before performing our coordinate descent.\", \"questions_and_points_of_confusion\": \"1) What intuits the choices of the Legendre polynomial as your set of basis functions, and the Gauss-Lobatto scheme to select collocation points, instead of alternative candidates?\\n\\t2) In Section 5, it was mentioned that \\\"100 equally-spaced points produce a comparable result\\\" to the Gauss-Lobatto quadrature points. Furthermore, it was mentioned that these evenly-spaced collocation points were used for experiments - was this the case for all experiments? If so, then what purpose does Gauss-Lobatto play in the paper? \\n\\t3) In Figure 1, were the DoPr5 and Euler plots shown for the backpropagation or adjoint method? 
If it was the backpropagation method, the plots for the adjoint method would be highly interesting to show.\\n\\t4) From what I understand from Section 4, when we perform step 0 and step 2 to update the trajectory, we simply optimize the coefficients \\\\x_i and \\\\u_i directly to minimize the current objective as both \\\\x(t) and \\\\u(t) are represented as a combination of Legendre polynomials. However, in Equation 8, for the planar vehicle dynamics, \\\\u is deterministically generated from the current states and network weights. It makes sense to add structure to \\\\u, as we need a way to make sure the inputs can indeed generate the trajectory of \\\\x. Does this mean the spectral method is not performed for \\\\u as was detailed in Equation 5?\\n\\t5) I am confused about how the gray-box models are built in Section 5. It states that \\\"For \\\\f_J, sin(\\\\phi) and cos(\\\\phi) are used as input features, where \\\\phi is the vehicle orientation.\\\" Does that mean that only \\\\phi from \\\\eta is passed in as an input, both sin(\\\\phi) and cos(\\\\phi) are passed in as inputs, or the entire current \\\\eta is passed in as input? And is the output a new \\\\eta, which is then structured into the 3x3 matrix \\\\J(\\\\eta) in Appendix A? Or does it simply output the matrix directly for the given value of \\\\phi. I am confused because it is written that \\\\J(\\\\eta) is equal to \\\\f_J(\\\\eta;\\\\theta_J), but then in the appendix it is written that \\\\J(\\\\eta) is equal to the matrix - so where is the network? This confusion extends to the other gray-box models \\\\C(v) and \\\\d(v).\\n\\t6) It is written in Equation 3 that the residual also takes in the input \\\\u(t). However, in the experiments, namely Equation 8, it does not appear like \\\\u is used to calculate the residual at all.\\n\\t7) What motivated the use of the planar vehicle dynamics experiment to showcase your model? Were there other baselines or benchmarks you considered or attempted?\"}",
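The quadrature question in point 2 (and the authors' answer that a smooth integrand makes the two choices comparable, with Gauss points far more efficient) can be checked numerically. The snippet uses Gauss-Legendre nodes via numpy's `leggauss` for brevity; Gauss-Lobatto behaves analogously, and the integrand is a smooth stand-in of our choosing:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

f = lambda t: np.exp(-t) * np.sin(4.0 * t)   # smooth stand-in integrand on [-1, 1]

t, w = leggauss(10)                          # 10 Gauss quadrature points
gauss10 = np.dot(w, f(t))

grid = np.linspace(-1.0, 1.0, 100)           # 100 equally-spaced points
trap100 = np.trapz(f(grid), grid)

t_ref, w_ref = leggauss(60)                  # high-order reference value
ref = np.dot(w_ref, f(t_ref))
print(abs(gauss10 - ref), abs(trap100 - ref))  # 10 Gauss points already match
```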
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work proposes a new approach for the evolution of Neural ODEs for particle systems. The authors suggest to replace the traditional backpropagation through ODEs or the recent adjoint method for backpropagation and instead solve the problem as an alternating optimization scheme. In particular it is suggested that using spectral methods (Legendre's polynomials) first compute a minimizer of the trajectory (optimizing a trajectory x(t)) given the Loss/Langrangian (that is based on the data times of training). After an initial trajectory is computed, follow an alternating minimization, where in the first step, minimize the discrepancy between the network (that describes the time change of the ODE) and the time derivative of the current trajectory. In the second step, re-compute the trajectory with the new updated network and modified lagrangian to update the network parameters again. This two step optimization is applied back and forth until a residual condition is reached, i.e. the loss is small. The network's parameters in the two stages are optimized via SGD and ADAM respectively. Further, to perform the required numerical integration in each step the authors apply Gaussian quadrature and turn the overall optimization scheme into the repeating application of a finite number of gradient updates in an alternating fashion (as has become popular in recent Deep Learning approaches e.g. GANs). The authors test the new method on different particle systems' trajectories and observe a numerical speed-up and improved accuracy as compared with previous approaches.\\n\\nAdmittingly, I am not an expert in Neural ODEs and am not too familiar with the literature investigating neural networks for modelling differential equations and control. The paper, however is well written and the presentation is very nice in my opinion. It is hard for me to judge how original some of the ideas presented but from my perspective they seem quite solid.\\n\\nOverall, I am currently voting for weak accept, for solid presentation and content, but with with the following problems:\\n\\nIt seems that Neural ODEs are most beneficial, over the alternatives, when used with irregular data times and or sparse number of time points. This is a point that is mostly missing in the discussion and the experimental section, from my understanding the experimental section uses equally spaced intervals. If this is the case, I do not find them sufficient and I would hope to see how does this method perform when trained with sparser and or irregular time points. Currently the method presented is illustrated as an effective algorithm for noisy ODE solvers. For this reason, I am also wondering whether this is also the right venue to present this interesting work.\", \"other_small_things\": \"In the second sentence\\n\\\"ODE-Nets have been shown to provide superior performance with respect to classic\\nRNNs on time series forecasting with sparse training data.\\\"\\nIt could be nice to provide a reference illustrating the improved performance of Neural ODEs over RNNS on a time series forecasting task.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"TITLE\", \"snode\": \"Spectral Discretization of Neural ODEs for System Identification\\n\\nREVIEW SUMMARY\\nExceptionally clear and well written paper that demonstrates a strong improvement on an important problem. Novel, timely, and of broad interest.\\n\\nPAPER SUMMARY\\nThe paper presents a new method for estimating parameters in a neural ODE based on a polynomial representation of trajectories and alternating updates of trajectories and neural net parameters.\\n\\nCLARITY\\nThe presentation is exceptionally clear.\\n\\nORIGINALITY\\nTo my knowledge the proposed method is novel.\\n\\nSIGNIFICANCE\\nThe paper demonstrates a strong and practically significant improvement in learning, and I expect the results will be of interest to everybody working in this area.\\n\\nFURTHER COMMENTS\\n\\nIn eq. 2, \\\"X\\\" (blackboard typeface) is not defined. At this point, it is not clear how x(t) is represented.\\n\\n\\\"trades-off\\\" -> trades off\"}",
"{\"comment\": \"Congrats on your work and I really enjoy reading it.\\nI'm writting the comment to introduce some of our related works on discretization Neural ODEs and system identification\\nLu Y, Zhong A, Li Q, et al. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations[J]. arXiv preprint arXiv:1710.10121, 2017.\\nLong Z, Lu Y, Ma X, et al. PDE-net: Learning PDEs from data[J]. arXiv preprint arXiv:1710.09668, 2017.\\n\\n\\nAlso these early papers also aim to connect ODEs and deep learning\\nWeinan, E. \\\"A proposal on machine learning via dynamical systems.\\\" Communications in Mathematics and Statistics 5.1 (2017): 1-11.\\nLi, Qianxiao, et al. \\\"Maximum principle based algorithms for deep learning.\\\" The Journal of Machine Learning Research 18.1 (2017): 5998-6026.\\nWeinan, E., Jiequn Han, and Qianxiao Li. \\\"A mean-field optimal control formulation of deep learning.\\\" Research in the Mathematical Sciences 6.1 (2019): 10.\\nHaber, Eldad, and Lars Ruthotto. \\\"Stable architectures for deep neural networks.\\\" Inverse Problems 34.1 (2017): 014004.\\nChang, Bo, et al. \\\"Multi-level residual networks from dynamical systems view.\\\" arXiv preprint arXiv:1710.10348 (2017).\\nRuthotto, Lars, and Eldad Haber. \\\"Deep neural networks motivated by partial differential equations.\\\" arXiv preprint arXiv:1804.04272 (2018).\\n\\n\\nAnd these works are working on the conjugate method\\nLi, Qianxiao, et al. \\\"Maximum principle based algorithms for deep learning.\\\" The Journal of Machine Learning Research 18.1 (2017): 5998-6026.\\nDinghuai Zhang*, Tianyuan Zhang*,Yiping Lu*, Zhanxing Zhu, Bin Dong. \\\"You Only Propagate Once: Painless Adversarial Training Using Maximal Principle.\\\" (*equal contribution) 33rd Annual Conference on Neural Information Processing Systems 2019(NeurIPS2019).\\nLi Q, Hao S. An optimal control approach to deep learning and applications to discrete-weight neural networks[J]. arXiv preprint arXiv:1803.01299, 2018.\", \"title\": \"Related works\"}"
]
} |
BJl07ySKvS | Guiding Program Synthesis by Learning to Generate Examples | [
"Larissa Laich",
"Pavol Bielik",
"Martin Vechev"
] | A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well. This can be difficult to achieve as the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge via an iterative approach that finds ambiguities in the provided specification and learns to resolve these by generating additional input-output examples. The main insight is to reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. As a result, to train our probabilistic models, we can take advantage of the large amounts of data in the form of program outputs, which are often much easier to obtain than the corresponding ground-truth programs. | [
"program synthesis",
"programming by examples"
] | Accept (Poster) | https://openreview.net/pdf?id=BJl07ySKvS | https://openreview.net/forum?id=BJl07ySKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"xS_-Mv8nvg",
"SJgjq4tsjS",
"rJl9NnOjjS",
"ByxLP-NioH",
"Hklcgt8KiS",
"Bkl2z1Htir",
"rJlaeTn_sr",
"rJljp4f4jB",
"Skg_xQGNjr",
"B1eTI1z4oB",
"SJegExwycB",
"SJlNo_lk5B",
"Skgbh4Untr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728437,
1573782674998,
1573780530182,
1573761373649,
1573640434142,
1573633812466,
1573600500595,
1573295298616,
1573294831593,
1573293908639,
1571938343596,
1571911835901,
1571738792828
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1636/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1636/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1636/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1636/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1636/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1636/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1636/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1636/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1636/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1636/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1636/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1636/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper consider the problem of program induction from a small dataset of input-output pairs; the small amount of available data results a large set of valid candidate programs.\\nThe authors propose to train an neural oracle by unsupervised learning on the given data, and synthesizing new pairs to augment the given data, therefore reducing the set of admissible programs.\\nThis is reminiscent of data augmentation schemes, eg elastic transforms for image data.\\n\\nThe reviewers appreciate the simplicity and effectiveness of this approach, as demonstrated on an android UI dataset.\\nThe authors successfully addressed most negative points raised by the reviewers in the rebuttal, except the lack of experimental validating on other datasets.\\n\\nI recommend to accept this paper, based on reviews and my own reading.\\nI think the manuscript could be further improved by more explicitly discussing (early in the paper) the intuition why the authors think this approach is sensible:\\nThe additional information for more successfully infering the correct program has to come from somewhere; as no new information is eg given by a human oracle, it was injected by the choice of prior over neural oracles.\\nIt is essential that the paper discuss this.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Evaluating on different tasks\", \"comment\": \"I appreciate the authors for pointing out the difficulty of obtaining a suitable dataset for program synthesis. However, I am entirely not convinced by this list. First of all, while many papers here use a single dataset, most of the papers use a commonly acceptable dataset such as Karel (not really a fixed dataset but a domain), FlashFillTest, and AlgoLisp. Yet, this submission uses a different dataset that has not been explored before, which makes it harder to compare with other methods. On the other hand, I would like to note that not all the common practices shared by many publications are good, and certainly should not prevent the authors from further improving this submission. Therefore, I still consider this as a weakness but do appreciate the overall efforts.\"}",
"{\"title\": \"Evaluating on different tasks\", \"comment\": \"We agree that it would be interesting to evaluate our approach on a different program synthesis task. However, this requires a non-trivial effort that often includes replicating the original work, obtaining suitable dataset for training and testing as well as designing a suitable neural architecture for the given domain at hand. For example, to obtain the Rico dataset (D_U dataset used in our work), Deka et.al. had to develop a crowdsourcing platform where workers spent 2, 450 hours over five months and got paid ~$20, 000 in compensation. Similarly, in our work it took over a month just to obtain the D_S+ dataset (which extends the original Rico dataset).\\n\\nNote that these challenges are not specific to our work as the majority of prior works also focus on a single task. We include a representative list of works below:\\n\\nMenon et. al. 2013, \\u201cA Machine Learning Framework for Programming by Example\\u201d, \\n - text processing task: handcrafted samples + obtained from Microsoft Excel help forums\\n\\nDevlin et. al., 2017 \\u201cRobustFill: Neural Program Learning under Noisy I/O\\u201d, \\n - string transformations (FlashFillTest dataset)\\n\\nParisotto et. al. 2017, \\u201cNeuro-Symbolic Program Synthesis\\u201d, \\n - string transformations (FlashFill dataset)\\n\\nEllis et. al., 2018, \\u201cLearning to Infer Graphics Programs from Hand-Drawn Images\\u201d, \\n - graphics programs\\n\\nLiang et. al, 2010, \\u201cLearning Programs: A Hierarchical Bayesian Approach\\u201d, \\n - text editing domain, illustration on a simple arithmetic domain\\n\\nBalog et. al., 2017, \\u201cDeepCoder: Learning to Write Programs\\u201d, \\n - restricted DSL for functional list processing\\n\\nShin et. al., 2018, \\u201cImproving Neural Program Synthesis with Inferred Execution Traces\\u201d, \\n - Karel domain\\n\\nBunel et. al. 2018, \\u201cLeveraging Grammar and Reinforcement Learning for Neural Program Synthesis\\u201d, \\n - Karel domain\\n\\nSingh, 2016, \\u201cBlinkFill: Semi-supervised Programming By Example for Syntactic String Transformations\\u201d, \\n - string transformations (FlashFillTest dataset)\\n\\nSingh et. al., 2015, \\u201cPredicting a Correct Program in Programming By Example\\u201d: \\n - string transformations (FlashFillTest dataset)\\n\\nPolosukhin, Skidanov, 2018, \\u201cNeural program search: Solving programming tasks from description and examples.\\u201d, \\n - AlgoLisp\", \"we_also_include_small_number_of_works_that_do_evaluate_on_multiple_tasks\": [\"Ellis et. al. 2017, \\u201cLearning to Learn Programs from Examples: Going Beyond Program Structure\\u201d):\", \"string transformation,\", \"text extraction\", \"Nye et. al. 2019, \\u201cLearning to Infer Program Sketches\\u201d:\", \"list processing problems,\", \"string transformations,\", \"AlgoLisp\"]}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the response and the clarifications!\"}",
"{\"title\": \"Re: Response to Reviewer #3\", \"comment\": \"I appreciate the effort the authors put into revising the paper. The revision addresses my comments on (1) evaluating the performance of different neural oracle models, (2) presenting intermediate results, and (3) missing related works.\\n\\n I have a mixed feeling about accepting this paper. I like the intuitive idea of iteratively alternating between producing programs with an existing program synthesizer and augmenting examples to disambiguate possible programs with a neural oracle that learns to select correct outputs. However, I am still not completely satisfied with the fact that this paper only conducts experiments on a single dataset. I would like to see if the proposed framework can work on different program synthesis tasks. I decided not to update my score at this moment.\"}",
"{\"title\": \"Discussion of Review#1\", \"comment\": \"Thank you for your clarifications in the paper, these satisfy me to update my score. Regarding my question on the search for a distinguishing input, I think it would be helpful to expand the description in Sect. 5 to explicitly state that the search for a distinguishing input is _not_ required in this setting.\", \"minor_comments_in_revision\": [\"page 1 par 2: \\\"only a small number of them generalizes\\\" -> generalize\", \"page 4 par 1: \\\"user provided\\\" -> \\\"user-provided\\\"\"]}",
"{\"title\": \"Paper Revision\", \"comment\": \"Dear reviewers, we would like to thank you for all your comments and suggestions.\\nWe have updated our paper with a revision to address them. We summarize the main changes below:\\n\\n1) [Introduction] Make less general claims and explicitly say that the focus is on the domain of Android layouts:\\n\\nWe have rewritten the introduction section to remove claims of generalizability and to explicitly say that the domain considered in our work is Android layouts. We also remove the example discussing Excel spreadsheets.\\n\\n2) [Related Work] The related work now includes that approaches such as Devlin et al. 2017, Sun et al. 2018 and Parisotto et.al 2017, which propose neural architectures that encode input-output examples, might lead to further improvements to the neural models designed in our work. We have also added references to the related line of work on neural machines (e.g, Graves et. al. 2016)\\n\\n3) Incorporate various comments and suggestions throughout the paper to improve readability.\\n\\n4) [Appendix B.1] Additional evaluations of all the learned neural oracle models\\n\\n5) [Appendix B.2] Additional ablation study that investigates how well our models learn high level handcrafted properties derived from InferUI \\n\\n6) [Appendix C.3] Example and a discussion illustrating the advantage of training on the dataset D_S+ compared to D_S\\n\\n7) [Appendix C.4] Example of the individual steps performed during the synthesis \\n\\n8) [Appendix C.5] Visualization of good and bad synthesis outputs produced by the models used in our work\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for the thorough comments. Please find the answers to your questions below:\\n\\n*reproducibility*\\nGiven the clear description in the main paper and the details provided in the appendix, I believe reproducing the results is possible if the dataset is available.\", \"a\": \"We already included a comparison to the work of Devlin et.al and are happy to also include comparison to the work of Sun et. al. In our related work we tried to focus on the high level approaches used by prior works to resolve ambiguities in the input specification rather than the details of the neural architectures they use. Having said that, we do believe that it is possible to improve the performance of the neural oracle even further by using a more complex architecture, better datasets or via including additional information (e.g., encoding both the output and the synthesized programs).\", \"q\": \"Some neural program synthesis works explore a variety of mechanisms to encode examples and fuse their features, which are not mentioned in the paper [Devlin et al. in ICML 2017, Sun et al. in ICML 2018]. I believe it would be interesting to see if these methods could further improve the performance of the neural oracle.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the comments and clarifying questions.\", \"q\": \"I wasn't able to find any evaluation of a model trained on $\\\\mathcal{D}_{U}$, did I miss it in the paper?\", \"a\": \"We omitted such evaluation in the paper as in our domain the dataset $\\\\mathcal{D}_{S}$ is created synthetically from $\\\\mathcal{D}_{U}$ (by automatically generating negative samples) and results in models that are strictly better than those trained only on $\\\\mathcal{D}_{U}$.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the comments and suggestions to improve our paper.\", \"we_are_currently_preparing_a_revision_of_our_work_that_addresses_both_main_suggestions\": \"(i) to make less general claims and explicitly say the focus is on the domain of Android layouts, and (ii) to improve the readability of the paper. We will publish a list of changes, after also incorporating the feedback received from the other reviewers, once the revision is ready.\", \"q\": \"page 7 par 2: \\\"We use one screen dimension as the input specification , the second as the distinguishing input\\\" - this confused me, as the paper discussed discovering the distinguishing input (page 4, paragraph \\\"Finding a distinguishing input\\\"), whereas it sounds here like that input is manually selected.\", \"a\": \"In our evaluation we restrict the distinguishing input to be among those device sizes for which we trained a model. Since our datasets consist of three different screen sizes the choice is small. However, we could generate more than three dimensions for the training dataset and use them to train additional models. This was however not needed as three different devices are typically enough to resolve majority of ambiguities (as long as the oracle does not make mistakes).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper handles the challenge of generating generalizable programs from input-output specifications when the size of the specification can be quite limited and therefore ambiguous. When proposed candidate programs lead to divergent outputs on a new input, the paper proposes to use a learned neural oracle that can evaluate which of the outputs are most likely. The paper applies their technique to the task of synthesizing Android UI layout code from labels of components and their positions.\\n\\nThe experiments compare the method against the baseline InferUI. To summarize the results, we can see that the proposed method in the paper can perform about as well as existing hand-crafted constraints that guide the search process of the previous work, when training an oracle on the dataset with negative examples created by noising the positive examples.\\n\\nOne limitation of the method is that it would works best when there is a clear latent structure behind the outputs produced by the correct program, such as in the paper's target domain of generating UIs where there are clear aesthetic rules and design guidelines that make it possible to evaluate which output is most preferred. For other domains, it may be more important to evaluate the candidate program together with its output, which would make it similar to a re-ranking approach.\\n\\nI believe this paper presents a novel and insightful approach to creating programs from imprecise specifications. Therefore, I vote to accept the paper.\", \"some_questions_for_the_authors\": [\"How big was $\\\\mathcal{I}_i$ in the supervised dataset $\\\\mathcal{D}_{S+}$? Was it always 3?\", \"I wasn't able to find any evaluation of a model trained on $\\\\mathcal{D}_U$, did I miss it in the paper?\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"[Summary]\\nThis paper aims to improve the generalization of program synthesis, ensuring that the synthesized programs not only work on observed input/output (I/O) examples but also generalize well to assessment examples (i.e. model the real intent of the end-user). To this end, the paper proposes a framework that iteratively alternates between producing programs with an existing program synthesizer and augmenting examples to disambiguate possible programs with a neural oracle that learns to select correct outputs. Several architectures and the design of input of the neural oracle have been investigated. The experiments on Andriod layout program synthesis with an InferUI synthesizer show that the proposed framework can improve the generalization of synthesized programs. However, I find it is difficult to evaluate the effectiveness without sufficient qualitative results and the intermediate outputs (e.g. a distinguishing input and candidate outputs) of the proposed framework (see details below).\", \"significance\": \"are the results significant? 4/5\", \"novelty\": \"are the problems or approaches novel? 4/5\", \"evaluation\": \"are claims well-supported by theoretical analysis or experimental results? 3/5\", \"clarity\": \"is the paper well-organized and clearly written? 4/5\\n\\n[Strengths]\\n\\n*motivation*\\n- The motivation for improving the generalization of program synthesis by augmenting examples is convincing.\\n\\n*novelty*\\n- The idea of utilizing a neural network to select correct outputs to augment examples for disambiguating the possible programs is intuitive and convincing. This paper presents an effective way to implement this idea.\\n\\n*technical contribution*\\n- The paper investigates a set of network architectures and ways to specify the network input for learning the neural oracle. The RNN+CNN model that leverages both rendered views and features seems effective.\\n\\n*clarity*\\n- The overall writing is clear. The authors utilize figures well to illustrate the ideas. Figure 1 clearly shows the proposed framework.\\n\\n*experimental results*\\n- The presentations of the results are clear. The results demonstrate that the proposed framework can improve generalization accuracy.\\n\\n*reproducibility*\\n- Given the clear description in the main paper and the details provided in the appendix, I believe reproducing the results is possible if the dataset is available. \\n\\n[Weaknesses]\\n\\n*related work*\\nThe descriptions of the related work are not comprehensive. Some neural program synthesis works explore a variety of mechanisms to encode examples and fuse their features, which are not mentioned in the paper. [Devlin et al. in ICML 2017] investigates different attention mechanisms to sequentially encode a set of I/O examples and performs pooling to merge them. [Sun et al. in ICML 2018] proposes a doubly encoding method to capture more details of examples and merge the features using a relation network. I believe it would be interesting to see if these methods could further improve the performance of the neural oracle.\\n\\n*experiment setup*\\n- The experiments are not sufficient. While the claims look promising, the proposed method is only evaluated in only one dataset, which is not sufficiently convincing. 
I suggest the authors to also experiment the FlashFillTest dataset where string transformation programs are synthesized. \\n- A more comprehensive description of the dataset is lacking. \\n\\n*experiment results*\\n- I find it hard to judge the effectiveness of the proposed framework without seeing sufficient qualitative results. I suggest the authors randomly sample some synthesized programs (both success and failure) and present them in the paper.\\n- I believe it is important to present some examples of the given I/O pairs, initially synthesized programs (p_1), found distinguishing input (x*), candidate outputs (y), the prediction of the neural oracle (i.e. selected outputs), the augmented examples (I \\\\cup {(x*, y*)}), and finally the next synthesized program. Without this, it is very difficult to understand the performance of the proposed framework and what could go wrong. \\n\\n*ablation study: the neural oracle*\\nOnly the final performance (i.e. the program synthesis performance) is shown in the paper. I believe it would be helpful if the performance of the neural oracle was also presented. As the whole framework depends on how accurate the neural oracle can select the correct output, it is important to evaluate this. One way to show this is to simply show the performance of all the neural oracles (with different architectures) trained on D_S (the positive samples and the incorrect samples) or even D_{S+}.\\n\\nDevlin et al. \\\"RobustFill: Neural Program Learning under Noisy I/O\\\" in ICML 2017\\nSun et al. \\\"Neural Program Synthesis from Diverse Demonstration Videos\\\" in ICML 2018\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": [\"= Summary\", \"A method for a refinement loop for program synthesizers operating on input/ouput specifications is presented. The core idea is to generate several candidate solutions, execute them on several inputs, and then use a learned component to judge which of the resulting input/output pairs are most likely to be correct. This avoids having to judge the correctness of the generated programs and instead focuses on the easier task of judging the correctness of outputs. An implementation of the idea in a tool for synthesizing programs generating UIs is evaluated, showing impressive improvements over the baseline.\", \"= Strong/Weak Points\", \"The idea is surprisingly simple and applies to an important problem in program synthesis.\", \"The experiments show that the method works very well in UI-generation domain\", \"The paper repeatedly claims general applicability to program synthesizers, but is only evaluated in the specific domain of UI-generating programs. I have substantial doubts that the approach would work as well in the domains of, e.g., string manipulation, Karel, or data structure transformations. My doubts are based on the fact that there are easily generalizable rules for UIs (no overlaps, symmetry, ...), whereas other domains are less easily described. This creates a substantial gap between paper claims and empirical results.\", \"The writting is somewhat sloppy (see below), which makes it sometimes hard to understand. Names such as \\\"views\\\" are used without explanation, and it's not explained how a device is an input to a program (yes, I get what this means, but it makes in unnecessarily hard to follow the paper)\", \"= Recommendation\", \"I would ask the authors to rewrite their paper to make less general claims, but believe that the general idea of judging the correctness of a program (or policy) by evaluating it on different inputs is a powerful concept that would be of substantial value to the wider ICLR audience. Improving the readability of the paper would make me improve my rating to a full accept.\", \"= Minor Comments\", \"page 1, par \\\"Generalization challenge\\\": The second sentence here is 4 lines long and very hard to follow. Please rephrase.\", \"page 2, par 2: \\\"no large real-word datasets exists\\\" -> exist\", \"page 2, par 3: \\\"even when both optimizations of InferUI are disabled\\\": at this point, the reader doesn't know about any optimizations of InferUI.\", \"page 4, par 1: \\\"i.e., $\\\\exists p \\\\in \\\\mathcal{L}$\\\" - $\\\\mathcal{L}$ is undefined here (will be defined later on the page)\", \"page 4, par 2: \\\"Generate a candidate program $p_1 \\\\models \\\\mathcal{I}$\\\" - in step 2, there are suddenly also $p_2 \\\\ldots p_n$, which are never explicitly generated. Either adapt this step, or explicitly generate them in step 2 based on the distinguishing input\", \"page 7 par 2: \\\"We use one screen dimension as the input specification $\\\\mathcal{I}$, the second as the distinguishing input\\\" - this confused me, as the paper discussed discovering the distinguishing input (page 4, paragraph \\\"Finding a distinguishing input\\\"), whereas it sounds here like that input is manually selected.\"]}"
]
} |
rklTmyBKPH | Fast Neural Network Adaptation via Parameter Remapping and Architecture Search | [
"Jiemin Fang*",
"Yuzhu Sun*",
"Kangjian Peng*",
"Qian Zhang",
"Yuan Li",
"Wenyu Liu",
"Xinggang Wang"
] | Deep neural networks achieve remarkable performance in many computer vision tasks. Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone, commonly pre-trained on ImageNet. However, performance gains can be achieved by designing network architectures specifically for detection and segmentation, as shown by recent neural architecture search (NAS) research for detection and segmentation. One major challenge, though, is that ImageNet pre-training of the search space representation (a.k.a. super network) or the searched networks incurs huge computational cost. In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network (e.g. a high performing manually designed backbone) to become a network with different depth, width, or kernels via a Parameter Remapping technique, making it possible to utilize NAS for detection/segmentation tasks a lot more efficiently. In our experiments, we conduct FNA on MobileNetV2 to obtain new networks for both segmentation and detection that clearly outperform existing networks designed both manually and by NAS. The total computation cost of FNA is significantly less than SOTA segmentation/detection NAS approaches: 1737$\times$ less than DPC, 6.8$\times$ less than Auto-DeepLab and 7.4$\times$ less than DetNAS. The code is available at https://github.com/JaminFong/FNA . | [
"less",
"detection",
"segmentation",
"nas",
"neural network adaptation",
"parameter remapping",
"sota",
"backbone",
"imagenet",
"fna"
] | Accept (Poster) | https://openreview.net/pdf?id=rklTmyBKPH | https://openreview.net/forum?id=rklTmyBKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"v5Uwg_gUR",
"B1eIspq2oH",
"BkebcochsB",
"SJxfMYF2iH",
"rJggnOBrjS",
"B1xwBkpZsr",
"BylZzChZjS",
"H1eZlc3WoS",
"S1xwrYnWsr",
"SkekZOLsKB",
"r1x08lBvYB",
"S1xI2WlfYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728396,
1573854622376,
1573854088653,
1573849353952,
1573374119945,
1573142335252,
1573142024538,
1573140969207,
1573140799052,
1571674103372,
1571405909910,
1571058094195
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1635/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1635/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1635/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1635/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1635/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1635/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1635/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1635/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1635/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1635/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1635/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Main content: Paper proposes a fast network adaptation (FNA) method, which takes a pre-trained image classification network, and produces a network for the task of object detection/semantic segmentation\", \"summary_of_discussion\": \"\", \"reviewer1\": \"interesting paper with good results, specifically without the need to do pre-training on Imagenet. Cons are better comparisons to existing methods and run on more datasets.\", \"reviewer2\": \"interesting idea on adapting source network network via parameter re-mapping that offers good results in both performance and training time.\", \"reviewer3\": \"novel method overall, though some concerns on the concrete parameter remapping scheme. Results are impressive\", \"recommendation\": \"Interesting idea and good results. Paper could be improved with better comparison to existing techniques. Overall recommend weak accept.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper Revision\", \"comment\": \"We thank all the reviewers for their careful comments and constructive suggestions. We revise the paper following the advice and hope our new version of the paper can better illustrate our method. Promised experiments are included.\\n\\n1. For a clear illustration, we change \\\"the inner network\\\" to \\\"the target architecture\\\".\\n\\n2. We revise the part of experiments about parameter remapping in Sec. 4.3 and redraw the Tab. 5.\\n\\n3. We add random search experiments in Sec. 4.4.\\n\\n4. We move some detailed descriptions of hyper-parameters to the Appendix.\\n\\n5. We correct some statements and improve the writing parts of the paper.\"}",
"{\"title\": \"Experimental Results on ResNet-50\", \"comment\": \"Thanks for your suggestions for conducting experiments by choosing ResNet-50 as the seed network. We design the search space of architecture adaptation as follows. We utilize the bottleneck block defined in ResNet-50 [1] to construct our search space. One bottleneck consists of two 1x1 convolutions, one inner 3x3 convolution, and the skip connection. We search for the depth, width and the kernel sizes of ResNet50. More specifically, we add more layers in the super network for depth search. The kernel size settings include {3x3, 5x5}. We allow several width ratios {1/2, 3/4, 1} of the inner kxk convolutions to search the widths.\\n\\nIt is worth mention that the architecture search with ResNet-based search spaces on segmentation or detection tasks is challenging work. Enlarging kernel sizes in the inner convolutions causes huge computation cost increasing. We allow more width settings to balance the computation cost of the models. Due to the limited time, we do not tune the hyper-parameters for ResNet. Therefore there is still room for improvement of performance. We conduct experiments on the RetinaNet [2] framework and show our results as follows. FNA promotes the mAP by 0.3% with 20M fewer MAdds compared width ResNet-50. We would like to try more ResNet experiments in the future.\\n\\nThanks for your advice again. We hope our answers can clear your concerns.\\n\\n\\nMethod MAdds(B) mAP(%)\\nResNet-50 202.85 33.3\\nFNA 202.83 33.6\\n\\n[1] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.\\n[2] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2980-2988.\"}",
"{\"title\": \"Response to Reviewer1 (3)\", \"comment\": \">>> Response to \\\"the odd choices of hyper-parameters\\\":\\nWe revise the loss function as follows, as the $\\\\gamma$ parameter in the original paper is of no use,\\n$\\\\mathcal{L} = \\\\mathcal{L}_{task} + \\\\lambda \\\\log_{\\\\tau}(cost)$.\\nIt is common to adjust the hyper-parameters of multi-objective optimization in many NAS method [3, 4, 5, 6]. These hyper-parameter settings are often omitted in the main text of most NAS papers. The adjustment of these parameters is to get a similar model size with that of other methods for a fair comparison.\\n\\n>>> Response to \\\"Error bars ... 0.1% separating FNA and MnasNet-92\\\":\\nWe repeatedly train the network obtained by FNA for three times and it obtains three same results, 23.0% mAP. It is worth noting that MnasNet [3] takes a huge cost (around 3,800 GPU days) to search the architecture on the ImageNet classification task and achieves SOTA results on the SSDLite framework [3, 7]. The computation cost of FNA is apparently smaller, 176x less than MnasNet. The MAdds of FNA is 100M less than MnasNet-92, while FNA achieves 0.1% higher mAP.\\n\\nWe sincerely thank you for your comprehensive comments and hopefully have cleared your concerns.\\n\\n[1] Chen T, Goodfellow I, Shlens J. Net2net: Accelerating learning via knowledge transfer[J]. arXiv preprint arXiv:1511.05641, 2015.\\n[1] Chen Y, Yang T, Zhang X, et al. Detnas: Neural architecture search on object detection[J]. NeurIPS, 2019.\\n[3] Tan M, Chen B, Pang R, et al. Mnasnet: Platform-aware neural architecture search for mobile[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2820-2828.\\n[4] Wu B, Dai X, Zhang P, et al. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 10734-10742.\\n[5] Cai H, Zhu L, Han S. Proxylessnas: Direct neural architecture search on target task and hardware[J]. arXiv preprint arXiv:1812.00332, 2018.\\n[6] Chu X, Zhang B, Xu R, et al. Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search[J]. arXiv preprint arXiv:1907.01845, 2019.\\n[7] Sandler M, Howard A, Zhu M, et al. Mobilenetv2: Inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 4510-4520.\"}",
"{\"title\": \"Random Search Experiments\", \"comment\": \"We carry out experiments with the random search (RandSearch) strategy. All the results are shown in the following table. As Row (2) shows, we simplify the FNA process as \\\"Remap -> Differentiable Search (DiffSearch) -> Remap -> ParamsAdapt\\\" for the clear illustration. We purely replace the original differentiable NAS method in FNA with the random search method in Row (3). And FNA with RandSearch achieves comparable results with our original method. It further confirms that FNA is a general framework for network adaptation and has great generality. NAS is only an implementation tool for architecture adaptation. The whole framework of FNA can be treated as a NAS-method agnostic mechanism. It is worth noting that even using random search, our FNA still outperforms DetNAS with 0.2% mAP better and 150M Flops fewer.\\n\\nWe conduct more ablation studies to demonstrate the effectiveness of the parameter remapping scheme. In Row (4) we remove the parameter remapping process before a random search, the mAP drops by 2.0% compared to Row (3). Then we remove the parameter remapping before parameter adaptation in Row (5), the mAP decreases by 8.2% compared to Row (3). When we remove the parameter remapping before both processes in Row (6), it gets the worst performance. All the experiments demonstrate the importance and effectiveness of the parameter remapping scheme. The results will be updated in the revised version of our paper. Thanks a lot for your suggestion of performing this random search ablation study.\\n\\nRow Num Method MAdds(B) map(%)\\n(1) DetNAS[1] 133.26 33.3\\n(2) FNA (Remap -> DiffSearch -> Remap -> ParamsAdapt) \\t 133.03 33.9\\n(3) FNA (Remap -> RandSearch -> Remap -> ParamsAdapt) 133.11 33.5\\n(4) RandInit -> RandSearch -> Remap -> ParamsAdapt 133.08 31.5\\n(5) Remap -> RandSearch -> RandInit -> ParamsAdapt 133.11 25.3\\n(6) RandInit -> RandSearch -> RandInit -> ParamsAdapt 133.08 24.9\\n\\n[1] Chen Y, Yang T, Zhang X, et al. Detnas: Neural architecture search on object detection[J]. NeurIPS, 2019.\"}",
"{\"title\": \"Response to Reviewer1 (2)\", \"comment\": \">>> Response to \\\"what's going on in Table 5\\\":\\nWe feel sorry for the vagueness of Tab. 5 in the paper. We conduct ablation studies to demonstrate the effectiveness of parameter remapping and show the comparison results in Tab. 5. We redraw the table in the following.\\n\\nRow Num Method MAdds(B) mIOU(%) \\n(1) Remap -> ArchAdapt -> Remap -> ParamAdapt (FNA) 24.17 76.6\\n(2) RandInit -> ArchAdapt -> Remap -> ParamAdapt 24.29 76.0\\n(3) Remap -> ArchAdapt -> RandInit -> ParamAdapt 24.17 73.0\\n(4) RandInit -> ArchAdapt -> RandInit -> ParamAdapt 24.29 72.4\\n(5) Remap -> ArchAdapt -> Retrain -> ParamAdapt 24.17 76.5\", \"remap\": \"Parameter Remapping. ArchAdapt: Architecture Adaptation. RandInit: Random Initialization. Pretrain: ImageNet Pretrain. ParamAdapt: Parameter Adaptation.\\n\\nWe attempt to optionally remove the parameter remapping process before the two stages, i.e. architecture adaptation and parameter adaptation. In Row (2) we remove the parameter remapping process before architecture adaptation. In other words, the search is performed from scratch without using the pre-trained network. The mIOU in Row (2) drops by 0.6% compared to FNA in Row (1). Then we remove the parameter remapping before parameter adaptation in Row (3), i.e. training the target architecture from scratch on the target task. The mIOU decreases by 3.6% compared to the result of FNA. When we remove the parameter remapping before both stages in Row (4), it gets the worst performance. In Row (5), we first pre-train the searched architecture on ImageNet and then fine-tune it on the target task. It is worth noting that FNA even achieves a higher mIOU by a narrow margin (0.1%) than the ImageNet pre-trained one in Row (5). We conjecture that this may benefit from the regularization effect of parameter remapping before the parameter adaptation stage.\\n\\nAll the experiments are conducted using the same searching and training settings for fair comparisons. With parameters remapping applied on both stages, the adaptation achieves the best results. Especially, the remapping process before parameter adaptation tends to provide greater performance gains than the remapping before architecture adaptation. All the experimental results demonstrate the importance and effectiveness of the proposed parameter remapping scheme.\\n\\nWe revise this part in the new version of our paper. We hope this revision and explanation can solve your puzzle.\\n\\n>>> Response to \\\"The stuff in Table 6 ...\\\":\\nWe explore more strategies of parameter remapping in Tab. 6 (Tab. 7 in our new version), as it is an important topic to find more efficient parameter remapping methods. More specifically, we use the importance of the channel to remap the parameters on the width level. The importance evaluation metric is set with the statistics of BN or the standard deviation/L1 norm of the parameters on the channel level. For the kernel-level parameter mapping, we further conduct a dilation manner. The experiments show that the parameter remapping method in FNA achieves the best results. Moreover, our proposed parameter remapping is the most convenient to implement.\"}",
"{\"title\": \"Response to Reviewer1 (1)\", \"comment\": \"We sincerely thank you for your detailed review and constructive suggestions.\\nOur proposed FNA aims at adapting the pre-trained neural network to new tasks efficiently with a novel parameter remapping mechanism. Inspired by the influential work Net2Net[1] which proposes an effective method for mapping the parameters of one network to a deeper and wider one, we propose a novel and effective parameter remapping scheme. With the remapping scheme, the parameters of one seed network can be remapped to the super network or the target network. The mapping dimension is further extended to the kernel level. And the proposed parameter remapping method can map parameters to shallower and narrower networks while Net2Net does not cover these perspectives. It is not an ad-hoc design choice as we explore various reasonable strategies in the experiments. The results show that our method achieves superior performances and our mechanism is more convenient to implement compared to others. Our proposed FNA is not only a NAS method, while it is a general framework to adapt the network to various tasks. FNA demonstrates the effectiveness of both segmentation and detection tasks in the experiments and beats state-of-the-art NAS-based detection and segmentation methods, while prior works mostly focus on only one task as Reviewer1 also mentions this. FNA is meaningful for researchers to carry out network optimization on various new tasks that bear the unaffordable pre-training cost as there are lots of available pre-trained models in the community. We hope our statement can clarify the core value of our work.\\n\\n>>> Response to \\\"no TLDR for this paper\\\":\\nWe are sorry that our abstract and introduction do not illustrate our main idea or contribution clearly. We will revise the paper detailedly in the next version. We supplement the TLDR as follows.\\n\\nWe propose a fast neural network adaptation (FNA) method to adapt a seed network with pre-trained weights to other new tasks. A parameter remapping mechanism is designed to accelerate the whole adaptation process which takes full advantage of the knowledge from the seed network.\\n\\n>>> Response to \\\"it isn't really fair in Table 4 to put pre-training cost as zero\\\":\\nFirstly, our proposed method aims at the available pre-trained models in the community as there are many of them. Secondly, the seed network in FNA has strong reusability. For example, if the search space changes due to the need for the task, the super network in other methods, e.g. DetNAS, needs to be pre-trained again. But the super network of FNA does not need to be pre-trained. All the pre-trained weights of the super network are remapped from the same seed network. It is totally a once-for-all manner. Even though we take the pre-trained cost of the seed network into consideration, FNA still holds a huge advantage with the perspective of both performance and computation cost. We show the comparison as follows.\\n\\n| Method | Total Cost | Super Network | Target Network |\\n|_____________|____________| Pre-training |Finetuning | Search | Pre-training | Finetuning |\\n| DetNAS[2] | 68 GDs | 12 GDs |12 GDs | 20 GDs | 12 GDs | 12 GDs |\\n| FNA | 15.9 GDs | 6.7 GDs | - | 6 GDs | - | 3.2 GDs |\\n\\n>>> Response to \\\"the generality of the seed network\\\":\\nSorry for this unclearness of the description of the seed network choice. 
We choose MobileNetV2 because it is widely used for the search space design in many NAS methods [3, 4, 5, 6]. But we would like to try our best to implement the FNA method on the ResNet model before the rebuttal deadline.\\n\\n>>> Response to \\\"a comparison to a random search\\\":\\nWe will provide the experiment result of the random search. Thanks for your constructive advice. Besides, our method aims to adapt the off-the-shelf network to other new tasks. NAS is only an implementation tool for architecture adaptation. Which NAS method we use is not the focus of our method indeed.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"We sincerely thank you for your review and assessment of our work.\\nOur proposed FNA method can adapt a network with pre-trained weights to other new tasks efficiently. The total computation cost of FNA is far smaller compared to other SOTA methods. We conduct sufficient experiments to demonstrate the effectiveness of our method and FNA achieves superior performances on various tasks. Moreover, it is convenient to apply FNA on more other tasks, e.g., pose estimation, face detection, NLP, speech recognition, etc. As there are lots of pre-trained models available in the community, FNA can help researchers who cannot afford expensive computation cost explore the model optimization on all kinds of tasks. FNA even makes NAS more accessible for more researchers.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"We sincerely thank you for your detailed and constructive comments.\\n1. We would like to revise the description part of the advantages over prior methods. Our main point aims at taking full advantage of the pre-trained weights of the seed network, which is essential for fast neural architecture search and parameter adaptation. Our parameter remapping mechanism accelerates the whole procedures greatly which makes it easy to conduct the network adaptation on different tasks.\\n\\n2. Our parameter remapping method does have some similar effects with Net2Net. However, they are quite different. Net2Net aims at deepening and widening the network and accelerates the training procedure. In Net2Net, parameters can only be mapped on the depth and width level. We extend the mapping dimension with the kernel level. Parameters can be also mapped to a shallower or narrower network with our remapping scheme, while Net2Net only maps parameters to a deeper and wider network. We deploy FNA in the popular efficient model MobileNetV2. Compared with those seemingly more advanced methods, the parameter remapping mechanism of FNA is easier to implement and achieves the best results. Moreover, exploring more effective parameter remapping methods is actually a valuable topic, as we do in Sec. 4.4.\\n\\n3. DetNAS[1] is the latest detection backbone search work that achieves SOTA results and has been accepted in NeurIPS 2019. Our comparison is sufficient and fair. Furthermore, in RetinaNet, FNA achieves 0.6 accuracy promotion and MAdds is 0.23B smaller compared with DetNAS. In SSDLite, FNA achieves 0.1 accuracy promotion with 100M fewer MAdds compared with MnasNet [2]. MnasNet takes a huge cost (around 3,800 GPU days) to search the architecture on the ImageNet classification task. MnasNet also achieves SOTA results on the SSDLite framework. The computation cost of FNA is apparently smaller, 176x less than MnasNet.\\n\\n4. We are sorry for the vagueness of Tab. 5 in the paper. Actually, we conduct sufficient apple-to-apple comparison experiments in Sec. 4.3 to showcase the effectiveness of parameter remapping in the network adaptation. We revise Tab. 5 as follows for clearer illustration.\\n\\nRow Num Method MAdds(G) mIOU(%) \\n(1) Remap -> ArchAdapt -> Remap -> ParamAdapt (FNA) 24.17 76.6\\n(2) RandInit -> ArchAdapt -> Remap -> ParamAdapt 24.29 76.0\\n(3) Remap -> ArchAdapt -> RandInit -> ParamAdapt 24.17 73.0\\n(4) RandInit -> ArchAdapt -> RandInit -> ParamAdapt 24.29 72.4\\n(5) Remap -> ArchAdapt -> Retrain -> ParamAdapt 24.17 76.5\", \"remap\": \"Parameter Remapping. ArchAdapt: Architecture Adaptation. RandInit: Random Initialization. Pretrain: ImageNet Pretrain. ParamAdapt: Parameter Adaptation.\\n\\nWe attempt to optionally remove the parameter remapping process before the two stages, i.e. architecture adaptation and parameter adaptation. In Row (2) we remove the parameter remapping process before architecture adaptation. In other words, the search is performed from scratch without using the pre-trained network. The mIOU in Row (2) drops by 0.6% compared to FNA in Row (1). Then we remove the parameter remapping before parameter adaptation in Row (3), i.e. training the target architecture from scratch on the target task. The mIOU decreases by 3.6% compared to the result of FNA. When we remove the parameter remapping before both stages in Row (4), it gets the worst performance. 
In Row (5), we first pre-train the searched architecture on ImageNet and then fine-tune it on the target task. It is worth noting that FNA even achieves a higher mIOU by a narrow margin (0.1%) than the ImageNet pre-trained one in Row (5). We conjecture that this may benefit from the regularization effect of parameter remapping before the parameter adaptation stage.\\n\\nAll the experiments are conducted using the same searching and training settings for fair comparisons. With parameters remapping applied on both stages, the adaptation achieves the best results. Especially, the remapping process before parameter adaptation tends to provide greater performance gains than the remapping before architecture adaptation. All the experimental results demonstrate the importance and effectiveness of the proposed parameter remapping scheme. We revise this part in the new version of our paper. \\n\\nWe thank for your detailed review once again and hope that our response can address your concerns.\\n\\n[1] Chen Y, Yang T, Zhang X, et al. Detnas: Neural architecture search on object detection[J]. NeurIPS, 2019.\\n[2] Tan M, Chen B, Pang R, et al. Mnasnet: Platform-aware neural architecture search for mobile[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2820-2828.\"}",
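Review #3 above observes that the remapping scheme mostly copies weights, removes weights, or fills in zeros. The NumPy sketch below illustrates kernel- and width-level remapping in that spirit (zero-pad to grow a kernel, center-crop or channel-slice to shrink); the paper defines the exact rules, so treat this purely as an illustration of the mechanics, not the authors' implementation.

```python
import numpy as np

def remap_kernel(w_seed, k_target):
    """Map a (C_out, C_in, k, k) conv kernel to spatial size k_target:
    zero-pad the border when growing, center-crop when shrinking."""
    k_seed = w_seed.shape[-1]
    if k_target == k_seed:
        return w_seed.copy()
    if k_target > k_seed:  # e.g., 3x3 seed weights placed at the center of a 5x5
        pad = (k_target - k_seed) // 2
        return np.pad(w_seed, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    crop = (k_seed - k_target) // 2
    return w_seed[:, :, crop:k_seed - crop, crop:k_seed - crop].copy()

def remap_width(w_seed, c_out, c_in):
    """Map to a narrower layer by keeping the leading channels; widening
    would additionally require new (e.g., zero-initialized) channels."""
    return w_seed[:c_out, :c_in].copy()

w = np.random.randn(32, 16, 3, 3)       # seed 3x3 kernel
assert remap_kernel(w, 5).shape == (32, 16, 5, 5)
assert remap_width(w, 24, 12).shape == (24, 12, 3, 3)
```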
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a method called FNA (fast network adaptation), which takes a pretrained image classification network, and produces a network for the task of object detection/semantic segmentation. The process consists of three phases: Network Expansion, Architecture Adaptation and Parameters Adaptation, and uses the developed parameter remapping scheme twice. Experiments show that it outperforms recent other NAS methods for these two tasks with same or less computation.\\n\\nConcrete comments\\n1. The paper's overall method is a novel one, unifying NAS on det/seg tasks, while prior works mostly only focus on one task. It also \\\"eliminates\\\" the need for pretraining each instance of the subnetwork. But no one ever pretrain every classification network for searching on det/seg tasks right? It's an insane amount of computation after all. I'm afraid the emphasis of advantage over prior method here is not very accurate.\\n\\n2. The concrete parameter remapping scheme is not entirely novel. It is similar to the Net2Net method, while seems more naive than that. It does not preserve the mapping function like Net2Net. It seems like a very coarse effort, since mostly what you do is to copy weights, remove weights or fill in zeros. But it is also interesting to see that this naive method works, and actually beat some of the more advanced alternatives in Section 4. \\n\\n3. The results are quite impressive. On segmentation, the adapted model achieves ~1% mIOU improvement using similar or less iterations and similar size of model with the methods it compared to, and GPU hours' saving is more significant. If the authors faithfully compared with state-of-the-art methods in search det/seg architectures, but I'm not super familiar with this literature. On object detection the method does not improve the model size or accuracy, but reduces the search time a lot compared with DetNAS. Could the authors clarify that you compared with every recent high-performance NAS method on seg/det tasks?\\n\\n4. Though the improvement over prior methods is good, the experiments lack an apple-to-apple comparison. For example, using exactly the same NAS search method and supernet, and comparing the FNA method with that not using a pretrained model (i.e., directly search on det/seg) could be a good experiment to showcase the importance of adaptation.\\n\\nOverall I find the method is effective and experiments convincing and I recommend weak accept in my rating. I hope authors can address my concerns in the rebuttal.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper provides a new technique to adapt a source neural network performed well on classification task to image segmentation and objective detection tasks via the author called parameter-remapping trick. The parameter remapping uses weights from the source neural network to the two-stages: architecture adaption phase and parameter adaption phase. The technique results in improvements in both performance and training time.\\n\\nI like the direction this paper takes, NAS is too expensive and we need faster methods through meta learning/transfer learning. The paper is also clearly organized and written. To the best of my knowledge, the experiments setting is sensible and the results are good. But I am not in the Computer Vision field and I am not so familiar with NAS, I may missed something. Thus, I am less confident about my rating.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors take a MobileNet v2 trained for ImageNet classification, and adapt it either (i) semantic segmentation on Cityscapes, or (ii) object detection on COCO. They do this by first expanding the network into a \\\"supernet\\\" and copy weights in an ad-hoc manner, then, they perform DARTS-style architecture search before fine-tuning for the task at hand.\\n\\nThere is no TLDR for this paper, and I must admit, on reading the abstract and introduction I wasn't entirely sure what this paper was doing at first. Perhaps I was being slow.\\n\\nFrom a narrative perspective, one of the main selling points is not needing to perform any expensive ImageNet pre-training; however, a pre-trained MobileNetv2 is being utilised. While this was off-the-shelf, it still incurred an initial training cost, so it isn't really fair in e.g. Table 4 to put pre-training cost as zero. On a related note, the authors write that this network is used for its \\\"generality\\\". I'd argue that MobileNetv2 is a highly engineered network specialised for mobile computation; a standard ResNet-50 would be more general really.\\n\\nI would like to see a comparison to a random search, as there are several papers (https://arxiv.org/abs/1902.07638, https://arxiv.org/abs/1902.08142) indicating that this is a very strong baseline. \\n\\nAs mentioned earlier, the choices for remapping weights seem very ad-hoc. I can't really tell what's going on in Table 5 (why is PR in the NE and PA row?) so the ablation study of how effective this weight mapping is lost on me. The stuff in Table 6 is pretty interesting however, if convoluted.\\n\\nI find the odd choices of hyperparameters (tau as 45, gamma as 10, lambda as 9e-3) rather alarming. How important are these? Would this technique work under any other circumstances?\\n\\nError bars would be a welcome inclusion, particularly in Table 3 where you have 0.1% separating FNA and MNasnet-92. I appreciate that this can be expensive however.\", \"pros\": [\"Some promising results\", \"Good figures\"], \"cons\": [\"Ad-hoc design choices\", \"Not a fair comparison regarding pre-training.\", \"Very specific to one network choice\", \"Lack of error bars or comparison to random search.\", \"I am giving this paper a weak reject, as there is insufficient experimental evidence that the technique works, or generalises beyond Mobilenetv2. I am also concerned about the ad-hoc hyperparmaters or weight-mapping. A comprehensive ablation study, along with error bars, and another choice of seed network would do much to strengthen this paper.\"]}"
]
} |
r1la7krKPS | Measuring Calibration in Deep Learning | [
"Jeremy Nixon",
"Mike Dusenberry",
"Ghassen Jerfel",
"Linchuan Zhang",
"Dustin Tran"
] | Overconfidence and underconfidence in machine learning classifiers is measured by calibration: the degree to which the probabilities predicted for each class match the accuracy of the classifier on that prediction. We propose two new measures for calibration, the Static Calibration Error (SCE) and Adaptive Calibration Error (ACE). These measures take into account every prediction made by a model, in contrast to the popular Expected Calibration Error. | [
"Deep Learning",
"Multiclass Classification",
"Classification",
"Uncertainty Estimation",
"Calibration"
] | Reject | https://openreview.net/pdf?id=r1la7krKPS | https://openreview.net/forum?id=r1la7krKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vrftCLEHmt",
"YEB3WXENzc",
"r1escKFm5S",
"rJe58YUoYr",
"SyxmYff5Fr",
"Skl4_SQaDr"
],
"note_type": [
"comment",
"decision",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1665958472005,
1576798728367,
1572211091452,
1571674450428,
1571590779410,
1569695083857
],
"note_signatures": [
[
"~Xinshao_Wang1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1634/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1634/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1634/AnonReviewer2"
],
[
"~Yukun_Ding1"
]
],
"structured_content_str": [
"{\"title\": \"Great work. For a sharing and discussion purpose about calibration error, we propose a Generic coarse Signed Calibration Error (GSCE).\", \"comment\": \"In a recent work, named [ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State](https://arxiv.org/abs/2207.00118),\\nwe have \\n* a technical subsection 4.1 about calibration error, where the Generic coarse Signed Calibration Error (GSCE) is proposed.\\n* an empirical analysis subsection 4.2 to visualize the miscalibration.\"}",
"{\"decision\": \"Reject\", \"comment\": \"The authors propose two measures of calibration that don't simply rely on the top prediction. The reviewers gave a lot of useful feedback. Unfortunately, the authors didn't respond.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Paper summary: This paper proposes two simple extensions to Expected Calibration Error (ECE): 1) SCE which accounts for multiclass classification settings by averaging over all the errors due to all classes (as opposed to error in the top one class only in ECE) and 2) ACE which attempts as distributing the predictions equally across the bins (as opposed to have too few highly populated bins in the interval). Authors evaluated their approach using ResNet-110 on CIFAR100 and ResNet-50 on ImageNet against most common post-training calibration methods.\", \"pros\": \"(+): The paper is well-motivated.\\n(+): The problem is important and has direct real world applications.\\n(+): The idea is simple and viable to improve ECE.\", \"cons_that_significantly_affected_my_score_and_resulted_in_rejecting_the_paper_are_as_follows\": \"\", \"1___experimental_setting_and_evaluations\": \"\", \"the_biggest_drawback_in_this_paper_is_the_experimental_setting_which_is_not_rigorous_enough_to_show_the_effectiveness_of_the_proposed_metrics_due_to_the_following_reasons\": \"(a) The first proposed extension (SCE) is too incremental. However, while I am not against simple and effective methods (I have even listed this as a pro above), I think it should be backed up with more thorough experiments and discussions. SCE is supposed to be more effective in case of having more number of classes however authors do not shed light into this properly by taking advantage of comparing their CIFAR100 and ImageNet 1K experiments. They keep emphasizing on the fact that the numbers provided in Table 1 and 2 are not comparable so how is the reader supposed to understand their differences? They also show their metrics give far less error to models than ECE in Table 1 and 2 and yet it is not clear why that is. Not to mention that authors need to state these results are obtained on how many runs and to report stds because the numbers appear too close and are hence not conclusive. \\n(b) Datasets and architectures: Authors have used MNIST and FASHION-MNIST when discussing the shortcomings of ECE. I was wondering what architecture they used for these experiments? The reason I am asking is that it is a known fact (see Guo et al., ICML17) that the simpler the architecture is, the more calibrated its predictions tend to be and vice versa. For instance LeNet 5 appears to be much more well-calibrated than more modern neural networks such as ResNet variants despite being less accurate (Guo et al, 2017). Therefore, it will probably be in their favor to show these issues on a less calibrated model.\\n(c) Can authors please explain why they have used ResNet-110 on CIFAR100 and ResNet-50 on ImageNet? I assume it might be because they intended to compare to (Guo et al, 2017) but there are more experiments there for comparison. Have they also tried any other architecture? It has been shown before that there can be a noticeable difference across different architectures used on a fixed dataset. Authors may want to add the arch effect to their evaluation. \\n\\n2) The second metric (ACE) while it is well-motivated, it is defined such that it leaves the impression that applying it will involve lots of heuristics as there is no systematic procedure is given. 
In section 6.3 (where I think could be a section to address the implementation details for this) no effective information is given either expect that authors recommend using 50+ bins for CIFAR and ImageNet without any quantitative support. It is also not clear why authors used 15 bins in their experiments for these datasets in Table 1 and 2 (maybe comparing to the baseline? but they could show both specially if 50+ bins is better for ACE). Moreover, in section 5.2 where it says \\u201cthe overall calibration error the metric should focus on the regions where the predictions are made (and focus less on regions with few predictions).\\u201c, I was wondering if authors have any suggestions on how to identify the predictions with low confidence as coming from in or out-distribution?\\n\\n2- Structure of the paper, writing, and visualizations:\\n\\n(a) Writing: The paper, in its current form, needs to be thoroughly proofread and reorganized. The text does not read well and is vague in most parts. Authors have spent too much time explaining the drawbacks of the prior work in 4 pages (entire sections 2,3, and 4) only to propose their ideas in section 5.1 and 5.2 in almost half a page. Unfortunately the analysis section (6) acts very poorly in providing a thorough exploration into their method.\\n\\n(b) Dividing each section to too many subsections has hurt the flow as each section is too short and does not provide deep evidence to support its title/subtitle. \\n\\n<<< Note that the followings are less major and are given only to help, and not necessarily part of my decision assessment >>>\\n\\n(c) Results shown in Table 3 and 4 are reporting \\u201cTACE\\u201d which is technically explained in the Appendix. Apart from the fact that authors need to make sure the Table comes before the references (not in the middle) but more importantly, as a paper submitted to ICLR I would expect it to be self-contained and be able to provide all the details needed. Authors should either move this Table to the appendix or move section (A.1) to the main text. \\n\\n(d) In section 3.4. Authors provide support for their claim on issues with binning scheme with a screenshot of their code in the Appendix. I think it is important to show this effect but in perhaps a table with quantitative results within the main text. \\n\\n(e) Title for Section 5 as the main contribution of this paper should be changed to reflect that.\\n\\n(f) Abstract in its current wording, does not provide absolute no detail into the proposed metrics. Whereas it can, because 1) it encourages the reader to keep reading 2) the method is simple enough to be summarized here.\\n\\n(g) The figures do not meet the conventional scientific standards and have to be significantly improved. Standard deviations are missing.\\n\\n(h) There are also grammar errors and typos, (for example on page 1 paragraph 5, the word \\\"may\\\" should be \\\"many\\\"), for which I have found passing my writing through the free version of Grammarly very helpful in getting rid of most such errors. \\n\\nAs a final note, I think this paper can be contributive to the field as it provides a novel and simple extension to a widely used calibration metric. However it needs to be written in a more effective way and be supported by a more rigorous experimental setting. Therefore, I will be willing to change my score if the presented issues will be addressed.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\tThis paper proposes two novel scalar-valued evaluation measures, namely Static Calibration Error (SCE) and Adaptive Calibration Error (ACE) for evaluating the quality of calibration (the accuracy of estimating p(y|x) in classification) in deep learning. The authors suggested that the existing Expected Calibration Error (ECE), which is the current most popular scalar-valued metric for calibration, has several drawbacks, especially in the multi-class scenario. Intuitively, ECE only focuses on the predicted class probability (argmax) and ignores class probabilities of other K-1 classes, which implies that ECE may fail to capture the class probability estimates for all classes. They also illustrated the drawback of ECE under label noise.\\n\\n========================================================\", \"contribution\": \"1. Pointing out that ECE has several drawbacks in multiclass scenario, e.g., does not take predictions of all classes but only the one with maximum confidence.\\n2. Proposing two novel measures: SCE and ACE, where SCE is a natural extension to make sure the metric consider all class probability estimates, and ACE is adaptive in the sense that it is focus on the regions where many predictions are made.\\n3. Conducting experiments to illustrate that although temperature scaling may work very well when we used ECE as a metric, vector scaling can be advantegous when we consider SCE or ACE.\", \"clarity\": \"Although there are typos, it is not difficult to understand the motivation and what this paper is trying to propose. But it is suggested that the paper was done in a rush manner.\\n\\n1. Issue in Figure 1 and experiment 7.2:\\n\\nI found that Figure 1 is difficult to understand and I may misunderstand. Moreover, I couldn't get the main message of it. \\t\\t1.1 (Top-left), the message I got is that the error based on each metric is bigger as the label noise increases (but each metric is incomparable). \\n1.2 (Top-right), I couldn't get what Predictions ECE omits over threshold 0.1 means. It would be better to clearly explain it, e.g., how to compute % ECE omits over a certain threshold.\\n1.3 (Bottom), I learned that with label noise, both accuracy and model confidence (which I think is max p(y|x) decreases as the noise increases, which is common. Moreover, does Against and vs. mean the same thing in this context, then using the same word can make it more consistent. \\n\\n\\tMoreover, I wonder why we have to focus on the label noise because even in the normal scenario, ECE should have drawbacks already too and it is more interesting for me to see the illustration in the normal scenario. On the other hand, if I don't misunderstand, ECE did not omit a lot of predictions according to the top-right figure if there is no label noise. In practice, we may use a more sophisticated method to handle label noise. Because under label noise, the class probability estimation is already incorrect theoretically, i.e., it may shift depending on the noise type (Menon+, ICML2015: Learning from Corrupted Binary Labels via Class-Probability Estimation). And I am not sure why we have \\\"(Figure 1)\\\" at the end of the first sentence of Sec. 3.1, is that sentence related to Figure 1? 
In my opinion, Figure 2 is much better to visualize that ACE may capture things that ECE fails to capture. Finally, Figure 1 is a part of experiment 7.2 and I think it is fine to move this to the experiment section as an additional experiment under the label noise scenario. Finally, how to train your model in Figure 1, is it uncalibrated version? i.e., without temperature scaling or other modifications.\\n\\n2. Figure 3 is difficult to understand. \\n\\t2.1 (Left) what is sharpness, how to calculate sharpness and what is the y-axis? And what is the x-axis here, is it a confidence score?.\\n\\t2.2 (Right) What is the training step? And there is no value specified in the x-axis. I couldn't understand how to plot this figure.\\n3. Tables 1 and 2 are never discussed and in \\\"Table 1: ECE, TACE, SCE, and ACE\\\", there is no TACE and never mention in the main body of the paper. Instead, the main body discussed about Tables 3 and 4, which is not in the main body (or is it? since it is in between the reference).\\n4. How many trials did you run the experiment and what criteria you use to give boldface to a method? Since this paper also highly relies on the experimental results, it would be great to clarify. \\n\\n========================================================\", \"comments\": \"The authors did a great job to point out the problem of ECE. Although SCE is a very simple and natural extension of ECE, its contribution is significant because it relevates the drawback of ECE as the authors suggested. I believe this work can make an impact to the field. For ACE, I have an impression that it is difficult to use. Also, it would be nice to see the performance with respect to Maximum Calibration Error (MCE), which is completely ignored in this paper. Because in MCE, we can see that temperature scaling did not almost always perform significantly better than other methods as it performed with respect to ECE (in Appendix of Guo et al., 2017), which is similar to what we observed in SCE and ACE.\\n\\nUnfortunately, although I like the idea of this paper, I found that the clarity of the paper is insufficient in its current state. It seems that the paper was really done in a rush and thus the writing can be highly improved. As a result, given the current manuscript, I vote a weak reject for this paper.\\n\\n========================================================\", \"additional_questions\": \"1. Why the name of SCE is static calibration error? If you mean it is not adaptive as ACE, then ECE is also static in this sense. Therefore, it may be a good idea to come up with a different name,e.g., classwise calibration error.\\n2. May SCE and ACE suffer from class-imbalance scenario more than ECE?\\n3. Are there any advantages of ECE over SCE and ACE?\\n\\n========================================================\", \"potential_typos_i_found\": \"1. Abstract: the last sentence: taks -> takes\\n2. Abstract: Overconf. and underconf. is -> are\\n2. INTRO: the first sentence of the last paragraph may algorithms -> many algorithms\\n3. INTRO: 4th paragraph: has lead -> has led\\n4. INTRO: last sentence: Static Calibratinon -> Static Calibration\\n5. INTRO: last sentence Adaptive Calibration -> Adaptive Calibration Error\\n6. 2.1: {(x,y)} should be {(x,y)}_{i=1}^{N}?\\n7. all predictions made by the mode -> all predictions made by the model\\n8. in fig.3: calibraion -> calibration\\n9. Table 1: remove TACE\"}",
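To make the ECE/SCE distinction these reviews debate concrete, here is a small NumPy sketch based on the definitions as the reviews describe them: ECE bins only the top-class confidence, while SCE applies the same binning to every class and averages over all K classes (including the 1/K factor Review #2 questions below). Bin-edge conventions here are my own choice and may differ from the paper's reference code.

```python
import numpy as np

def ece(probs, labels, n_bins=15):
    """Expected Calibration Error: bins only the argmax prediction."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():  # weight each bin by the fraction of samples in it
            err += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return err

def sce(probs, labels, n_bins=15):
    """Static Calibration Error: ECE-style binning per class, averaged
    over all K classes, so non-argmax probabilities also count."""
    n, k = probs.shape
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for c in range(k):
        conf_c = probs[:, c]
        correct_c = (labels == c).astype(float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (conf_c > lo) & (conf_c <= hi)
            if in_bin.any():
                total += (in_bin.sum() / n) * abs(
                    correct_c[in_bin].mean() - conf_c[in_bin].mean())
    return total / k
```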
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"After an interesting review of calibration methods, the paper describes two new methods for assessing calibration. The first method, SCE, is an extension of the usual ECE to the multi class setting. The second method, ACE, is a slight variation where bins are computed adaptively.\\n\\nThe paper is interesting and relatively well written. Although the contribution is rather simple (can be describe in less than a page), I can see myself using the SCE/ACE metrics in the near future.\\n\\nIn order to strengthen the submission, I still feel that the authors should try to describe why the newly introduced metrics can help in applied scenarios.\\n(1) OK, the ECE omits a lot of important predictions (e.g. fig 1) --> give a real application where this matters\\n(2) OK, adaptive binning seems a sensible approach. --> give a real application where a difference between 0.99 and 0.999 does make a difference\\n\\nThe newly introduced metrics *are* interesting, and the theoretical justifications *are* sensible. The paper would be even better if the applied motivations were better described. Since the proposed method does have the potential to be applied in industrial/applied scenarios, it is slightly disappointing that it is presented as another academic exercise.\", \"minor_remark\": \"it is not clear why the factor (1/K) is needed in the definition of SCE since the weights are already summing up to one -- this makes SCE comparisons between datasets with different number fo classes more difficult.\"}",
"{\"comment\": \"Great work! The discussion on the calibration in the multiclass setting is very informative. You may want to cite the following paper that focuses on a similar problem.\", \"https\": \"//arxiv.org/abs/1903.02050\", \"title\": \"A related work\"}"
]
} |
H1lTQ1rFvS | R2D2: Reuse & Reduce via Dynamic Weight Diffusion for Training Efficient NLP Models | [
"Yi Tay",
"Aston Zhang",
"Shuai Zhang",
"Alvin Chan",
"Luu Anh Tuan",
"Siu Cheung Hui"
] | We propose R2D2 layers, a new neural block for training efficient NLP models. Our proposed method is characterized by a dynamic weight diffusion mechanism which learns to reuse and reduce parameters in the conventional transformation layer, commonly found in popular Transformer/LSTM models. Our method is inspired by recent Quaternion methods which share parameters via the Hamilton product. This can be interpreted as a neural and learned approximation of the Hamilton product which imbues our method with increased flexibility and expressiveness, i.e., we are no longer restricted by the 4D nature of Quaternion weight sharing. We conduct extensive experiments in the NLP domain, showing that R2D2 (i) enables parameter savings of 2 to 16 times with minimal degradation of performance and (ii) outperforms other parameter-saving alternatives such as low-rank factorization and Quaternion methods. | [
"Deep Learning",
"Natural Language Processing"
] | Reject | https://openreview.net/pdf?id=H1lTQ1rFvS | https://openreview.net/forum?id=H1lTQ1rFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Sb_vGTHCs1",
"BygXzJ0siH",
"BJlV6jaoiB",
"r1gTNqaosS",
"BJer5FcCYH",
"B1xr0FcpFS",
"SJx3GtTwtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728337,
1573801739320,
1573800891550,
1573800501117,
1571887500924,
1571822029431,
1571440915644
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1633/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1633/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1633/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1633/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1633/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1633/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a very interesting alternative to feed-forward network layers, based on Quaternion methods and Hamilton products, which has the benefit of reducing the number of parameters in the neural network (more than 50% smaller) without sacrificing performance. They conducted extensive experiments on language tasks (NMT and NLI, among others) using transformers and LSTMs.\\n\\nThe paper appears to be clearly presented and have extensive results on a variety of tasks. However all reviewers pointed out that there is a lack of in-depth analysis and thus insight into why this approach works, as well as questions on the specific effects of regularization. These concerns were not addressed in the rebuttal period, instead leaving it to future work. My assessment is that, with further analysis, ablation studies, and comparison to alternative methods for reducing model size (quantization, etc), this paper has the potential to be quite impactful, and I look forward to future versions of this work. As it currently stands, however, I don\\u2019t believe it\\u2019s suitable for publication at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Dear Reviewer,\\n\\nThanks for the insightful review! We are happy to hear that you liked our paper!\\n\\nPertaining to the dynamics of the model, we believe that our method is a more expressive, parameterized adaptation of the Hamilton product - which already brings about benefits from latent inter component interactions. We think that this helps the model learn to approximate an actual FC layer with less parameters, although not completely. Additionally, there are also motivation factors such as regularization from weight sharing. We will be continuing on this line of work to investigate this carefully and are thankful for the points you have brought up. Updated appendices of the visualized blocks (A and S) will be updated by the next version of the paper.\\n\\nOnce again, thanks for taking the time to review our paper and we are happy about the positive review.\"}",
"{\"title\": \"Response\", \"comment\": \"Dear Reviewer,\\n\\nThanks for the insightful comments and feedback! We fully appreciate your time and effort in reviewing our work. We are also glad that you appreciate the motivation of our work.\\n\\nRegarding the choice of dynamic weight fusion, the key idea behind our approach is to dynamically learn the partitioning. Aside from the hyperparameter N, there are no straightforward choices of varying the partitioning (other than making them dynamic-sized which is a way more complex approach). The obvious partitioning strategy is the Quaternion method, which we make extensive comparisons with.\\n\\nRegarding regularization, we fully agree that decoupling effects of regularization and parameter savings is tricky. We believe that that this is interesting and warrants further investigation.\\n\\nRegarding the choice of experiments, machine translation and NLI are more \\u201cwell-established\\u201d tasks. On the other hand, while the other 2 tasks are less popular, we decided to conduct these extra experiments to improve the diversity and coverage of the experiments. We agree that further analysis can help improve the paper and we are working on it for the updated version of the paper. \\n\\nRegarding decoding speed and parameter savings, the parameter size of Transformer base and R2D2 transformer base remains identical for all tasks (subject to only the vocab size). Moreover, the decoding speed, intuitively, should remain proportionate across different tasks. \\n\\nRegarding the related work, we have modified the paper with some references (as pointed out by Reviewer #3). \\n\\nRegarding decoding speed, we believe the reduced parameter size contributes to the improvement in speed. In particular, the matrix multiplication operations in R2D2 networks are now smaller.\\n\\nOnce again, thank you for taking the time to review our work!\"}",
"{\"title\": \"Response\", \"comment\": \"Dear Reviewer,\\n\\nThanks for the insightful comments and feedback, along with spending valuable time to review our paper! \\n\\nIn lieu of your detailed feedback, we have made the following changes to the paper.\\n1) Made it clearer that the proposed method is concerned with improving the memory footprint (parameter complexity). \\n2) Included a discussion and citation of the above-mentioned suggested literature (block diagonal, structured sparsity etc) to improve the completeness of the related work. To this end, we believe a detailed empirical comparison of all \\\"memory-saving\\\" methods is indeed interesting future work. We feel that this extended analysis is better dedicated to a follow-up work to comprehensively study the effect of orthogonal methods such as sparsity, quantization, low-precision, distillation etc. This is also partly in lieu of the limited window of the response period and we feel it would be better to carefully evaluate these methods without having a tight time constraint. The factorized and quaternion baselines are the closest to our method which we have selected to focus on in this paper.\\n3) Introduced to the reader that N is a hyperparameter.\\n4) Improved the clarity of experimental settings by stating that all parameter initialization and hyperparamters remains identical to the vanilla version.\\n5) Defined alpha (length penalty)\\n\\nWe fully agree that understanding the regularizing effect of our method is key. At this moment, we do not that any quantifiable results on hand. However, we can offer some first hand insights that the training curves (loss and val) of R2D2 Transformer are similar to the Vanilla Transformer. This is based on experience from developing this method. \\n\\nPertaining to the different block sizes, our tabular results do offer an analysis of different values of n (i.e., extents of memory savings). \\n\\nOnce again, thanks for taking the time to review our work!\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Paper Summary:\\n\\nThis paper proposes to train smaller models by decomposing the weights of fully connected networks as the product of smaller matrices, along with a reordering/transpose of the outcome. The experiments shows that models with less parameters yield comparable performance with their larger counterparts.\", \"review_summary\": \"The method is technically sound and the paper reads well. Experiments demonstrate the efficacy of the method, although some ablations are missing (see below). The paper is however not clear on the ultimate objective of the method (speed/accuracy/generalisation?) and does not compare with alternatives.\", \"detailed_review\": \"The introduction does not make clear if your motivation to make model smaller is training speed, inference speed, memory usage, generalization accuracy. Please clarify.\\n\\nThe explanation of the method, i.e. Section 2.2.1, is not clear, in particular for the mapping \\\\psi. I feel it would cleared if somewhere in the paper there was an equation with the element-wise correspondence, i.e. H_{?,?} = \\\\sum_k A_i,k S_k,j\\nIn that section, you should introduce that n is a hyperparameter before using it as well.\\nIn that section, you could also discuss parameter initialization, and whether this model can use weight decay over H or A/S. it is also not clear to me if you control the norm ratio between A and S given the weight magnitude is over parameterized.\\n\\nThe experimental section lack a validation/ablation study to help the reader understand the interplay between the number of blocks and the number of latent dimensions. It will also be good to show learning curves to compare training speed of different parameterization. \\nAlso no training errors are reported, does your method can be seen as a regularizer, i.e. is training objective closer to valid objective when n grows? Did you have to change other regularization parameters like dropout.\\n\\nTo me the main weakness of the paper lies in the lack of comparison with alternatives. Replacing fully connected layers with alternative has a rich literature that the authors ignore.\\nI feel it is necessary to compare the approach with\\n(i) block diagonal approaches, popular since ResNext for convolutions but equally applicable to linear layers. https://arxiv.org/abs/1611.05431\\n(ii) other form of structured sparsity. https://arxiv.org/abs/1902.09574 (survey). https://arxiv.org/abs/1812.08301 (squantizer) https://arxiv.org/abs/1802.08435 (block sparsity)...\\n(iii) distillation of large models into smaller models. https://arxiv.org/abs/1503.02531 https://arxiv.org/abs/1702.01802\\n(iv) it might not be necessary to compare, but at least mentioning approaches which predict weights from a meta network would be good. https://arxiv.org/abs/1609.09106\\n\\nAs a reviewer, I am a bit annoyed that made no effort to have a decent list of related work and that they delegate that work to the reviewers to do so.\", \"details\": \"\\\"transformation layer\\\": this is not common terminology, please prefer linear layer or fully-connected layer.\\nplease define all acronyms, e.g. FC.\\nThe experimental section does not define \\\\alpha (end of page 6).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new Reuse and Reduce with Dynamic weight Diffusion (R2D2) layer as an alternative to feed-forward layers in neural networks. The layer is inspired by the Hamilton Product in a hypercomplex space (where numbers have multiple imaginary components). The main idea is to have two smaller parameter blocks that are partitioned, multiplied, and concatenated together in order to form the full weight matrix of the feed-forward layer.\\nIn extensive experiments on NLI, NMT, text style transfer and subject-verb agreement, feed-forward layers in LSTMs and Transformers are replaced with R2D2 layers. The modified models achieve similar performance to the originals, while being more than 50% smaller. \\n\\nOverall, the proposed method is presented clearly and the experiments are comprehensive and convincing. For these reasons, I am leaning towards accepting this paper.\\n\\nThe proposed method is well explained. In particular, Figure 1 is helpful to obtain a conceptual picture of the method. This is in contrast to some of the previous methods based on hypercomplex operations, which often seem harder to grasp and visualize. In addition, it is helpful that connections to other operations such as matrix multiplication and the Hamilton product are highlighted.\\n\\nThe proposed method is evaluated extensively. It is applied to different models (LSTMs and Transformers) and on different tasks. Results are mostly convincing, as performance numbers are competitive with the baselines, while the models are much smaller. In addition, it compares to previous work, which it outperforms. \\n\\nThe main thing that I'm missing is some analysis of the dynamics of the model, what it is learning (in comparison to using FC layers) or why a smaller number of parameters is still competitive with the standard FC layers. Are feed-forward layers over-parameterized and only a smaller number of their weights are actually used in practice, similar to lottery tickets (https://arxiv.org/abs/1803.03635)? How do the learned A and S blocks look like? Is the entire model learning a different function or do the R2D2 layers just find a way to approximate a feed-forward layer? \\n\\nOverall, as the method seems straightforward enough to implement and achieves promising results, it has the potential to have some practical impact.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose R2D2 layers, which are trained to reduce and re-use existing parameters of a neural network layer, and apply this to Transformer and LSTM architectures. The authors conduct experiments on various NLP tasks, including NLI and NMT.\\n\\nThe main benefit of the proposed R2D2 layer is that the number of parameters can be reduced, as the existing parameters can be reused. I find this motivation compelling, particularly as it is well known Transformer networks are largely overparameterized.\", \"comments\": \"1. There is no analysis on the specific choices made for dynamic weight diffusion- the way the partitioning is done could have a large effect on the end result. There's also little comparison to other ways to share weights across a model besides the proposed weight diffusion method. \\n\\n2. Sharing parameters contributes a regularization effect - it is difficult to untie the contributions of increased regularization from the proposed method. This is particularly problematic as the majority of the datasets used are \\\"small\\\" by current standards. WMT en-de (authors do not include the sizes of the datasets, but this is 4.5 million sentences) is the only large scale dataset, and the BLEU drop is quite large on this dataset compared to the smaller ones such as IWSLT. \\n\\nTo tie my points #1 and #2 together, I feel the authors did experiments on a variety of different tasks, but these style transfer and subject verb agreement tasks are not particularly interesting or realistic - instead this space should be devoted to discussions of the advantages of their method and analysis on its performance, which is quite lightly covered.\\n\\n3. The authors claim that the R2D2 Transformer outperforms standard Transformer models on 5 out of 7 NMT tasks. This appears true if up-sampling with a factor of 2 is used to make the models larger again. The authors should compare to factorized/quaternion baselines which have a larger quantity of parameters as well. \\n\\n4. Table 3, where results are reported on the competitive WMT en-de benchmark, lacks comparison for number of parameters and decoding speed. This table would probably have the most compelling and impactful results for this paper as this is the most competitive task (aside from the pre-training regime on MNLI/QNLI as part of GLUE). Can the authors complete this table so readers can understand the parameter reduction and inference speed possible from this method on this benchmark?\\n\\n(As an aside, the technique should be applicable to the DynamicConv model, which is a Transformer variant?)\\n\\n5. The related work section is quite light on other approaches to reducing model size, such as knowledge distillation or quantization? While the approach taken in this paper leverages parameter sharing, the motivation is similar and I feel acknowledging this entire area of work would be relevant.\\n\\n6. I'm not clear on why we see inference time decoding speed improvements based on the description of the method. Can the authors clarify this point for me?\"}"
]
} |
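The R2D2 record above describes its dynamic weight diffusion only in prose: a full weight matrix is assembled from two small parameter blocks A and S that are partitioned, multiplied, and concatenated, generalizing Hamilton-product weight sharing beyond the 4D quaternion case. As a rough illustration of how such block-based sharing cuts parameter counts, here is a minimal NumPy sketch that composes a dense weight as a sum of Kronecker products; the shapes, the Kronecker-sum form, and the function name are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def r2d2_style_weight(A, S):
    # Assemble a (d, k) weight from n small blocks as a sum of Kronecker
    # products: n^3 + d*k/n parameters instead of d*k for a dense layer.
    # Illustrative sketch only -- not the paper's exact partitioning.
    return sum(np.kron(A[i], S[i]) for i in range(A.shape[0]))

n, d, k = 4, 8, 12                        # n divides both d and k
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n, n))            # n blocks of shape (n, n)
S = rng.normal(size=(n, d // n, k // n))  # n blocks of shape (d/n, k/n)
W = r2d2_style_weight(A, S)               # shape (d, k)
y = rng.normal(size=(d,)) @ W             # behaves like a dense layer
print(W.shape, y.shape)                   # (8, 12) (12,)
```

With n = 4 this stores n^3 + d*k/n = 88 numbers instead of d*k = 96 in the toy example; the savings grow with d and k, which is consistent with the 2x-16x reductions reported in the abstract.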
rkl3m1BFDB | Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning | [
"Akanksha Atrey",
"Kaleigh Clary",
"David Jensen"
] | Saliency maps are frequently used to support explanations of the behavior of deep reinforcement learning (RL) agents. However, a review of how saliency maps are used in practice indicates that the derived explanations are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and assess the degree to which they correspond to the semantics of RL environments. We use Atari games, a common benchmark for deep RL, to evaluate three types of saliency maps. Our results show the extent to which existing claims about Atari games can be evaluated and suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool. | [
"explainability",
"saliency maps",
"representations",
"deep reinforcement learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rkl3m1BFDB | https://openreview.net/forum?id=rkl3m1BFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"reMm3MEhFU",
"r1xud4tiiS",
"Byl8RtUooB",
"B1lXpW7ijH",
"SyeeHuL9iH",
"S1g5luL9jB",
"SJlpnv8coB",
"HygC_PIcsS",
"r1lRNvb9sr",
"B1l6TLZCKB",
"rygokYDpYr",
"B1gg4VDstB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728305,
1573782639600,
1573771725901,
1573757370754,
1573705783983,
1573705713857,
1573705652759,
1573705590316,
1573685045672,
1571849925317,
1571809507040,
1571677223641
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1632/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1632/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1632/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1632/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1632/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1632/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1632/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1632/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1632/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1632/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1632/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This was a contentious paper, with quite a large variance in the ratings, and ultimately a lack of consensus. After reading the paper myself, I found it to be a valuable synthesis of common usage of saliency maps and a critique of their improper interpretation. Further, the demonstration of more rigorous methods of evaluating agents based on salience maps using case studies is quite illustrative and compelling. I think we as a field can agree that we\\u2019d like to gain better understanding our deep RL models. This is not possible if we don\\u2019t have a good understanding of the analysis tools we\\u2019re using.\\n\\nR2 rightly pointed out a need for quantitative justification for their results, in the form of statistical tests, which the authors were able to provide, leading the reviewer to revise their score to the highest value of 8. I thank them for instigating the discussion.\\n\\nR1 continues to feel that the lack of a methodological contribution (in the form of improving learning within an agent) is a weakness. However, I don\\u2019t believe that all papers at deep learning conferences have to have the goal of empirically \\u201clearning better\\u201d on some benchmark task or dataset, and that there\\u2019s room at ICLR for more analysis papers. Indeed, it\\u2019d be nice to see more papers like this.\\n \\nFor this reason, I\\u2019m inclined to recommend accept for this paper. However this paper does have weaknesses, in that the framework proposed could be made more rigorous and formal. Currently it seems rather adhoc and on a task-by-task basis (ie we need to have access to game states or define them ourselves for the task). It\\u2019s also disappointing that it doesn\\u2019t work for recurrent agents, which limits its applicability for analyzing current SOTA deep RL agents. I wonder if authors can comment on possible extensions that would allow for this.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We appreciate the continued discussion with Reviewer 1 and the revised score. Below we address the additional questions raised in Reviewer 1\\u2019s latest comments:\\n\\nGoal of the paper \\u2014 Reviewer 1 asks how the causal graphical model \\u201cimproves learning the agent\\u201d? Our proposed method is intended to improve *explanations* of deep RL agents rather than directly improve agent learning. Better explanations do not directly improve learning, but they are vital for diagnosing errors in agent performance, forecasting how a given agent will perform in new environments, and improving the design of agents and agent training procedures.\\n\\nValue of the graphical model \\u2014 Reviewer 1 is correct that the graphical model is used schematically to describe the methods we propose. We provide the graphical model to be formal and clear about a key distinction between intervening on pixels and intervening on game state. As we note below, such a distinction is the heart of this paper.\\n\\nNovelty of interventions \\u2014 Reviewer 1 notes that many prior methods intervene on pixels and asks \\u201cwhat\\u2019s new here?\\u201d Our primary contribution is to directly compare the inferences supported by intervening on pixels with the inferences supported by intervening on game state. Pixel-level interventions produce images for which the learned network function may not be well-defined and do not guarantee changes in semantic concepts or game state (see Table 1). This distinction is shown clearly by the graphical model. As we state in Section 4: \\u201cWe generate counterfactual conditions by intervening on the RL environment. Prior work has focused on manipulating the pixel input. However, this does not modify the underlying latent game state. Instead, we intervene directly on game state.\\u201d\\n\\nContribution \\u2014 Reviewer 1 asks \\u201care [you] providing new insight that we did not have about the saliency map in RL that we didn't have before?\\u201d Yes, we provide a major new insight that saliency maps cannot be trusted as evidence for causal relationships between semantic concepts and agent behavior. The strong evidence we provide to support this insight is a direct comparison between the inferences made by researchers using saliency maps and those that we can make using direct interventions on game state.\"}",
"{\"title\": \"Thank you for the response and the updated manuscript\", \"comment\": \"Thank you for addressing the points of improvement and major/minor comments raised in my review. I would have given the paper a higher score initially already, if the statistical analysis were present. It is crucially needed to verify the claims made based on the data in the paper. I am happy to see that the statistical analysis nicely confirms the results stated in the original manuscript. Together with the other improvements I am now confident to raise my score.\"}",
"{\"title\": \"not a method but an exploratory analysis paper\", \"comment\": \"I disagree with the authors that section 2 provides an overarching theory.\\n\\nFirst of all, the fact that there is a graphical model (which by the way was only used as schematic but not used for any inference of any kind) doesn't mean there is an overarching theory in the paper. Let me rephrase this: How does a causal graphical model (Fig 7 or Fig 2) helps/improves learning the agent and how do you incorporate it to the learning/inference of your method \\\"algorithmically\\\"? \\n\\nSecond, as the author mentioned in page 3, using the saliency map to change the value of the pixel is not new. Other have done it: (Simonyan et al., 2014), (Zeiler & Fergus, 2014), (Greydanus et al., 2017), (Iyer et al., 2018). --- so what is new here?\\n\\nYes, the saliency map method was published in the ICLR, but the saliency map method is a general tool. Given a BlackBox f, it provides us with importance value for each feature/pixel/... We can deploy this tool on any blackbox. I am struggling with this paper: Given a game with an action set A and state set S and reward function R, (1) Are you proposing a new \\\"Tool\\\" to overcome the limitation of learning? By that I mean a tool that can be applied on any game or at least on a family of game? I don't see that. (2) you are providing new insight that we did not have about the saliency map in RL that we didn't have before?\\n\\nHaving said that, I believe that the paper did a lot of experiments, and has a value an exploratory analysis paper (not a method paper). Perhaps, this what ICLR community wants. So if the AC rejects my vote, I am fine with it. I change my vote to weak reject.\"}",
"{\"title\": \"Summary of Revisions\", \"comment\": \"We made the following revisions to our posted paper:\", \"experimental_details\": [\"Described the architecture and hyper-parameters of the RL model (A2C) employed for the case studies (Section 5 and Appendix B).\", \"Described how saliency was measured in the case studies (Section 4 and 5).\"], \"experiment_results\": [\"Added quantitative results for case studies 2 and 3 in Section 5 and the Appendix (Tables 3, 6, 7, and 8).\", \"Scaled saliency in Figure 5a to be more visible.\", \"Removed example plot in Figure 4a and instead added example saliency maps in Appendix A.\", \"Added a correlation plot to represent the relationship between saliency on score and agent behavior (Figure 4c).\", \"Added more description to captions on the Figures and Tables for clarity.\"], \"discussion\": [\"Added a discussion on the applicability of the proposed methodology to recurrent deep RL agents (Section 6).\", \"Added an extended version of the causal graphical model in Figure 2 for Breakout to demonstrate generalizability (Figure 7 in Appendix D).\"]}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We appreciate Reviewer 3\\u2019s comments. The review has several main points that we address below:\\n\\nDouble aim of the paper \\u2014 Our principal contribution is a new method for empirical evaluation of explanations generated from saliency maps about the behavior of deep RL agents. We intentionally structured our paper to include both a survey of current practice and an application of our proposed approach. Both elements were intended to aid reader understanding. The survey describes the inferences that require evaluation, and the application demonstrates the surprising conclusions supported by the evaluation method. We have attempted to improve our description of this approach (see \\u201cClarity\\u201d below).\\n\\nCompleteness of the survey \\u2014 As we note in section 3, we surveyed 90 papers, each of which cited one or more key papers that described one of four saliency map methods. We would be happy to include additional papers in our survey, expand our survey criteria, or consider additional updates to the survey, if Reviewer 3 could provide specific suggestions. \\n\\nClarity \\u2014 We have added additional details to Section 4 on how saliency is measured. We have also added more quantitative results from the experiments to Section 5 and Appendix E (see Tables 3, 6, 7 and 8), and more details on the model used for training in Section 5 and Appendix B. We hope these additions make the paper more clear, and we welcome additional recommendations.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We greatly appreciate Reviewer 2's extensive comments and suggested improvements. We have incorporated the suggested improvements to the best of our ability during the response period, and we summarize those revisions in a separate official comment. Specifically, we have provided statistics that quantify experimental effects, run and reported the results of hypothesis tests, and provided additional experimental details. Below are some additional notes on specific comments from the review:\\n\\nApplication to recurrent deep RL agents \\u2014 As Reviewer 2 notes, the paper was written with feed-forward deep RL agents in mind. That said, the proposed methodology is post-hoc (i.e., not model-dependent), so aspects of the approach will carry over to recurrent RL agents. Our proposed methodology would not work for repeated interventions on recurrent RL agents due to their capacity for memorization. We have noted this distinction in Section 6.\\n\\nSemantic Invariance \\u2014 We completely agree that the semantic space devised by the agent might be quite different from the semantic space given by the latent factors of the environment. It is crucial to note that this mismatch is one aspect of what plays out when researchers create hypotheses about agent behavior, and the methodology we provide in this work demonstrates how to evaluate hypotheses that reflect that mismatch. \\n\\nPositive Example \\u2014 We agree it would have been ideal to provide an example where the original hypothesis was not rejected. Unfortunately, we exhausted the set of obvious hypotheses for the two games we considered, and all were rejected. We were surprised by these results, but they support the idea that saliency maps are easily misinterpreted. \\n\\nMemory in feed-forward agents \\u2014 The subtlety that Reviewer 2 points about feed-forward agents behaving like recurrent agents by offloading memory into the environment is extremely interesting. Assessing whether memorization leads to different saliency behavior is a fascinating direction for future work.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We appreciate Reviewer 1\\u2019s comments, and address the main points of the review below:\\n\\nGeneralizable contribution \\u2014 The paper makes several contributions: (1) a survey of how saliency maps are currently used to explain the behavior of deep RL agents; (2) a new method to empirically evaluate the inferences made from saliency maps; and (3) an experimental evaluation that uses our proposed method to measure how well saliency maps correspond to the semantic-level inferences of humans. Each of these contributions applies to any use of common saliency-map methods to understand the behavior of deep RL agents learned using feed-forward architectures.\\n\\nOverarching theory \\u2014 As we describe in section 2, our proposed method is based on a formal theory of counterfactual intervention. Though the graphical model in Figure 2 represents an Atari game environment, researchers can reason about interventions in different vision-based RL domains by substituting different content for the state and pixels. We added a specific graphical model for Breakout in the Appendix (Figure 6) to clarify how the generalized causal graphical model in Figure 2 can be specified to a given domain. Section 6 contains additional points on the generalization of the proposed methodology.\\n \\nICLR as a venue for this paper \\u2014 ICLR is a nearly ideal venue for this work. The paper that first introduced saliency maps was published in a 2014 ICLR workshop (Simonyan et al. 2014). Subsequent ICLR papers have introduced new saliency map methods (e.g., Zintgraf et al. 2017) and analyzed these methods (e.g., Ancona et al. 2018). Many studies have critiqued the use of saliency maps in computer vision (Adebayo et al., 2018; Samek et al., 2018; Kindermans et al., 2019), but we are the first to analyze the utility of saliency maps for understanding the behavior of deep RL agents. Finally, while methodological papers are relatively uncommon in machine learning conferences (including ICLR), effective evaluation of learned representations is vital to progress in the field. Saliency maps have become one of the primary methods to visualize the representations learned by deep neural networks, and better understanding the utility of saliency maps is central to understanding their proper role in research. \\n\\nAdebayo et al. \\\"Sanity checks for saliency maps.\\\" NeurIPS 2018.\\n\\nAncona et al. \\u201cTowards better understanding of gradient-based attribution methods for deep neural networks.\\u201d ICLR 2018.\\n\\nKindermans et al. \\\"The (un) reliability of saliency methods.\\\" Explainable AI: Interpreting, Explaining and Visualizing Deep Learning 2019.\\n\\nSamek et al. \\\"Evaluating the visualization of what a deep neural network has learned.\\\" IEEE Transactions on Neural Networks and Learning Systems 2017.\\n\\nSimonyan et al. \\u201cDeep inside convolutional networks: visualising image classification models and saliency maps.\\u201d ICLR Workshop 2014.\\n\\nZintgraf et al. \\u201cVisualizing deep neural network decisions: prediction difference analysis.\\u201d ICLR 2017.\"}",
"{\"title\": \"Waiting for the author's response\", \"comment\": \"While there's still a bit of time left, I'd like to encourage the authors to engage in a discussion with the reviewers as early as possible, and certainly prepare a rebuttal and revised manuscript. Since the other two reviews are quite short and do not provide specific criticism or points to improve, perhaps the other reviewers haven't fully engaged with the paper and the main point did not come across yet.\\n\\nI personally think the paper is well written, has clear aims, and while I agree that there's no central one-line main-result, I think that the more prosaic style of the paper suits the topic really well. If the author's convincingly address improvements a) and b) and respond to the major comments, I am likely to raise my score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper has a double aim. First, it is a survey on saliency maps used in explaining deep reinforcement learning models. Second, it is a proposal of a method that should overcome limitations of the current approaches described in the survey.\\nThis double aim makes the paper hard to understand as the survey is not complete and the model is not well explained.\\nThe main limitations the novel model aims to solve seems to be the production of \\\"falsifiable\\\" hypothesis in the explaination with saliency maps. However, experiments are really hard to follow and it is not clear why this is the case.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Abstract:\\nThe author suggests that saliency maps should be viewed as exploratory tools rather than explanation. The explore this idea in the context of a game.\", \"here_is_my_main_issue\": \"Although I believe there is a value in studies like this. I am not sure ICLR is the right venue for it. The paper is well written but it reads like a long opinion/blog-post. There is no overarching theory or generalizable observation not to mention a solution. \\nYes, I agree that the method of interpreting the black box has a lot of issues and the counterfactual approach/causal approach is probably the right way to go but this is hardly news to the community.\", \"in_short\": \"what the generalizable contribution of the paper?\\n\\nI am open to change my mind if the discussion is convincing.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"[score raised from weak accept to accept due to rebuttal/improvements]\\n\\nSummary\\nThe paper investigates the practice of using pixel-level saliency maps in deep RL to \\u201cexplain\\u201d agent behavior in terms of semantics of the scene. The main problem, according to the paper, is that pixel-level saliency maps often correlate with semantic objects, however turning these correlations into explanations would require counterfactual analysis / interventions, which are almost never performed in practice. The paper highlights this issue with an extensive literature survey, and proposes a simple method to formulate \\u201cexplanations\\u201d found via saliency maps into falsifiable hypotheses. Three experiments show how to apply the methodology in practice - in all three cases pixel-level correlations cannot be easily mapped to semantic-level explanations that hold (counterfactual) validation.\\n\\nContributions\\nThe paper nicely summarizes the main contributions, namely: (i) a literature survey on pixel-saliency methods in deep RL and their use to \\u201cexplain\\u201d agent behavior, (ii) a detailed description of the problem with the latter and a proposal to mitigate the main issues, and (iii) three experimental case-studies to illustrate the problem further and show how the proposed method can help.\\n\\nQuality, Clarity, Novelty, Impact\\nThe paper addresses a highly important issue in the field of interpretable deep learning. The main message is that a lack of scientific rigor, namely stating falsifiable hypotheses and validation of claimed hypotheses, can easily lead to misinterpretation of deep RL systems. This is a somewhat disenchanting message, but I personally think it is important to ensure that this message is heard in the field of interpretable ML in particular, and in the wider deep learning community in general. It is tempting to give simple answers to complex problems, and while I think saliency maps will play a large role in interpreting deep network decisions, I am also convinced that we need causal explanations, which salience maps (currently) cannot provide on a semantic level. The paper is well written and clear, the literature survey is quite extensive and valuable. The experimental results are nice, however they currently crucially lack quantitative statements that back up the qualitative results (see improvements below). While the latter must be included for publication, I am fairly confident that this can be rectified during the rebuttal phase and therefore (tentatively) vote for acceptance.\\n\\nImprovements\\na) Mandatory for publication! Back the results-plots up by numbers! In particular: visually estimating densities / correlations from scatter plots is often impossible and misleading - while the plots are nice to have, the claims regarding Figure 5, 8, 9 (b) and (c) must be backed up by reporting actual correlations / statistical tests. For instance, it is impossible to judge visually whether there\\u2019s any trend in 5 (c). 
Please report correlations for 5, 8, 9 (b) and perform suitable statistical tests for measuring increase/decrease in correlation for 5, 8, 9 (c).\\nSimilarly, please report an appropriate metric to quantitatively judge the difference between the curves in 4, 6, 7 (c). It\\u2019s fine to include tables reporting the quantitative results in the appendix, they don\\u2019t necessarily have to be in the main paper.\\n\\nb) Experimental details. Please report the details required to reproduce the experiments. In particular, what was the precise architecture for A2C and the hyper-parameter settings (particularly since the reference that is cited is not a paper, but a GitHub repo). For the figures, please report how saliency was measured exactly (was there a bounding-box around saliency/enemies? What was its size? Were intensities somehow normalized, were distances to enemies measured between centers of bounding-boxes, \\u2026?)\\n\\nc) (Optional). It would be nice to see an example where the method is used but the original hypothesis is not rejected (i.e. there\\u2019s now stronger evidence for the original hypothesis due to the counterfactual analysis). I understand that this is beyond the scope of the rebuttal, and feel free to completely ignore this.\\n\\n\\nMajor Comments\\nI) Please state whether the paper was written with feed-forward deep RL agents only in mind, or whether the paper is intended to also include recurrent deep RL agents (it would also be helpful to know whether the experiments used a feed-forward, or a recurrent version of A2C). While I think that many aspects carry over from feedforward architectures to recurrent ones, I personally think that some issues with counterfactual analysis could become more intricate with recurrent agents. For instance, on page 6, the described invariance in the first paragraph under \\u2018Counterfactual Evaluation of Claims\\u2019 is fine for feed-forward agents, but could be debatable with recurrent agents. If you agree, please make this distinction clear in the paper (where appropriate) or state that the paper only applies to feedforward agents. If you disagree please indicate this during the rebuttal discussion. \\n\\nII) Page 6, just above Sec. 5: \\u201cSince the learned policies should be semantically invariant under manipulations of the RL environment...\\u201d. I agree that they should ideally be invariant, for the semantic interventions to make sense, but please comment on whether this is a trivial assumption, how this assumption could (in principle) be verified and the potential consequences of this assumption being violated. I personally think that there\\u2019s a fair chance that the semantic space carved up by the agent (that potentially overfits a task/family of tasks) might be quite different from the semantic space given by the latent factors of the environment. This mismatch and its potential interference with the method should be discussed as a current shortcoming.\\n\\nMinor Comments\\nI) A potential subtlety (which I don\\u2019t expect you to resolve/discuss in the paper) is that feed-forward agents in an MDP environment can behave like recurrent agents by offloading memory into the environment. E.g. a breakout agent could \\u201cmemorize\\u201d that it is in \\u201ctunnel-digging mode\\u201d by moving the paddle by a few pixels - this could then potentially shift it\\u2019s saliency away from the actual tunnel to the corresponding pixels around the paddle. 
Such cases might be very hard to interpret via saliency maps or interventional analysis, but I acknowledge that this is perhaps a more exotic case, given the current state of interpretable deep RL. Just a thought for future work perhaps...\"}"
]
} |
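The saliency-map record above repeatedly contrasts intervening on pixels with intervening on latent game state. No code ships with this dump, but the core evaluation loop the authors and Reviewer 2 discuss - quantify saliency on a semantic object, intervene on the state behind it, and statistically test whether behavior changes - can be sketched as follows. Here `env.sample_state`, `env.render`, `intervene`, and `agent.policy` are hypothetical hooks standing in for a simulator that exposes latent state (e.g., a modified Atari emulator); they are assumptions for illustration, not an API from the paper.

```python
import numpy as np

def saliency_on_object(saliency_map, object_mask):
    # Mean saliency inside the object's pixel mask (a scalar summary).
    return float(saliency_map[object_mask].mean())

def counterfactual_effect(agent, env, intervene, n_states=200):
    # Compare the agent's action distribution on original vs. intervened
    # game states. A semantic object that truly drives behavior should
    # produce a large average change under this intervention.
    changes = []
    for _ in range(n_states):
        state = env.sample_state()                     # hypothetical hook
        obs, obs_cf = env.render(state), env.render(intervene(state))
        p, p_cf = agent.policy(obs), agent.policy(obs_cf)
        changes.append(0.5 * np.abs(p - p_cf).sum())   # total variation
    return float(np.mean(changes)), float(np.std(changes))
```

Pairing per-state outputs of `saliency_on_object` and `counterfactual_effect` then supports exactly the correlation and hypothesis tests Reviewer 2 asked for, instead of a visual reading of scatter plots.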
SyljQyBFDH | Meta-Learning Deep Energy-Based Memory Models | [
"Sergey Bartunov",
"Jack Rae",
"Simon Osindero",
"Timothy Lillicrap"
] | We study the problem of learning an associative memory model -- a system which is able to retrieve a remembered pattern based on its distorted or incomplete version.
Attractor networks provide a sound model of associative memory: patterns are stored as attractors of the network dynamics and associative retrieval is performed by running the dynamics starting from a query pattern until it converges to an attractor.
In such models the dynamics are often implemented as an optimization procedure that minimizes an energy function, such as in the classical Hopfield network.
In general it is difficult to derive a writing rule for a given dynamics and energy that is both compressive and fast.
Thus, most research in energy-based memory has been limited either to tractable energy models not expressive enough to handle complex high-dimensional objects such as natural images, or to models that do not offer fast writing.
We present a novel meta-learning approach to energy-based memory models (EBMM) that allows one to use an arbitrary neural architecture as an energy model and quickly store patterns in its weights.
We demonstrate experimentally that our EBMM approach can build compressed memories for synthetic and natural data, and is capable of associative retrieval that outperforms existing memory systems in terms of the reconstruction error and compression rate. | [
"associative memory",
"energy-based memory",
"meta-learning",
"compressive memory"
] | Accept (Poster) | https://openreview.net/pdf?id=SyljQyBFDH | https://openreview.net/forum?id=SyljQyBFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7UnhLzb3A5",
"k897WTvqj6",
"BJlwh7Xnsr",
"Sye2RlX3sH",
"SylSrLlojr",
"rJgjRC1jor",
"SyeMLFkijB",
"HJeE4YJijS",
"ryxTh1ncjr",
"SJgY4RocoH",
"Hye1YmQTqr",
"BkeZXMcccB",
"HyekxA0EcS",
"B1e_L5zpKH"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1577995564386,
1576798728276,
1573823406929,
1573822676262,
1573746237329,
1573744339131,
1573742921789,
1573742892495,
1573728180699,
1573727792631,
1572840311090,
1572672024562,
1572298214998,
1571789392088
],
"note_signatures": [
[
"~Jianwen_Xie1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1629/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1629/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1629/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1629/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1629/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1629/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1629/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1629/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1629/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1629/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1629/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1629/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Missing related reference about Energy-based models using neural networks to approximate the energy function\", \"comment\": \"Dear Authors,\\n\\nCongratulations on your nice accepted paper. \\n\\nI would like to point out some papers that are highly related to your current one, and hope you can cite them in your final version. All of them are about generative models, which are in the forms of energy-based models parameterized by neural nets. \\n\\nThe seminal paper that proposes an energy-based model parameterized by modern deep neural network and learned it by Langevin based MLE is in (Xie. ICML 2016) [1]. The paper also involves theory about the connection with the discriminative ConvNet, Hopfield network, and Contrastive divergence. The model is called \\\"Energy-based\\\" generative ConvNet, because it is naturally derived from the discriminative ConvNet, instead of manually designed.\\n\\n(Xie. CVPR 2017) [2] proposed to use Spatial-Temporal ConvNet as the energy function for video modeling. In the theory part, it firstly provides a self-adversarial interpretation for the MCMC-based learning of the EBM with ConvNet as energy functions.\\n\\n(Xie. CVPR 2018) [3] proposed to use volumetric ConvNet as the energy function for 3D shape patterns generation. It is called 3D descriptor Net. \\n\\n(Gao. CVPR 2018) [4] proposed multi-grid MCMC to learn EBM with ConvNet as energy function. \\n\\n(Nijkamp 2019) [5] proposed short-run MCMC to learn EBM with ConvNet as energy function.\", \"thank_you\": \")\\n\\nReference\\n[1] A Theory of Generative ConvNet. \\nJianwen Xie *, Yang Lu *, Song-Chun Zhu, Ying Nian Wu (ICML 2016)\\n\\n[2] Synthesizing Dynamic Pattern by Spatial-Temporal Generative ConvNet\\nJianwen Xie, Song-Chun Zhu, Ying Nian Wu (CVPR 2017)\\n\\n[3] Learning Descriptor Networks for 3D Shape Synthesis and Analysis\\nJianwen Xie *, Zilong Zheng *, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, Ying Nian Wu (CVPR) 2018 \\n\\n[4] Learning generative ConvNets via multigrid modeling and sampling. \\nR Gao*, Y Lu*, J Zhou, SC Zhu, and YN Wu (CVPR 2018). \\n\\n[5] On learning non-convergent non-persistent short-run MCMC toward energy-based model. \\nE Nijkamp, M Hill, SC Zhu, and YN Wu (NeurIPS 2019)\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Four knowledgable reviewers recommend accept. Good job!\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes\", \"comment\": \"We would like to summarize the changes we made to the initial submission based on the reviewers feedback.\\n\\n1) The training process is clarified.\\n2) A discussion of EBMM in the context of other modern techniques such as GANs and VAEs is added to Section 5.\\n3) Section 6 has been enriched with a more detailed discussion of existing limitations of EBMM and potential ways of improving on them.\\n4) Appendix A.3 has been added where we provide a direct comparison to three different non-memory baselines.\\n5) Comparison to Kanerva Machine has been added as Appendix A.4\\n6) We performed experiments with different distortion models in Appendix A.5. EBMM shows satisfactory behavior with both decreased and increased level of noise.\\n7) We tested the ability to transfer writing and reading capabilities of EBMM on the example of Omniglot and MNIST in Appendix A.6.\\n8) We performed an experiment with the correlated batches in Appendix A.7 and found that training on correlated batches leads to better memory consolidation.\\n\\nWe hope our reviewers and area chairs will find these changes as a significant improvement of the paper and as a comprehensive answer to all raised questions.\"}",
"{\"title\": \"Reply (continued)\", \"comment\": \"> Experiments on correlated batches\\n\\nWe display results for randomly chosen images within the test set. If storing very similar images this can actually make correct reconstruction more difficult, as there is more ambiguity in locating the original image from the occluded query image. We found that if the model was not trained on correlated batches, it does not benefit from them at the test time. However, when trained and tested on batches of 2 Omniglot classes, EBMM achieves significantly lower reconstruction error. Please see the new Appendix A.7 for details. We will be able to provide a comprehensive study for the final version of the paper.\"}",
"{\"title\": \"Reply\", \"comment\": \"> - Dataset / batching details\\n\\nFor Omniglot, Cifar and ImageNet, we used the natural train/test splits in the datasets. We perform meta-learning training for at most 2 million iterations. For Omniglot and Cifar we used a batch size of 64, for ImageNet we select a batch of 32 images at a time. Batches were sampled uniformly. We have made this much clearer in the text now.\\n\\n> - Experiments across multiple SNR and generalization on noise patterns\\n\\nThank you for the suggestion, we have now added Appendix A.5 where we test generalization to different SNR on Omniglot. The model perfectly adapts to lower SNR and is able to perform reasonably well with non-significantly stronger noise.\\nWe will be able to provide a more complete study in the final version of the paper.\\n\\n> - Missing related work\\n\\nThis is indeed a relevant paper, thank you for the suggestion. Training attractor models using implicit differentiation is a promising direction for future work.\\n\\n> - Large batch sizes for ImageNet\\n\\nThis work is very relevant, we now cite it.\\n\\n> - Mentioning Appendix D in the main paper\\n\\nThe appendix is now referenced.\\n\\n> - Related paper at NeurIPS this year \\n\\nWe agree it is a very relevant paper and we already cite it in the Related work section. To avoid any confusion, we would like to emphasize that our works differ in the interface of a memory module. In the MNM it is the key-value retrieval of non-structured vectors, while EBMM is focused on more association problems, e.g. where any part of the stored pattern can be used as a key. \\n\\n> - Comments on scalability\\n\\nScalability can be understood as a multi-dimensional concept. We made an improvement on the expressivity (by adopting modern deep learning techniques) and the speed (by utilizing gradient-based meta-learning) dimensions, but we do not demonstrate yet an advantage over slot-based memory in temporal tasks with incremental updates. Although our early experiments suggest that EBMM is more than viable in this setting, training on long sequences is less straightforward than traditional recurrent models. The papers you suggested as related work can be very helpful to improve on this dimension. We have refined Section 6 to reflect on this.\"}",
"{\"title\": \"Reply\", \"comment\": \"We\\u2019re pleased that the reviewer found the manuscript interesting, novel, and well-written. We have addressed each of the major concerns below.\\n\\n> 1) I am in general not really convinced about the supposed advantages of these attractor memory models (this paper and the earlier Kanerva machine) over more standard and much simpler approaches. \\n\\nThe point of our paper is not to manifest superiority of attractor- or energy-based models in any sense. We use this framework because it naturally allows us to 1) perform associative retrieval and 2) implement fast writing that is compatible with 1). \\nWe agree that both denoising and variational autoencoder frameworks can implement associative retrieval in a some sense, however, only to the extent to which any prior model can do so. In the new Appendix A.3 we evaluate VAE, DAE and Deep Image Prior (which is state of the art denoising model) and find that they perform significantly worse than EBMM and other memory baselines under the relevant test conditions. Moreover, as we explain in the text, in settings where the importance of a prior is less than the importance of memory, these models will ultimately fail. A good example of such setting is the experiment with binary strings in Appendix A.2 where it is clear that since patterns are sampled from a uniform distribution, no prior would implement associative retrieval. \\n\\n> Note that in an autoencoder, reading (inference) is already fast. The authors might point out that writing (training) will not be fast, which is correct. However, the meta-learning phase proposed in this paper will also not be fast and perhaps the fair comparison should be between the meta-learning phase of this paper and the standard training phase of an autoencoder. \\n\\nIt is true that there is a relatively slow meta-learning phase required by our approach. However, unlike denoising autoencoders, once our model has finished training it can read and write very quickly. Autoencoders are capable of storing and denoising data, but require a new slow and expensive training procedure each time one wishes to store new data. Depending on the intended use case one may prefer one or the other approach. For example, variants of our EBMM approach may be useful in the case that we wish to quickly and cheaply store data into a relatively volatile memory of the recent past (e.g. as when training an RL agent). Our approach is not strictly better than autoencoders. Rather, each has potential use cases that depend on downstream goals. \\n\\nWith respect to the comment about model class it should be noted that we can take advantage of very large and arbitrarily configured networks which included inductive biases (e.g. convolutional structure) if desired. Thus, our models can make use of large, structured, feedforward networks in the inner loop, and are thus able to learn about the rich structure in images and other complex data types.\\n\\n> 2) Currently, the paper only uses a specific type of \\u201cblock-noise\\u201d corruption. One thing that would be nice to see is some results with other noise models. \\n\\nAgreed. We have now run experiments and report results with a variety of different noise models, please refer to Appendix A.5.\\n\\n> 3) It would be good to say something about the meta-learned parameters, theta_bar, r, tau. Is there any meaningful structure in these parameters that distinguishes them from their random initial values? 
\\n\\nThese parameters have very different roles, so it is difficult to say which ones are more important. Theta_bar is the initialization for the writing process and meta-learning these is crucial, just as in any other MAML-like model. Note that \\\\bar{\\\\theta} is many orders of magnitude larger in size than r and \\\\tau. Indeed, as the initial configuration for our neural network, \\\\bar{\\\\theta} contains rich structured information about the data domain. \\nUsing generic step size decay rules for \\\\gamma and \\\\eta works, but the stability of training greatly improves if these parameters are trained, see e.g. \\u201cAntoniou, A., Edwards, H., & Storkey, A. (2018). How to train your MAML. arXiv preprint arXiv:1810.09502\\u201d.\"}",
"{\"title\": \"Reply (continued)\", \"comment\": \"> 8- For the chosen tasks, I am curious to see the experimental comparison to deep image prior (Ulyanov, 2018). Deep image prior would be very similar to the read operator (although the gradient descent is over the parameter of model) without having write operations when you define the energy as MSE.\\n\\nWe agree that this indeed is a relevant baseline and we performed a series of experiments with non-memory baselines, including Deep Image Prior. Please refer to Appendix A.3 for the quantitative study. In short, they performed strictly worse than models with memory because while a prior model can produce a plausible reconstruction it is not very helpful for the task of exact recall. In the case of Deep Image Prior it is important to note that it requires privileged information about location of the occluded area (equation 6), while we work without this assumption. Without this information, the model gets confused even by the relatively simple salt and pepper distortion.\\n\\n> Typos and writing style:\\n\\nThank you! We have fixed all of these typos and style issues.\"}",
"{\"title\": \"Reply\", \"comment\": \"> equations (1) - (5)\\n\\nYes. Another way to view our approach is as follows: we meta-learn a set of initial parameters for our neural network, \\\\bar{\\\\theta}. These initial parameters correspond to an initial energy landscape. Meta-learning insures that from this point in parameter space it is easy to make only a handful of gradient updates to produce a new energy landscape that effectively stores a new batch of data into memory. Once they are stored, it is possible to retrieve memories by inputting a query and then descending the energy function to retrieve an associated memory.\\n\\n> 1- The connection to meta-learning is unclear in your experiments. Can you elaborate on that? \\n\\nMeta-learning is used in our model to learn a good set of starting parameters \\\\theta, from which it is easy to quickly write memories into a network via gradient descent.\", \"another_way_to_say_this_is\": \"In the outer learning loop we learn the initial parameters \\\\bar{\\\\theta}, and in the inner loop we optimize the parameters to minimize the writing loss for the memories we want to store for the current batch/episode. Thus, the model learns to get good at quickly learning (or storing) new memories.\\n\\n> 2- The expectation in eq 5 is over different input patterns, which I assume that a set of input patterns belong to a task. What is that you write in memory? For each experiment, what are the different input patterns (tasks) that you have written in the memory?\\n\\nOur explanation of this process was confusing. Thank you for pointing out the issue here. We have fixed the explanation and mathematical notation around equations 4 & 5 to make this easier to follow. We refer the reviewer to this updated section of the text for a detailed explanation.\\nBriefly, we write a batch of N patterns into memory. These N patterns are sampled randomly from a larger dataset of training (or testing - during evaluation) patterns. Then we construct a reconstruction loss as a squared difference between the originally stored patterns and patterns retrieved from randomly distorted queries. This loss can now be used to compute a stochastic gradient that updates all parameters (theta_bar, r and tau).\\n\\n> 3- What is the \\\\theta that is feed to the read function at the test time? \\n\\nThe \\\\theta used by the read function at test time is created as follows:\", \"we_start_with_the_parameters_of_the_model_that_have_been_meta_learned\": \"\\\\bar{\\\\theta}.\", \"we_update_these_parameters_using_the_batch_of_data_to_be_stored_using_the_writing_procedure_given_by__eq_4\": \"\\\\theta^{T}\\nWe then test what the model can remember by querying it via the read operation.\\nCrucially, the data stored via step #2 has never been seen at test time.\\n\\n> Are you testing on the tasks that you already trained on?\\n\\nWe test on a set of held-out data, respecting the original train/test splits in Omniglot and ImageNet. The general task of storing data and retrieving corrupted examples is consistent during training and test.\\n\\n> How this approach generalizes to unseen (or relatively close) task? \\n\\nAs with many deep learning approaches, our models can capture the underlying statistics of the kinds of data that they are trained on (e.g. the structure of natural images as in the case of ImageNet). Our model learns general initial network parameters \\\\bar{\\\\theta} from which it can quickly and easily store data in a compressed format. 
\nIn the new Appendix A.6 we verify that the Omniglot model successfully transfers to MNIST data.\\n\\n> 6- Can it recover any query that is not constructed with respect to the distortion model that it is trained on? Or what happens if the distorted image at test time comes from a different distortion model? (image blocking, for example)\\n\\nAs we show in the new Appendix A.5 - to some extent, yes. The model perfectly generalizes to smaller levels of noise and performs reasonably well with larger levels. We did not observe generalization to a different distortion model, which would indeed be a nice property, but such generalization is arguably difficult to expect with respect to a distortion model completely unknown during training. \\nNote that it is straightforward to use our approach to learn more general storage procedures by training across a distribution of distortion models.\\n\\n> 7- How many distorted samples are used for training?\\n\\nWe trained the model for at most 2 million iterations for all of our experiments (Appendix A). For the ImageNet experiment we trained with 32 images per iteration, so the model trained on 64M distorted images.\"}",
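To make the write-then-read procedure described in this thread concrete, here is a minimal, self-contained sketch of gradient-based writing and reading in a parametric energy model, with a MAML-style outer loop over the initial parameters. Everything here - the tiny MLP energy, the simplified writing loss (energy plus a gradient-norm penalty), the dimensions, and all step sizes - is an illustrative assumption of ours, not the architecture or loss actually used in the paper.

```python
import torch

torch.manual_seed(0)
D, H, N = 8, 16, 4                     # pattern dim, hidden units, batch size

def energy(theta, x):
    """Scalar energy per pattern for a tiny MLP energy model; x: (N, D)."""
    W1, b1, w2 = theta
    h = torch.tanh(x @ W1 + b1)        # (N, H)
    return (h @ w2).pow(2)             # (N,)

def write(theta, patterns, steps=5, gamma=0.1, lam=0.1):
    """Store `patterns` via a few gradient steps on theta.
    Simplified writing loss: low energy at the patterns plus a small
    gradient-norm penalty so that they sit near local minima."""
    for _ in range(steps):
        x = patterns.detach().requires_grad_(True)
        e = energy(theta, x)
        gx = torch.autograd.grad(e.sum(), x, create_graph=True)[0]
        loss = e.mean() + lam * gx.pow(2).sum(-1).mean()
        grads = torch.autograd.grad(loss, theta, create_graph=True)
        theta = [p - gamma * g for p, g in zip(theta, grads)]
    return theta

def read(theta, query, steps=10, eta=0.5):
    """Retrieve by descending the energy with respect to the query."""
    x = query.requires_grad_(True)
    for _ in range(steps):
        gx = torch.autograd.grad(energy(theta, x).sum(), x,
                                 create_graph=True)[0]
        x = x - eta * gx
    return x

# Outer (meta) loop: optimize the initial parameters theta_bar so that
# write-then-read reconstructs stored patterns from distorted queries.
theta_bar = [(torch.randn(D, H) * 0.1).requires_grad_(),
             torch.zeros(H, requires_grad=True),
             (torch.randn(H) * 0.1).requires_grad_()]
opt = torch.optim.Adam(theta_bar, lr=3e-3)

for it in range(300):
    X = torch.randn(N, D)                      # fresh batch to memorize
    theta_w = write(theta_bar, X)              # inner-loop "write"
    X_hat = read(theta_w, X + 0.3 * torch.randn(N, D))
    meta_loss = ((X_hat - X) ** 2).mean()      # reconstruction of memories
    opt.zero_grad(); meta_loss.backward(); opt.step()
    if it % 100 == 0:
        print(it, float(meta_loss))
```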
"{\"title\": \"Reply\", \"comment\": \"Thank you for your review. We agree that a discussion about our model and modern models such as GANs and VAEs was somewhat missing in our initial submission. We have added a paragraph in the Related work section to better position EBMM within modern deep learning and also performed a comparison with a number of non-memory baselines (Appendix A.3). We hope this confirms both conceptual and empirical contributions of our paper.\"}",
"{\"title\": \"From authors\", \"comment\": \"We would like to thank our reviewers for their time and valuable feedback, many of the comments helped us to improve the paper and obtain more results. We will soon be replying directly to each of the reviewers. Some of the requested experiments are still running and we will be updating the paper with new results.\\nWe believe we positively addressed most of the feedback and ask the reviewers to assess our replies.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThis paper proposes a new type of energy-based models, a class of non-normalized generative models that relies on an energy function to retrieve patterns that correspond to its minima. The goal that is tackled by the authors is to implement an associative memory system, i.e. a mechanism that is able to retrieve any one of a set of patterns, given a distorted copy of these patterns. This task is traditionally carried out using attractor neural networks like the Hopfield model, a recurrent neural network model endowed with a learning rule that allows it to quickly embed a given set of patterns in its weight matrix such that the patterns become stable fix points of its dynamics. As the authors point out though, models like the Hopfield model are limited in their capacity to assimilate attractor patterns and in term of their expressiveness. On the other hand, more complex models based on deep architectures trained with gradient descent are slow at updating their weights to create new attractors.\\nThe authors propose a new method to make up for the weaknesses of these two approaches. Their method is based on meta-learning, and in short consists in meta-training an energy function parametrized as a neural network such that executing a write dynamics on the weights results in a model whose read dynamics (a gradient descent on the energy function) is able to denoise distorted inputs and retrieve the original ones. In practice, the write dynamics is obtained as a gradient descent procedure on a writing loss (which is itself dependent on the energy function) as a function of the weights. The meta-learning procedure minimizes the discrepancy between the original patterns and the retrieved ones by optimizing end-to-end the learning schedule parameters and initial conditions of the weights, analogously to gradient-based meta-learning methods like MAML.\\nThe authors then carry out a series of experiments to check that their model is competitive with Memory-Augmented Neural Network (MANN) and Memory Networks (MemNets) in retrieving samples from Omniglot, CIFAR and ImageNet, in terms of retrieving abilities for a given memory size. In the supplementary material section they in addition compare their model's performance against the Hopfield model and recurrent networks on the classical toy task of retrieving random binary patterns, also with good results for the new model.\", \"decision\": \"This paper is very clearly and compactly written. The idea of training an energy-based model through gradient-based meta-learning seems novel and innovative. \\nOne thing that the the paper is arguably missing, is a convincing motivation section for focusing on energy-based models. The panorama of generative models has radically changed since attractor neural networks and energy-based models were first introduced. At the time powerful methods like variational autoencoders, normalizing flows and GANs didn't exist. But nowadays, one could arguably expect that energy-based models should be contextualized and motivated in the perspective of comparing them with these new breeds of deep generative models. 
I am absolutely not suggesting that the authors should provide experimental comparisons between their model and GANs or VAEs, but simply that they position their model relative to these styles of generative modeling in terms of advantages, disadvantages, use cases, and potential for applications.\"}",
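Since the review above contrasts EBMM with the classical Hopfield model, here is a small, self-contained sketch of that baseline: one-shot Hebbian (outer-product) writing and asynchronous sign-threshold reading. The sizes and the 10% distortion level are arbitrary choices for illustration; the well-known capacity limit of roughly 0.138*D patterns for this rule is one of the limitations the review alludes to.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 100, 5                      # neurons, stored patterns (N << 0.138 * D)
P = rng.choice([-1.0, 1.0], size=(N, D))

# One-shot Hebbian "write": outer-product rule with zero diagonal.
W = (P.T @ P) / D
np.fill_diagonal(W, 0.0)

def read(query, sweeps=20):
    """Asynchronous updates: each sweep updates all neurons in random order."""
    x = query.copy()
    for _ in range(sweeps):
        for i in rng.permutation(D):
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x

# Distort 10% of the bits of pattern 0 and try to retrieve it.
q = P[0].copy()
flip = rng.choice(D, size=10, replace=False)
q[flip] *= -1.0
print("bits recovered:", int((read(q) == P[0]).sum()), "/", D)
```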
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"*Summary:*\\n\\nThe authors propose to tackle the associative memory problem by recasting read/write operations to read/write by optimizing the parameters/input of an energy based model. Writing is reformulated as training a parametric energy model (EBMM) to have local minima of energy w.r.t. the parameters at memorized data points. Reading is performed by performing (projected) gradient descent on the corrupted/incomplete input to minimize the energy. To ensure the operations are fast (read and write with minimal gradient steps), the authors propose to take inspiration from modern meta-learning literature and learn initialization parameters of the energy model (and other hyperparameters for GD during read/write) from which writing is fast while ensuring reading is also fast, since the models are trained to maximize read/write performance within a constrained number of gradient steps. Experimentally, the authors show that EBMM reading performs similar to baseline methods (but better across many memory sizes) on the standard Omniglot task. On CIFAR-10 and downsized ImageNet, they show much better L2 reconstruction error of corrupted images. They also show that the learnt energy \\n\\n*Recommendation:*\\n\\nI believe this is a very neat idea, and utilizes large parametric models for \\\"smart\\\" overfitting and compression of data for the associative memory task. The proposed meta-learning approach to training the model seems to perform well across multiple simple and challenging datasets, and therefore I would recommend accept. My current recommendation is very borderline (weak accept) because of a lack of some experimental rigour (which I would love clarifications on), and missing related work, which I mention below.\\n\\n*Discussion Points and Concerns from the Reviewer:*\\n\\n- Dataset / batching details \\nPlease mention how the datasets were split for training and testing the models. How much training data is utilized to meta-learn the EBMM initialization? How is batching performed? I believe these details are very important to mention in the paper for reproducibility of results. \\n\\nAre there any correlations in the batch selection? Can you evaluate how good the associative memory performs across different correlation levels in the batch (A well learnt algorithm should demonstrate better reconstruction at lower memory levels for correlated batches). \\n\\n- Experiments across multiple SNR and generalization on noise patterns\\nThe authors mention at the beginning of Section 4 that a random block is corrupted, but in the end the experiments are done on a constant corruption size on the CIFAR and ImageNet images. How do the models perform across different signal-to-noise ratios? Similarly, the model is trained on simple noise patterns \\n\\n- Missing related work\\nThere is related work [1] in learning in Hopfield Networks using the implicit function theorem and finding stationary points of the dynamics. This work is not mentioned in the paper, and is a valid baseline for this paper as well.\\n\\n- Mentioning Appendix D in the main paper\\nAppendix D is not mentioned in the main paper and has a short discussion on the mismatch between the reading process and the writing loss during meta-training. 
It also mentions additional tricks required for training, and I believe it should be referenced in the main paper, as the other sections appropriately are. \\n\\n- Large batch sizes for ImageNet\\nWork from [2] can be utilized to backpropagate through very long optimization sequences and therefore can be used to train with larger batch sizes in the ImageNet example. It is important to see how the small model used for ImageNet scales to compressing larger batch sizes, as that is one of the major practical issues with the algorithm.\\n\\n- Related paper at NeurIPS this year \\n[3] is a related paper from NeurIPS this year, which the authors could consider citing as contemporary work.\\n\\n- Comments on scalability\\nAssociative memory papers have often been criticized for a lack of scalability, and I think the authors make progress towards improving this with the use of unconstrained energy models in the learning process. It would be nice to have a discussion of scalability from the authors, highlighting issues in the current model and future directions.\", \"references\": \"[1] Reviving and Improving Recurrent Back-Propagation, ICML '18\\n[2] Gradient-based Hyperparameter Optimization through Reversible Learning, Maclaurin et al. ICML '15\\n[3] Metalearned Neural Memory, Munkhdalai et al. NeurIPS '19\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Thanks for the extensive answers. I updated my rating based on the provided clarification and extra experiments.\\n\\n=====================\\nMy understanding of eq 1-5 is that the algorithm finds an energy landscape (by modifying \\\\theta) for each dataset (task) such that in this landscape, the inputs from the distribution are reachable by truncated gradient-descent initiating at a query (distorted input with respect to some distortion model).\\n\\n1- The connection to meta-learning is unclear in your experiments. Can you elaborate on that? \\n\\n2- The expectation in eq 5 is over different input patterns, which I assume that a set of input patterns belong to a task. What is that you write in memory? For each experiment, what are the different input patterns (tasks) that you have written in the memory?\\n\\n3- What is the \\\\theta that is feed to the read function at the test time? \\n\\n4- Are you testing on the tasks that you already trained on?\\n\\n5- How this approach generalizes to unseen (or relatively close) task? \\n\\n6- Can it recover any query that is not constructed with respect to the distortion model that is trained on? or what happens if the distorted image at test times comes from a different distortion model? (image blocking, for example)\\n\\n7- How many distorted samples are used for training?\\n\\n\\n8- For the chosen tasks, I am curious to see the experimental comparison to deep image prior (Ulyanov, 2018). Deep image prior would be very similar to the read operator (although the gradient descent is over the parameter of model) without having write operations when you define the energy as MSE.\", \"typos_and_writing_style\": \"-- The expectation in eq 2 should independently show what the expectation is taken with respect to. \\n-- input patters -> input patterns\\n-- figure 3 -> Figure 3\\n-- section 1 -> Section 1\\n-- 4 random images -> four random images \\n-- the Figure 5b -> Figure 5b\\n-- Models such as (Ba et al., 2016; Miconi et al., 2018) enable -> Models such as Ba et al. (2016) and Miconi et al. (2018) enable\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"======================================== Update after revisions ============================================\\n\\nI appreciate the effort the authors have put into the revision and the rebuttal. I'm happy to increase my score and recommend acceptance based on the revised paper. \\n\\nHowever, I have to say that some of my worries still linger. With respect to non-memory baselines, the authors have responded that the memory based models will outperform non-memory models in cases where prior structure is less important than memory and provided a demonstration of this with an extreme example, i.e. a case with no structure (random binary strings example). I understand the point being made here, but this is a rather pedantic and uninteresting example. The authors have provided another (more interesting) example in Appendix A3 and shown that the memory-based model outperforms some simpler baselines such as the DAE even in this case. But no explanation is given for this result. Why is the memory based model outperforming the DAE in this case, given that this is an example where prior *is* very important? I'm a bit worried that the DAE results may perhaps be due to a non-optimized architecture or training setup (and what exactly is the architecture used for the DAE here)? I would appreciate it if the authors could clarify these issues in the final version.\\n\\nI have also spotted several typos. For the final version, please make sure to go through the paper thoroughly at least once and fix all the typos.\\n\\n========================================================================================================\\n\\nThis paper proposes a meta-learning approach to learning fast read and write mechanisms in an energy-based model so that a given set of images can be quickly inducted into memory and retrieved from memory with noisy queries. The paper is well-written and the proposed approach seems interesting and novel enough. However, I have some concerns about the paper that need to be addressed. Here are the main issues for me:\\n\\n1) I am in general not really convinced about the supposed advantages of these attractor memory models (this paper and the earlier Kanerva machine) over more standard and much simpler approaches. For example, for the problem of retrieval from noisy queries, a more standard approach would be a simple autoencoder. Note that in an autoencoder, reading (inference) is already fast. The authors might point out that writing (training) will not be fast, which is correct. However, the meta-learning phase proposed in this paper will also not be fast and perhaps the fair comparison should be between the meta-learning phase of this paper and the standard training phase of an autoencoder. Note that the autoencoder will have additional benefits. For example, with the autoencoder, one is not constrained by memory storage requirements and can make use of a much larger set of images to train the model. This allows the model to learn a richer structure in images. Moreover, with a large enough feedforward net, one can approximate arbitrarily complex dependencies in images. 
In attractor memory models, on the other hand, one necessarily restricts oneself to a particular model class that can be expressed as gradient descent dynamics in an energy landscape both during reading and writing. This seems overly restrictive to me. So, perhaps, the authors can clarify the supposed advantages of these attractor memory models a bit better. For example, I would be interested in seeing some comparative results with, say, a denoising autoencoder model.\\n\\n2) Currently, the paper only uses a specific type of \u201cblock-noise\u201d corruption. One thing that would be nice to see is some results with other noise models. I think this is important to demonstrate that the approach is general enough to handle different kinds of noise. Also, salt-and-pepper noise would allow the authors to compare their results with the dynamic Kanerva machine (the authors note that the DKM failed to train successfully for the block-noise used here). \\n\\n3) It would be good to say something about the meta-learned parameters, theta_bar, r, tau. Is there any meaningful structure in these parameters that distinguishes them from their random initial values? Is one of these parameters more important than the others? For example, what happens if you just use generic step size decay rules for gamma and eta (or perhaps no decay at all)?\"}"
]
} |
Syx9Q1rYvH | Mutual Information Maximization for Robust Plannable Representations | [
"Yiming Ding",
"Ignasi Clavera",
"Pieter Abbeel"
] | Extending the capabilities of robotics to real-world complex, unstructured environments requires developing better perception systems while maintaining low sample complexity. When dealing with high-dimensional state spaces, current methods are either model-free, or model-based with reconstruction-based objectives. The sample inefficiency of the former constitutes a major barrier for applying them to the real world. While the latter present low sample complexity, they learn latent spaces that need to reconstruct every single detail of the scene. Real-world environments are unstructured and cluttered with objects. Capturing all the variability in the latent representation harms its applicability to downstream tasks. In this work, we present mutual information maximization for robust plannable representations (MIRO), an information-theoretic representation learning objective for model-based reinforcement learning. Our objective optimizes for a latent space that maximizes the mutual information with future observations and emphasizes the relevant aspects of the dynamics, which allows it to capture all the information needed for planning.
We show that our approach learns a latent representation that, in cluttered scenes, focuses on the task-relevant features and ignores the irrelevant aspects. At the same time, state-of-the-art methods with reconstruction objectives are unable to learn in such environments. | [
"reinforcement learning",
"robust learning",
"model based",
"planning",
"representation learning"
] | Reject | https://openreview.net/pdf?id=Syx9Q1rYvH | https://openreview.net/forum?id=Syx9Q1rYvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"v1KObiYty_",
"HklgKJQ3jr",
"B1lHIkXnoB",
"Bkg6m1X2sr",
"B1x4s2u0Fr",
"Hyl7bVL0tr",
"r1eaHySCFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728246,
1573822328449,
1573822284818,
1573822244604,
1571880092446,
1571869690829,
1571864389057
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1628/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1628/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1628/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1628/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1628/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1628/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The manuscript concerns a mutual information maximization objective for dynamics model learning, with the aim of using this representation for planning / skill learning. The central claim is that this objective promotes robustness to visual distractors, compared with reconstruction-based objectives. The proposed method is evaluated on DeepMind Control Suite tasks from rendered pixel observations, modified to include simple visual distractors.\\n\\nReviewers concurred that the problem under consideration is important, and (for the most part) that the presentation was clear, though one reviewer disagreed, remarking that the method is only introduced on the 5th page. A central sticking point was whether the method would reliably give rise to representations that ignore distractors and preferentially encode task information. (I would note that a very similar phenomenon to the behaviour they describe has been empirically demonstrated before in Warde-Farley et al 2018, also on DM Control Suite tasks, where the most predictable/controllable elements of a scene are reliably imitated by a goal-conditioned policy trained against a MI-based reward). The distractors evaluated were criticized as unrealistically stochastic, that fully deterministic distractors may confound the procedure; while a revised version of the manuscript experimented with *less* random distractors, these distractors were still unpredictable at the scale of more than a few frames.\\n\\nWhile the manuscript has improved considerably in several ways based on reviewer feedback, reviewers remain unconvinced by the empirical investigation, particularly the choice of distractors. I therefore recommend rejection at this time, while encouraging the authors to incorporate criticisms to strengthen a resubmission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their thoughtful and constructive comments. As suggested by the reviewers, we ran additional experiments and clarified sections of the paper. The additional experiments and modifications are: 1) Added a section with realistic distractors, 2) run several ablations, such as architectural choices, ablating the mutual information in the objective, and sensitivity to different prediction horizon; 3) extended the related work section, and 4) added the hyperparameters and network configuration.\\n\\nIn the following, we address the specifics of the reviewer. Specifically, we aimed to address all the suggestions that the reviewer pointed.\\n1. We added a new section 6.2 and extended our evaluation to more realistic and predictable distractors that follow a coherent motion. We investigated two schemes: 1) train on distractors that randomly chooses positions between frames and test on predictable distractors, and 2) train and test on predictable distractors. The results show that our approach successfully performs the tasks when predictable distractors are present, while (1) having better performance than (2). This result is not surprising: since we are maximizing the mutual information, our learned latent space will ignore components of the image with high entropy.\\n2. We added in the appendix an ablation study that, among other ablations, evaluates our method when just learning the reward prediction. This baseline completely fails at performing the task due to the weak signal that the reward provides.\\n3. We have modified section 4.2 to clarify the questions and improved readability, and the fixes for the reviewer questions:\\n3.A. It should be indeed a latent, and thus not shaded. Since we parameterize $s_t$ with a Gaussian distribution, the KL divergence can be calculated with closed form. \\n3.B. Yes, as depicted in Figure 3. It should be $s_t$ and not $\\\\hat{s}_{t+k}$.\\n3.C. The scale is number of rollouts. The horizon of all the tasks are H=1000, so each rollout corresponds to 1000 environment interactions.\\n4. We have added on the appendix a hyperparameter section that should contain all the details of the architecture. \\n5. We have extended the related work section to include the mentioned citations and other relevant related work.\\n\\nWe hope that with these clarifications and further analysis of our method the reviewer will consider our work for acceptance.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their thoughtful and constructive comments. As suggested by the reviewers, we ran additional experiments and clarified sections of the paper. The additional experiments and modifications are: 1) Added a section with realistic distractors, 2) run several ablations, such as architectural choices, ablating the mutual information in the objective, and sensitive to different prediction horizon; 3) extended the related work section, and 4) added the hyperparameters and network configuration.\\n\\nIn the following, we address the specifics of the reviewer. First, the weaknesses; second, the question; and lastly, the comments.\", \"weaknesses\": \"1. It was our aim to motivate this in section 4.1. We have clarified the text and discussed why the mutual information objective is expected to pay less attention to distractors. The intuition is: although encoding more information in the latent space increases mutual information across time steps, when the latent space has limited capacity, it is incentivized to pivot to parts of the state space that contributes most information gain and disregard the elements that brings little information gain (distractors).\\n2. Our assumption is that task relevant information are elements in the scene that can be altered by agent actions.\\n3. We added a new section 6.2 and extended our evaluation to more realistic and predictable distractors that follow a coherent motion. We investigated two schemes: 1) train on distractors that randomly chooses positions between frames and test on predictable distractors, and 2) train and test on predictable distractors. The results show that our approach successfully performs the tasks when predictable distractors are present, while (1) having better performance than (2). This result is not surprising: since we are maximizing the mutual information, our learned latent space will ignore components of the image with high entropy.\\n4. We have added on the appendix a hyperparameter section that should contain all the details of the architecture. Regarding the specific questions of the reviewer, the horizon for NCE objective is h=3. We used a vanilla RNN with a probabilistic latent space.\\n5. In the appendix, we have also incorporated an ablation study that studies architectural choices, variation of loss functions, and sensitivity to prediction horizon.\", \"questions\": \"1. The information theoretic notation on random variables is used in the standard literature (for instance see [1]). Since we model the reward as a Gaussian variable, we try to minimize the KL divergence between in our reward predictor and the true reward in order to train an accurate reward predictor. In practice, this amounts to train the reward predictor by maximum likelihood.\\n2. The objective is summed across time-steps. One new trajectory is sampled every 5000 gradient steps taken. The actions are determined by CEM planner based on the current model (plus exploratory noise) . We have modified the paper to reflect it.\", \"comments\": \"1. We fully agree with the reviewer, and we have removed the claim in the introduction.\\n2. We will modify the title to be more descriptive.\\n\\nWe hope that with these clarifications and further analysis of our method the reviewer will consider our work for acceptance.\\t\\n\\n\\n[1] Elements of Information Theory. Thomas M. Cover and Joy A. Thomas.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their thoughtful and constructive comments. As suggested by the reviewers, we ran additional experiments and clarified sections of the paper. The additional experiments and modifications are: 1) Added a section with realistic distractors, 2) run several ablations, such as architectural choices, ablating the mutual information in the objective, and sensitive to different prediction horizon; 3) extended the related work section, and 4) added the hyperparameters and network configuration.\\n\\n>The paper overlooks a good part of the related work on extending VAEs to sequence data\\nWe have extended the related work section to include the previous work mentioned by reviewer 1, and more references regarding sequential VAEs.\\n>The method is only introduced in the 5th page\\nWe motivate one of the key components of our method on page 3. The method is introduced in page 4 and 5.\\n\\n> No ablation study is conducted\", \"an_appendix_has_been_added_to_the_paper_that_contains_an_ablation_study_of_different_architecture_choices\": \"deterministic versus probabilistic latent space, choice of incorporating the actions into the CPC objective, and the removal of the mutual information objective. We also evaluate the sensitivity of our method to the CPC horizon. This ablation study underpins the choice of our architecture.\\n\\n>The experiments are in my opinion not convincing\\nApproaches that use shooting with model-predictive control, while effective in low-dimensional domains they do not scale up well to more complex domains. Our evaluated domains are the ones commonly tested when planning with such methods from state-space[1], and images [2].\\n\\nWe hope that with these clarifications and further analysis of our method the reviewer will consider our work for acceptance.\\t\\n\\n\\n[1] Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models. Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine\\n[2] Learning Latent Dynamics for Planning from Pixels. Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, James Davidson.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"########### Post-rebuttal summary ############\\nThe proposed method relies on the fact that the distractors have highly unpredictable movements such that a mutual information objective between frames of a sequence learns to ignore them. The experimental evaluations are performed with distractors that randomly change positions / directions, which fits the requirements of the method but not necessarily the behavior of distractors in real environments. To properly evaluate this, experiments in environments with more realistic distractors (who's dynamics are not chosen by the authors) are necessary. Therefore, I do not recommend acceptance of this submission.\\n########################################\\n\\n\\n## Paper Summary\\n\\nThis paper combines a mutual information maximisation objective a la CPC with objectives for dynamics and reward prediction to learn a representation for downstream planning / skill learning that is more robust to visual distractors than comparable representations learned with reconstruction-based objectives a la VAE. In particular, the MI objective maximises the mutual information between the representation of the current state and future observations. The authors show improved robustness of their representation to visual distractors over a baseline with pixel-reconstruction-based representation learning (PlaNet). As a result they are able to achieve better performance when model-predictive control is used on top fo the learned representation to perform control on simulated DM Control Suite tasks with added simple visual distractors.\\n\\n\\n## Strengths\\n\\n- the paper addresses a relevant problem with an intuitive approach\\n\\n- the paper is well written and easy to read\\n\\n- the analytic toy experiments at the end of Sec 4.2 and in 6.2 help understand the properties of the learned representation\\n\\n- the proposed method, when applied to the model-based algorithm PlaNet, shows improved robustness in settings with artificially introduced visual distractors in simulated DM Control Suite tasks\\n\\n\\n## Weaknesses\\n\\n(1) unclear whether artificial distractors are indicative of behavior with real distractors: the fact that distractors do not follow coherent motions but instead randomly change position between two consecutive frames makes it hard to estimate how this would perform on more natural distractors. As there is no MI between the distractor's position in one image and any other image, a CPC-style representation learning objective will naturally be encouraged to ignore them. However, if they move with more natural, coherent motion this might not be the case. I would suggest to add an experiment with more natural distractor motion (see below).\\n\\n(2) it is unclear how much of the invariance to distractors is coming from the MI objective and how much simply from the fact that the representation is learned with jointly predicting the reward. In order to justify that the MI objective is helpful for learning the representation I would suggest to run an ablation experiment that trains using *only* the reward prediction objective and compare performance. 
(see also suggested experiment below)\\n\\n(3) parts of the model formulation require clarification: when reading Sec 4.2 that describes the model some parts were unclear to me (see concrete questions below). \\n\\n(4) the amount of detail provided in the paper is insufficient for reproducing the results: the paper lacks detailed information about the used architectures, hyperparameters and versions of the baselines employed (e.g. PlaNet with stochastic/deterministic prediction?), code is not provided.\\n\\n(5) lacks reference to recent work on CPC-style objectives for RL (see suggested references below)\\n\\n\\n## Questions\\n\\n(A) why is the latent state variable s_t observed in the model depicted in Fig 3 (i.e. part of the input data)? Shouldn't this variable be latent? How can the KL constraint on it be computed if it is not observed in the input data?\\n(B) should the first term on the right hand side of equation (2) have \\\\hat{s}_t instead of \\\\hat{s}_{t+h}? Otherwise, how is the I_{NCE} computation conditioned on the current state s_t?\\n(C) what is the scale of the x-axis in Fig 5, i.e. does it show the number of environment rollouts / steps or the number of model re-training iterations? if the latter is the case, how does that translate to the number of environment interactions?\\n\\n\\n## Suggestions to improve the paper\\n\\n(for 1) add an experiment where the distractor has more natural dynamics so that there is MI between the distractor positions in consecutive frames (e.g. ball bouncing in the image frame in the background instead of randomly jumping to new positions)\\n(for 2) add an experiment with a reward-prediction only baseline, i.e. only action-conditioned reward prediction so that task-irrelevant parts are ignored by default (i.e. also no reconstruction objective, but also no MI objective) -> show how PlaNet performance compares to the so far reported numbers when using this representation for planning\\n(for 4) add details about the architecture, hyperparameters and training schedule, for both the method and all comparisons to the appendix\\n(for 5) add references to related works that use CPC-style MI objectives for representation learning in the context of RL/skill learning: \\n\\t- [1] Nachum et al., ICLR 2019 -- applies CPC-style objective to hierarchical RL setting\\n\\t- [2] Anand et al., NeurIPS 2019 -- investigates MI objectives for representation learning on a wide range of Atari games (don't apply to RL)\\n\\t- [3] Gregor et al., NeurIPS 2019 -- while the main proposed model is generative they compare to a contrastive version that uses CPC to learn predictive representations (don't use it for RL)\\n\\t- [4] Guo et al., Arxiv 2018 -- similar investigation to [3] of CPC-style objective for representation learning in RL environments (don't use it for RL)\\n\\t- it should also be mentioned that the original CPC paper already showed that adding CPC-style auxiliary loss to RL improves performance (even though they did not compare to other model-based methods)\\n- add qualitative rollouts for predictions from the PlaNet predictive network both with and without distractor to the appendix\\n\\n\\n## Minor Edit Suggestions\\n- \\\"Learning latent dynamics from pixels\\\", Hafner et al. 
is cited twice in the reference section\\n- it might help to add the reward prediction module to Fig 3 or mention in the caption that it is omitted; it is only described later in the text and was confusing for me at first sight\\n\\n\\n[Novelty]: okay\\n[technical novelty]: minor\\n[Experimental Design]: okay\\n[potential impact]: high\\n\\n\\n#######################\\n[overall recommendation]: weakReject - I am inclined to accept this paper but am not fully convinced that the random distractors provide a good intuition about how the proposed method would behave with more natural distractors. If the authors are able to report positive results on the two requested experiments I am willing to raise my score.\\n[Confidence]: High\\n\\n\\n[1] Near-Optimal Representation Learning for Hierarchical Reinforcement Learning, Nachum et al., ICLR 2019\\n[2] Unsupervised State Representation Learning in Atari, Anand et al., NeurIPS 2019\\n[3] Shaping Belief States with Generative Environment Models for RL, Gregor et al., NeurIPS 2019\\n[4] Neural Predictive Belief Representations, Guo et al., Arxiv 2018\\n\\n\\n\\n### Rebuttal Comment (copied here so that it's visible to the authors) ###\\n\\nI appreciate the effort the authors put into the rebuttal! My main concern was that the mutual information objective only encourages the model to ignore the distractors because they change positions randomly between frames, i.e. there is no mutual information between distractor positions in consecutive frames. On the other hand, if the distractor's motion was perfectly deterministic there would be infinite mutual information between distractor positions in consecutive frames and therefore the representation might exclusively model the distractor. I.e. the more predictable the distractor, the worse the proposed method will perform.\\n\\nI asked the authors to perform an experiment with a more predictable distractor that bounces in the image frame to test this hypothesis. The authors instead chose a distractor that moves predictably for a few steps before randomly changing movement directions. While this sounds similar, it can actually make a big difference depending on how frames are sampled for the CPC objective, i.e. if the required pair of frames is sampled across a random direction change there is again no mutual information between the distractor positions in both frames.\\nIn real scenes the behavior of distractors likely lies somewhere in between these extremes: they will likely not be fully deterministic but certainly not have frequent moments of purely random direction changes. Therefore, to properly evaluate the merit of the proposed approach, experiments on more realistic / previously published environments are needed where the distractor dynamics are given and cannot be altered to better fit the proposed method.\\n\\nDue to this fundamental concern about the method I cannot recommend acceptance of the submission.\"}",
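For reference, the CPC-style objective this review keeps returning to is the InfoNCE bound: each current state must score its own future embedding above the other futures in the batch, and log(B) minus the cross-entropy loss lower-bounds the mutual information. The sketch below uses random stand-in embeddings and a bilinear critic purely to show the mechanics; the dimensions and critic form are our assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, Dz, Dc = 32, 16, 16
z_future = torch.randn(B, Dz)       # embeddings of o_{t+h} (stand-ins)
c_now = torch.randn(B, Dc)          # current latent states (stand-ins)
W = torch.randn(Dc, Dz, requires_grad=True)   # bilinear critic

# InfoNCE: positives on the diagonal; the other B-1 entries per row act
# as negatives. In practice the encoders and W are trained jointly.
logits = c_now @ W @ z_future.T     # (B, B) scores f(c_i, z_j)
labels = torch.arange(B)
loss = F.cross_entropy(logits, labels)
loss.backward()                     # gradients for the critic (and encoders)

i_nce = torch.log(torch.tensor(float(B))) - loss   # lower bound on I(c; z)
print(float(i_nce))
```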
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a latent dynamics model that is learned by maximizing a bound on mutual information between image embeddings and the latent state h time steps later. The model is evaluated on four standard visual control tasks that are solved by online planning.\", \"strengths\": [\"The paper addresses an important open problem with latent dynamics models.\", \"The paper is written clearly.\"], \"weaknesses\": [\"The paper does not discuss why the mutual information is expected to pay less attention to distractor objects.\", \"The paper repeatedly mentions \\\"task relevant\\\" information. However, I cannot find anything about the method that would make the learned features more task relevant than reconstruction. This should be clarified.\", \"The distractor objects in the experiments randomly change locations in each frame. How would the model be expected to behave if they changed in a predictable way?\", \"The paper is lacking detail about hyper parameters and model architecture. What value for h is used? Is the transition function a vanilla RNN?\", \"The paper is missing an ablation study. It would be interesting which of the design choices about the model contribute to its success.\"], \"questions\": [\"Could you please explain the KL term that is weighted by lambda_2 in Eq 1? The KL notation on random variables rather than distributions seems non-standard. It is unclear why information about the reward should be penalized.\", \"Is the objective summed across time steps? How is the data sampled that the model trains on?\"], \"comments\": [\"The paper claims in the introduction that reconstruction based approaches cannot discard low-level information. This claim should be rephrased, since the decoder variance allows to discard low-level information (the amount can be controlled by scaling the KL).\", \"I would suggest to remove the word \\\"robust\\\" from the title or find a more descriptive term to replace it with.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a model-based reinforcement algorithm in which the model is a sequential latent variable model and the actions planned with a cross-entropy method (CEM) planner. The model is learnt by maximizing a lower bound on the mutual information between the latent states and their successor observations (instead of the classical sequential ELBO). The authors argue that the latter objective function yield robustness to distraction in visual scenes. The algorithm, named MIRO, is experimented on 4 simulated environments.\\n---\\nOverall I did not find the paper particularly clear and easy to read. The method is only introduced in the 5th page and no ablation study is conducted. \\nIt is still not obvious to me why maximizing the MI in the objective function would reduce the influence of potential distractors.` \\nFurthermore, the paper overlooks a good part of the related work on extending VAEs to sequence data, published in the last 3 years and does not draw links to similar architectures. \\nThe experiments are in my opinion not convincing, as the approach is only experimented on 2 non trivial -yet not particularly challenging- environments (Finger and Half Cheetah).\", \"minor\": \"in the equation of the ELBO, page 3, the parameters \\\\theta and \\\\phi are swapped.\"}"
]
} |
Bkx5XyrtPS | Depth creates no more spurious local minima in linear networks | [
"Li Zhang"
] | We show that for any convex differentiable loss, a deep linear network has no spurious local minima as long as this is true for the two-layer case. This reduction greatly simplifies the study of the existence of spurious local minima in deep linear networks. When applied to the quadratic loss, our result immediately implies the powerful result by Kawaguchi (2016). Further, with the recent work by Zhou & Liang (2018), we can remove all the assumptions in (Kawaguchi, 2016). This property holds for more general “multi-tower” linear networks too. Our proof builds on the work in (Laurent & von Brecht, 2018) and develops a new perturbation argument to show that any spurious local minimum must have full rank, a structural property which can be useful more generally. | [
"local minimum",
"deep linear network"
] | Reject | https://openreview.net/pdf?id=Bkx5XyrtPS | https://openreview.net/forum?id=Bkx5XyrtPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"XVeKRBcKp",
"Bkg1_DOvsS",
"B1lWtYBXor",
"rkg90tx-oS",
"rkg1Svl-jr",
"HJlLXukZoS",
"SylpH0iMqH",
"ryxmC2kpFB",
"B1gk7EgVYr"
],
"note_type": [
"decision",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728216,
1573517159404,
1573243256725,
1573091793625,
1573091127263,
1573087261880,
1572154948765,
1571777738563,
1571189782630
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1627/Authors"
],
[
"~Micah_Goldblum1"
],
[
"ICLR.cc/2020/Conference/Paper1627/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1627/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1627/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1627/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1627/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1627/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Paper shows that the question of linear deep networks having spurious local minima under benign conditions on the loss function can be reduced to the two layer case. This paper is motivated by and builds upon works that are proven for specific cases. Reviewers found the techniques used to prove the result not very novel in light of existing techniques. Novelty of technique is of particular importance to this area because these results have little practical value in linear networks on their own; the goal is to extend these techniques to the more interesting non-linear case.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for the reference!\"}",
"{\"title\": \"An Interesting Connection\", \"comment\": \"Hi Authors,\\nThank you for your interesting paper. I wanted to bring to your attention that your insights into spurious local minima is related to our paper which shows, both theoretically and empirically, that highly suboptimal local minima do exist in the loss landscape of nonlinear neural networks.[1] Please consider mentioning the relationship with our work in your next version.\\n\\n[1] https://arxiv.org/abs/1910.00359\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you very much for your comments. We agree with you that the non-linear neural networks is the ultimately interesting question. We hope the work here can provide some new angle (e.g. the reduction from deep to two layer networks) towards that question.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you very much for your comments, which is very helpful for clarifying our contribution and improving the presentation of the paper. Please see the inline responses.\\n\\n> Comments: \\n>\\n> 1) I understand that there exists some work on deep linear network recently. However, they seem to be only for theoretical purpose. Most of the current practical problems do not consider this kind of network for training. If it has high impact in practice, then people are starting to use it. Could you please provide more reasons why we need to care about this impractical network? \\n\\nIt is true that deep linear networks are not used much, if any, in practice (though two layer linear networks, e.g. matrix factorization, is broadly used for recommender systems). The main reason to study this is to gain insight and to invent tools to understand the practical case. This is quite common practice (for example, recent studies on wide shallow networks) when we are unable to solve the eventual question but we would still like to make progress.\\n\\n> 2) It is still unclear about the contributions of the paper. Why \\u201cdeep linear network has no spurious local minima as long as it is true for the two layer case\\u201d is important? And what we can take any advantage from here? \\n\\nBoth nonlinearity and depth can increase the complexity of the optimization landscape. It would be super interesting to show that depth does not hurt. Our paper showed that this is the case for deep linear networks. This is an interesting conceptual contribution. The proof requires a few novel arguments too. \\n\\n> What if there exist some spurious local minima for the two layer case (which is widely true)?\\n\\nWe know they do not exist for quadratic loss and for any differentiable convex loss when there is no bottleneck. Actually we were unable to construct an example (although we suspect they do exist).\\n\\n> 3) The paper looks like a technical report and seems not to be ready. \\n\\nThe paper was intended as a clean proof of a clean statement. Your (and others) comments have been very useful for clarifying the contribution of the paper and improving the presentation. We would appreciate any further advices on what to include in the paper.\\n\\n> The results are quite incremental from the existing ones. The contributions of this work to the deep learning community are still ambiguous.\\n\\nBesides the contribution stated above, with our paper, we now know that there is no spurious local minima in deep linear network for quadratic losses (this was only known conditionally before our paper). In addition, our proof is quite accessible which we hope to help to enable further progress on related topics.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you very much for your detailed review and the comments/questions, which are very helpful for clarifying our contribution and improving the presentation of the paper.\\n\\nPlease see the inline responses to your specific questions.\\n\\n> For novelty, it is unclear if the results from Lemma 1 to Theorem 1 and 2 are both being stated as novel results. The first part of proof of Theorem 1 is obvious and straightforward, and the other direction has been used before for multiple times as claimed in the paper, what is your novelty exactly here? \\n\\nThe main technical contribution is Lemma 1. The other claims (Theorem 1, 2, Cor 1) follow more or less directly from it, but they are interesting conceptually. We did state explicitly in the proof of Theorem 1 that the implication of Lemma 1 to Theorem 1 (a rather easy argument) is included for the completeness. The fact that the property of deep linear networks can be determined by the two layer network is certainly novel and interesting too (in our opinion). Besides the conceptual novelty, there is also technical novelty, as stated below.\\n\\n> For the key technical claim of Lemma 1, it looks like this perturbation technique already exists in (Laurent & Brecht, 2018), why do you claim it as a novel argument?\\n\\nThe proof is inspired by Laurent & Brecht, 2018, as explicitly stated in the paper. Especially, it follows the same argument to construct a family of local minima (up to line 14 on page 5.) But then the proof branches from there. In Laurent&Brecht, the critical condition used is that the null space is the entire space. So it only requires one line (formula (21) in that paper) to carry the induction. But here, because of the existence of bottleneck, the critical condition is that the local minima must lie on some subspace. It requires to generalize the argument to deal with this case. The bulk of the proof of Lemma 1 (from line 15 on) is about carrying the induction through with this weaker constraints. But we should contrast this better in the paper.\\n\\n> Besides novelty, there are also some other unclear pieces in this paper needs clarification:\\n> 1)\\tIs the main result which is \\u201cno spurious local minima for deep neural network\\u201d holds for any differentiable convex loss other than quadratic loss?\\n\\nThe main result is as stated in Theorem 1, i.e. for any differentiable convex loss, whether a deep linear network has spurious local minima reduces to the two layer case. \\n\\n> How will Theorem 1 help us understand the mystery of neural network? \\n\\nConceptually, it shows that depth does not introduce extra local minima for deep linear networks. In general, the complexity of landscape can be caused by the non-linearity and/or depth. Here we show that depth does not make the landscape more complex for the linear networks. We think this is a progress towards understanding the mystery of neural networks. And if not, any understanding of deep neural network should include deep linear network as a special case. So it is a pre-requisite to some sense.\\n\\n> 2)\\tHow does the result help us understand non-linear deep neural network, which is commonly use in practice?\\n\\nGood question. We can only speculate here --- perhaps similar phenomena exist for non-linear networks? It would greatly simplify our task if we can reduce the study to two or small number of layers. 
We have shown this is possible for deep linear networks, which is perhaps interesting, at least conceptually?\\n\\n> 3)\\tThe paper should give some explanations about why the results help with training neural networks.\\n\\nThe paper is solely about understanding the landscape of deep linear networks, which is, in our opinion, an important question that needs to be answered even before studying convergence.\"}",
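As a concrete illustration of the landscape claims discussed in this thread (and not a substitute for the paper's proof), consider the smallest possible "two-layer linear network": f(a, b) = l(ab) with the convex loss l(x) = (x - 1)^2. Its only critical point that is not a global minimum is the origin, and checking the Hessian there shows it is a saddle rather than a spurious local minimum. The snippet below verifies this numerically; the toy loss is our own choice for illustration.

```python
import numpy as np

def f(a, b):
    """Scalar two-layer linear network with loss l(x) = (x - 1)^2."""
    return (a * b - 1.0) ** 2

# At the origin the gradient (2(ab-1)b, 2(ab-1)a) vanishes, and the
# Hessian is [[0, -2], [-2, 0]] (computed analytically).
H = np.array([[0.0, -2.0],
              [-2.0, 0.0]])
print("Hessian eigenvalues:", np.linalg.eigvalsh(H))   # [-2, 2] -> saddle

# Numeric check: f strictly decreases along the direction (1, 1).
eps = 1e-3
print(f(0.0, 0.0), ">", f(eps, eps))
```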
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The motivation of this paper is training deep neural network seems to not suffer from local minima, and it tries to explain this phenomenon by showing that all local minima of deep neural network is global minima. The paper shows that for any convex differentiable loss function, a deep linear neural network has no so called spurious local minima, which to be specific, are local minima that are not global minima, as long as it is true for two-layer Neural Network. The motivation is that combining with existing result that no spurious local minima exists for quadratic loss in two-layer Neural Network, this relation connecting between two-layer and deeper linear neural network immediately implies an existing result that all local minima are global minima, removing all assumptions. The result also holds for general \\u201cmulti-tower\\u201d linear networks.\\n\\nOverall, this paper could be an improvement of existing results. It is well written and the proof step is clear in general. However, there\\u2019re some weakness need clarifications on the results, especially on the novelty. Given reasonable clarifications in response, I would be willing to change my score.\\n\\nFor novelty, it is unclear if the results from Lemma 1 to Theorem 1 and 2 are both being stated as novel results. The first part of proof of Theorem 1 is obvious and straightforward, and the other direction has been used before for multiple times as claimed in the paper, what is your novelty exactly here? For the key technical claim of Lemma 1, it looks like this perturbation technique already exists in (Laurent & Brecht, 2018), why do you claim it as a novel argument? \\n\\nBesides novelty, there are also some other unclear pieces in this paper needs clarification:\\n1)\\tIs the main result which is \\u201cno spurious local minima for deep neural network\\u201d holds for any differentiable convex loss other than quadratic loss? How will Theorem 1 help us understand the mystery of neural network? \\n2)\\tHow does the result help us understand non-linear deep neural network, which is commonly use in practice?\\n3)\\tThe paper should give some explanations about why the results help training neural networks.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper shows an interesting result: deep linear NN has introduced no more spurious local minima than two layer NN and provides an intuitive and short proof for the results, which improve and generalize the previous results under milder assumptions. Overall, the paper is well written and clear in comparison and explanation.\\n\\nThe weakness is that the main theoretical contribution seems to be merely Lemma 1, and all other theorems are a direct corollary. Also, it would be of great interest to see concrete results on non-linear neural networks, since that is exactly what is used in common practice.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThe paper shows a deep linear network has no spurious local minima as long as it is true for the two layer case for any convex differentiable loss.\", \"comments\": \"1) I understand that there exists some work on deep linear network recently. However, they seem to be only for theoretical purpose. Most of the current practical problems do not consider this kind of network for training. If it has high impact in practice, then people are starting to use it. Could you please provide more reasons why we need to care about this impractical network? \\n\\n2) It is still unclear about the contributions of the paper. Why \\u201cdeep linear network has no spurious local minima as long as it is true for the two layer case\\u201d is important? And what we can take any advantage from here? What if there exist some spurious local minima for the two layer case (which is widely true)? \\n\\n3) The paper looks like a technical report and seems not to be ready. \\n\\nThe results are quite incremental from the existing ones. The contributions of this work to the deep learning community are still ambiguous.\"}"
]
} |
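The three reviews above all turn on one claim: for a convex differentiable loss, a deep linear network has no spurious local minima whenever the two-layer case has none. A minimal numerical probe of that claim is sketched below; it is not the reviewed paper's proof, and the toy Gaussian data, depths, and optimizer settings are illustrative assumptions of this sketch.

```python
# Minimal numerical probe (not the reviewed paper's proof) of the claim that
# gradient descent on a deep *linear* network with quadratic loss reaches the
# global optimum, i.e., the best single linear map. All data and
# hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                                  # 50 samples, 10 features
Y = X @ rng.normal(size=(10, 5)) + 0.1 * rng.normal(size=(50, 5))

# Global optimum over all single linear maps (ordinary least squares).
W_star, *_ = np.linalg.lstsq(X, Y, rcond=None)
global_loss = 0.5 * np.linalg.norm(X @ W_star - Y) ** 2

def train_deep_linear(depth, steps=20000, lr=1e-3):
    """Plain gradient descent on a product of `depth` weight matrices."""
    widths = [10] + [10] * (depth - 1) + [5]
    Ws = [rng.normal(scale=widths[i] ** -0.5, size=(widths[i], widths[i + 1]))
          for i in range(depth)]
    for _ in range(steps):
        acts = [X]                             # forward pass, caching activations
        for W in Ws:
            acts.append(acts[-1] @ W)
        g = acts[-1] - Y                       # dLoss/dOutput for 0.5 * ||.||^2
        grads = []
        for i in reversed(range(depth)):       # backprop through the linear layers
            grads.append(acts[i].T @ g)
            g = g @ Ws[i].T
        for W, gW in zip(Ws, reversed(grads)):
            W -= lr * gW
    out = X
    for W in Ws:
        out = out @ W
    return 0.5 * np.linalg.norm(out - Y) ** 2

for depth in (2, 4, 8):
    print(depth, round(train_deep_linear(depth), 4), round(global_loss, 4))
```

On this toy instance, every depth should report essentially the same loss as the least-squares solution, which is what the no-spurious-local-minima discussion in the reviews predicts for gradient descent from generic initializations.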
HkgqmyrYDH | WORD SEQUENCE PREDICTION FOR AMHARIC LANGUAGE | [
"Nuniyat Kifle",
"Ermias Abebe"
] | Word prediction is guessing what word comes next, based on some current information, and it is the
main focus of this study. Even though Amharic is used by a large population, no significant work has been
done on the topic. In this study, an Amharic word sequence prediction model is developed using machine
learning. We used statistical methods based on a Hidden Markov Model, incorporating detailed
part-of-speech tags and user profiling or adaptation. One motivation for this research is to overcome the
challenges posed by inflected languages. Word sequence prediction is a challenging task for inflected
languages (Gustavii & Pettersson, 2003; Seyyed & Assi, 2005). These languages are morphologically rich
and have enormous numbers of word forms; that is, a word can have many different forms. As Amharic is
morphologically rich, it shares this problem (Tessema, 2014). This problem makes word prediction much
more difficult and results in poor performance. Previous research used a dictionary approach with no
consideration of context information. For this reason, storing all forms in a dictionary does not solve the
problem as it does in English and other less inflected languages. Therefore, we introduce two models,
tags-and-words and linear interpolation, that use part-of-speech tag information in addition to word
n-grams in order to maximize the likelihood of syntactically appropriate suggestions. The statistics
included in the systems vary from single-word frequencies to part-of-speech tag n-grams. We describe a
combined statistical and lexical word prediction system and develop Amharic bigram and trigram language
models for training. The overall study followed the Design Science Research Methodology (DSRM).
| [
"Word prediction",
"POS",
"Statistical approach"
] | Reject | https://openreview.net/pdf?id=HkgqmyrYDH | https://openreview.net/forum?id=HkgqmyrYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"e-cDXG8Tv8",
"rygH6_Mh5r",
"HJxiqVt3tr",
"BJeDrU0bKH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728186,
1572772028630,
1571751059354,
1571051071263
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1626/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1626/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1626/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a language model for Amharic using HMMs and incorporating POS tags. The paper is very short and lacks essential parts such as describing the exact model and the experimental design and results. The reviewers all rejected this paper, and there was no author rebuttal. This paper is clearly not appropriate for publication at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"*What is this paper about?*\\n\\nThe authors propose a method to incorporate POS tags into a language model to improve its performance in Amharic language.\", \"short_review\": \"The authors tackle an interesting task which deserves more attention. Nonetheless, they do not fully describe their models or results with enough detail, so it is hard to evaluate this work.\", \"contributions\": \"This work tackles a relevant problem that seriously impacts speakers of low resource languages.\\n\\n*What strengths does this paper have?*\\n\\nIt tackles and interesting problem.\\n\\n*What weaknesses does this paper have?*\\n\\nThe authors do not present their models in enough details so that the reader fully understands it.\\nThey also only gloss over the results, not presenting them in any concrete form, stating: \\u201cWe believe the results obtained were effective in reflecting bet-ter speed, correctness of suggestions (grammatical), and search space since these are the basic issues in word sequence prediction and in assistive technology\\u201d. This referred results are not shown in the manuscript though.\\n\\n\\n*Detailed comments:*\\n\\nThe paper does not use the official conference template. It is also very short, not going in details about the used techniques. Finally, references are not correctly formatted.\\n\\nSection 1.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThis paper proposes to predict word sequences for Amharic language-- a language spoken in Eastern Africa. It proposes to use HMMs with POS tags and morphological features to perform this prediction task.\\n\\n\\nThe paper is just 3 pages, contains 1 paragraph of methodology, and no experiments section. It is clearly a very early stage work and not in the scope of ICLR. This paper should have been desk-rejected as it needs more work before it is fit for publication. There is lot of work on word sequence prediction and HMMs are no longer the state-of-the-art. The authors should consider looking at RNN-based methods such as LSTMs for this task.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": [\"I would not like to sound offensive, but this paper is clearly below the standards of the conference, and outside any academic orthodoxy for the matter:\", \"It is only 3 pages long including references, and does not even follow the conference template.\", \"It has only 3 sections (\\\"introduction\\\", \\\"methodology\\\" and \\\"conclusions\\\") and 2 subsections (\\\"background of study\\\" and \\\"limitation\\\"), and all of them are only one paragraph long.\", \"The work done is not adequately described, so it is not possible to say much about it, but it seems clear that there is no novelty nor sufficient rigor in it. The problem tackled is defined as \\\"word prediction\\\", which seems to be some form of language modeling. The proposed method combines HMMs with n-grams and morphological and POS features, which are all well established and should be considered more of a (nowadays outdated) baseline. There is no proper evaluation: the experimental settings are not described, and only one number is reported, with nothing to compare to.\"], \"to_the_authors\": \"Please do not feel discouraged by my review. Your motivation (helping people with dyslexia in Ethiopia) is certainly laudable, but neither the work carried out nor its presentation meets the standards of our research community. I would suggest that you check some of the accepted papers in last year's conference to get a sense of what is expected. I assume that you are new to the field. Don't feel the need to publish your own research from the first day, and take your time to become familiar with the field and study the basic concepts. It takes time, but we all had to go through it :)\"}"
]
} |
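The Amharic record above hinges on combining word n-gram and part-of-speech-tag n-gram statistics via linear interpolation. A minimal sketch of that idea follows; the tiny English tagged corpus, the most-frequent-tag lookup, and the lambda weight are illustrative stand-ins, not the study's actual data, tagset, or tuned values.

```python
# Minimal sketch of linearly interpolating word-bigram and POS-tag-bigram
# statistics to rank next-word candidates. Corpus, tags, and lambda are
# illustrative assumptions.
from collections import Counter, defaultdict

tagged = [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
          ("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]

word_bigrams, tag_bigrams = Counter(), Counter()
word_unigrams, tag_unigrams = Counter(), Counter()
tag_of = defaultdict(Counter)                      # word -> observed tag counts

for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
    word_bigrams[(w1, w2)] += 1
    tag_bigrams[(t1, t2)] += 1
for w, t in tagged:
    word_unigrams[w] += 1
    tag_unigrams[t] += 1
    tag_of[w][t] += 1

def score(prev_word, prev_tag, cand, lam=0.7):
    """Interpolate P(cand | prev_word) with P(tag(cand) | prev_tag)."""
    p_word = word_bigrams[(prev_word, cand)] / max(word_unigrams[prev_word], 1)
    cand_tag = tag_of[cand].most_common(1)[0][0] if tag_of[cand] else None
    p_tag = (tag_bigrams[(prev_tag, cand_tag)] / max(tag_unigrams[prev_tag], 1)
             if cand_tag else 0.0)
    return lam * p_word + (1 - lam) * p_tag

suggestions = sorted(word_unigrams, key=lambda w: -score("the", "DET", w))
print(suggestions[:3])   # nouns should rank above verbs after a determiner
```

With bigram statistics like these, candidates whose likely tag fits the syntactic context still receive probability mass even when the exact word bigram was never observed, which is the point of backing off to POS information in a morphologically rich language.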
HygYmJBKwH | YaoGAN: Learning Worst-case Competitive Algorithms from Self-generated Inputs | [
"Goran Zuzic",
"Di Wang",
"Aranyak Mehta",
"D. Sivakumar"
] | We tackle the challenge of using machine learning to find algorithms with strong worst-case guarantees for online combinatorial optimization problems. Whereas the previous approach along this direction (Kong et al., 2018) relies on significant domain expertise to provide hard distributions over input instances at training, we ask whether this can be accomplished from first principles, i.e., without any human-provided data beyond specifying the objective of the optimization problem. To answer this question, we draw insights from classic results in game theory, analysis of algorithms, and online learning to introduce a novel framework. At a high level, similar to a generative adversarial network (GAN), our framework has two components whose respective goals are to learn the optimal algorithm as well as a set of input instances that captures the essential difficulty of the given optimization problem. The two components are trained against each other and evolved simultaneously. We test our ideas on the ski rental problem and the fractional AdWords problem. For these well-studied problems, our preliminary results demonstrate that the framework is capable of finding algorithms as well as difficult input instances that are consistent with known optimal results. We believe our new framework points to a promising direction that can facilitate research on algorithm design by leveraging ML to improve the state of the art both in theory and in practice.
| [
"learning",
"algorithms",
"competitive algorithms",
"input instances",
"optimization problem",
"framework",
"components",
"yaogan",
"inputs yaogan",
"inputs"
] | Reject | https://openreview.net/pdf?id=HygYmJBKwH | https://openreview.net/forum?id=HygYmJBKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mv94DUG1S",
"H1xP2ojXjB",
"BkgNVosXiB",
"H1egloiXiS",
"BkgUZcsmsS",
"SyghCKsmor",
"SylS_OoXsH",
"BygZ2yjmir",
"H1lxDJomsr",
"rkeZqrk0tr",
"SkxLGy2ftH",
"Skgxo_i6_S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728157,
1573268399075,
1573268267862,
1573268199962,
1573267966005,
1573267924343,
1573267565287,
1573265320953,
1573265239648,
1571841417219,
1571106573531,
1570777240005
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1625/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1625/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1625/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1625/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1625/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1625/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1625/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1625/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1625/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1625/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1625/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose an intriguing way to designing competitive online algorithms. However, the state of the paper and the provided evidence of the success of the proposed methodology is too preliminary to merit acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"Thank you for your review. Please also see our high-level clarification above which we believe can help in better interpretation of our contribution. Some specific responses below:\\n\\nReviewer #1 is absolutely right -- we don\\u2019t know yet how to scale this to more difficult combinatorial problems. But let\\u2019s clarify that statement a bit more:\\n\\nThe ski-rental problem is often the first problem studied when teaching online algorithms, but it is certainly far from a \\u201ctoy problem\\u201d when we wish to learn an algorithm from scratch. We apologize if we painted an incorrect picture by calling it a \\u201csimple example\\u201d and a \\u201cstaple introductory problem\\u201d. It is easy to describe in that it has a single hidden parameter (the length of the ski season) and a single revealed parameter (the cost of buying). It is a staple introductory problem because it is elegant and illuminates the essential difficulties in designing online algorithms: there is a nearly-trivial factor-2 competitive algorithm (rent until you\\u2019ve spent $B, then buy, so even if the ski season ends the next day, you\\u2019ve not spent more than twice the least possible amount), but the 1-1/e competitive ratio algorithm is quite creative and subtle, and serves as an introduction to the richness of the field of online algorithms. In fact, the Karlin et al. (1986) paper also introduced the notion of competitive analysis of online algorithms, and is probably the most-cited paper in this field. In some sense, this poses us the ideal challenge: can ML approaches discover creative and subtle \\u201csolutions\\u201d (in our case, an algorithm)?\\n\\nOn a more technical note, please note that our \\u201cmachinery\\u201d of solving the two-player game is needed to discover an algorithm for the ski rental problem: if we don\\u2019t allow the players to alternate and reach an equilibrium, for any fixed distribution on the ski rental instances (B, K), there is a deterministic algorithm that is optimal (among all online algorithms), and the worst-case performance of that (or any) deterministic algorithm is *provably* limited by a factor of 2 (i.e., there exists some distribution on instances where it will fail badly). Also refer to our discussion on this in the high-level clarification at the top.\\n\\nThe AdWords problem considered is actually a difficult combinatorial problem, and is an archetypal online combinatorial optimization problem that captures the class of problems solvable by one of the most powerful techniques in this area -- primal-dual algorithms, which have led to the state-of-the-art approximation algorithms for numerous hard online (and offline) optimization problems. In particular, it generalizes bipartite matching, historically one of the most significant combinatorial optimization problems (led to the development of the classic Hungarian method, see https://en.wikipedia.org/wiki/Hungarian_algorithm).\", \"we_did_water_down_our_ambition_in_a_few_ways\": \"Instead of producing an algorithm that works for inputs of all sizes, we focus on the case of 9x3 (three advertisers, nine slots) -- a fixed finite size! This choice was arrived at based on the following criteria: what can we learn in a few hours of computation that\\u2019s still *well beyond* what can be achieved through exhaustive search (for an algorithm). 
Think of our task roughly as learning to play a very hard game on a 9x3 board -- we would, of course, love to learn how to play the same game on arbitrary size boards, but the fact is that the game is mighty hard even at this \\u201cboard size\\u201d (since in each round, one player plays a 0-1 assignment to each cell in the board, and the other player picks a subset of the columns, in fact a weight vector on the columns).\\nInstead of producing an algorithm that works for the 0-1 version of the problem, we produce an algorithm that works for the fractional version of the problem. This is, once again, motivated by making something work with modest amount of computation. Our explorations indicated that producing an algorithm for the 0-1 version needs reinforcement learning, and producing an algorithm that works on all 9x3 instances using this approach would still take several days of computation. \\n\\n\\nOn calling it Yao\\u2019s Principle: as Reviewer #1 correctly noted, this is an application of the classic von Neumann minimax principle to the \\u201cgame\\u201d between an \\u201calgorithm player\\u201d and an \\u201cinput player\\u201d. We call it Yao\\u2019s principle primarily in accordance with tradition in theoretical CS (see https://en.wikipedia.org/wiki/Yao%27s_principle and also https://blog.computationalcomplexity.org/2006/10/favorite-theorems-yao-principle.html, where it is noted that \\u201cYao observed [the result]\\u201d and commentators note that it\\u2019s called Yao\\u2019s principle because this observation has significant consequences for many central problems in TCS). We are happy to add text to reflect this.\"}",
"{\"title\": \"Response to reviewer #2, part 1/2\", \"comment\": \"Thank you for your review. Please also see our high-level clarification above which we believe can help in better interpretation of our contribution. Some specific responses below:\\n\\n** Addressing comments on the write-up: \\n\\nThanks, these help improve the presentation greatly (we realize we wrote the exposition more from a theoretical view and missed important ML details).\", \"details_on_architecture\": \"Agreed, and thanks. We have added some details on the specific network architectures to Appendix C (for ski rental) and Appendix D (for AdWords).\", \"new_suggested_structure_and_related_suggestions\": \"These are nice suggestions and explain why the structure was confusing. We\\u2019ve worked on these to come close to the suggested structure.\\n\\n\\n\\u201cMSVV\\u201d reference. Thanks for pointing out! This is the same algorithm described above in Mehta et al., but we realize that must have been confusing. Fixed.\\n\\n** Addressing Technical Comments:\\n\\n-- \\u201chyperparameter searching\\u201d:\", \"the_networks_we_used_in_this_work_are_fairly_simple\": \"dense layers with standard ReLu activation, and we use standard Adam optimizer. Simply choosing commonly recommended values for the parameters turn out to work well for the problems we looked at. In general, we agree with the reviewer\\u2019s point that hyperparameter searching can be important. For this particular work, our focus is to introduce the high-level ideas/framework and offer initial evidence that it can be effective, so we do not dwell much on the technical parts of ML in our discussion due to page limits. We also clarify that it is not the case that \\u201cwe have no interests in extending ML techniques\\u201d in general. Indeed, we believe that for the future success of our approach on more open problems in online algorithms, it very much relies on the advances of ML in terms of neural network structure, optimization algorithms and training techniques. We also hope our work can motivate the design of new tools/techniques tailored for this direction.\"}",
"{\"title\": \"Response to reviewer #2, part 2/2\", \"comment\": \"-- \\u201cThis work only considers problems for which the optimal input distribution is known, but is motivated by the fact that it could be applied to problems for which the optimal distribution is unknown and thus being able to discover new algorithms. It is hard to support this motivation when no experiments are done in its favor.\\u201d\\n\\n\\nThe long-term agenda / research program is indeed two-fold: \\n1. Investigate whether known optimal worst-case algorithms can be reproduced without any domain knowledge (i.e., \\u201cCan ML learn Algorithms\\u201d). This is the case in which the optimal distribution of inputs is also known.\\n2. Discover new/better worst-case algorithms for problems with the aid of ML, when neither a good algorithms or input distribution is known.\\n\\n#2 is a long-term goal, and not tackled in this paper, but we believe #1 (tackled in this paper) is itself of strong interest (and difficult) -- would ML be able to discover the same \\u201cpen-and-paper\\u201d algorithms that computer scientists invented? The problems we study (ski-rental and Adwords) fall into the first category of problems. Note that the algorithms in the two cases are very different in structure.\\n\\nFurther, please note that even though the optimal distribution of input is known in these two problems, we do not use it at all in training. Indeed, this is the main point of this paper -- the previous work of Kong et al. used these distributions to train the algorithm network (and hence that technique still needed the prior theoretical \\u201cpen-and-paper\\u201d work), while this work starts with ZERO knowledge. We follow this approach even in case #1 when the optimal input distribution is known exactly because we have the ultimate goal #2 in mind, that is, we want to design a framework that can eventually also work without knowledge of optimal input distribution (but that goal is outside the scope of this paper).\\n\\n\\n-- \\u201cNo comparison has been made between their approach and other previous approaches. We only know that the proposed approach finds near-optimal solutions with a difference of 0.01 competitive ratio. It is thus very hard to know if this new approach brings any improvement to previous work.\\u201d \\nWe believe there is some misunderstanding here as to the contribution. As such, there are no previous approaches to \\u201clearn algorithms\\u201d (besides Kong et al.). To be more explicit (in case we didn\\u2019t understand the comment), previous work for algorithmic problems could fall into a few buckets:\\n\\n(1) The original algorithms papers which found optimal worst case algorithms [Karlin et al. 1986, Mehta et al. 2007]. These give the analytical benchmarks. E.g., [Mehta et al. 2007] proposes the algorithm to solve Adwords, and proves that it achieves the optimal CR of 1-1/e ~ 0.63 (i.e., no matter what the online input sequence is, you get >= 1-1/e of the optimal solution in hindsight if you knew the instance offline). Thus the difference of 0.01 CR is a direct comparison to that work.\\n\\n(2) One may imagine there could be some kind of optimization (IP / LP) technique or some ML technique to solve specific instances of the problem (a specific instance of Adwords e.g.). 
But this is in fact not a feasible possibility, for two reasons: (a) Our problems are online problems where the full instance itself is not known in advance, and (b) we are looking for worst case competitive algorithms, i.e., a policy that does well no matter how the instance unfolds in the future. Thus there can not be previous work to compare in such a bucket.\\n\\n(3) Kong et al., 2018 is the closest previous work since it shows how to learn algorithms in the online setting. As mentioned above, the critical difference is that our paper learns the algorithms without any prior knowledge of the worst input distribution, but evolves both the distribution and the algorithm jointly (with some parallels to GANs, AlphaZero, self-play, etc. as we have stated). Quantitatively, the CR results are equally good; our main objective is to see if the learned algorithm is close in policy to the theoretical algorithm, and whether we are reasonably close to the optimal CR.\"}",
"{\"title\": \"Response to specific comments (1)--(7). part 1/2\", \"comment\": \"Thanks for the comments, they are helpful in improving exposition.\\n\\n(1) In general, it can be difficult or impossible to quantitatively evaluate the solutions accurately (see the discussion of \\u201cTraining convergence\\u201d in Appendix D.2). As to how we can make sense of the trained algorithm network and extract human-level insight or knowledge out of the neural network, interpretability in deep learning typically requires some domain expertise. In our work, we feed the algorithm network various inputs and inspect its outputs. We used AdWords in our work as demonstration because the optimal algorithm is known, so we can verify that our algorithm makes the right decision in different cases. An ultimate/ideal application of our approach is to facilitate the discovery of good algorithms for not so well-understood problems. That is, by inspecting the algorithm neural network\\u2019s behavior at carefully chosen inputs (e.g., the adversarially generated input instances during training), an expert can draw enough insight to extract the strategy out of the neural network into something humans can comprehend (i.e. \\u201ctextbook\\u201d algorithm). Of course this step will require significant domain expertise, but the hope is that we can produce a good candidate algorithm (i.e. the neural network) without much domain expertise so the second phase is easier than drawing up a good algorithm from scratch. We note this is similar in the situation of GAN where domain experts draw insights on structure of latent space by inspecting the generator after training, or in the situation of chess, where top human players enhance their understanding of the game and come up with unconventional strategies via observing how the deep neural network plays.\\n\\n\\n(2) The game theoretical view says in the algorithm-adversary game, we want to find the min-max strategy of the algorithm player (i.e. optimal worst-case algorithm), and a known way to find such strategy is if the algorithm player runs a no-regret dynamic (e.g. multiplicative weights update), and the adversary player in each round plays the best response to the algorithm player\\u2019s strategy of that round. This suggests that to effectively train the algorithm agent, we should aim to find the best response (i.e. the input instance on which the algorithm performs the worst). This is fundamental in our framework since it tells what we want the adversarial network to search for (i.e. gives an objective to the Algorithm 1 in section 3.1). Retrospectively this seems obvious, but the previous approach (and also the predominant approach in classical ML) is to have a good training set upfront, i.e. a distribution over difficult input instances in our context. This is infeasible if we want a framework that can also work in the case where we don\\u2019t have such knowledge, and the no-regret dynamic approach allows us to get around this, and reduce the task to adaptively finding bad inputs through training. We accomplish this by using the adversary network and other techniques. \\n\\n\\n(3) The prior work of Kong et al. relies on a good training set of input instances a priori to effectively train the algorithm. The two shortcomings are [mentioned in the paragraph \\u201cSolving an Online Optimization Problem\\u201d at the end of Section 2]: (1) it requires the knowledge of the adversarial input distribution (i.e. 
the maxmin player\\u2019s optimal strategy) and (2) even the adversarial distribution alone is not enough (e.g. the rock-paper-scissors example mentioned in the discussion), so additional human expertise is required to combine a high-entropy distribution with the adversarial distribution. Both shortcomings point to the fact that a significant amount of expert input is required for their approach to work, which is infeasible if the goal is to have a framework that can work for problems where we currently lack such knowledge. As discussed for the above question (2), we get around this issue by not trying to construct a good training set upfront, but by adaptively coming up with good training inputs as the training evolves. This is also, in spirit, the high-level strategy used in GANs and AlphaZero to get around the issue of having no high-quality training datasets. \\nWe have added a sentence at the end of the discussion \\u201cSolving an Online Optimization Problem\\u201d (end of Sec 2) to explain how we overcome the issue.\"}",
"{\"title\": \"Response to specific comments (1)--(7). part 2/2\", \"comment\": \"(4) We agree with the reviewer that in many cases there is a gap between solving the discrete problem and the fractional problem. In general it is an established approach to solve the fractional problem and use additional techniques such as rounding to fill the gap. As to AdWords, although the discrete problem naturally corresponds to the real world scenario, we do not consider fractional AdWords below the bar compared to discrete AdWords in terms of difficulty. The optimal CR bound and the adversarial distribution are the same for both cases, and the optimal algorithms basically have the same structure. One may arguably say that the optimal algorithm for the fractional problem has richer structure as in the fractional problem the action space is much larger as we can fractionally assign each ad to many advertisers. \\nAs to the shortcomings of our techniques and why we pick the fractional problem, note that the GAN framework needs the computation of the discriminator network (i.e. the algorithm agent in our context) to be differentiable in order to update the generator network (i.e. the adversary in our context) during training. This poses difficulties if we ask the algorithm agent to make discrete decisions via sampling or rounding since it will not be differentiable. This doesn\\u2019t mean that our high-level framework (i.e. training the algorithm and adversary networks simultaneously) is doomed, since we can use other ML techniques (e.g. reinforcement learning) to implement our framework, but in general sampling and rounding will lead to much more work during training, so we pick the GAN structure in this work.\\n\\n(5) We know from theory that if the algorithm player runs a no-regret dynamic (e.g. MWU) and the adversary player responds with the worst input for the algorithm in each round, then the algorithm player converges to the optimal algorithm, and the uniform distribution over the adversary player\\u2019s responses gives the adversarial distribution. However, we cannot really follow this approach as the space of algorithms is infinite and we cannot run a MWU on this space, and in general it is also hard or impossible to find the absolute worst input in each round. In the practical framework, the algorithm player uses a neural network, and the adversary network tries its best to come up with a bad (but not necessarily worst) input each round. Thus we don\\u2019t have all the clean theoretical guarantees anymore, but the intuition should still largely hold (as our empirical result suggests).\\n\\n\\n(6) We updated the appendix to address this. See \\u201cTraining convergence\\u201d in Appendix D.2 \\n(7) We updated the appendix to address this. See \\u201cAdversarial distribution\\u201d in Appendix D.2\"}",
"{\"title\": \"Overall response to reviewer #3\", \"comment\": \"Thank you for your review. Please also see our high-level clarification above which we believe can help in better interpretation of our contribution. Some specific responses below:\\n\\n-- \\u201cproposed method demonstrates that an instance of one class of problems, Fractional Adwords, can be learned to solve without domain expertise, however fails to prove that the approach would be beneficial for any other instances of the same problem.\\u201d\\n\\nPlease refer to our overall comments on this question (and also a few more details in reply to Reviewer#1\\u2019s similar question).\\n\\n\\n\\n-- Comment on scale / speed for large instances of combinatorial optimization:\\n\\n\\nThe point of this work is only to see if ML can find optimal algorithms, and not about doing it faster than the known theoretical algorithms. Note that this is not similar to the case of solving an offline combinatorial problem via integer programming or other solvers, since our problems are online, i.e., the instance is not known beforehand, so there is no comparison to such \\u201cgeneral-purpose\\u201d solvers. Thus we don't compare to the running time of offline solvers, but to the worst-case competitive ratio of the optimal online algorithms. As mentioned in the comment, this approach may eventually lead to finding optimal or near-optimal algorithms for a problem (not an instance of a problem) for which no algorithm is known -- but this is outside the scope of this work future work. \\n\\nAgain drawing the analogy of playing Go, the objective is mostly on training an agent that can make competitive moves rather than very fast moves, and there is no known \\u201cgeneral-purpose\\u201d strategy to accomplish this.\\n\\n*Please also see reply to reviewer #2 on a similar question of evaluating against other methods*\\n\\n\\n-- \\u201cSki Rental problem can also be learned to solve though it is trivial and does not even use the framework the authors propose in its full extent, i.e. problem instances are not generated by use of a machine learning model, which is one of the main claims the authors are making.\\u201d\\n\\nPlease see our high-level clarification on top.\"}",
"{\"title\": \"High-level clarification 2/2\", \"comment\": \"Despite ski-rental\\u2019s simple structure, it is an important example showing why the previous approach of using a fixed training distribution upfront won\\u2019t work in general. In particular, if we fix any distribution over the season length, and use it as the training dataset, then the best algorithm against that distribution will be a deterministic strategy, which we know from theory cannot have a worst-case competitive ratio smaller than 2, whereas the optimal algorithm achieves ~1.58. This suggests we must switch to a framework where we adaptively come up with instances as we train the algorithm. Consider a more familiar example outside online algorithms, the game of rock-paper-scissors, if we want to train a player to learn the worst-case optimal strategy (i.e. playing the three moves uniformly at random), we cannot accomplish this by training against an adversary who plays a fixed (possibly randomized) strategy. No matter what the fixed strategy is (even the optimal uniform strategy), there is always a best deterministic strategy to counter (e.g. any strategy performs equally well against the fixed uniform strategy), so we shouldn\\u2019t expect our player to learn the optimal uniform strategy. We have to make the adversary adaptive during training. Please see more comments on the significance of the ski rental problem in our response to Reviewer #1.\\n\\nWhile the Adwords problem is difficult (even for a small number of advertisers), and finding a worst-case algorithm for ski-rental is also not trivial to do with ML without any prior knowledge, we agree that our results can be much stronger if we also solve AdWords with arbitrary number of advertisers, and also solve more problems in online combinatorial optimization. We consider these tasks beyond the scope of this initial work, and will definitely pursue in future work. Training a network that can handle three advertisers may seem trivial compared to training a network that can simultaneously handle the cases of any number of advertisers, but it is not really the case. Again, using Go as an example, it is basically the difference between training a network that can excel at Go on the standard 19*19 board versus training a network that can excel (simultaneously) at Go on board of arbitrary size. Of course the latter is a much stronger result, but the fixed size task is by no means trivial. Similarly, AlphaZero only demonstrates its effectiveness on the examples of chess, shogi and Go rather than all possible tabletop games, (analogous to us only experimented with ski-rental and AdWords), but the results offer hope/indication that it can be effective in other settings. \\n\\n\\nThe main point of our work is to suggest a new framework in the context of ML+algorithm design, and is by nature exploratory. We admit there is a significant amount of work to be done to fulfill our long term goal of applying ML to facilitate the discovery of new algorithms that extend the frontier of online combinatorial optimization. We think that this is a very meaningful direction to apply ML techniques in broader contexts, and can have significant practical impact. We hope both the ML and algorithm design communities can feel excited about this.\"}",
"{\"title\": \"High-level clarification 1/2\", \"comment\": \"We thank all the reviewers for the thorough review and feedback. We want to start with some high-level clarification. \\n\\nDue to the nature of this work being at the intersection of ML and online algorithm design, there are various notions/results from the online algorithm side that are essential for ML audience to fully grasp the paper. Unfortunately, we had to be very brief (or skip completely) on most of them due to page limits. We expanded the appendix in our draft to make our result more self-contained to the ML audience. See Appendix A and D.\\n\\nAlthough we are by no means claiming that our results have the same import as the breakthrough work on playing Go, it is very helpful to use the example in that domain as an analogy to what we are doing. For AlphaGo, its training required a large amount of high-quality examples of how top human players play the game. The framework of AlphaZero largely eliminates the requirement of such human-level expertise, and can learn competitive strategies from scratch via self-playing. The significance of AlphaZero over AlphaGo is largely in that the new framework can generalize to contexts/tasks where we don\\u2019t have high-quality examples for training. In our context, the previous work of Kong et al. demonstrates that ML agent can learn optimal algorithm if people train it with high-quality training instances, which requires significant human tuning. By contrast, our approach can work without such expertise, and thus opens the door for it to be useful to discover unknown algorithms. \\n\\nWe note that in the field of algorithm design, a problem being well-understood is not an indicator of its lack of difficulty or complexity, but also largely due to its importance and how much time researchers have devoted to it. In particular, AdWords is considered a very difficult problem in the field of online algorithms, and the MSVV [J. ACM\\u201907] result that gave the algorithm and the optimality analysis is considered a breakthrough in the field. It is also an archetypical problem capturing the class of problems solvable by one of the most powerful techniques in this area -- primal-dual algorithms. More importantly, the fact that ski-rental and AdWords are well-understood doesn\\u2019t make the ML task any easier, since we don\\u2019t transfer any expert knowledge on these problems when we train networks. Thus, the fact that our framework can learn optimal algorithms on these problems suggests that it could also be effective on not so well-studied problems. Although the long-term goal is to draw insights from ML generated algorithms and design optimal algorithm for not well-studied problems, we look at AdWords and ski-rental in this work because our expert knowledge allows us to verify that our ML framework can be indeed effective in that the ML generated algorithms agree with our understanding of the optimal algorithms.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Update to the Review after the rebuttal from the Authors:\\nAfter carefully reviewing the responses by the authors especially on my concerns about the significance of solving an instance of a given problem and the improvement in the exposition of the ideas I would like to amend my earlier decision and recommend to accept. For completeness below is the original review.\\n \\n\\nThis paper introduces a framework to learn to generate solutions to online combinatorial optimization problems with worst case guarantees. The framework as the authors claim eliminates the need for manual hard to solve instance/data creation, which is necessary to teach the model to provide the aforementioned worst case guarantees. Therefore the main contribution of the paper can be said that this framework shows that it is possible to train a machine learning model, which can learn an algorithm to solve hard online combinatorial optimization problems and this training can be done without knowing much about the actual optimization problem domain. The only input required is the way to calculate the objective function of the actual problem. This contribution is demonstrated on two classes of problems: Ski-Rental and Fractional AdWords. The framework requires two neural networks one for solution generation agent and one for problem instance generation. These two networks are trained jointly from scratch and the underlying algorithm for the training is provided. \\n\\nAlthough a generic framework that learns to solve online combinatorial optimization problems without domain knowledge is by itself a very motivational goal neither the paper successfully demonstrates that the framework the authors propose achieves this goal nor it explains well enough why one would take the machine learning approach to find good algorithms to such problems. Is it because the ML solution would be faster to compute with big instances? Is it because with the proposed approach one can curate sophisticated heuristic solutions when provable optimality is out of reach?\\n\\nThis paper should be rejected because proposed method demonstrates that an instance of one class of problems, Fractional Adwords, can be learned to solve without domain expertise, however fails to prove that the approach would be beneficial for any other instances of the same problem. Although they show that the Ski Rental problem can also be learned to solve though it is trivial and does not even use the framework the authors propose in its full extent, ie. problem instances are not generated by use of a machine learning model, which is one of main claims the authors are making. Therefore I do not find being able to solve this problem as a supporting evidence for the contributions claimed. In particular there is not any theoretical not experimental evidence that the approach would scale to any instances where a pure optimization approach would be slow to provide any meaningful solutions. I find this important because for combinatorial optimization usually scale matters a lot. While a small instance of a problem can be solve by a general purpose solver quickly a small increase in the problem size can turn out to be intractable. 
When proposing a machine learning approach to such problems, I would expect the model to scale better than a pure optimization approach so that there would be a demonstrable benefit. Although the paper proposes an interesting framework, I would argue that it is a \\u201cgreen apple\\u201d in the sense that the authors need to motivate the approach better and expand the contribution beyond solving a particular instance. The authors acknowledge that their experimental setup is rather limited in Appendix C.1, which I agree with, and they also claim that there is a representation for a uniform algorithm for any number of advertisers for the AdWords problem; however, they leave this as future work, which I find unfortunate. I would recommend taking this direction rigorously and expanding the contribution, which would prove to be a very sound contribution.\", \"in_order_to_clarify_the_exposition_the_following_are_some_questions\": \"1. Authors call the approach YaoGAN due to its structural similarity to GANs. I understand the fact that they are training two neural networks in an alternating scheme, which is similar to GAN training. How can one evaluate the solutions generated by this framework similar to how GAN generators are evaluated? Can one walk the latent distribution of the algorithm agent and draw insights, which might lead to tailoring some algorithms that would be appropriate for some input distribution although in general inferior in terms of worst-case guarantees?\\n\\n2. The main technical contribution claim needs to be elaborated. I understand how the game-theoretic framework is established, but how this manifests itself in the algorithm described in Section 3.1 needs more explanation.\\n\\n3. Authors claim there are two shortcomings of the previous method proposed in Kong et al. 2018. They need to elaborate on how their method overcomes these issues better.\\n\\n4. Authors state that they solve fractional relaxations of combinatorial, mainly integer, optimization problems, which is accurate. Yet their approach is only able to solve the fractional version of the AdWords problem. In addition, I agree with the fact that although continuous relaxations of integer optimization problems might provide insightful directions, they are usually employed to prove bounds on heuristic approaches. Yet the authors stop at only solving this version with a machine learning approach, which does not hit the bar for me. I would have expected the authors to at least elaborate on why the current framework is not suitable for the non-relaxed problem. What are the shortcomings? \\n\\n5. In Appendix A, authors talk about no-regret dynamics, which are relevant. However, they state they loosely follow this approach. What does that entail? What kind of theoretical guarantees are given up due to not following this? A better exposition on this topic would help to support the claims.\\n\\n6. In Appendix C.2, authors provide additional plots for the Fractional AdWords problem. However, they refrain from providing any intuition about them. In particular, what is the conclusion to be drawn from Figure 5? This needs more elaboration. Are these training results expected? What is the lesson learned?\\n\\n7. In Figure 8 they provide example data from the experience array. What is the significance of these examples? How do they help us understand that the problem instance generation was actually able to find interesting instances? What kind of dynamics are uncovered? 
These are not directly revealed by only looking at the pictures; one needs more explanation to support the claims.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper introduces a new approach to solve optimization problems without relying on any human-provided data beyond the specification of the optimization problem itself. The approach is inspired by the two-player zero-sum game paradigm and follow a generative adversarial network (GAN) setting. One network is trained to output the optimal behavior for a given problem, while the other is trained to output difficult instances of the given problem. These two networks are trained simultaneously and compete against the other until some equilibrium is achieved.\", \"This approach is tested on two small problems for which the optimal behavior is known and seems to perform near theoretical optimality.\", \"I weakly reject this paper because although the approach is indeed interesting, the paper is lacking some structure, as described below:\", \"The paper clearly mentions that no optimization of the training setup or the hyperparameters has been done because the authors are not interested in extending ML techniques. However, hyperparameter searching is not extending any ML technique, it is just an approach to find a good training configuration and show robustness in different hyperparameters settings. It is thus unclear if the approach is robust against different hyperparameter settings.\", \"Very little details (apart from the optimization algorithm) are given regarding the architecture used (types of input, output, neural units, activation functions, number of hidden layers, loss function, etc...), which makes it very hard to reproduce this approach.\", \"Section 1.1 presents results with too many details without introducing the problem. I would suggest the authors to either introduce the two problems earlier or to simply say that near-optimal results are achieved, without giving detailed results, because it is very hard to understand them without any introduction of the task being achieved.\", \"One task is presented in Section 2 \\\"Preliminaries\\\" while the other task is presented in Section 4 \\\"AdWords\\\". It is hard to follow the flow of ideas present in the paper when similar things are not together. I would suggest restructuring the paper into a more classical structure such as: <intro without detailed results - previous work & problematic - approach taken with more details for reproducibility - description of the two tasks - description of experiments with more details for reproducibility - results - conclusion>.\", \"The paper mentions the MSVV algorithm twice but no reference or explanation is provided. It is very hard to understand sentences referring to this.\", \"This work only considers problems for which the optimal input distribution is known, but is motivated by the fact that it could be applied to problems for which the optimal distribution is unknown and thus being able to discover new algorithms. It is hard to support this motivation when no experiments are done in its favor.\", \"No comparison has been made between their approach and other previous approaches. We only know that the proposed approach finds near-optimal solutions with a difference of 0.01 competitive ratio. 
It is thus very hard to know if this new approach brings any improvement over previous work.\", \"Below are a few things that were not considered to make a decision, but are only details that would make the paper slightly better:\", \"typo at the beginning of section 3.1: missing 'be' in \\\"This can either *be* by an ...\\\"\", \"typo at the beginning of section 4: missing 'be' in \\\"... the algorithm must irrevocably *be* allocated to ...\\\"\", \"Axis names on the different plots in the Figures would help in understanding them better. Also, the description of some figures could benefit from more details that could be moved from the text.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This papers tackles the following question. Is it possible to learn the \\\"most\\\" complex instance of a class of (combinatorial) problem while finding (or recovering) algorithms with strong minimax rate.\\n\\nThis is very interesting and clearly a nice line of work (in theory though).\\n\\nThe techniques used rely on GANs since it can be shown that finding the best (random) algorithm and the worst (deterministic) instance is equivalent to finding the worst random instance against the best deterministic algorithm. This is actually a direct consequence of any minmax theorem in game theory; the authors decided to credit that result to Yao (I tend to *strongly* disagree with that point as, even if he stated this fact in CS, this result was quite standard several decades before him - anyway.).\\n\\nThen this idea is evaluated in two examples. A toy problem (the ski rental) and a more or less concrete ones (adwords pb of Mehta). This is the major disappointment in the paper. The basic idea is very interesting, but I would have expect more interesting use cases as teased by the first sentence of the abstract \\\"find algorithms with strong worst-case guarantees for online combinatorial optimization problems\\\".\\n\\nSo at the end, I am a bit puzzled. I really like the idea, but I have the feeling that this technique should have been developed for more complicated setting. Or maybe it is actually not working on more difficult combinatorial problem (and this is hidden in the paper). I believe that this paper is thus not in its final form and could be largely improved.\"}"
]
} |
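The discussion in the YaoGAN record above repeatedly cites two classical ski-rental facts: the deterministic break-even rule is roughly 2-competitive, while the randomized rule of Karlin et al. achieves e/(e-1) ~ 1.582. The sketch below checks both numerically; it is independent of the paper's learned networks, and the buy-day distribution is the textbook minimax one, with the purchase price B and the evaluation horizon chosen purely for illustration.

```python
# Numerical check of the ski-rental facts cited in the record above:
# renting costs 1 per day, buying costs B; OPT pays min(k, B) for a
# k-day season. Values of B and the horizon are illustrative.
import math

B = 100

def det_cost(k):
    # Break-even rule: rent for B-1 days, then buy on day B (if reached).
    return k if k < B else (B - 1) + B

# Randomized rule: buy on day i (1..B) with prob proportional to ((B-1)/B)**(B-i).
probs = [((B - 1) / B) ** (B - i) for i in range(1, B + 1)]
Z = sum(probs)
probs = [p / Z for p in probs]

def rand_cost(k):
    """Expected cost of the randomized rule when the season lasts k days."""
    exp_cost = 0.0
    for i, p in enumerate(probs, start=1):
        exp_cost += p * ((i - 1 + B) if i <= k else k)
    return exp_cost

det_cr = max(det_cost(k) / min(k, B) for k in range(1, 3 * B))
rand_cr = max(rand_cost(k) / min(k, B) for k in range(1, 3 * B))
print(det_cr, rand_cr, math.e / (math.e - 1))   # ~1.99, ~1.58, ~1.582
```

The randomized distribution (nearly) equalizes the ratio across season lengths, which is exactly the equilibrium structure that the paper's two-network training is meant to discover from scratch.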
HJeFmkBtvB | Annealed Denoising score matching: learning Energy based model in high-dimensional spaces | [
"Zengyi Li",
"Yubei Chen",
"Friedrich T. Sommer"
] | Energy-based models output unnormalized log-probability values given data samples. Such estimation is essential in a variety of application problems such as sample generation, denoising, sample restoration, outlier detection, Bayesian reasoning, and many more. However, standard maximum likelihood training is computationally expensive due to the requirement of sampling from the model distribution. Score matching potentially alleviates this problem, and denoising score matching (Vincent, 2011) is a particularly convenient version. However, previous attempts failed to produce models capable of high-quality sample synthesis. We believe that this is because they only performed denoising score matching over a single noise scale. To overcome this limitation, here we instead learn an energy function using all noise scales. When sampled using annealed Langevin dynamics and a single-step denoising jump, our model produced high-quality samples comparable to state-of-the-art techniques such as GANs, in addition to assigning likelihoods to test data comparable to previous likelihood models. Our model sets a new sample quality baseline among likelihood-based models. We further demonstrate that our model learns the sample distribution and generalizes well on an image inpainting task. | [
"Energy based models",
"score matching",
"annealing",
"likelihood",
"generative model",
"unsupervised learning"
] | Reject | https://openreview.net/pdf?id=HJeFmkBtvB | https://openreview.net/forum?id=HJeFmkBtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"WynniSQQ5z",
"SJeizFPSjS",
"rJe2xFDrjB",
"S1erAdDHoB",
"H1l69OwBiS",
"BkesvLyAFH",
"rkej7Q86KS",
"rygT2g4otH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728125,
1573382419461,
1573382387831,
1573382349354,
1573382293414,
1571841635071,
1571803939391,
1571664053197
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1624/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1624/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1624/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1624/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1624/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1624/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1624/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a variant of the Noise Conditional Score Network (NCSN) which does score matching using a single Gaussian scale mixture noise model. Unlike the NCSN, it learns a single energy-based model, and therefore can be compared directly to other models in terms of compression. I've read the paper, and the methods, exposition, and experiments all seem solid. Numerically, the score is slightly below the cutoff; reviewers generally think the paper is well-executed, but lacking in novelty and quality of results relative to Song & Ermon (2019).\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"Thank you for your review, comments and encouraging feedbacks!\\n\\nWe have revised the presentation of our algorithm accordingly to better reflect the essence of our algorithm and the conceptual difference between our method and that of Song&Ermon 2019.\\n\\nFor answer to Q1 1) , Q1 3) and part of Q2 1) please refer to the general response and section 3 of the revision of the paper. \\n\\nRegarding Q1 2). Indeed NCSN model take noise scale as input but is not a set of completely separate models, so our statement is not entirely accurate. We have thus updated the relevant statements in our paper. The most essential difference between our model and NCSN is that NCSN learns score of a series of different distributions while our method learns only one distribution. \\n\\nRegarding Q2 1) and 2) The original motivation for our model causes unnecessary confusions, therefore, we have revised our presentation in the updated manuscript. In the revision, we have clarified which part of our algorithm applies generally and which part applies only to Gaussian noise. Essentially equation 4) and 5) apply to any noise distribution, but to make the approximation between equation 5) and 6), one has to choose specific distribution to take the score average over, which will require specific knowledge about the noise distribution.\\n\\nThanks\\n\\nReferences \\nY Song, S Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. NeurIPS 2019.\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"Thank you for your response and please kindly allow us to explain ourselves better.\\n\\nYour major concern, the overlap between our paper and Song & Ermon 2019, and the slight underperformance of our model, has been addressed in the general response. Please also refer to section 3 of our updated manuscript for a better presentation of our proposed model.\\n\\nRegarding your concern about the statement \\u201cassign likelihood to data\\u201d. In our opinion, energy-based models should be considered likelihood-based as energy value represents unnormalized log likelihood. After partition function has been estimated by methods such as AIS and reverse AIS, normalized log-likelihood can be obtained for any data point.\\n\\nWe have also revised section 2 so that it no longer contains speculative claims.\\n\\nThanks\\n\\nReferences \\nY Song, S Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. NeurIPS 2019.\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Thank you for your review, comments and suggestions for corrections!\\n\\nWe realized that the original presentation of our algorithm is misleading and have revised our paper accordingly to better present the core idea of our model. Please kindly take a look at section 3 of the updated manuscript.\\n\\nWe addressed the relationship between our work and that of Song&Ermon 2019 in the general response, which should also clarify your last question: \\u201c Does the model really infer the noise magnitude from a given image?\\u201d. We also discussed possible reasons for the slight underperformance of our model.\\n\\nRegarding the concern about the convergence of annealed Langevin dynamics. We would like to note that it is a well-known classical result that under Langevin dynamics, the probability density of samples evolves according to the Fokker-Plank equation, which then have Boltzmann distribution p(x) = exp(-E(x)/T)/Z as equilibrium solution. This applies to any constant temperature T. However, according to Neal 2001, annealing process is a heuristic method and there is no theoretical guarantee that an annealing process will produce fair samples from the final distribution, although importance sampling technique can be used if an unbiased average of some function is needed (Neal 2001).\\n\\nReferences\\nY Song, S Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. NeurIPS 2019. \\n\\nRM Neal. Annealed Importance Sampling. Statistics and computing, 2001.\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank all the reviewers for their efforts on evaluations and helpful comments! \\n\\nFirst, we would like to address a major concern shared by all three reviewers: a potential overlap between our work and Song&Ermon 2019. We acknowledge that the original paper could evoke this impression. We first explain the difference between Song and Ermons\\u2019 and our model and then describe the changes in the manuscript to address this issue. \\n1) Model differences:\\nThe NCSN model is trained with multiple noise levels and learns a score function conditioned on noise level. In other words, the deep network in the NCSN model learns to map a tuple of a data point and noise level to a score vector.\\n\\nThe most important difference of our model to the NCSN model is that it is an energy-based model. In other words, the deep learning network in our model maps a data point to an energy value. Thus, the mapping uses the noise level in the data point implicitly, rather than receiving the noise level as an additional input parameter. \\n\\nThis difference is reflected in the corresponding objective functions. Both objectives consist of a weighted sum of expectation values of an L2 distance and look superficially similar. But note that in the NCSN objective each L2 term in the sum contains \\\\sigma_i , a parameter that changes with i. In contrast, each L2 term in our model contains the same sigma_0, the parameter of the fixed Parzen window. Further note, that the neural network in NCSN model has \\\\sigma_i as argument, in addition to the data point, whereas the sole argument in the neural network of our model is the data point. \\n\\nAs a result of this difference, our model can perform single step denoising over all noise scales without prior information about the noise magnitude. Further, our model is directly a density function of the data whereas it is not straight-forward how to convert the noise conditioned score of the NCSN model into a density. \\n \\n\\n2) We have thoroughly rewritten the abstract and body of the paper to make our contribution easily accessible. \\n\\nIt is now clearly acknowledged that the NCSN model is the first generative model based on denoising score matching that uses noise with multiple levels in the training to provide state-of-the-art performance in sample synthesis of high dimensional datasets. \\n\\nThe two contributions of our work are now clearly described. \\n\\n1) An energy based model providing state-of-the-art performance (among energy-based models) in sample synthesis of high dimensional data.\\n\\n2) Starting from the manifold hypothesis also used by Song & Ermon, we provide theoretical argument along with empirical evidence on why training with multiple noise levels is required for modeling high-dimensional data.\\nThis contribution has been recognized by Reviewer 2. \\n\\n\\nSecond, all reviewers asked why our model performs slightly inferior on sample generation than the NCSN. \\n\\nOther than the difference in model architecture and fine tuning, one plausible reason for the slight underperformance of our model is the following: Our model is more parsimonious, but during derivation of the objective function (see section 3.1 of the new version of the paper), an approximation is needed which may reduce performance.\\n\\nAdditionally, the NCSN\\u2019s output is a vector that, at least during optimization, does not always have to be the derivative of a scalar function. 
For a vector field of dimension n, being the gradient of a scalar function amounts to satisfying n*(n-1)/2 partial differential equations as constraints, as the high-dimensional equivalent of the curl must be 0. For the CIFAR-10 dataset this amounts to more than 1500 constraints per pixel. In contrast, in our model the network output is a scalar function. Thus it is possible that the NCSN model performs better because it explores a larger set of functions during optimization. \\n\\n\\nReferences\\nY Song, S Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. NeurIPS 2019.\"}",
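A minimal PyTorch sketch (our illustration, not the authors' architecture) of the structural contrast drawn in this response: an energy-based model outputs a scalar E(x) and obtains the score as its gradient, which is a conservative (curl-free) vector field by construction, whereas a direct score network outputs an unconstrained vector field.

```python
import torch
import torch.nn as nn

# Scalar energy network: x in R^784 -> E(x) in R.
energy_net = nn.Sequential(nn.Linear(784, 128), nn.Softplus(), nn.Linear(128, 1))

def score_from_energy(x):
    x = x.requires_grad_(True)
    e = energy_net(x).sum()  # sum over the batch to get a scalar for autograd
    # score(x) = -grad_x E(x): curl-free because it is a gradient field.
    return -torch.autograd.grad(e, x, create_graph=True)[0]

# By contrast, a direct score network maps x -> R^784 with no such constraint:
score_net = nn.Sequential(nn.Linear(784, 128), nn.Softplus(), nn.Linear(128, 784))
```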
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a method of learning of energy based models using denoising score matching. This technique has been used before but only with limited success. The authors hypothesize that this is due to the fact that the matching was only performed over a single noise scale. The main idea of this work is to employ a range of scales to learn a single energy function. This trick helps to alleviate the problem of noisy samples concentrating in a low-volume region of the ambient space.\\n\\nIt seems that the paper draws significant inspiration from the work by Song & Ermon, 19. The difference between the two appears to be minor:\\n1) The density is represented as a Boltzman distribution and therefore the score function is reduced to the gradient of the energy function (this has been done before)\\n2) Instead of conditioning the energy on the noise level the authors propose to use explicit scaling by the inverse temperature\", \"pros\": [\"The paper is mostly well-written.\", \"I think Section 2 does a good job at illustrating challenges in training energy-based models using denoising score matching with a single noise scale.\", \"As compared to (Song & Ermon, 19) using the Boltzman distribution ensures that the learned score is an actual conservative vector field. Arguably, learning an image to scalar network is easier than learning an image to image one.\", \"Samples from the model are of competitive visual quality.\"], \"cons\": [\"Scaling energy by the inverse temperature seems to be one of the most important aspects of the paper but is only justified by \\u201cintuition from physics\\u201d. I\\u2019m not entirely sure that this is a valid assumption. In contrast, (Song & Ermon, 19) don\\u2019t put any hard constraints on the values of the score for different noise levels besides that they are produced by a single conditional network. I would appreciate if the authors discussed that difference in more detail.\", \"The authors don\\u2019t provide any analysis as to whether the annealed Langevin MC procedure leads to the samples from the right distribution.\", \"The quantitative results don\\u2019t seem to be better (actually, they are worse) than those from (Song & Ermon, 19).\", \"Notes/questions:\", \"Abstract: \\u201cunmormalized\\u201d -> \\u201cunnormalized\\u201d\", \"Section 2.1, (1): \\\\tilde{x} -> \\\\tilde{\\\\mathbf{x}}\", \"Section 2.2, paragraph 2: What does superscript C mean in the noisy manifold? Never defined.\", \"Section 2.2, paragraph 4: \\u201csome example\\u201d -> \\u201csome examples\\u201d (?)\", \"Section 3, paragraph 1: \\u201cCIFAT-10\\u201d -> \\u201cCIFAR-10\\u201d\", \"Section 4, paragraph 2: \\u201cfor each T as a separate model\\u201d. I don\\u2019t think this is a correct statement. (Song & Ermon, 19) use a single conditional model for all the noise levels.\", \"Section 4, paragraph 2: \\u201cdoes not rely on explicit receive noise magnitude\\u201d -> \\u201cdoes not rely on receiving noise magnitude explicitly\\u201d (?) I also don\\u2019t quite understand this entire sentence. Does the model really infer the noise magnitude from a given image? It seems like in Equation (7) there is an assumption that the temperature T is equal to 1. 
I don\\u2019t feel like there is a lot of difference between the proposed model and (Song & Ermon, 19) when it comes to supplying noise information. I\\u2019d appreciate if the authors could clarify that bit for me.\", \"My main concern about this paper is that it doesn\\u2019t seem like a big step from its starting point (Song & Ermon, 19). The modifications are shown to work empirically but don\\u2019t result in a significantly better model. Moreover, I feel like the paper could do a better job at justifying those changes. I\\u2019m giving a borderline score but willing to increase it if the authors address my questions.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"########Updated Review ###########\\n\\nI would like to thank the author(s) for their reply, which I have carefully read and it partly addresses my original concerns. Still, as agreed by all three reviewers, this paper might not be a significant step up compared with [1]. I am raising my point to weak reject to reflect my updated belief. I think this paper needs a bit more highlights to pass the threshold. \\n\\n###############################\\n\\n\\nThis paper tries to address the problem of non-parametric maximal likelihood estimation via matching the score function wrt data. It is a clear rejection due to its significant overlap with the recent NeurIPS publication [1]. The author(s) have failed to clarify how their proposal differs from [1] in a significant way. From what I can tell after a quick read, both papers tried to training the score function using the denoising auto-encoder, amortized through a neural network, strategically annealed with a sequence of different noise levels, sampled with the Langevin scheme. I put two papers side-by-side and you can visually tell the uncanny resemblance. Additionally, the proposed model does not outperform that from [1] (see Table 1). I am also not happy about the misleading statement in the abstract that this work \\\"assign likelihood to test data\\\", which is actually performed by AIS. Section 2.2 is particularly problematic. The assumption of \\\"data approximately uniformly distributed on the manifold\\\" is outrageous, which basically invalidates the need for density estimation because of the uniformity. The 1/f power law characteristic is irrelevant to the likelihood estimation problem, and the statements are both heuristic & misleading. \\n\\n[1] Y Song, S Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. NeurIPS 2019.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to learn an energy based generative model using an \\u2018annealed\\u2019 denoising score matching objective. The main contribution of the paper is to show that denoising score matching can be trained on a range of noise scales concurrently using a small modification to the loss. Compared to approximate likelihood learning of Energy based models the key benefit is to sidestep the need for sampling from the model distribution which has proven to be very challenging in practice. Using a slightly modified Langevin Sampler the paper further demonstrated encouraging sample qualities on CIFAR10 as measured by FID and IS scores.\\n\\nOverall I think the paper is well motivated and written, experiments are sound with encouraging results that will be useful for further progress in training energy based models. I currently score the paper as a \\u2018weak accept\\u2019, the reason for not giving \\u2018accept\\u2019 is that I think the paper is closely related to Song & Ermin 2019 (see detailed comments below) - However i can be convinced to bump my score depending on the author feedback \\n\\nQ1) I think you should elaborate more on how exactly your method is different from the NCSN model presented in Song & Ermon 2019? Especially.\\nQ1.1) Is your method similar to the NCSN except that you do linear scaling with temperature in the loss and train a joint model across all temperature scales?\\n\\nQ1.2) In the related works section you claim that \\u2018[Song & Ermin] \\u2026 this model learns p(xhat) for each T as a separate model\\u2019. Quickly reading through that paper i do not think that statement is accurate - I think they learn a model where the main difference is that it takes T as input instead of scaling the gradient term in the loss?\\n\\nQ1.3 )Do you have any intuition for why they seem to get slightly better results than the one you obtain in your paper? Is it simply architecture/training details that differ or something more \\u2018fundamental\\u2019?\\n\\n\\nQ2) In relation to the Score matching objective.\\nQ2.1) In eq (4) it is not completely clear to me what the motivation for linear scaling in T is. Can you elaborate on what you mean with \\u2018We borrow intuition from physics and simply set E_T(xhat) = E(xhat)/T ...\\u2019?\\nIn relation to the above Can you clarify which part of your results holds for Gaussian noise and which holds in general. \\n\\nQ2.2) For the gaussian case I think linear scaling as done in eq(5) is sensible, however for arbitrary noise distributions linear scaling is akin to a first order approximation (which might be inaccurate across a range of different noise levels)?\", \"minor\": \"I think it would ease the reading of the paper if you showed the derivation (in appendix) that Eq (1) and Eq(2) are equivalent.\", \"minor_comment\": \"Learning generative models using denoising have also been explored in [Soenderby 2016]. Here the difficulties of different noise scales was also found and explored but (importantly) not solved.\\n\\n[Song & Ermon]. Generative Modeling by Estimating Gradients of the Data Distribution\\n[Soenderby 2016]: Amortised map inference for image super-resolution\"}"
]
} |
SJx_QJHYDB | Finding Winning Tickets with Limited (or No) Supervision | [
"Mathilde Caron",
"Ari Morcos",
"Piotr Bojanowski",
"Julien Mairal",
"Armand Joulin"
] | The lottery ticket hypothesis argues that neural networks contain sparse subnetworks, which, if appropriately initialized (the winning tickets), are capable of matching the accuracy of the full network when trained in isolation. Empirically made in different contexts, such an observation opens interesting questions about the dynamics of neural network optimization and the importance of their initializations. However, the properties of winning tickets are not well understood, especially the importance of supervision in the generating process. In this paper, we aim to answer the following open questions: can we find winning tickets with few data samples or few labels? can we even obtain good tickets without supervision? Perhaps surprisingly, we provide a positive answer to both, by generating winning tickets with limited access to data, or with self-supervision---thus without using manual annotations---and then demonstrating the transferability of the tickets to challenging classification tasks such as ImageNet.
| [
"Lottery Tickets Hypothesis",
"Self-Supervised Learning",
"Deep Learning",
"Image Recognition"
] | Reject | https://openreview.net/pdf?id=SJx_QJHYDB | https://openreview.net/forum?id=SJx_QJHYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YWcJxsJx6i",
"rJgexFfhir",
"BkxyHdzhsr",
"BJxzhLG3jr",
"SJl5O8GnjS",
"H1gzfIG2or",
"ryxRdy869S",
"B1gdk_E5cH",
"SJeuh5z19B",
"SkxRhOSTFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798728095,
1573820648364,
1573820471483,
1573820074332,
1573820017912,
1573819914304,
1572851574375,
1572648927694,
1571920560406,
1571801270321
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1623/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1623/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1623/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1623/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1623/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1623/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1623/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1623/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1623/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper studies finding winning tickets with limited supervision. The authors consider a variety of different settings. An interesting contribution is to show that findings on small datasets may be misleading. That said, all three reviewers agree that novelty is limited, and some found inconsistencies and passages that were hard to read: Based on this, it seems the paper doesn't quite meet the ICLR bar in its current form.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to review #3\", \"comment\": \"We agree with the reviewer that our contribution is not methodological but mainly experimental.\\n\\nAlso, we agree that some of our observations might be intuitively expected and reasonable, though we note that the fact that winning tickets transfer between similar datasets with labels (as shown in Morcos et al., 2019) does not necessarily imply that transfer from self-supervised tasks should be possible. Based on previous results, it is entirely plausible that winning tickets are dependent on labels (i.e., p(y|x) vs. p(x)). \\nFurthermore, we argue that, even if these results are expected, confirming these intuitions with rigorous experimentations as we propose in our paper is still important, as noted by R2 (\\u201ckind of expected, but it is still good that this paper provide solid experimental results to verify this\\u201d). \\n\\n\\u201cI also don't see a practical benefit beyond transfer learning setup.\\u201c: As a byproduct, the label-agnostic winning tickets also allow to study the transferability of winning tickets between different tasks, which has a concrete practical benefit. Indeed, similarly to the motivation of Morcos et al., if winning tickets can transfer between tasks, then they can be reused across a variety of problems, thus dispensing the need for generating new winning tickets for each new task.\\n\\n\\u201cgiven that lottery tickets are transferable (Morcos paper) it is really not that surprising\\u201d: The datasets used in the paper of Morcos et al. were pretty similar to one another, and the fact that we can transfer between similar supervised tasks does not suggest that we should be able to transfer from self-supervised to supervised. Our work provides insights regarding the dependence of winning tickets on p(x) vs. p(y|x).\\n\\n\\\"I was surprised to not see pseudo-labeling or consistency training\\\": For our semi-supervised experiment, we choose to focus solely on the semi-supervised technique introduced in the paper \\\"S4L: Self-Supervised Semi-Supervised Learning\\\" of Zhai et al. (ICCV 2019) because it yields better performance compared to VAT or pseudo-labeling on ImageNet (see Table 1. from their paper).\"}",
"{\"title\": \"Reply to review #1\", \"comment\": \"1) Following the reviewer's comment, we report in Appendix E tables the exact accuracies in each setting. We report mean and standard errors for our experiments which we run with 3 (ImageNet and Places) or 6 (CIFAR) different seeds. We thank the reviewer for this recommendation and for helping us improving the clarity and robustness of the comparison.\\n\\n2) The \\u201cRandom - adjusted\\u201d baseline is not obtained by applying the pruning mask to randomly initialized weights. In the following lines, we motivate and clarify this baseline and have included this explanation in the paper updated version.\\nWe find that deep architectures (VGG-19 or ResNet-18 for example) trained on CIFAR-10 are naturally sparse (~80% of the weights are zeroed at convergence). Pruning a network at rates below its level of natural sparsity without impacting the performance is trivial because the network is already sparse. Indeed, we found that in the random global pruning baseline (which can remove non zero weights), pruning at rates below the natural sparsity of the network degrades accuracy, while pruning of weights that are already zeroed has no effect. Experiments performed with pruning rates below the natural level of sparsity of the network (~80%) are uninformative. Inconveniently, this performance gap carries over to higher pruning rates (in which we are interested in) and can lead to misleading interpretations. The random adjusted baseline removes this effect by first pruning the weights that naturally converge to zero after training. Then, we randomly mask the remaining non-zeroed weights to get different final desired pruning rates. The remaining non-masked weights are randomly initialized. This baseline therefore corrects for the natural sparsity present in CIFAR-10 networks. \\n\\nRegarding the random initialization remark, Liu et al. indeed show in Figure 7.a (unstructured iterative pruning) that starting from randomly reinitialized weights works well on deep architectures (VGG-16 and ResNet-50) on CIFAR-10 when pruned at rates below ~90%. This is consistent with the observations of Frankle et al. (2019) in the Appendix A of their paper. Indeed, Frankle et al. (2019) also show that up to a certain level of sparsity, training the subnetwork from its original weights or from random re-initialization gives comparable performance. However, for more extreme pruning rates (>90%), resetting the subnetwork to its original weights gives better performance than random re-initialization. In this work, we follow up on the work of Frankle et al. (2019) and Morcos et al. (2019) that both provide empirical evidence that in the regime of large datasets or high pruning rates, starting from a particular set of weights instead of random initialization is critical to reach high accuracy.\\nFor completeness though, we take into account the remark of the reviewer and have included in Appendix G results with random re-initialization for winning tickets found with labels or with RotNet self-supervised task on both ImageNet and CIFAR-10. \\nOn ImageNet, consistently with the experiments of Frankle et al. (2019) we observe that resetting the weights accordingly is crucial to get high accuracy. Indeed, on both ResNet-50 and AlexNet, for labels, the subnetworks that are reset to their weights early in training (dark blue plain line) perform significantly better than subnetworks randomly re-initialized (dark blue dashed line). 
Interestingly, this is not the case for RotNet winning tickets: starting from original weights (pink plain line) gives only a very slight boost (or even no boost at all) in performance compared to random re-initialization (pink dashed line). Overall, labels or rotnet subnetworks perform in the same ballpark when randomly re-initialized, but using the original weights gives a large boost in performance for labels but not for rotnet. Thus, it suggests that the information carried by the pruned mask itself is similar for labels and rotnet subnetworks but the weights of the rotnet winning tickets are not as good starting points as the weights from labels winning tickets. We thank the reviewer for suggesting this experiment; it gives interesting insights into the difference in performance between labels and rotnet winning tickets.\\nOn CIFAR-10, up to a certain level of sparsity that roughly corresponds to the natural level of sparsity of the network, using random re-initialization or weights \\u2018early in training\\u2019 gives similar performance. However, for more extreme pruning rates, using a particular set of weights gives significantly better performance than random re-initialization.\\n\\n3) We chose not to vary the dataset size on CIFAR-10 because it is already small. However, following the reviewer's recommendation we include results with CIFAR-10 in Appendix F.\\n\\nOverall, we hope that our updated version of the paper along with our comments provide clarifications about our experimental settings and reinforce the validity of our results.\"}",
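As we read the description of the "Random - adjusted" baseline above, the construction can be sketched as follows (a hypothetical helper, not the authors' code): first prune the weights that naturally converge to zero after training, then randomly mask surviving weights down to the desired final pruning rate; the remaining weights are then randomly initialized.

```python
import numpy as np

def random_adjusted_mask(trained_weights, target_rate, tol=1e-8):
    # target_rate: desired fraction of weights pruned in the final mask.
    w = trained_weights.ravel()
    mask = np.ones_like(w)
    mask[np.abs(w) < tol] = 0.0            # drop naturally-zeroed weights first
    survivors = np.flatnonzero(mask)
    keep = int((1.0 - target_rate) * w.size)
    if keep < survivors.size:              # then randomly prune to target rate
        drop = np.random.choice(survivors, survivors.size - keep, replace=False)
        mask[drop] = 0.0
    return mask.reshape(trained_weights.shape)
```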
"{\"title\": \"Reply to review #4\", \"comment\": \"We thank the reviewer for this positive feedback. We did not experiment on the particular scenario of multi-task learning with limited amount of data; we agree that this is an interesting problem for future work.\"}",
"{\"title\": \"Reply to review #2\", \"comment\": \"We thank the reviewer for this constructive and thoughtful feedback.\\n\\u201cThis undermines the *bold* claim in the abstract\\u201d: The remark about the bold claim is a fair point, and we have updated the paper with this caveat accordingly.\\n\\n\\u201cThis paper raises the issue of ill-definedness of \\u201cearly in training\\u201d, but did not provide a solution.\\u201c: We agree that the fact that we do not provide a solution to the problem of late resetting is slightly disappointing. Yet, this is not the main focus of our paper.\\n\\n\\u201cthe ability to exactly perserve the accuracy while pruning the weights\\u201c: We emphasize that our primary aim is to better understand lottery tickets rather than just get good performance. In particular, we are interested in whether the winning ticket initializations derived from data with little or no supervision outperform subnetworks initialized randomly. Our finding that these winning tickets do in fact outperform random tickets suggests that the properties of winning ticket initializations which lead to better optimization are largely independent of labels, and rather mostly rely on p(x) (though we do note, as the reviewer pointed out, that the inclusion of labels does lead to better winning tickets, though not by much). \\n\\n\\u201cI feel that the novelty of this paper is limited, and do not provide much new insights.\\u201c\\u201d: Please see our general comment for more detail on the novelty of our work and why the insights we generated are relevant to future work on the lottery ticket hypothesis.\"}",
"{\"title\": \"Global comment\", \"comment\": \"We thank the reviewers for taking the time to provide detailed and thoughtful comments. This constructive feedback has been helping us improving our submission.\\n\\nOur contribution is essentially experimental and we were pleased to see that overall, the reviewers found our experimental results to be \\u201csolid and provide more understandings of the lottery ticket hypothesis\\u201d (R2) and assessed that we have \\u201cconducted extensive experiments on three open questions and results prove [ours] assumptions\\u201d (R4).\\nYet, reviewer 1 is concerned by the robustness of our experimental setup and we address his or her concerns in our reply and in the updated version of the paper.\\n\\nThe main caveat from the reviewers relates to the lack of novelty (R2: \\u201cthe novelty of this paper is limited\\u201d; R3: \\u201cimmediate followup on Morcos et al.\\u201d, \\u201cfairly obvious\\u201d). They also question the interest and practical value of our study (R2: \\u201cdo not provide much new insights\\u201d; R3: \\u201c I also don't see a practical benefit\\u201d).\\n\\n*Novelty.*\\nTo the best of our knowledge, we propose the first study of the lottery ticket hypothesis in the context of limited access to samples and labels*.* Our experiments are fairly extensive: we generate winning tickets on ImageNet for several different settings (2 different self-supervision losses, 4 different sizes of dataset and 4 different number of classes, semi-supervision) at 14 different pruning rates ranging from 20% to 99.9%, thus covering both moderate and extreme sparsity.\\nWe are the first paper addressing the lottery ticket hypothesis with a majority of our experiments conducted on ImageNet,\\nwhile showing that conclusions on smaller datasets may be misleading. Our experiments show indeed that deep networks trained on CIFAR-10 are naturally sparse, making conclusions potentially incorrect.\\n\\nMoreover, our findings are different from Morcos et al., who show that winning tickets can transfer between different datasets with a common domain (natural images) trained on the same task (labels classification). The fact that we can transfer between similar supervised tasks does not suggest that we should be able to transfer from self-supervised to supervised tasks. Also, it does not guarantee that winning tickets found with only 10 classes (out of 1000) transfer well to full ImageNet for example. Besides, even if these results were expected somehow, it would still be essential to verify these with rigorous experiments, as we propose in our paper.\\n\\n\\n*Motivation - why does it matter ?*\\nIn our submission, we aim to better understand the properties of winning tickets. Indeed, a better understanding of winning ticket properties might enable faster winning ticket generation and thus allow for concrete applications in fields such as network compression or initialization. We propose an extensive series of experiments investigating winning ticket generation with limited access to labels and samples in order to isolate and assess the dependance in p(x) and p(y|x) of the winning tickets.\\nAs a byproduct of this design, the label-agnostic winning tickets also allow to study the transferability of winning tickets between different tasks, which has a concrete practical benefit. 
Indeed, similar to the motivation of Morcos et al., if winning tickets can transfer between tasks, then they can be reused across a variety of problems, thus dispensing with the need to generate new winning tickets for each new task.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper empirically studies the lottery ticket hypothesis with limited or no supervision. First, the authors use self-supervised learning to generate winning tickets, showing that \\\"good\\\" (reasonable) winning tickets can be found without labels. Second, the authors show that finding \\\"good\\\" (reasonable) winning tickets can be accelerated by a factor 5 on ImageNet by using only a subset of the data. The authors also argue that using large datasets is important to study lottery tickets, since deep networks trained on CIFAR-10 are natually sparse, making conclusions potentially misleading.\\n\\nThe experimental results are rich and provide more understanding of winning ticket generation with limited or no supervision. The results on self-supervised learning task (including the layer-wise pruning results) and a subset of training dataset are reasonable and kind of expected, but it is still good that this paper provide solid experimental results to verify this. As the paper observed, \\\"none of the tickets found with limited access to labels and or data matches the accuracy of tickets found with all the labeled data when considering moderate pruning rates (more than 10% of unpruned weights)\\non ImageNet. Indeed, we consistently observe a decrease in performance compared to the full overparametrized network as soon as we prune the network.\\\" In this sense, winning tickets are certainly label and data dependant. This undermines the *bold* claim in the abstract that \\\"we provide a positive answer to both questions, by generating winning tickets with limited access to data, or with self-supervision\\\". From my perspective, the ability to exactly perserve the accuracy while pruning the weights (see the flat regions of \\\"Lables\\\" curves in Figure 1,2,3,4,5) is the interesting part of the lottery ticket hypothesis. We have several different ways to achieve a descreased accuracy with a smaller network, the dynamics there may be a mixture of the lottery ticket hypothesis and standard model pruning, which needs more careful experiment design to separate different dynamics.\\n\\n\\\"using large datasets is important to study lottery tickets, since deep networks trained on CIFAR-10 are natually sparse, making conclusions potentially misleading.\\\" \\\"The definition of \\u201cearly in training\\u201d is somehow ill-defined: network\\nweights change much more for the first epochs than for the last ones.\\\" These two messages are important to future study of the lottery ticket hypothesis. This paper raises the issue of ill-definedness of \\u201cearly in training\\u201d, but did not provide a solution. \\n\\nOverall, I found that the experimental results in this paper are solid and provide more understandings of the lottery ticket hypothesis. However, I feel that the novelty of this paper is limited, and do not provide much new insights. Therefore, it does not reach the bar of being published at ICLR, from my perspective. Therefore, I say \\\"Weak Reject\\\".\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper, the authors try to provide empirical answers to several important open questions on winning tickets. They conduct most of experiments on ImageNet and results show that winning ticket is robust, and few data samples can also obtain good winning tickets.\\n\\nGenerally, the paper has conducted extensive experiments on three open questions and results prove their assumptions.\\n\\nAs describe in page 7, lottery tickets are sensitive to data distributions. I\\u2019m wondering, whether there will be winning ticket for multi-task learning with limited data each task? Will this be helpful in distilling the model?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In the original lottery ticket paper, it points out that training the pruned architecture from scratch with initial weight can achieve the same performance compared to fine-tuning it. This work further discuss this phenomenon when data or label is not enough. It is good to see the few-data/label can still provide a comparable results. But some experiment\\u2019s results and its setting are confusing, while also makes me concerned about the conclusion solidness.\\n\\n1) Usually in classification task (especially in cifar10 dataset), 0.5% to 1% accuracy could be a huge gap between two models. For example, in the original \\u201cLottery Ticket Hypothesis\\u201d paper, using initial weight only has roughly 0.5% improvement compared to random initialization. But the figures in this paper do not contain a zoom-in details for each line, make me hard to distinguish the performance between each setting. If the author does not provide a detailed version, it will look like theses model have the same performance, which is actually wrong. The author should either plot a zoom-in figure especially when the pruning ratio larger than 50% or give a Table with accuracy of each setting. And it is better to complete the figure with several random seed and plot the error bar to avoid randomness.\\n\\n2) Does the \\u201cRandom - adjusted\\u201d item in Figure. 1 mean the correctly pruning architecture with random initialization? In \\\"rethinking the value of network pruning\\\", Liu et al. points that in the large learning rate setting (lr=0.1, which is also your setting), random initialization can achieve the same performance compared to the lottery ticket. In my perspective, I want to see whether few-data/label also works on random initialization instead of lottery tickets. I expect the author to explain the \\u201cRandom -adjusted\\u201d experiment setting clearly in the response and I suggest the author to discuss the\\nrandom initialization part specifically.\\n\\n3) Figure.3 only shows the \\u201cvarying dataset size\\u201d experiments on ImageNet. The experiments on cifar10 is lacked. The author should complete this part in the response.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the problem of finding sparse networks in a limited supervision setup. The authors build on the lottery ticket work of Frankle & Carbin and investigate the validity of their idea when one has few or no labels. This work is an immediate followup on Morcos et al. who investigated the transferability of lottery tickets.\\n\\nThis work is more observational rather than algorithmic or theoretical. Authors study various small sample/label setups where network sparsification works well. \\n\\nMain contribution is Section 4.1 where self-supervision is investigated. However given that lottery tickets are transferable (Morcos paper) it is really not that surprising that semisupervised learning algorithms will do a decent job as well. I also don't see a practical benefit beyond transfer learning setup.\\n\\nSection 4.2 essentially sweeps through supervised problem parameters such as reducing sample size, adding noise etc and . The main application seems to be extracting lottery tickets faster by downsampling the data however this aspect is again fairly obvious. \\n\\nIn short, unfortunately, this paper doesn't cut it for ICLR. As improvements, I would recommend adding standard semi-supervised training techniques to their comparison. I was surprised to not see pseudo-labeling or consistency training (e.g. virtual adversarial training).\"}"
]
} |
HkxdQkSYDB | Graph Convolutional Reinforcement Learning | [
"Jiechuan Jiang",
"Chen Dun",
"Tiejun Huang",
"Zongqing Lu"
] | Learning to cooperate is crucially important in multi-agent environments. The key is to understand the mutual interplay between agents. However, multi-agent environments are highly dynamic, where agents keep moving and their neighbors change quickly. This makes it hard to learn abstract representations of mutual interplay between agents. To tackle these difficulties, we propose graph convolutional reinforcement learning, where graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment, and relation kernels capture the interplay between agents by their relation representations. Latent features produced by convolutional layers from gradually increased receptive fields are exploited to learn cooperation, and cooperation is further improved by temporal relation regularization for consistency. Empirically, we show that our method substantially outperforms existing methods in a variety of cooperative scenarios. | [
"agents",
"graph convolutional reinforcement",
"environments",
"mutual interplay",
"cooperation",
"important",
"key",
"dynamic",
"neighbors",
"hard"
] | Accept (Poster) | https://openreview.net/pdf?id=HkxdQkSYDB | https://openreview.net/forum?id=HkxdQkSYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Sz-dXUzt6A",
"M-tH3al04L",
"ByxZYj7hjr",
"BJeVi97hsH",
"HJg1W8g3sS",
"BklJkMghjH",
"BJgLDYsciB",
"BJld6mP9or",
"HylRNbw9iS",
"H1xLgxw9ir",
"rygwdyD9sS",
"SJlH50IcoS",
"SygTyGcOcr",
"Skga4mX4cS",
"HJg3DQ7gcH",
"Byxv13sAFS",
"SygMD_i_uB",
"SklOSsFduH",
"SJx0QUYOdB",
"r1gtJmtu_r",
"BJxoQu_udS"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_review",
"official_comment",
"comment",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1581447790786,
1576798728064,
1573825400616,
1573825180059,
1573811702593,
1573810647306,
1573726558346,
1573708735922,
1573708085964,
1573707758393,
1573707630683,
1573707404800,
1572540901222,
1572250421246,
1571988323873,
1571892190658,
1570449498192,
1570442047936,
1570440742466,
1570439904533,
1570437154679
],
"note_signatures": [
[
"~Douglas_De_Rizzo_Meneghetti1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1622/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1622/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1622/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1622/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"~Huiknight_Li1"
],
[
"ICLR.cc/2020/Conference/Paper1622/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"~Hopeful_Rational2"
],
[
"ICLR.cc/2020/Conference/Paper1622/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1622/Authors"
],
[
"~Hopeful_Rational2"
]
],
"structured_content_str": [
"{\"title\": \"Questions related to the graph and some spelling/notation mistakes\", \"comment\": \"The paper mentions the creation of a matrix $F^t$ which is invariant to the ordering of nodes in the graph, but what happens in the experiments if one of the agents (or enemies) die? Are all adjacency values for all other agents zeroed for the entity that died? Conversely, what happens if a new agent is added to the environment? Can all the matrices and the overall model accommodate this?\\n\\nI missed a figure exemplifying how graphs are created, how they change over time or even what they look like. Are all nodes in the graph agents in the same team or does the graph model other things, such as adversaries and environment objects? Why are the edges determined by an arbitrary distance measure if the messaging between nodes is later weighted by self-attention? Couldn't the graph be complete and the attention weights learned to allow the model to learn what to ignore?\", \"problem_with_notation\": \"In page 3, L is used as \\\"the length of feature vector\\\". In page 5, L is used as number of enemies.\\n\\nThe acronym \\\"DGN\\\" is never defined. I suppose it was chosen to establish the model as a graph-convolutional variant of DQN, but it would be nice to define it, e.g. as \\\"Deep Graph Network\\\".\\n\\nThere were some spelling mistakes, but I suggest finding and fixing one instance of \\\"regularation\\\" in the text.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The work proposes a graph convolutional network based approach to multi-agent reinforcement learning. This approach is designed to be able to adaptively capture changing interactions between agents. Initial reviews highlighted several limitations but these were largely addressed by the authors. The resulting paper makes a valuable contribution by proposing a well-motivated approach, and by conducting extensive empirical validation and analysis that result in novel insights. I encourage the authors to take on board any remaining reviewer suggestions as they prepare the camera ready version of the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \">>> Clarifying \\u2018these two methods require the objects in the environment are explicitly labeled, which is infeasible in many real-world applications.\\u2019\\n\\nThese two methods use the entities in the environment as the nodes of the graph. So, they need explicitly know what the entities are and where they are in the environment to construct the graph at each timestep. However, in real-world applications, such information cannot be obtained.\\n\\n>>> About increasing the neighbourhood even more.\\n\\nWhen $|\\\\mathbb{B}|$=4, the receptive field of the second convolutional layer is 1+4*(1+4)=21. It is able to cover all of the 20 agents. And the experiments of $|\\\\mathbb{B}|$=1,2,3, and 4 have verified our claims about how the size of neighbors $|\\\\mathbb{B}|$ affects the performance of DGN. When increasing the neighborhood even more, the method will become a full communication method. \\n\\n>>> \\\"I believe that if you make a claim which is not supported by the other two experiments, then the claim might be wrong. In this case, removing the ablation results does not add clarity, but hides the important information.\\\"\\n\\nIn the revise version, we have added the ablation study of DGN in jungle and routing in Appendix. Please refer to the last paragraph of Appendix and Figure 15 and 16 for details.\"}",
"{\"title\": \"Response\", \"comment\": \">>> ...This makes it hard to learn abstract representations of mutual interplay between agents.\\n\\nFirst, the mutual interplay between agents is hard to be quantitatively represented. Moreover, multi-agent environments are changing quickly, which is a result caused by all agents, so it is hard to capture the pairwise relation. Our relation kernel is a neat method to quantitatively represent the pairwise relation.\\n\\n>>> \\\"It would be beneficial for the paper and all its readers if you write down the precise formalism. Is it Dec-POMDP?\\\"\\n\\nYes, it is Dec-POMDP. We have added the formalism at the first paragraph of Section 3.1.\\n\\n>>> \\\"I still don't understand why it is the case. Thanks for the additional experiments, however, I would like to see similar experiments for the other two domains.\\\"\\n\\nThe graph of the agents changes quickly. The change of the graph at next state will cause the change of target Q value. This is a problem of moving target, which is similar to the problem the target network addressed in DQN. That is the reason why Q-function is difficult to converge. We really do not have enough time to perform additional experiments on other two scenarios before the deadline of rebuttal.\"}",
"{\"title\": \"Reviewer response\", \"comment\": \">>> ...This makes it hard to learn abstract representations of mutual interplay between agents.\\n\\nI don't understand why it is the case.\\n\\n>>> Our problem is a POMDP, where each agent gets a partial observation of the state and obtains a local reward. The objective is to maximize the sum of all agents\\u2019 expected returns.\\n\\nIt would be beneficial for the paper and all its readers if you write down the precise formalism. Is it Dec-POMDP?\\n\\n>>> However, the graph changes quickly, which makes Q-function difficult to converge.\\n\\nI still don't understand why it is the case. Thanks for the additional experiments, however, I would like to see similar experiments for the other two domains.\"}",
"{\"title\": \"Reviewer's response\", \"comment\": \"I appreciate the time and effort the authors invested in improving their paper.\\n\\n>>> However, these two methods require the objects in the environment are explicitly labeled, which is infeasible in many real-world applications.\\n\\nCan you, please, clarify this?\\n\\n>>> We have performed additional experiments on bigger neighborhood.\\n\\nDo I understand correctly, that the total number of agents was 20 there? If yes, it would be interesting to increase the neighbourhood even more.\\n\\n>>> In other two scenarios, the same conclusions can also be drawn by ablation, but not as significant as in battle. Thus, we neglect the ablation results in these two scenarios for clarity. \\n\\nI believe that if you make a claim which is not supported by the other two experiments, then the claim might be wrong. In this case, removing the ablation results does not add clarity, but hides the important information.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your clarifications and for updating the manuscript. The clarity in the revised manuscript is indeed significantly improved and I feel confident in recommending acceptance of the paper.\", \"one_minor_comment\": \"it might be useful to run the paper through a spelling/grammar checker (an automated service should suffice) to fix some small grammar mistakes, such as \\\"in dynamic environment where the neighbors of agent quickly change\\\".\"}",
"{\"title\": \"Responses to Review #3\", \"comment\": \"We have rewritten the section of temporal relation regularization to better present our idea. The first two paragraphs of Section 3.3 have addressed the reviewer\\u2019s comments on temporal relation regularization. In the following, we first list the responses to these comments for reference, and then give the responses to other comments. \\n\\n>>> \\u2018Intuitively, if the relation representation produced by the relation kernel of upper layer truly captures the abstract relation between surrounding agents and itself, such relation representation should be stable/consistent\\u2019 (Please clarify)\\n\\nIn CNNs/GCNs, higher layer learns more abstract representation. Similarly, in DGN, the relation representation captured by upper layer should be more abstract and stable. This is the motivation of applying temporal relation regularization to the upper layer.\\n\\n>>> \\u2018We use the attention weight distribution in the next state as the target for the current attention weight distribution\\u2019 (What is the reasoning behind this? Would an exponential moving average of attention logits/weights work as well?) \\n\\nCooperation is a persistent and long-term process. Who to cooperate with and how to cooperate should be consistent and stable for at least a short period of time even when the state/feature of surrounding agents changes. Thus, the attention weight distribution over the neighboring agents should be also consistent and stable for a short period of time. To make the learned attention weight distribution stable over timesteps, we use the attention weight distribution in the next state as the target for the current attention weight distribution. \\n\\nIn dynamic environment where the neighbors of agent quickly change, moving average and RNN structures cannot be performed on the attention weights of different neighbors and thus do not work. \\n\\n>>> \\u2018Since we only focus on the self-consistent of the relation representation based on the current feature extraction network we apply current network to the next state to produce the new relation representation instead of the target network as in deep Q learning\\u2019 (unclear)\\n\\nFor the calculation of KL divergence between relation representations in two timesteps, we apply current network to the next state to produce the target relation representation. This is because relation representation is highly correlated with the weights of feature extraction. But update of such weights in target network always lags behind that of current network, making the relation representation produced by target network not consistent with that produced by current network.\\n\\n>>> Explaining the arguments in the KL term in Eq. 4.\\n\\nYou understood it correctly. But we need to point out that $\\\\mathcal{G}_m^{\\\\kappa}$, the attention weights after the softmax (Equation 2), are actually a distribution over its neighbors, with the probability $\\\\alpha_{ij}$ for each neighbor $j$. Thus, we could use KL divergence to measure the difference.\\n\\n>>> \\u201cKL is not symmetric -- what motivates the particular ordering in your case? Did you consider symmetric divergences such as JSD?\\u201d\\n\\nSymmetry is not necessary. We use the attention weights in the next state as the target and only update the attention weights at the current state to make it close to the target. 
KL(current|target) measures how the current attention weight distribution is different from the target attention weight distribution. We also tested MSE, a symmetric metric, and it also works. However, the performance of KL divergence is better. \\n\\n>>>Assembling adjacency matrices\\n\\nAssembling adjacency matrices is not necessary but an easy way to implement multi-head attention. This technique is also used in GAT (ICLR 2018) (keras version). In fact, our implementation is also efficient and highly parallel on GPUs, since the computation of the algorithm is realized by dot products. GNN libraries might help the efficiency and are compatible with DGN. We will try these techniques to investigate the difference. \\n\\n>>> \\u201cIt should further be mentioned that some of the baselines are trained with a different training algorithm and do not only differ in agent architecture (e.g. CommNet) \\u2014 what is the effect of this?\\u201d\\n\\nAll the baselines are trained with Q-learning and only differ in agent architecture. As pointed out in the CommNet paper, CommNet can be combined with standard RL algorithms or supervised learning. We use the Q-learning version as our baseline.\\n\\n>>> About observation and preprocessing\\n\\nIn the experiments, the observation of each agent is a square view with 11 \\u00d7 11 grids centered at the agent plus its own coordinates, which is provided by MAgent without preprocessing. \\n\\n>>> About the random seeds\\n\\nWe indeed changed the random seeds for each run. In routing, each router connects to three other routers, and thus the action space of each agent (packet) is small. Therefore, the convergence process of each algorithm is prone to be similar under different random seeds.\"}",
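A minimal PyTorch sketch (our illustration, not the authors' implementation) of the temporal relation regularization discussed in this thread: the attention weight distribution over neighbors at the next state, produced by the current network, serves as a fixed target for the current distribution, with a separate KL term per attention head. The coefficient name beta is our assumption.

```python
import torch
import torch.nn.functional as F

def temporal_relation_loss(logits_t, logits_next, beta=0.01):
    # logits_*: (batch, heads, num_neighbors) attention logits from the upper
    # convolutional layer at states s_t and s_{t+1} (current network for both).
    log_p = F.log_softmax(logits_t, dim=-1)
    q = F.softmax(logits_next.detach(), dim=-1)   # target: no gradient flows
    # KL(current || target) per head, then averaged over batch and heads.
    kl = (log_p.exp() * (log_p - torch.log(q + 1e-8))).sum(dim=-1)
    return beta * kl.mean()
```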
"{\"title\": \"Responses to Review #1\", \"comment\": \"DGN and all the baselines we compared with are Q-learning algorithms, but ATOC and TarMAC are actor-critic algorithms. Moreover, TarMAC uses a centralized critic that optimizes the shared global reward, which is different from other methods. Thus, they are not quite suitable for fair comparison. Nevertheless, we performed additional experiments to compare with them. As shown in Figure 11, DGN outperforms ATOC. The reason is that LSTM kernel is worse than multi-head attention kernel in capturing relation between agents. Like CommNet, TarMAC is also a full communication method. Similarly, DGN also outperforms TarMAC. The reason is that receiving redundant information may negatively affect the performance. \\n\\nWe consider the case where the state information is unavailable and the agent can only use observations/encodings from other agents to learn to construct and exploit more centralized information. This is true in many real-world applications. However, collecting more observations/encodings incurs more costs and irrelevant observations/encodings can even negatively affect the performance. Because of this, as shown in Figure 9, the performance of $|\\\\mathbb{B}|$=4 is worse than $|\\\\mathbb{B}|$=3 or 2. RFM requires the true state information for supervise learning, and thus RFM is not suitable for fair comparison. \\n\\nMoreover, we have revised the paper to properly use the acronyms, cite the references, and fix the typos.\"}",
"{\"title\": \"Responses to Review #4: part 1\", \"comment\": \">>> Related work\\n\\nThanks for bringing up the missing references. MAGnet [Malysheva et al., 2018] learns relevance information in the form of a relevance graph, where the relation weights are learned by pre-defined loss function based on heuristic rules, but relation weights in DGN are learned by directly minimizing the temporal-difference error of value function end-to-end. Agarwal et al. (2019) used attention mechanism for communication and proposed a curriculum learning for transferable cooperation. However, these two methods require the objects in the environment are explicitly labeled, which is infeasible in many real-world applications. However, DGN agents only use their raw local observation. We have included these references and clarified the differences in the reversion. \\n\\n>>> The metrics to determine the neighbor set.\\n\\nThe set of neighbors of an agent could be the agents in its local observation, or the agents within its communication range. It depends on specific scenarios. In the experiments, we use distance and select k-nearest agents as the neighbors and we have also investigated how the number of neighbors affects the performance.\\n\\n>>> Additional experiment to verify the claim that it may be costly and less helpful to take all other agents into consideration.\\n \\nThanks for your constructive suggestion. We have performed additional experiments on bigger neighborhood. As shown in Figure 9, when we set $|\\\\mathbb{B}|$ = 4, the performance drops. In addition, as shown in Figure 6, the full communication method, CommNet, has very limited performance. These verify that it may be less helpful and even negatively affect the performance to take all other agents into consideration. Due to limited time, we have not yet reconstructed the paper to incorporate Figure 8 and 9 in the main part of the paper. We will do that in the final version.\\n\\n>>> \\u201cAt the end of Section 3.1, you mention the soft update of the target network. Later, in 3.3, you say that that you do not use the target network. Can you elaborate more on that?\\u201d\\n\\nWe indeed use the target network to produce the target value for computing TD-error of Q function. However, for the calculation of the KL divergence between relation representations in two timesteps, we use current network instead of target network. The reason is explained in detail in the second paragraph of Section 3.3. \\n\\n>>> \\u201cIn Equation 4, is it a separate KL for each of the attention heads? If yes, this is not clear from the formula.\\u201d\\n\\nYes, it is the separate KL for each of the attention heads, we have made this clear in Equation 4 in the reversion.\\n\\n>>> Explaining the ablation experiments for other testbeds.\\n\\nIn other two scenarios, the same conclusions can also be drawn by ablation, but not as significant as in battle. Thus, we neglect the ablation results in these two scenarios for clarity. \\n\\n>>> \\u201cWhy does the DQN performance drop in the second half of the training in Figure 4 for all of the runs?\\u201d\\n\\nThe enemy model built in MAgent is very powerful, making the Battle game difficult. We watched and analyzed their behaviors for all the runs. As described in Section 4.1, at the beginning, DQN agents learn sub-optimum strategies such as gathering at a corner to avoid to be attacked. These strategies might help at the beginning, and thus the reward is relatively high. 
But the agents at the edge of the group are easily attacked, receiving low reward and making the reward unevenly distributed within the group. Fitting the 'low reward data' produced by the sub-optimal policy, the DQN converges to a more passive policy, e.g., moving disorderly. That is why the mean reward decreases in the later phase.\"}",
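As a concrete illustration of the neighbor selection described in part 1 above (selecting the k nearest agents by distance), a minimal NumPy sketch is given below; the function name and the use of 2-D coordinates are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def knn_neighbors(positions, k=3):
    """For N agents with 2-D coordinates (positions has shape (N, 2)),
    return the indices of each agent's k nearest agents by Euclidean
    distance, excluding the agent itself."""
    diffs = positions[:, None, :] - positions[None, :, :]  # (N, N, 2)
    dists = np.linalg.norm(diffs, axis=-1)                 # (N, N)
    np.fill_diagonal(dists, np.inf)                        # exclude self
    return np.argsort(dists, axis=1)[:, :k]                # (N, k)
```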
"{\"title\": \"Responses to Review #4: part 2\", \"comment\": \">>> About the dynamic graph\\n\\nWe model the multi-agent environment as a graph, where each agent is a node and there is an edge between an agent and each neighbor. As agents keep moving and their neighbors changes quickly, the graph is highly dynamic. We have made this clear in the revision. \\n\\n>>> \\u201cWhat do you mean precisely by easy to scale? Can you support this claim?\\u201d\\n\\nSince all agents use the same neural network weights, we can directly apply the models trained with small-scale agents to the large-scale scenario. In routing, we apply the trained models (N=20) to the setting from N=40 to N=200. As illustrated in Table 3 and Figure 12, DGN continuously outperforms Floyd with BL up to N = 140.\\n\\n>>> About the testbed jungle\\n\\nJungle is a testbed designed by ourselves. It is a typical social dilemma where agents must learn to eat foods together and avoid to attack each other. \\n\\n>>> MDP formalism in partially observable environments.\\n\\nOur problem is a POMDP, where each agent gets a partial observation of the state and obtains a local reward. The objective is to maximize the sum of all agents\\u2019 expected returns.\\n\\n\\n>>> The meaning of \\u2018more convolutional layers will not increase the local region of node i.\\u2019\\n\\nWe mean regardless of how many convolutional layers are stacked, node i only communicates with its neighbors. This makes DGN practical in real-world applications, where each agent has limited communication range (e.g., wireless communication). \\n\\n\\n>>> The difficulty of learning on the changing graph of agents. \\n\\nIdeally, Q-function should be learned on the changing graph of agents. However, the graph changes quickly, which makes Q-function difficult to converge. Fixing the graph in two successive timesteps mitigates the effect of changing graph and eases the learning difficulty. We performed additional experiments to investigate this. Figure 10 shows that fixing the graph indeed speed up the learning. Moreover, keeping the agent graph unchanged is also necessary for temporal relation regularization. \\n\\n>>> Explaining the factorization by DGN.\\n\\nThe objective is to optimize the sum of all agents\\u2019 expected returns. DGN factorizes the problem by each agent optimizes its own local reward, similar to CommNet, BiCNet, etc. Note that this factorization is different from VDN, QMIX and QTRAN where all agents share a global environmental reward and there is still a centralized Q-function that directly optimizes the shared reward during training. In DGN, CommNet and BiCNet, as each agent learns to optimize its own local reward, you could also see them as sophisticated independent Q-learning.\\n\\n\\nMoreover, in Equation 3, we concatenate the output of each attention head, which is described in the paragraph above the equation. The size of the environment is 30x30.\"}",
"{\"title\": \"To all the reviewers\", \"comment\": \"We appreciate the efforts made by the anonymous reviewers on reviewing our paper. Many thanks for the comments which are especially useful for us to improve the quality of this paper. In this revised version, we have carefully addressed the concerns of the reviewers by fixing the problems, performing additional experiments, rewriting the section of temporal relation regularization, and adding necessary references and explanations. We have also improved the writing of the paper. We hope that the reviewers will find our revision satisfactory.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper addresses the problem of coordination in the multi-agent reinforcement learning setting. It proposes the value function factorization similar to independent Q-learning conditioning on the output of the graph convolutional neural network, where the graph topology is based on the agents\\u2019 nearest neighbours. The paper is interesting and has some great ideas, for example, KL term to ensure temporal cooperation consistency. However, the paper has its drawbacks and I feel obliged to point them out below. I vote for the weak acceptance of this paper.\\n\\nOne of the main drawbacks of the paper is that it is extremely hard to grasp. Even the Abstract and Introduction are hard to understand without having a pass over the whole paper. The authors often use vague terms such as 'highly dynamic environments' or 'dynamics of the graph' which make it hard to understand what they mean. The paper would benefit from a more precise language. Some of the important notions of the paper are used before they are introduced, which make the general picture very hard to understand and to relate the work to the existing research.\\n\\n'Related Work' section seems to be missing some recent work applying graph neural networks to multi-agent learning settings:\\n\\u2022 Malysheva, Aleksandra, Tegg Taekyong Sung, Chae-Bong Sohn, Daniel Kudenko, and Aleksei Shpilman. \\\"Deep Multi-Agent Reinforcement Learning with Relevance Graphs.\\\" arXiv preprint arXiv:1811.12557 (2018).\\n\\u2022 Agarwal, Akshat, Sumit Kumar, and Katia Sycara. \\\"Learning Transferable Cooperative Behavior in Multi-Agent Teams.\\\" arXiv preprint arXiv:1906.01202 (2019).\", \"my_questions_to_the_authors\": \"\\u2022 In section 3 you mention 'a set of neighbours ..., which is determined by distance or other metrics'. Can you elaborate on that? What are these metrics in your case?\\n\\u2022 Just before the Section 3.1, you say 'In addition, in many multi-agent environments, it may be costly and less helpful to take all other agents into consideration.' Have you run any experiments on that? In the appendix, you show, that making the neighbourhood smaller negatively affects the performance, but what if you make it bigger? Ideally, I would like to see an extended version of Figure 8 and 9 in the main part of the paper since they are very interesting and important for the claims the paper makes.\\n\\u2022 At the end of Section 3.1, you mention the soft update of the target network. Later, in 3.3, you say that that you do not use the target network. Can you elaborate more on that?\\n\\u2022 In Equation 4, is it a separate KL for each of the attention heads? 
If yes, this is not clear from the formula.\\n\\u2022 It would be useful to see the ablation experiments for all of the testbeds, not only for Battle.\\n\\u2022 Why do you think the DQN performance drops in the second half of the training in Figure 4 for all of the runs?\\n\\u2022 Have you tried summation instead of the mean aggregation step?\\n\\nI will put comments for particular parts of the paper below.\\n\\nABSTRACT\\n\\n>>> ...environments are highly dynamic\\n\\nWhat do you mean precisely here?\\n\\n>>> ...graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment\\n\\nWhat is the 'dynamics of the underlying graph'? What is the 'graph of the multi-agent environment'?\\n\\n>>> 'coordination is further boosted'\\n\\nNot sure that 'boosted' is the right word here.\\n\\nINTRODUCTION\\n\\n>>> '...where mutual interplay between humans is abstracted by their relations'\\n\\nNot sure what it means.\\n\\n>>> we consider the underlying graph of agents...\\n\\nThe agent graph has not been introduced yet.\\n\\n>>> DGN shares weights among all agent(s) making it easy to scale\\n\\nWhat do you mean precisely by 'easy to scale'? Can you support this claim?\\n\\n>>> We empirically show the learning effectiveness of DGN in jungle\\n\\nNeeds a reference to the testbed.\\n\\n>>> ... interplay between agents and abstract relation representation\\n\\nWhat is 'abstract relation representation'?\\n\\n>>> We consider partially observable environments.\\n\\nWhat do you mean precisely by that? What is the MDP formalism most suitable for your problem statement? What is the objective under your formalism?\\n\\n>>> However, more convolutional layers will not increase the local region of node i.\\n\\nWhat do you mean by that?\\n\\n>>> As the number and position of agents vary over time, the underlying graph continuously changes, which brings difficulties to graph convolution.\\n\\nWhat kind of difficulties?\\n\\n>>> As the action of agent can change the graph at next timestep which makes it hard to learn Q function.\\n\\nWhy does it make it hard?\\n\\n>>> DGN can also be seen as a factorization of a centralized policy that outputs actions for all the agents to optimize the average expected return.\\n\\nIt would be useful for the reader to compare your approach with all the other types of value function factorization. To me, your approach looks like a more sophisticated version of independent Q-learning; is that true?\", \"minor_comments\": [\"In 3.2 it would be very helpful to put the dimensions for all of the variables for easier understanding.\", \"The brackets in Equation 3 are not very clear (what are you concatenating across?)\", \"In section 4, when describing an environment you say 'local observation that contains a square view with 11x11 grids'. What is the total size of the environment?\", \"The performance plots for Battle include ablations before the ablation subsection is introduced. This is a bit confusing.\", \"All figure/table captions should be more detailed and descriptive.\", \"\\u2018However, causal influence is not directly related to the reward of environment.\\u2019 Should be \\u2018of the environment\\u2019.\"]}",
"{\"title\": \"RE: The released code lacks some key files.\", \"comment\": \"There is a readme file in Battle and Jungle, respectively. Please check if it helps. Let us know if it does not.\"}",
"{\"title\": \"The released code lacks some key files.\", \"comment\": \"I am interested in your work and try to reproduce your work. However, there is no any comments about in the released code and the running environment. Could you please update the code to make it easily be reproduced ?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an algorithm allowing \\\"cooperation\\\" between agents in multi-agent reinforcement learning, modeling agents as nodes in a graph. Each agent having only a partial view of the environment, the proposed algorithm uses multi-head attention as a (graph) convolution kernel but otherwise remains similar to the DQN algorithm. Performance is evaluated on three tasks using the MAgent framework.\\n\\nThe paper is reasonably well motivated, grounded and written. It addresses an interesting question: how to make agents cooperate in an efficient way? It does so by combining ideas from two lines of work, bringing incremental novelty. \\n\\nMy main concern relates to the experiments. It seems that ATOC and TarMAC would be the best baselines to compare against for a fair evaluation of the algorithm. Could they be added?\", \"one_question_for_the_authors\": \"at the beginning of Section 3, it is stated that \\\"it may be costly and less helpful to take all other agents into consideration\\\". It seems counter intuitive that DGN with several convolutional layers (to have a large receptive field) would be less costly than directly receiving global information? And isn't, in a sense, DGN also making use of global information when it has a large enough receptive field, even if indirectly? In this case, would it also make sense to more thoroughly compare DGN with RFM or other global state algorithms? Can this be clarified?\\n\\nFinally, readability is somewhat hindered by several small issues:\\n- Acronyms used in the paper should really be introduced, at least when they are first used. DGN is never introduced, DGN-R/DGN-M are introduced several paragraphs after being first mentioned and BL needs some guessing.\\n- Re-citing the same paper several times when mentioned in different sections is good practice. I found myself going over and over back to the related work section to find references and acronyms.\", \"some_typos\": \"\", \"page_1\": \"among all agent -> among all agents\", \"page_3\": \"of S -> of size S\", \"page_4\": \"weighed -> weighted, concate -> concatenate\", \"page_5\": \"respecitvely -> respectively\", \"page_6\": \"regularation\"}",
"{\"comment\": \"We tested MSE and it also works. However, the performance of KL divergence is better. As mentioned in the paper, the relation in different timesteps should not be the same but similar, thus we use KL divergence to compute the distance between the distributions.\", \"title\": \"RE:KL divergence\"}",
"{\"comment\": \"Thanks for the quick reply. Is there any specific reason for using KL divergence instead of any other divergence measures?\", \"title\": \"KL divergence\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces Graph Convolutional Reinforcement Learning (referred to as DGN). DGN is a Deep Q-Learning (DQN) agent structured as a graph neural network / graph convolutional network with multi-head dot product attention as a message aggregation function. Graphs are obtained based on spatial neighborhoods (e.g. k nearest neighbors) or based on network structure in the domain. DGN considers a multi-agent setting with a centralized learning algorithm and shared parameters across all (controlled) agents, but individually allocated reward. Further, the paper considers environments where other non-learning agents are present which follow a pre-trained, stationary policy. In addition to the attention-based multi-agent architecture, the paper introduces a regularizer on attention weights similar to the use of target networks in DQN, to stabilize training. Results demonstrate that the proposed model architecture outperforms related earlier agent architectures that do not use attention or use a fully-connected graph.\\n\\nOverall, this paper addresses an interesting problem, introduces a novel combination of well-established architecture/agent building blocks and introduces a novel regularizer. The novelty and significance of the contributions, however is limited, as many recent works have explored using graph-structured representations and attention in multi-agent domains (e.g. VAIN: Hoshen (NeurIPS 2017), Zambaldi et al. (ICLR 2019), Tacchetti et al. (ICLR 2019)). The combination of these blocks and the considered problem setting is novel, but otherwise incremental. Nonetheless, the results are interesting, the overall architecture is simple (which I consider to be a good sign), and the attention regularizer is novel, hence I would rate this paper as relevant to the ICLR audience.\", \"my_main_concern_with_this_paper_is_clarity_of_writing\": \"I have the feeling that important details are missing and some modeling decisions and formulas are difficult to understand. For example, I found section 3.3 difficult to read. The following sentences/statements need revision or further explanation:\\n* \\u201cIntuitively, if the relation representation produced by the relation kernel of upper layer truly captures the abstract relation between surrounding agents and itself, such relation representation should be stable/consistent\\u201d (Please clarify)\\n* \\u201cWe use the attention weight distribution in the next state as the target for the current attention weight distribution\\u201d (What is the reasoning behind this? Would an exponential moving average of attention logits/weights work as well?)\\n* \\u201cWhile RNN/LSTM forces consistent action, regardless of cooperation\\u201d (unclear)\\n* \\u201cSince we only focus on the self-consistent of the relation representation based on the current feature extraction network we apply current network to the next state to produce the new relation representation instead of the target network as in deep Q learning\\u201d (unclear)\\n* The KL term in Eq. 4 is odd: z_i is defined as G^K and vice versa, neither of them appear to be distributions. 
I suppose one of the two arguments of the KL term should be the attention distribution for the current time step and the other argument for the next time step (if I understood the motivation in the earlier paragraph correctly), but this is not evident from Eq. 4.\\n* KL is not symmetric -- what motivates the particular ordering in your case? Did you consider symmetric divergences such as Jensen-Shannon divergence (JSD)?\\n\\nI also wonder about the necessity of assembling adjacency matrices per node to create an intermediate ordered representation of the neighborhood on which, afterwards, an order-invariant operation such as mean pooling or self-attentive pooling is applied. Wouldn't it be more efficient to implement these operations directly using sparse scatter/gather operations as most recent GNN frameworks implement these techniques (e.g. PyTorch Geometric or DeepMind's graph_nets library)?\\n\\nFurther, important experimental details are missing, e.g., how observations / node features are represented / obtained from the environment and preprocessed. Do you encode position (continuous/discrete) and normalize in some way? It should further be mentioned that some of the baselines are trained with a different training algorithm and do not only differ in agent architecture (e.g. CommNet) \\u2014 what is the effect of this?\\n\\nExperimentally, the results seem sound, but the variance in the results is suprisingly low (see e.g. Figure 7 DQN) \\u2014 did you change the random seed between runs (both environment seed and the seed for initializing the agent weights)?\\n\\nOverall, this paper is interesting but needs revision in terms of clarity. Novelty is incremental, but if the paper would otherwise be very well written, I think it could qualify for acceptance. In its current state, I recommend a weak reject.\\n\\n\\n--- UPDATE AFTER REVISION ---\\nThe clarity in the revised manuscript is significantly improved and I feel confident in recommending acceptance of the paper.\"}",
"{\"comment\": \"After the softmax, the attention weight in the local region is a distribution, with the probability $\\\\alpha_{ij}$. We use KL divergence to measure the difference between the attention weight distributions in two timesteps. Please refer to the code for more details.\", \"title\": \"Re: Use of KL divergence\"}",
"{\"comment\": \"Hi. The paper is nice. However, could you please elaborate more on the use of KL divergence for computing the distance between the attention weight distributions. Thanks.\", \"title\": \"Use of KL divergence\"}"
]
} |
HJePXkHtvS | Deep Generative Classifier for Out-of-distribution Sample Detection | [
"Dongha Lee",
"Sehun Yu",
"Hwanjo Yu"
] | The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution always does not match with the training distribution in most real-world applications. In this work, we propose a deep generative classifier which is effective to detect out-of-distribution samples as well as classify in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks. Unlike the discriminative (or softmax) classifier that only focuses on the decision boundary partitioning its latent space into multiple regions, our generative classifier aims to explicitly model class-conditional distributions as separable Gaussian distributions. Thereby, we can define the confidence score by the distance between a test sample and the center of each distribution. Our empirical evaluation on multi-class images and tabular data demonstrate that the generative classifier achieves the best performances in distinguishing out-of-distribution samples, and also it can be generalized well for various types of deep neural networks. | [
"Out-of-distribution Detection",
"Generative Classifier",
"Deep Neural Networks",
"Multi-class Classification",
"Gaussian Discriminant Analysis"
] | Reject | https://openreview.net/pdf?id=HJePXkHtvS | https://openreview.net/forum?id=HJePXkHtvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"tOhGpVMw39",
"Byl13TdZsS",
"HJgDWCr-oH",
"BJxYqil-iH",
"rkg7luoWcS",
"Sygh7IBZ5r",
"SJgAzmZUYr",
"SJx0sj7NOS",
"Syl8zfSZ_S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798728034,
1573125542532,
1573113343502,
1573092240525,
1572087786784,
1572062756314,
1571324693554,
1570155430041,
1569964558331
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1620/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1620/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1620/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1620/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1620/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1620/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1620/Authors"
],
[
"~Yen-Chang_Hsu1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper presents a training method for deep neural networks to detect out-of-distribution samples under perspective of Gaussian discriminant analysis.\\n\\nReviewers and AC agree that some idea is given in the previous work (although it does not focus on training), and additional ideas in the paper are not super novel. Furthermore, experimental results are weak, e.g., comparison with other deep generative classifiers are desirable, as the paper focuses on training such deep models.\\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Rebuttal #2\", \"comment\": \"Thanks for your review.\\n\\nWe would disagree with the reviewer on the aspect of novelty. Our work is not about just a modification of the learning objective, but designing a novel objective for effectively detecting OOD samples by using deep neural networks (DNNs) in perspective of Gaussian discriminant analysis (GDA). Unlike the objective used for training the softmax classifier, our proposed objective is theoretically derived from GDA, and this theoretical background guarantees that each class-conditional distribution follows isotropic Gaussian distribution with the same variance in the latent space.\\n\\nNote that the latent space optimized by the softmax classifier does not guarantee that 1) the class-conditional distributions follow the Gaussian distribution and 2) they have the same covariance matrix, as shown in Figure 1. However, the state-of-the-art method [Lee et al. 2018] computes the Mahalanobis distance using the tied-covariance matrix (assuming that all the covariances are the same) in such space. For this reason, the Mahalanobis method cannot accurately capture the confidence of each sample (i.e., how likely the sample belongs to the in-distribution), and the proposed method clearly address this problem. Thereby, our generative classifier achieves higher OOD detection accuracy than the state-of-the-art method.\", \"about_your_question\": \"The proposed classifier does not need any additional optimization techniques. In the experiments, we used the Xavier weight initialization and the Adam optimizer, which are conventionally used for DNNs (or the softmax classifier). In this sense, we claim that our learning objective empirically provides the stable convergence and it can be easily employed in the deep learning framework without complicated mathematical modeling or sophisticated optimization.\\n\\n[Lee et al. 2018] A Simple Unified Framework for Detecting Out-of-distribution Samples and Adversarial Attacks, NIPS 2018\"}",
"{\"title\": \"Rebuttal #1\", \"comment\": \"Thanks for your review.\\n\\n1) The main motivation of our work is from the observation that the previous Mahalanobis method adopts the concept of the generative classifier under the strong assumption, which is not realistic enough. Specifically, the latent space optimized by the softmax classifier does not guarantee that each empirical class-conditional distribution follows the Gaussian distribution. In addition, the Mahalanobis detector requires the linear discriminant analysis (LDA) assumption that all the class covariances are the same, to compute the Mahalanobis distance by using tied-covariance matrix. Figure 1 clearly shows that the latent space trained by the softmax classifier is not suitable for the Mahalanobis detector in that 1) all the covariances are not the same as well as 2) ID and OOD samples are difficult to be distinguished. \\n\\nTo address this limitation, we optimize the latent space so that each class-conditional distribution follows the isotropic Gaussian distribution with the same covariance. By doing so, we can simply define the confidence score based on the Euclidean distance without any assumptions, and our proposed score more effectively distinguishes OOD samples from ID samples than the Mahalanobis detector that works on the latent space trained by the softmax classifier. \\n\\n2) Thank you for letting us know the missing related work. However, they do not define the confidence score that can be used for detecting OOD samples, so we cannot directly compare their performance. It is worth noting that all of them simply focus on ID classification, not OOD detection (please refer to our responses to the reviewer #3). Unlike the existing distance-based classifiers, our objective introduces the regularization term for OOD detection motivated by [Ruff et al. 2018] and it enables to accurately detect OOD samples. \\n\\nIn the research literature of OOD detection, there have been several attempts to re-train a network based on their own objectives [Malinin and Gales 2018], but they cannot avoid compromising the performance of ID classification. For this reason, the detectors that employ the pre-trained softmax classifier (including the baseline detector and Mahalanobis detector) have gained much attention. In this sense, although our main task is OOD detection, we report the ID classification results in order to emphasize that our classifier succeeds to improve the performance of OOD detection without compromising the ID classification accuracy.\", \"about_your_questions\": \"1) Thanks for your suggestion, but it sounds quite challenging to learn the covariance that approximates an arbitrary matrix. By letting it be an identity matrix, our objective can be easily implemented on the deep learning framework as well as efficiently compute the confidence score.\\n\\n2) Our proposed objective enforces that the covariance of pre-trained features approximate an identity matrix. Thus, we think the Mahalanobis detector would hardly affect the performance, even though the actual covariance could not be an identity matrix exactly. \\n\\n\\n[Ruff et al. 2018] Deep One-class Classification, ICML 2018\\n[Malinin and Gales 2018] Predictive Uncertainty Estimation via Prior Networks, NIPS 2018\"}",
"{\"title\": \"Rebuttal #3\", \"comment\": \"Thanks for your review.\\n\\n1) We want to emphasize that the most important part of our proposed classifier is the regularization term, because it plays a key role to accurately detect OOD samples. The challenge of the OOD detection task is to obtain the decision boundary between ID samples and OOD samples. To this end, we aim to learn K one-class classifiers that have sphere-shaped decision boundaries with minimum volumes by using the regularization term. DeepSVDD [Ruff et al. 2018] showed that such a sphere-shaped decision boundary is effective to detect abnormal samples in one-class setting, so we extend it to multi-class setting specifically for the OOD detection task. On the other hand, the existing distance-based classifiers including [Snell et al. 2017] only focus on the ID classification based on the distance, so their OOD detection performance would be poor. In Figure 2, the classifier trained with a very small regularization coefficient $\\\\lambda=10^{-3}$ (it seems to be almost the same model with [Snell et al. 2017]) achieves the poor performance in terms of OOD detection while still showing the good performance in terms of ID classification. \\n\\n2) In Table 2 and 3, we already compared the performances with the state-of-the-art method, which is Mahalanobis method. [Lee et al. 2018] demonstrated that the Mahalanobis method outperforms both the baseline (plain) and another one (ODIN; equipped with calibration techniques) based on softmax. Furthermore, as we mentioned in the paper, calibration techniques such as temperature scaling and input perturbation are not practical because they require OOD samples from the test distribution to find the best hyperparameter values for OOD detection. For this reason, we omit the comparison with ODIN. Note that the OOD detection performance of our proposed classifier would be much better if any calibration techniques are applied to.\\n\\n[Ruff et al. 2018] Deep One-class Classification, ICML 2018\\n[Snell et al. 2017] Prototypical Networks for Few-shot Learning, NIPS 2017\\n[Lee et al. 2018] A Simple Unified Framework for Detecting Out-of-distribution Samples and Adversarial Attacks, NIPS 2018\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper presents an algorithm two learn both classifier and out-of-distribution sample detector. Instead of learning softmax weights, the proposed approach learns to project the inputs to a latent space, where each class is a Gaussian distribution. Out-of-distribution samples can be detected by the distance between the learnt representation and centers. The proposed approach can be viewed as generalization of Gaussian discriminant analysis and one-class classification. The proposed approach is technically sound, and the experiments do show some improvement over previous algorithm on out-of-distribution detection, especially on tabular datasets. However I think there are some weaknesses of this paper\", \"The novelty is a little thin. The proposed algorithm is based on just a modification of the learning objective, and there are no theoretical analysis of why the proposed approach can work better.\", \"Experimental result is somewhat weak. Improvement on the image datasets seems marginal, especially on the SVHN dataset. I also doubt if classifying Cifar10 against TinyImageNet or LSUN challenging enough, because these datasets are fairly different. I am interested in whether the proposed approach can detect novel classes, such as training using only 9 of 10 classes on Cifar10.\"], \"another_question\": \"does learning classifier as well as centers need additional optimization techniques, like special initialization?\\n\\nUpdate\\n======= \\n\\nAfter a careful read of the Mahabolis baseline (Lee et al., 2018) I agree with the authors that this paper has some novelty comparing with previous works, i.e., directly learning a generative classifier instead of converting a discrimitively trained classifier into generative. Combined with the good results obtained. I will raise my score to a weak accept (though without a strong belief).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a metric learning-based generative model for detecting the out-of-distribution examples. A new objective function is proposed to model class-dependent class-distribution into a Gaussian analysis models. For the proposed objective, the illustration of derived KL divergence under the Gaussian discriminative analysis assumption is well done. The empirical results conclude the superiority of the proposed loss function in both tabular and image datasets, when comparing the plain network and one with a softmax.\\n\\nThis study aims is to detect out-of-distribution samples for better generalization. However, the related works need to be revised and present the novelty of the work compared to some metric and distance-based learning algorithms. For example, the proposed idea is similar to adding a regularization term to the prototypical network with Euclidean distance (Snell et al. 2016). This aspect is not very well explained.\\n\\nAnother issue is the lack of comparison with state-of-the-art approaches. The Related Work section (Sec. 2) show a baseline (plain) and another one based on softmax. Experimental comparison with state-of-the-art will help to position this work.\\n\\n** Update ** I read the authors comments and other reviews. Although some clarification were useful, I still maintain my rating of \\\"weak reject\\\", I don't get much excitment and I am not feeling there is something great with this work.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nUnlike the softmax classifier, the authors considered the generative classifier based on Gaussian discriminative analysis and showed that such deep generative classifiers can be useful for detecting out-of-distribution samples. For various benchmark tasks, the proposed method outperforms baselines based on the softmax classifier.\", \"detailed_comments\": \"\", \"the_novelty_of_this_paper_is_not_significant_due_to_the_following_reasons\": \"1. The main message (i.e. the concept of the deep generative classifier can be useful for detecting out-of-distribution samples) is not really new because it has been explored before [Lee' 18]. Even though this paper considers training a deep generative classifier directly unlike [Lee' 18], the proposed method looks like a simple variant of [Lee' 18].\\n\\n2. Missing baselines for training the deep generative classifier: training the deep generative classifier directly has been studied by [Guerriero' 18] and [Pang' 18] but the authors did not compare the proposed training method with such baselines. Because of that, it is hard to say that contributions from proposing a training method are significant.\", \"questions\": \"1. Could the authors consider a case without an identity covariance assumption? Most training methods for deep generative classifier assumes the identity covariance matrix because optimizing the log determinant is not easy. So, it would be interesting if the authors can handle this issue. \\n\\n2. Even though the authors assume the identity covariance matrix, the covariance matrix of pre-trained features can not be an identity matrix. Could the authors report the performance of Mahalanobis detector using the proposed deep generative classifier? \\n\\n[Lee' 18] Lee, K., Lee, K., Lee, H. and Shin, J., 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems (pp. 7167-7177).\\n\\n[Guerriero' 18] Samantha Guerriero, Barbara Caputo, Thomas Mensink, DeepNCM: Deep Nearest Class Mean Classifiers, ICLR workshop 2018.\\n\\n[Pang' 18] Pang, T., Du, C. and Zhu, J., Max-mahalanobis linear discriminant analysis networks. In ICML, 2018.\"}",
"{\"comment\": \"Thanks for your comment! The minimization of our KL-divergence term does not guarantee to achieve a unit covariance matrix in the class-conditional distribution, because it is identical to minimizing the variances as you pointed out. We observed that the term \\\\sum ||f(x)-c_k||^2 also can be derived from KL(P_k || N(c_k, \\\\sigma^2 I)) in the same way, i.e., even in the case that we assume the isotropic Gaussian distribution. This phenomenon occurs because of the empirical distribution P_k based on the dirac delta function, which is not continuous but has non-zero values only at the points where the f(x) exists. In this situation, reducing the distance between each point and the class center eventually makes the empirical distribution approximate the isotropic Gaussian distribution, regardless of its assumed variance \\\\sigma. However, it does not affect our overall framework. To use the Euclidean distance for OOD detection and ID classification, the actual values of the variances are not important as long as they are the same for all the classes. In order to control the effect of this KL-divergence term, we introduced the hyperparameter \\\\lambda, which determines the final variance of the class-conditional distributions while interacting with the log posterior term in our objective.\", \"title\": \"RE: A question about how to enforce the covariance to be an identity matrix\"}",
"{\"comment\": \"Thanks for the interesting idea! Using a generative classifier for detecting OOD makes lots of sense. It seems that using the GDA assumption while enforcing unit covariance matrix is the key step. Would you elaborate more about how the unit covariance matrix be achieved? The confusion comes from Section 2, in that the loss term derived from KL(P_k||N(c_k, I)) will have an effect of keep minimizing the variance, instead of driving its covariance to be an identity matrix. Is there an empirical observation showing that a unit covariance matrix is achieved by adding this loss term? Thanks!\", \"title\": \"A question about how to enforce the covariance to be an identity matrix\"}"
]
} |
SyxDXJStPS | Reparameterized Variational Divergence Minimization for Stable Imitation | [
"Dilip Arumugam",
"Debadeepta Dey",
"Alekh Agarwal",
"Asli Celikyilmaz",
"Elnaz Nouri",
"Eric Horvitz",
"Bill Dolan"
] | State-of-the-art results in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior. Analogous techniques for imitation learning from observations alone (without expert action labels), however, have not enjoyed the same ubiquitous successes.
Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can result in significant differences in model performance. Choices including Wasserstein distance and various $f$-divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning. Unfortunately, we find that in practice this existing imitation-learning framework for using $f$-divergences suffers from numerical instabilities stemming from the combination of function approximation and policy-gradient reinforcement learning. In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as $f$-divergence minimization before further extending the framework to handle the problem of imitation from observations only. Empirically, we demonstrate that our design choices for coupling imitation learning and $f$-divergences are critical to recovering successful imitation policies. Moreover, we find that with the appropriate choice of $f$-divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continuous-control tasks with low-dimensional observation spaces. With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work. | [
"Imitation Learning",
"Reinforcement Learning",
"Adversarial Learning",
"Learning from Demonstration"
] | Reject | https://openreview.net/pdf?id=SyxDXJStPS | https://openreview.net/forum?id=SyxDXJStPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"pZn9H9Ucnv",
"HJgm_OE2sr",
"rygFMXN2sH",
"r1eIsMLssr",
"BkxBpGwGsr",
"SyeL3GPGiS",
"B1xhyGwzjB",
"BJlj0WDfiS",
"r1epoxDGiB",
"SJg7BxvMjH",
"ryg7NuiTtH",
"BkgezmwptS",
"HyxFuo-vYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727976,
1573828715063,
1573827345312,
1573769886390,
1573184189336,
1573184173577,
1573183971821,
1573183955169,
1573183653366,
1573183546656,
1571825707246,
1571808008187,
1571392369108
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1619/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1619/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1619/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1619/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1619/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1619/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1619/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1619/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1619/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1619/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1619/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1619/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission performs empirical analysis on f-VIM (Ke, 2019), a method for imitation learning by f-divergence minimization. The paper especially focues on a state-only formulation akin to GAILfO (Torabi et al., 2018b). The main contributions are:\\n1) The paper identifies numerical proplems with the output activations of f-VIM and suggest a scheme to choose them such that the resulting rewards are bounded.\\n2) A regularizer that was proposed by Mescheder et al. (2018) for GANs is tested in the adversarial imitation learning setting.\\n3) In order to handle state-only demonstrations, the technique of GAILfO is applied to f-VIM (then denoted f-VIMO) which inputs state-nextStates instead of state-actions to the discriminator.\\n\\nThe reviewers found the submitted paper hard to follow, which suggests a revision might make more apparent the author's contributions in later submissions of this work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Some replies to your reply\", \"comment\": \"\\\"We invite the reviewer to offer concrete suggestions on how to perform the suggested \\u201ctheoretical analysis\\u201d of variational function activation choices when estimating f-divergences.\\\"\\n- I can't! I just wanted to point out that the sigmoid activation is just as arbitrary as the activations proposed by Nowozin et al. (2016).\\n\\n\\\"we ask the reviewer to be precise and specify exactly which aspects of the evaluation are insufficient.\\\"\\n- I believe I did this already in my original rebuttal:\\nRegarding the regularization, I do not think that its effect can be properly evaluated when evaluating it only for a single coefficient (we apparently disagree here).\\nRegarding the state-nextState-Discriminator, I think that comparing it with a state-only-discriminator would be a useful comparison. Apparently we also disagree here. Yes, we can construct tasks where state-only matching fails. But, how does this relate to the experiments in the paper? Also note that we can also construct tasks where matching state-transitions fails, e.g. tasks where different actions lead to the same state but have different \\\"energy\\\" costs.\\n\\n\\\"reference for AIRL minimizing the reverse KL\\\"\\n- I think you referred to that paper yourself. It is:\\nGhasemipour et al.\\\"A Divergence Minimization Perspective on Imitation Learning Methods\\\". 2019.\\nSee Section 4.1 in https://arxiv.org/pdf/1911.02256.pdf\"}",
"{\"title\": \"I think reward bias is a major problem in the current evaluation\", \"comment\": \"I just want to note that I think that reward bias seems to be a highly plausible explanation for the presented results--well spotted!\", \"to_provide_some_context\": \"Kostrikov et al. (2019) showed a bug in many imitation learning implementation that stems from the fact that trajectories returned by common frameworks (include Baselines/Gym) do not include absorbing states, which prevents imitation learning algorithms from learning the reward for these states and from applying the learned reward function to these states. Instead their reward/return is implicitly set to zero. This has to be considered a bug in the implementation not a shortcoming of the derived algorithms which would require to learn the reward for all states. The effect of this bug is that methods that learn reward function that only produce negative values will always result in optimal policies that terminate the episode as quickly as possible.\\n\\nIt seems that the author's implementation suffers from this exact bug. The bad performance of negative reward functions on \\\"survival\\\"-tasks is to be expected. Relating it to the buggy implementation is thus not merely some conjecture. A fair comparison needs to either learn the rewards for absorbing state or only consider environments that can not end prematurely due to task success/failure.\"}",
"{\"title\": \"Swapping Distributions of Variational Lower Bound\", \"comment\": \"Please see the newly added Section C.3 of the supplement addressing the reviewer's idea for swapping the positions of the distributions in the variational lower bound to the f-divergence.\"}",
"{\"title\": \"Response to Reviewer #2 (continued)\", \"comment\": \"\\u201cThe experiments in Figure 2 seem unfair, since TV-VIM-sigmoid incorporates priors about survival bonuses: \\u201c \\u2014 We agree with the reviewer that the sigmoid reward used in Figure 2 does satisfy the second example of bias reward functions listed by Kostrikov et al. (2019). However, the reviewer is only conjecturing that this bias accounts for the gap between TV-VIM and TV-VIM-sigmoid; in the paper, we posit a different, equally-plausible explanation. Namely, that tanh naturally requires an imitation policy to gradually move from a region of negative reward (the lower range of tanh) to a region of positive reward by crossing an intermediate region of 0 reward. Given the nature of adversarial imitation learning, we know that this progression is monotonic (that is, we know imitation policies will start poorly in the negative region and gradually improve, moving towards the positive region). We assert that this disappearing reward signal is what causes learning in TV-VIM to stagnate. Notice that solution employed by Kostrikov et al. (2019) to resolve reward bias mirrors our own: augment or replace the reward signal altogether. Additionally, our reparameterization by itself is not tied to the sigmoid function in any way and so, in principle, a suitable unbiased alternative could also be used in conjunction with our work.\", \"minor_comments\": \"Adding the individual functions f to Table 1 would, unfortunately, push the table out of the margins; we will add such a complete table to the appendix. The title of Algorithm 1 should not be f-VIMO-sigmoid as sigmoid appears nowhere in the algorithm itself. Sigmoid is a suitable choice made for our experiments but could be replaced with another function, as discussed in Section 4.3.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"\\u201cDiscussion and comparing against a simple baseline method based on swapping distributions\\u201d \\u2014 while it is tempting to simply swap the distributions over which expectations are taken in Equation 7, this choice does have nontrivial implications for the overall f-divergence objective being minimized. In particular, we feel that this choice could be viable but perhaps ineffective due to the mode-seeking/mode-covering differences of swapping the positions of the distributions. We are currently running experiments so that we can better report on this.\\n\\n\\u201cNeed stronger baseline methods for ILfO: \\u201c \\u2014 we note that further ILfO baselines beyond GAIFO would only dilute the signal of interest for this work in establishing the effect of f-divergence choice in the context of ILfO problems. The reviewer suggests a comparison to FAIL (Sun et al., 2019) which assumes a time-factored observation space (an assumption not made in this paper) and a non-stationary policy. In practice, this results in the need for solving solving H distribution-matching (GAN) problems (where H is the finite horizon of the MDP) to recover H per-timestep policies. While the provable guarantees of FAIL are admirable, the computational expense of such a baseline is obviously impractical, something confirmed by the chosen empirical evaluation of the FAIL paper itself which further assumes a reproducing kernel Hilbert space (RKHS) to allow for closed-form computation of all divergences under the maximum mean discrepancy; the potential use of such integral probability metrics, while interesting, is not a focus of this work.\\n\\n\\u201cUsing the f-divergence for ILfO is not well motivated: \\u201c \\u2014 as mentioned in our response to all reviewers, the core hypothesis of this work is that alternative f-divergences may yield more performant ILfO algorithms, akin to the qualitative improvements of f-GANs (Nowozin et al., 2016) over traditional GANs. Again, this hypothesis mirrors past ideas in the literature while being unique to our setting and proving to be true empirically for a subset of the evaluation tasks. We would also refer the reviewers to the f-MAX paper (recent best paper at CoRL 2019) which proposes a similar framework to Ke et al. (2019). A direct quote from the abstract: \\u201cIn this work, we present a unified probabilistic perspective on IL algorithms based on divergence minimization. We present f-MAX, an f-divergence generalization of AIRL [1], a state-of-the-art IRL method. f-MAX enables us to relate prior IRL methods such as GAIL [2] and AIRL [1], and understand their algorithmic properties. 
Through the lens of divergence minimization we tease apart the differences between BC and successful IRL approaches, and empirically evaluate these nuances on simulated high-dimensional continuous control domains.\\u201d It is precisely because we can see these popular IL algorithms as specific members of a broader family of algorithms that such insights can be derived; beginning this path for a problem that is as challenging as ILfO is crucial for driving progress.\\n\\n\\u201cThe experiments focus on evaluating existing methods rather than the proposed methods:\\u201d \\u2014 as mentioned in the response to all reviewers, we cannot stress enough that the only existing ILfO method evaluated in this work is GAIFO; all others are novel contributions of this work.\\n\\n\\u201cThe experiments in Figure 2 do not support the claim regarding stability:\\u201d \\u2014 Figure 2 offers a clear, concrete example highlighting how our reparameterization improves imitation policy stability, ultimately leading to a performant policy for the case of total variation distance. As for the KL and RKL divergences, we cannot report results for divergences that failed to reach completion due to numerical instabilities; preliminary experiments to employ gradient norm clipping with threshold values common to deep reinforcement learning were found to be completely ineffective. Note that a naive, brute-force solution to this problem does exist in the form of grid searching over all possible thresholds for gradient norm clipping of the policy in order to find one that is as large as possible without succumbing to numerical errors. Obviously, the computational infeasibility of such a search across multiple choices of f-divergence is less than desirable and our reparameterization offers a stable optimization solution without relying on such computational inefficiency, which does constitute an important contribution.\"}",
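As an illustration of the boundedness issue discussed in this response, consider the KL case: the f-GAN output activation for the KL divergence is the identity, g(v) = v (Nowozin et al., 2016), so the reward fed into the policy gradient is unbounded, whereas the reparameterization discussed here swaps in a bounded squashing function. The sketch below is illustrative; the exact composition used in the paper may differ.

```python
import torch

def reward_fgan_kl(v):
    """With the f-GAN output activation for KL, g(v) = v, rewards
    derived from the variational function are unbounded and can blow
    up policy-gradient updates."""
    return v

def reward_bounded(v):
    """Bounded alternative: squash the variational output through a
    sigmoid so the per-step reward stays in (0, 1)."""
    return torch.sigmoid(v)
```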
"{\"title\": \"Response to Reviewer #1 (continued)\", \"comment\": \"\\u201c I think that a sweep over the coefficient would be mandatory, especially given that current experiments do not show a clear benefit of the regularization loss\\u201d \\u2014 preliminary experiments varying the regularization coefficient were conducted in the Ant domain where it was seen to be ineffective and was consequently not further pursued. This is consistent with the findings of Mescheder et al. (2018) who also report insensitivity to the coefficient. We do not make any claims on the necessity of discriminator regularization to achieve strong imitation performance; our empirical results suggest that it offers potential benefit that can only be established on a per-environment and per-divergence basis.\\n\\n\\u201c it seems like it would be perfectly possible to handle state-only observations by simply making the discriminator independent of the actions, i.e. using D(s,a) = D(s). Such technique matches the marginal distributions over states and is commonly applied to GAIL, e.g. by Peng et al. [1].\\u201d \\u2014 while restricting the discriminator focus to states only (instead of state transitions as done in this work and GAIFO) is possible, notice that there are cases where such a state-marginal matching imitation algorithm can dramatically fail. Consider a simple example: a tabular MDP of N states organized in a ring with transitions only between the two adjacent neighbors of each state; an expert policy that moves clockwise and an imitation policy that moves anticlockwise will yield identical state marginal distributions while clearly failing the imitation task.\\n\\n\\u201cI am maily interested in the authors response to my critique, especially regarding - the choice not to compare with state-only f-VIM, and - the motivation of the proposed output activations.\\u201d \\u2014 please see our response just above for the first question and the second response for the second question.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"\\u201c...the contributions of the paper are rather marginal\\u201d \\u2014 please see the comment made to all reviewers.\\n\\n\\u201c...the activations proposed in the current submission are also seem somewhat arbitrary and are not accompanied by any theoretical analysis.\\u201d \\u2014 We invite the reviewer to offer concrete suggestions on how to perform the suggested \\u201ctheoretical analysis\\u201d of variational function activation choices when estimating f-divergences. It would no doubt serve as a useful tool for justifying the \\u201csomewhat arbitrary\\u201d choices of activation functions offered in the original f-GAN paper (Nowozin et al. 2016) that are widely used in practice. In the meantime, we assert that the choice of sigmoid activation in this work is no more arbitrary than the variety of reward hacks employed in practice throughout deep reinforcement learning to enforce stability (Henderson et al. 2018). Analogous to the arbitrary f-GAN activation choices, these heuristic selections are used because of their widespread success in rectifying practical implementation issues; in f-GANs, this is ensuring conformity to the domain of the convex conjugate. The reason why all of our figures lack plots for these f-GAN activation choices stems directly from the resulting numerical instabilities caused by exploding policy gradients. Consequently, both our choice and those made throughout the literature were done with the goal of maintaining stability during policy gradient updates where policy returns directly scale the gradient. Note that this is typically not a concern in the traditional GAN literature that employs standard end-to-end backpropagation for training both the generator and discriminator models. Our lack of theoretical analysis for reward function choice mirrors the overall lack of a theoretical understanding for reward hacks in general throughout deep reinforcement learning.\\n\\n\\u201c2) and 3) are marginal combinations of existing work that are only insufficiently evaluated and do not seem particular effective.\\u201d \\u2014 we ask the reviewer to be precise and specify exactly which aspects of the evaluation are insufficient. As for the effectiveness of our proposed methods, we point to our Figure 4 as but a single example to note that the plots associated with traditional GAIL (top row, blue) do not strictly represent the best performance results across all environments, indicating the effectiveness of 2) and 3).\\n\\n\\u201cI am not convinced by this motivation, given that GAIL and AIRL (which approximatly minimizes the RKL) use unbounded reward functions and do not seem to suffer from such problems.\\u201d \\u2014 the reviewer is correct that GAIL and AIRL do not seem to suffer from this problem; thus, a logical conclusion of the reviewer\\u2019s observation is that not all unbounded reward functions yield exploding policy gradients. When exploding policy gradients do occur, however, we believe the reviewer may agree that having a reparameterization to alleviate the issue would be useful. If the reviewer could provide a concrete pointer to the connection between AIRL and RKL, that would be much appreciated as we could not find such a connection in the original paper. Still, our experiments left us unable to run (R)KL experiments to completion on account of exploding policy gradients; in all experiments, the use of (R)KL-VIM(O) is always done with sigmoid rewards to rectify the instability. 
Note that a naive, brute-force solution to this problem does exist in the form of grid searching over all possible thresholds for gradient norm clipping of the policy in order to find one that is as large as possible without succumbing to numerical errors; solving this kind of \\u201cGoldilocks\\u201d problem by brute force would be to accept the problem rather than address it. Moreover, the computational resources needed for such a search across multiple choices of f-divergence are quite intensive, and our reparameterization offers a stable optimization solution without relying on such computational inefficiency, which does constitute an important contribution.\\n\\n\\u201cThe effect of the \\\"reparametrization\\\" is only evaluated for total variation\\u201d \\u2014 as mentioned in the paper, both KL and RKL lead to numerical instabilities that brought all random trials to a complete halt. Since GAIL and GAIFO are already performant imitation learning algorithms on their own (and we wish to maintain fidelity to the original algorithms as baselines), we did not examine the effect of reparameterizing with sigmoid rewards on either one.\"}",
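To illustrate why boundedness matters here, the toy sketch below contrasts the unbounded conjugate reward of the forward KL (f*(t) = exp(t - 1), per Nowozin et al., 2016) with a bounded sigmoid reward; using a raw sigmoid of the variational output is our illustrative simplification of the paper's reparameterization, not its exact form.

```python
import numpy as np

def kl_conjugate_reward(v):
    # f*(t) = exp(t - 1) for the forward KL (Nowozin et al., 2016):
    # the reward grows without bound as the variational output v grows,
    # so policy returns (and hence policy gradients) can explode.
    return np.exp(v - 1.0)

def sigmoid_reward(v):
    # A bounded surrogate reward in (0, 1), illustrating how the
    # reparameterization keeps policy-gradient magnitudes controlled.
    return 1.0 / (1.0 + np.exp(-v))

v = np.linspace(-10.0, 10.0, 5)
print(kl_conjugate_reward(v))  # spans roughly 1e-5 to 8e3: unbounded
print(sigmoid_reward(v))       # always inside (0, 1): bounded
```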
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"\\u201cLack of novelty \\u2013 Although I appreciate the reparameterization applied to f-VIM to make it potentially more stable for imitation learning in large state- and action-spaces, I don\\u2019t think that by itself meets the bar for ICLR\\u201d \\u2014 The statement that our paper does not meet \\u201cthe bar for ICLR\\u201d is completely opaque; we would appreciate an explicit description from the reviewer that characterizes the current gap between our paper and \\u201cthe bar for ICLR.\\u201d Please see the response made to all reviewers concerning the novelty of our approach.\\n\\n\\u201cWhat about the JS divergence (GAIL)? Does reparameterization help or affect that?\\u201d \\u2014 We view our reparameterization of the f-VIM/VIMO objective as a remedy for instability, without which, an IL or ILfO algorithm under a certain choice of f-divergence may trivially fail to produce meaningful imitation policies. Since GAIL and GAIFO are clearly already performant imitation algorithms on their own (and we wish to maintain fidelity to the original algorithms as baselines), we did not examine the effect of sigmoid rewards on either one. \\n\\n\\u201cIn Figure 3, is GAIL from the original paper, or does it use the sigmoid rewards?\\u201d \\u2014 As previously mentioned, our use of sigmoid rewards is only done to remedy what would otherwise be an unstable imitation algorithm. As the original GAIL and GAIFO algorithms do not suffer from such instabilities, any reported results for either algorithm adheres to their original papers; that is, we never use sigmoid rewards with either GAIL or GAIFO. We will make this clearer in the paper.\\n\\n\\u201cFigure 3 does not offer any evidence that the proposed methods in the paper lead to algorithms that should be preferred over the current state-of-the-art in imitation learning with divergence minimization such GAIL and WAIL.\\u201d \\u2014 please see our response to this comment in our reply to all reviewers.\", \"minor_comments\": \"We agree with the reviewer that GAN is not itself a divergence between probability distributions. However, the divergence optimized by GANs is also not exactly the Jensen-Shannon divergence (something we note explicitly as a footnote in our extended review of prior work in the appendix and detailed more in Nowozin et al. 2016). Our presentation is consistent with that of Nowozin et al. (2016) while maintaining correctness. We will certainly add the Jensen-Shannon divergence to our tables in order to highlight this fact explicitly.\"}",
"{\"title\": \"Comments for all Reviewers\", \"comment\": \"We thank the reviewers for providing feedback on our submission.\\n\\nThere seems to be a bit of confusion amongst some of the reviewers concerning the core contribution and central hypothesis of this paper; our apologies that these did not come across well and we plan to use the discussion during the rebuttal period to improve the overall clarity of our paper. \\n\\nConcretely, Reviewer #2 mentions that we evaluate \\u201cexisting methods (f-VIM and f-VIMO) with difference choices of divergence.\\u201d While f-VIM (Ke et al. 2019) is an algorithm from prior work, f-VIMO is a novel contribution of this paper for which the only existing counterpart is GAIFO (Torabi et al. 2018). Moreover, the central hypothesis investigated in this work is that the exploration of alternative f-divergences, rather than the standard Jensen-Shannon divergence employed by GANs, may yield benefits when learning imitation policies from observations only (without the provision of expert action labels). Reviewer #3 mentions that \\u201cFigure 3 does not offer any evidence that the proposed methods in the paper lead to algorithms that should be preferred over the current state-of-the-art in imitation learning with divergence minimization such GAIL and WAIL.\\u201d This is true and in line with the core goal of this paper: to assess the potential for superior *imitation-from-observation* algorithms, as opposed to the traditional imitation learning setting. The inclusion of imitation learning results in our plots is to act as an intuitive upper bound on what should be achievable relative to the imitation from observation setting.\\n\\nBoth Reviewers #1 and #3 comment on the lack of novelty in our paper, claiming that \\u201cthe contributions of the paper are rather marginal.\\u201d We ask the reviewers to keep in mind that the story told in this paper transitions from the GAIFO algorithm of Torabi et al., (2018) to arbitrary f-divergences through the f-VIMO algorithm. This is analogous to prior transitions from GANs (Goodfellow et al., 2014) to f-GANs (Nowozin et al., 2016) as well as from GAIL (Ho & Ermon, 2016) to f-VIM (Ke et al., 2019). We assert that the existence of these past parallelisms in the literature do not diminish the novelty of our work or alter the distinctness of our problem setting, namely the (strictly harder) imitation learning from observation (ILfO) problem. In addition to drawing the parallel, we resolve practical issues that arise when deploying these algorithms to achieve reasonable imitation policies through stable policy-gradient optimization. Additionally, while the discriminator regularization (Mescheder et al. 2018) has been used before in the traditional GAN setting, it has, to the best of our knowledge, never been assessed in the context of imitation learning or ILfO settings.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes the application of the f-VIM framework (Ke et. al., 2019) to the problem of imitation learning from observations (no expert actions). The authors first identify a potential source of numerical instability in the application of f-VIM to imitation learning \\u2013 the rewards for the policy-gradient RL are given by a combination of a convex conjugate and an activation function. To alleviate this, f-VIM is reparameterized by curating the activation using conjugate inverse (Equation 8), yielding a potentially more stable reward for deep-RL.\", \"i_have_the_following_concerns_about_the_paper\": \"1.\\tLack of novelty \\u2013 Although I appreciate the reparameterization applied to f-VIM to make it potentially more stable for imitation learning in large state- and action-spaces, I don\\u2019t think that by itself meets the bar for ICLR. Algorithm 1 is basically the GAILFO algorithm (Torabi et al. 2018) written in the f-Vim framework, with the proposed reparameterization. The discriminator regularization (Section 4.4) has been used before.\\n\\n2.\\tExperiments \\u2013 Figure 2 shows the improvement with TV when using the reparameterization, and the authors mention in text about the difficulty with KL and reverse-KL. What about the JS divergence (GAIL)? Does reparameterization help or affect that?\\n\\n3.\\tIn Figure 3, is GAIL from the original paper, or does it use the sigmoid rewards? Figure 3 does not offer any evidence that the proposed methods in the paper lead to algorithms that should be preferred over the current state-of-the-art in imitation learning with divergence minimization such GAIL and WAIL.\", \"minor_comment\": \"\", \"in_table_1\": \"GAN is not a divergence. Please use Jensen-Shannon, with the corresponding tweaks to the columns.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: The submission performs empirical analysis on f-VIM (Ke, 2019), a method for imitation learning by f-divergence minimization. The paper especially focues on a state-only formulation akin to GAILfO (Torabi et al., 2018b). The main contributions are:\\n1) The paper identifies numerical proplems with the output activations of f-VIM and suggest a scheme to choose them such that the resulting rewards are bounded.\\n2) A regularizer that was proposed by Mescheder et al. (2018) for GANs is tested in the adversarial imitation learning setting.\\n3) In order to handle state-only demonstrations, the technique of GAILfO is applied to f-VIM (then denoted f-VIMO) which inputs state-nextStates instead of state-actions to the discriminator.\\n\\nContribution / Significance:\\nI think that the contributions of the paper are rather marginal. I do think that the choice of output activation may have large impact on the performance and it seems that the activation suggested by Ke et al. (2019) are somewhat arbitrary. However, the activations proposed in the current submission are also seem somewhat arbitrary and are not accompanied by any theoretical analysis. \\n2) and 3) are marginal combinations of existing work that are only insufficiently evaluated and do not seem particular effective.\\nHence, I think that the current submission is of rather limited interest.\", \"soundness\": \"The \\\"reparametrization\\\" of f-VIM is motivated based on exploding policy gradients when using unbounded reward functions, especially when minimizing the (R)KL. \\nI am not convinced by this motivation, given that GAIL and AIRL (which approximatly minimizes the RKL) use unbounded reward functions and do not seem to suffer from such problems.\", \"evaluation\": \"The effect of the \\\"reparametrization\\\" is only evaluated for total variation. The regularization loss is only evaluated with a single fixed coefficient of 10 on all experiment. I think that a sweep over the coefficient would be mandatory, especially given that current experiments do not show a clear benefit of the regularization loss (the regularized version performs worse on roughly half of the experiments). \\nWhen learning from observations only, the submission only evaluates the proposed combination of f-VIM and GAILfO. However, it seems like it would be perfectly possible to handle state-only observations by simply making the discriminator independent of the actions, i.e. using D(s,a) = D(s). Such technique matches the marginal distributions over states and is commonly applied to GAIL, e.g. by Peng et al. [1].\\nIt is not clear whether the reported problems of learning from observations only is really a general problem of the learning setting (as claimed in the submission) or a problem of the proposed method.\", \"clarity\": \"The paper is well written and easy to follow. 
Using different linestyles to distinguish the learning with regularization versus without regularization would help a lot.\", \"decision\": \"Due to the marginal contribution and the insufficient evaluation I have to recommend rejection.\", \"question\": \"I am mainly interested in the authors' response to my critique, especially regarding\\n- the choice not to compare with state-only f-VIM, and\\n- the motivation of the proposed output activations.\\n\\n\\n[1] Peng, Xue Bin, et al. \\\"Variational discriminator bottleneck: Improving imitation learning, inverse rl, and gans by constraining information flow.\\\" arXiv preprint arXiv:1810.00821 (2018).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"Summary:\", \"The paper proposes an IL method based on the f-divergence. Specifically, the paper extends f-VIM (Ke et al., 2019), which uses the f-divergence for IL, by using a sigmoid function for discriminator output\\u2019s activation function. This choice of activation function yields an alternative objective function, where the reward function for an RL agent does not directly depend on the convex conjugate of the function f; the paper claims that this independency improves stability of policy learning. This proposed method is named f-VIM-sigmoid. The paper extends f-VIM-sigmoid to the setting of IL with observation and proposes f-VIMO-sigmoid. Experiments on Mujoco locomotion tasks show that f-VIM-sigmoid and f-VIMO-sigmoid perform better than existing methods.\", \"Rating:\", \"The paper proposes a simple but interesting approach to improve stability of adversarial IL. However, the paper has issues regarding baseline methods, motivation, supports of the claim, and experiments (see below). These issues should be addressed. At the present, I vote for rejection.\", \"Major comments:\", \"Discussion and comparing against a simple baseline method based on swapping distributions:\", \"To make the reward function be independent of the convex conjugate f*, it is possible to simply swapping the distributions P and Q in the definition of the f-divergence. More specifically, instead of minimizing D_f(P||Q), we can minimize D_f(Q||P), where P is a data distribution and Q is a generator. In this case, pi* and pi_theta in Eq. (7) swap, and the RL agent minimizes the cost function g_f(V_w(s,a)). This cost function does not directly depend on f*, similarly to the reward function r(V_w(s,a)) in Eq. (8). This swapping is simpler and more flexible than re-parameterizing, while achieving the same goal as f-VIM-sigmoid. This swapping method should be discussed and compared against the proposed methods.\", \"Need stronger baseline methods for ILfO:\", \"The paper should evaluate f-VIMO-sigmoid against stronger baselines, e.g., forward adversarial IL (Sun et al., 2019) which outperforms GAIL-based methods in the ILfO setting.\", \"[1] Wen Sun, Anirudh Vemula, Byron Boots, and J Andrew Bagnell. Provably efficient imitation learning from observation alone. ICML, 2019.\", \"Using the f-divergence for ILfO is not well motivated:\", \"The paper does not provide good motivations for using f-divergence in ILfO. This makes the paper quite difficult to follow, since there is no connection between f-divergence and ILfO.\", \"The experiments focus on evaluating existing methods rather than the proposed methods:\", \"Specifically, the proposed methods are evaluated with only one choice of the divergence (TV) in Figure 2. Meanwhile, most of Section 6 and results (Figure 3 and 4, and additional results in the appendix) focus on evaluating the existing methods (f-VIM and f-VIMO) with different choices of divergence.\", \"The experiments in Figure 2 do not support the claim regarding stability:\", \"The paper claims to improve stability of IL by using the proposed re-parameterization. However, the experimental results do not support this claim, and the questions asked in Section 5 are not related to this claimed. 
Instead, it seems that re-parameterization helps avoid local optima (possibly due to a biased reward function, see below), while stability is improved by regularizing the discriminator. I could not see how the re-parameterization improves the policy stability as claimed.\", \"The experiments in Figure 2 seem unfair, since TV-VIM-sigmoid incorporates priors about survival bonuses:\", \"Specifically, TV-VIM-sigmoid uses sigmoid which yields strictly positive rewards, while TV-VIM uses tanh which yields positive and negative rewards. As discussed by Kostrikov et al., 2019, using strictly positive rewards incorporates strong priors about the survival bonuses, which exist in the locomotion tasks used in the experiments. Therefore, TV-VIM-sigmoid uses strong priors while TV-VIM does not. In order to make the comparison fairer, I suggest the authors evaluate TV-VIM with sigmoid reward output, or include environments that do not have survival bonuses.\", \"[2] Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and Jonathan Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. ICLR, 2019\", \"Minor comments:\", \"The abstract is long and could be shortened.\", \"Figures are too small and difficult to see, especially the legends.\", \"Table 1 should describe the form of f in addition to its conjugate.\", \"The title of Algorithm 1 should be f-VIMO-sigmoid instead of f-VIMO.\", \"** Update after response.\", \"I read the response. I thank the authors for clarifying the claims as well as the new experiments with the swap formulation. However, improving clarity of the claims is considered a major revision. I still keep the vote of rejection.\", \"Regarding reward bias. As the authors acknowledge, the improvement achieved by using reparameterization+sigmoid can be explained by two equally-plausible reasons: 1) reparameterization+sigmoid improves stability (as claimed) and 2) sigmoid gives biased rewards. The issue here is that we do not know which is the actual reason, given the current experiments in the paper. As I commented, evaluating TV-VIM with sigmoid but without reparameterization will help address this issue.\"]}"
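For reference, the swap this review proposes can be written with the standard variational lower bound of Nowozin et al. (2016); the transcription below is ours, not an equation from the paper:

```latex
% Variational bound for the swapped divergence D_f(Q \| P):
D_f(Q \,\|\, P) \;\ge\; \sup_{w}\;
  \mathbb{E}_{(s,a)\sim Q}\big[g_f(V_w(s,a))\big]
  \;-\; \mathbb{E}_{(s,a)\sim P}\big[f^{*}\!\big(g_f(V_w(s,a))\big)\big]
% The imitation policy generates Q, so it minimizes the cost
% g_f(V_w(s,a)) directly, with no dependence on the conjugate f^{*}.
```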
]
} |
SJeUm1HtDH | Swoosh! Rattle! Thump! - Actions that Sound | [
"Dhiraj Gandhi",
"Abhinav Gupta",
"Lerrel Pinto"
] | Truly intelligent agents need to capture the interplay of all their senses to build a rich physical understanding of their world. In robotics, we have seen tremendous progress in using visual and tactile perception; however we have often ignored a key sense: sound. This is primarily due to lack of data that captures the interplay of action and sound. In this work, we perform the first large-scale study of the interactions between sound and robotic action. To do this, we create the largest available sound-action-vision dataset with 15,000 interactions on 60 objects using our robotic platform Tilt-Bot. By tilting objects and allowing them to crash into the walls of a robotic tray, we collect rich four-channel audio information. Using this data, we explore the synergies between sound and action, and present three key insights. First, sound is indicative of fine-grained object class information, e.g., sound can differentiate a metal screwdriver from a metal wrench. Second, sound also contains information about the causal effects of an action, i.e. given the sound produced, we can predict what action was applied on the object. Finally, object representations derived from audio embeddings are indicative of implicit physical properties. We demonstrate that on previously unseen objects, audio embeddings generated through interactions can predict forward models 24% better than passive visual embeddings. | [
"Sound",
"Action",
"Audio Representations"
] | Reject | https://openreview.net/pdf?id=SJeUm1HtDH | https://openreview.net/forum?id=SJeUm1HtDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QLm8QXyaL",
"r1lpIChYir",
"r1gYVC3tiS",
"Syl6-0htjr",
"HJxX0ahtiB",
"Hyxux2VH9r",
"Skxm-jETFr",
"rJlDUEniFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727947,
1573666389515,
1573666352728,
1573666309233,
1573666251496,
1572322287948,
1571797755348,
1571697743176
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1617/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1617/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1617/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1617/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1617/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1617/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1617/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper investigates using sound to improve classification, motion prediction, and representation learning all from data generated by a real robot.\\n\\nAll the reviewers were intrigued by the work. The paper provides experiments on real robots (never a small task), and a data-set for the community, and a sequence of illustrative experiments. Because the paper combines existing techniques, its main contribution is the empirical demonstrations of the utility of using sound. Overall, it was not quite enough for the reviewers. The main issues were: (1) motion prediction is perhaps expected given the physical setup, (2) lack of comparison with other approaches, (3) lack of diversity in the demonstrations (10 objects, one domain).\\n\\nThe authors added two new experiments with a different setup, further demonstrating their claims. In addition the authors highlighted that the novelty of this task means there are no clear baselines (to which r3 agreed). The new experiments are briefly described in the response (and visuals on a website), but the authors did not update the paper. The new experiments could potentially significantly strength the paper. However, the terse description in the response and the supplied visuals made it difficult for the reviewers to judge their contribution.\\n\\nOverall, this is certainly a very interesting direction. The results on real world data demonstrate promise, even if they are not the benchmarking style the community is used too.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for finding our idea interesting and appreciating our new direction!\\n\\nNovelty/Technical Novelty: Kindly refer to the discussion in global comments on novelty.\", \"experimental_results\": \"Kindly refer to the discussion in global comments on comparison with SOTA and new experiments on Robotic Manipulation as you asked for.\", \"writing_style\": \"The choice of having models and results in the same section is a conscious one. Since the goal of this paper is to highlight the synergies between action audio and vision, we believed that a mixed section better reflects our contributions. However, following your suggestions, we are happy to separate section 3 into two separate sections. Thank you for pointing out the typos, this will be fixed in the final version of this work.\", \"related_works\": \"Thank you for pointing out Zhang et al. We think that this a very insightful paper that provides a way to generate audio data for object based on its 3D model and physical properties. However all the data used here is from a simulator. All of our data is from the real-world and on a real robot, because of which our learned embeddings work on real-world tasks.\\n\\nWe hope to have addressed your concerns with additional real-robot experiments. Please let us know if you have any other concerns; we will be happy to answer and clarify.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for finding our idea of connecting action and sound by using the TiltBot setup interesting, and for appreciating this new research direction.\", \"why_are_images_used_in_inverse_model\": \"Without the first image, the model would not have enough information to predict the action. This is because different combinations of start-state and action can result in the same generated sound. Using only sound information without start images leads to a lower MSE by around 5-10% across all the inverse model tasks. This shows that using the first image is important, however even without it, we can get fairly high performance. Thank you for suggesting this experiment; we will add this result in the final version of the paper.\", \"ground_truth_location_of_object_for_forward_prediction_model\": \"Given an image of an object, we first perform background subtraction to segment the object. The centroid of this segmentation is used as the ground truth state of the object. Additional details will be added in the Appendix and segmentation masks will be released along with the dataset.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for finding our data novel and our experiments interesting! Indeed, audio contains fine-grained instance level information about objects. It is also not surprising that audio contains trajectory level information since directional information can be inferred from multiple microphones. This is infact how bats echolocate. However, what is really surprising and interesting, is that audio embeddings can be used to infer object properties that can be used for forward physics modeling of the object.\", \"novelty\": \"Kindly refer to the discussion in global comments on novelty.\", \"experiments\": \"We have performed two more experiments to drive home the point of the importance of modeling audio, vision and action together. Kindly refer to the global comments for details.\\n\\nWe hope to have addressed your concerns about novelty and experiments with additional real-robot experiments. Please let us know if there are any other specific experiments and comparisons that you were expecting. To our knowledge, this is the first work that combines audio, action, and vision in a single learning framework. If you have any questions, we will be happy to answer and clarify.\"}",
"{\"title\": \"Global comments and response to reviewers\", \"comment\": \"We thank the reviewers for their time and effort. We are also thankful to some interesting suggestions given by reviewers. In global comments, we would like to handle the question of novelty and experiments. For specific questions/queries, we provide individual answers to reviewers.\", \"novelty\": \"We believe the idea of investigating the relationship between sound and action is novel, which has never been investigated before in this thorough manner. As R1 points out, using sound/audio in forward dynamics is \\u201crelatively new and may lead to many potential future developments in this direction.\\u201d Our work creates both a dataset and proposes a new framework for robotic learning with audio, and therefore we believe this work is valuable to the learning community. While some of these findings might be intuitive, our paper is the first to model these and empirically measure the role of audio in dynamics modeling.\", \"technical_novelty\": \"As a matter of conscious choice, we did not play a lot with forward/inverse models or new architectures to highlight the fact that even while using standard models, using audio in dynamics modeling can lead to significant gains. But the paper still has several technical innovations that might have been overlooked. First, creating a robotic platform and framework to get a dataset with action, audio, and vision is challenging and a novel contribution of this work. Then, using this data to show that audio contains fine-grained information that can distinguish say a screwdriver from a hammer has never been shown before, and the first in this work. Finally, we demonstrate that object representations that we generate solely from audio information is useful for downstream tasks like forward transition model learning and even in robotic manipulation as shown by Robotic Push experiments (See below).\\n\\nExperiments\", \"r1\": \"Comparison to SOTA\\nFirst, we would like to reiterate that this is the first work that combines audio, action, and vision in a single learning framework over a large and diverse variety of objects. Hence, to our knowledge, there are no published work or SOTA algorithms using this new learning framework. If there are any such algorithms and papers, please send them our way, and we will compare with them. In terms of downstream tasks, we perform forward modelling (Section 3.5), which is a precursor for planning in robot manipulation. \\n\\nR1, R3: More experiments including robot manipulation.\\n\\nWe are pleased to report two new experiments to highlight the strength of our key idea.\", \"experiment_a___robotic_pushing\": \"In the first set of experiments, we look at how audio embeddings can be used for better robotic pushing. For this, we collect a dataset of around 1000 planar pushing experiments on 10 training set object, and test the audio-conditioned pushing model on 10 testing set objects. We note that without audio embedding the MSE error of the pushing location is 0.180 (normalized coordinates), while using audio embeddings gives an MSE error of 0.159. This clearly demonstrates that audio embeddings can significantly improve robotic pushing. Robot pushing videos can be accessed on our website: https://sites.google.com/view/iclr2020-sound-action.\", \"experiment_b___few_shot_classification\": \"In the second set of experiments, we look at how audio embeddings can be used for few shot classification on previously unseen objects. 
The key insight from this experiment is that for all numbers of few-shot examples, using audio embeddings gives a performance of around 2-3X the performance of using randomly initialized CNNs. Specifically, say for k=1, using audio embeddings gives a performance of 21%, while a random CNN gives 7%. This further shows that audio embeddings contain useful information for fine-grained object recognition. Full results can be seen on our website: https://sites.google.com/view/iclr2020-sound-action.\"}",
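One standard way to run the k-shot evaluation described in this comment is nearest-centroid classification on frozen embeddings; the response does not specify the classifier used, so treat this sketch as an illustrative protocol rather than the authors' exact setup.

```python
import numpy as np

def nearest_centroid_kshot(support_emb, support_lbl, query_emb):
    # support_emb: (num_classes * k, d) frozen audio embeddings
    # support_lbl: (num_classes * k,) integer class labels
    # query_emb:   (m, d) embeddings of held-out examples
    classes = np.unique(support_lbl)
    centroids = np.stack([support_emb[support_lbl == c].mean(axis=0)
                          for c in classes])
    dists = ((query_emb[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]   # predicted label per query
```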
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents audio-visual object classification and motion prediction work on a novel dataset of 60 different objects rolling around in a bin tilted to and fro by a robot, with video and 4-channel audio recordings of the object impacts. The data is rather novel, is large enough to do ML (around 17 hours of eventful audio/video) and is to be publicly released. The model architectures are not of theoretical novelty. However, the experiments are somewhat interesting. It was found that the audio contains significant object classification information. The audio was also good for predicting the trajectory of the object. This might not be surprising since the microphones are geometrically arranged and may contain directional information along with information about velocity and/or distance traveled. Overall the experiments are rather thin with only a few experimental results. A more thorough undertaking might be expected for ICLR papers, with more novel theoretical development and more extensive experiments.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the role of audio in object and action perception, as well as how auditory information can help learning forward and inverse dynamics models. To do this, the authors built a 'tilt-bot', which tilts a box and the object within to collect data (sound & vision) of object interactions. The authors then tested how audio embeddings help object recognition and forward model prediction.\\n\\nThe idea presented in this paper is quite interesting. However, there are no significant technical innovations, the experimental evaluations are quite limited, and the writing can be improved. My overall recommendation is weak reject.\\n\\nThe problem of integrating audio for perception is interesting and has been quite widely explored; however, this paper extends the setup to also explore the effect of audios on dynamics modeling. This is relatively new and may lead to many potential future developments in this direction.\\n\\nTechnically, however, this paper mostly builds on existing technicals on learning forward and inverse models, except that the input is now audio in addition to video. The experimental results are also very limited. They are restricted to a single domain, a fixed collection of objects, and there are no comparisons with published, SOTA algorithms. There are also no results on downstream tasks such as robot manipulation.\", \"i_also_wonder_how_the_authors_think_of_the_related_work_from_zhang_et_al\": \"http://sound.csail.mit.edu/ , as they've also studied the effect of auditory and visual data in shape and material recognition.\\n\\nThe writing can be improved. Currently, the model and results are in the same section and mixed together. It'd prefer to separate them. There are a number of typos (incorrect spacing, etc.), especially in Section 3.4. Please double check.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper explores the interesting connections between action and sound, by building a sound-action-vision dataset with a tilt-bot. This is a good paper overall, I appreciate the efforts on the dataset, and this direction of research is worth pursuing.\\n\\nRegarding experiments, I like the way it is set up, especially the four microphones, and the action space of the robot. I\", \"a_couple_of_questions\": \"(1) In the inverse model learning, Fig 3(a) bottom, why are images used as input as well? Don't we want to predict action purely from sound?\\n(2) In forward model prediction, how are the ground truth locations defined and labeled? Is it the center of mass, and annotated by humans? More details on this experiment will help.\"}"
]
} |
SJl47yBYPS | Towards Simplicity in Deep Reinforcement Learning: Streamlined Off-Policy Learning | [
"Che Wang",
"Yanqiu Wu",
"Quan Vuong",
"Keith Ross"
] | The field of Deep Reinforcement Learning (DRL) has recently seen a surge in the popularity of maximum entropy reinforcement learning algorithms. Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks. In this paper, we seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms. For the Mujoco benchmark, we demonstrate that the entropy term in Soft Actor Critic (SAC) principally addresses the bounded nature of the action spaces. With this insight, we propose a simple normalization scheme which allows a streamlined algorithm without entropy maximization to match the performance of SAC. Our experimental results demonstrate a need to revisit the benefits of entropy regularization in DRL. We also propose a simple non-uniform sampling method for selecting transitions from the replay buffer during training. We further show that the streamlined algorithm with the simple non-uniform sampling scheme outperforms SAC and achieves state-of-the-art performance on challenging continuous control tasks. | [
"Deep Reinforcement Learning",
"Sample Efficiency",
"Off-Policy Algorithms"
] | Reject | https://openreview.net/pdf?id=SJl47yBYPS | https://openreview.net/forum?id=SJl47yBYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"8Q2T8yT1vr",
"B1xrT5V2oB",
"BJgCauVnsB",
"Hkl7dBE2jr",
"Bye7Tbeiir",
"H1xldZejoH",
"SJeIf-ejjH",
"HygLsdD_oH",
"HkgCz0IdjS",
"rygGVGFzoS",
"Hkg8xaOMiH",
"Bylzx9uGiH",
"HyeO-W-0YB",
"H1x1YN36Kr",
"B1gpjwjOOS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727918,
1573829308783,
1573828806097,
1573827946616,
1573745082664,
1573744999808,
1573744909960,
1573578910434,
1573576214067,
1573192233977,
1573190893773,
1573190122187,
1571848447851,
1571828855174,
1570449316649
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1614/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1614/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1614/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1614/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper studies the role of entropy in maximum entropy RL, particularly in soft actor-critic, and proposes an action normalization scheme that leads to a new algorithm, called Streamlined Off-Policy (SOP), that does not maximize entropy, but retains or exceeds the performance of SAC. Independently from SOP, the paper also introduces Emphasizing Recent Experience (ERE) that samples minibatches from the replay buffer by prioritizing the most recent samples. After rounds of discussion and a revised version with added experiments, the reviewers viewed ERE as the main contribution, while had doubts regarding the claimed benefits of SOP. However, the paper is currently structured around SOP, and the effectiveness of ERE, which can be applied to any off-policy algorithm, is not properly studied. Therefore, I recommend rejection, but encourage the authors to revisit the work with an emphasis on ERE.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Final revision finished\", \"comment\": \"We have just finished the final revision. A summary of the changes has been posted separately for all reviewers. We would like to thank you again for your helpful feedback, they really helped us improve the quality of the paper. Thank you!\"}",
"{\"title\": \"Final revision finished\", \"comment\": \"We have just finished uploading a final revision of the paper. We have addressed all the \\\"serious weaknesses\\\" you mentioned in your comments, as well as the minor points. We have also added some other new experiments on Inverting Gradients and an exponential sampling scheme due to the request of other reviewers. A summary of the changes has been posted to all reviewers.\\nWe would like to thank you again for your helpful feedback, we feel that they helped us greatly in making this paper more refined. Thanks!\"}",
"{\"title\": \"Summary of changes\", \"comment\": \"We want to thank the reviewers again for their feedback, we have conducted a set of new experiments and now made a final revision. Here is a list of changes. We have also made some small fixes and reorganized some of the text.\", \"section_1_introduction\": [\"Emphasize contributions of TD3: clipped double Q-learning, delayed policy update and target policy smoothing.\", \"Mention additional experiments with Inverting Gradients(IG).\", \"Section 4 (Streamlined Off-Policy Algorithm)\", \"Remove the extra hyper-parameter beta.\", \"Include results comparing SAC, SOP, TD3+(TD3 plus normalization), and IG in figure 3.\", \"Add a description on the core idea of Inverting Gradients.\", \"Section 5 (Non-uniform Sampling)\", \"Include results for SAC+ERE, comparing to SAC, SOP, and SOP+ERE in figure 4.\"], \"related_work\": [\"Cite paper \\u201cUnderstanding the impact of entropy on policy optimization\\u201d.\", \"Cite several critique and reproducibility papers.\"], \"appendix_b\": [\"Add a concrete and thorough discussion of how we conducted hyper-parameter search.\", \"Provide hyper-parameters of additional experiments.\"], \"appendix_d\": [\"Add algorithmic and implementation details for Inverting Gradients.\"], \"appendix_e\": [\"Include algorithmic details and results for SOP+Exponential Sampling\"], \"appendix_g\": [\"Add discussion of implementation and computation complexity of new experiments\"], \"appendix_h\": [\"Include results for IG+ERE, and compare it with SOP+ERE, SOP and IG.\", \"Include results comparing TD3 with TD3+.\"]}",
"{\"title\": \"Reply to your review\", \"comment\": \"Thank you for your comments. We have uploaded a new version of our paper and will continue working on it. Below is a summary of the changes we made so far:\\n\\nWe now have a more detailed explanation of how SAC and TD3 are different. We explained that TD3 takes the min of 2 target critic values when computing update targets, and also added that TD3 uses delayed policy update and target policy smoothing. \\n\\nIn the revision, we discuss the paper \\u201cDeep reinforcement learning in parameterized action space\\u201d in more detail and we also implemented the Inverting Gradients (IG) technique mentioned in the paper and compare its performance with SAC and SOP in figure 3. The IG experiments for Humanoid and Ant have not finished running yet. We expect them to finish tomorrow and we will update the figure once we have them. that IG has similar overall performance compared to SOP and SAC. It learns faster in the beginning in Hopper and Ant, but is slightly weaker in HalfCheetah, and did not do as well in Humanoid. \\n\\nWe also include our results for SAC+ERE in figure 4, in comparison to the performance of SAC, SOP, and SOP+ERE. From the results, we can see that with ERE, both SAC and SOP gain a significant performance improvement in all environments. Since figure 3 and 4 now contain entirely different information, there is no redundancy. \\n\\nMoreover, we also include results for IG+ERE and compare it with IG, SOP and SOP+ERE in Appendix G (figure 8). Some IG+ERE experiments are still running. We will update the figure once we have them. \\n\\nWe have read and cited the paper \\u201cUnderstanding the impact of entropy on policy optimization\\u201d. Thank you again for recommending this very insightful paper.\\n\\nInstead of a rather vague claim on hyper-parameters, we have now added a thorough discussion of exactly how we performed hyper-parameter search and why we chose those values in Appendix B Hyper-parameters section. \\n\\nThe 23*4 images in the Appendix do take a lot of space, however, we are worried that other people may want to see the mu and action values for all action dimensions instead of just one action dimension we picked out in the main body. Thus, we decide to keep the 23*4 images but removed them to the end of the Appendix.\"}",
"{\"title\": \"Re:Reply to your review\", \"comment\": \"Thank you for the clarification! We have just uploaded a revision, here are some of the changes we made:\\n\\nWe made an efficient implementation of the exponential sampling scheme (EXP) for SOP and performed experiments, the results are now reported in section D of the appendix, where we compare the performance of SOP with three sampling schemes, ERE, PER and EXP. The results show that EXP improves the performance of SOP consistently, and does well especially in the HalfCheetah and Humanoid environments, although the performance is not as strong as ERE. \\n\\nWe have also included details on implementation and hyper-parameter search in the appendix. It turns a naive EXP sampling implementation will incur a significant computation overhead, but we avoided it by first sample segments of size 100 from the buffer, then sample a data uniformly from each segment. This modification does not really change the sampling scheme, is relatively easy to implement and has a negligible computation overhead. \\n\\nWe also performed experiments on removing the tanh from SAC. However, our results show that if we simply remove tanh, it causes SAC performance to drop significantly. We suspect that some other parts of SAC will have to be modified in order for it to work correctly without tanh, but currently, it is unclear what is the missing part. We will try to continue work on this and run more experiments. And we can later add our findings to the camera-ready version.\"}",
"{\"title\": \"Re: Reply to your review\", \"comment\": \"We have just uploaded a new version of our paper and will continue working on it.\\nFor the experiments you suggested, below is a summary of what we did so far:\\n\\nIn the revision, we implemented the Inverting Gradients(IG) technique and TD3+normalization, and compared their performance with SAC and SOP in figure 3. The IG experiments for Humanoid and Ant have not finished running yet. We expect them to finish tomorrow and we will update the figures once we have them. Our results show that IG has similar overall performance compared to SOP and SAC. It learns faster in the beginning in Hopper and Ant, but is slightly weaker in HalfCheetah, and did not do as well in Humanoid. TD3+normalization also has good performance, although not quite as good as the other schemes.\\n\\nWe also compare the performance of TD3 with TD3+normalization. The results are shown in Appendix G (figure 9). Our results indicate that for humanoid, normalization boosts the performance of TD3 significantly, but does not bring the performance to the level of SOP.\\n\\nWe also include our results for SAC+ERE in figure 4, in comparison to the performance of SAC, SOP, and SOP+ERE. From the results, we can see that with ERE, both SAC and SOP gain a significant performance improvement in all environments. Moreover, we also include results for IG+ERE and compare it with IG, SOP and SOP+ERE in Appendix G (figure 8). Some IG+ERE experiments are still running. We will update the figure once we have them. \\n\\nWe have also fixed a list of typos and formatting issues.\"}",
"{\"title\": \"Re: Reply to your review\", \"comment\": \"Thank you for running these additional experiments. If you can share their results before the end of the discussion period, that would be great.\\n\\nRegarding my first minor point, I guess this is mostly a matter of interpretation of your words. I understood them myself as \\\"the reasons why SAC works so well are not those most people think\\\", and this is something I tend to disagree with since what you show is essentially that SAC is better at exploring, and better exploration is a key motivation for the entropy maximization in SAC.\"}",
"{\"title\": \"Remove tanh but keep the entropy penalty\", \"comment\": \"Thanks for agreeing to do these additional experiments.\\n\\nI meant keeping the entropy term but remove tanh.\"}",
"{\"title\": \"Reply to your review\", \"comment\": \"Thank you for your careful reading of the paper. You have made many good suggestions, most of which we will address in the revision.\\n\\nAs you suggest, we will implement the inverted gradient technique and compare the results with SOP and SAC. We will also be sure to acknowledge more fully the insights that paper made regarding saturating squashing functions. \\n\\nAs you suggest, we are also running experiments for TD3+normalization. We agree in hindsight that this is a very natural combination to consider. Our results so far seem to indicate that for humanoid, normalization does improve the performance of TD3, but does not bring the performance to the level of SOP. We will provide the experimental results for all five environments in the revision. \\n\\nReviewer 3 also pointed out that it would be good to see experimental results for SAC+ERE. We are currently running these experiments and will provide the results in the revision. \\n\\nWe will also take care of your minor points in the revision. However, we do not fully agree with the first minor point. We feel that an important contribution of the paper is to show that entropy maximization is not a major benefit for the Mujoco environments; in fact by introducing a simple normalization of the outputs, we can achieve equivalent performance without entropy maximization.\"}",
"{\"title\": \"Reply to your comments\", \"comment\": \"Thank you for your careful review of the paper. Your comments and suggestions are very useful. Below we respond to your various points.\\n\\n\\\"The TD3 mechanism goes beyond the Double Q-learning (or DDQN) mechanism of Van Hasselt et al: it takes the min over two critics. This should be explained properly.\\\" Yes, thank you for bringing this to our attention. We will make this clear in the revision. \\n\\n\\\"The title, abstract and introduction insist more on SOP, but performance improvement seem to result more from ERE. If this is possible, studying the performance of SAC + ERE would disambiguate the relative contribution of both mechanisms.\\\" This is a good point. We are now running experiments for SAC+ERE, and we will include the results in the revision.\\n\\n\\\"About gradient squashing issues, the authors main mention de gradient inverter idea from this paper. \\\" We will discuss this paper in more detail in the revision. We will also cite the \\\"Understanding the impact of entropy...\\\" paper in the revision. Thank you for bringing it to our attention. \\n\\nWe will respond to your \\\"local points\\\" in the revision, including a more thorough discussion of how the hyper-parameters were determined.\"}",
"{\"title\": \"Reply to your comments.\", \"comment\": \"Thank you for your careful review of the paper. We are happy that you think the paper is \\\"great\\\". We also feel that it makes an important contribution both in terms in bringing insight to off-policy DRL and achieving the state-of-art performance.\\n\\nWe like your suggestion of comparing ERE with exponential sampling. We are currently running experiments, and we will include the results of these new experiments in the paper. \\n\\nConcerning your first suggestion, we would like to ask for a clarification. If we remove the entropy term from SAC, we found that the pre-tanh values become huge in magnitude, which leads to tanh saturation and poor exploration. For the additional ablation study, do you mean to keep the entropy term, but then to remove tanh entirely (so that in some cases, the chosen action will be outside the bound), or do you mean keep the term and adding L1 penalty to the pre-tanh value?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The main contribution of this paper is a normalization scheme to avoid saturating the squashing function typically used to constrain actions within a bounded range in continuous control problems. It is argued that algorithms like DDPG and TD3 suffer from such saturation, which prevents proper exploration during training, while maximum entropy algorithms like Soft Actor-Critic (SAC) avoid it thanks to their entropy bonus. The main reason behind the success of SAC would thus be its ability to keep exploring throughout training, by avoiding saturation. A second contribution is a new experience replay sampling scheme, named Emphasizing Recent Experience (ERE), based on the idea that most recently added transitions should be given higher weights when sampling mini-batches from the replay bufffer. Combining both ideas leads to the SOP (Streamlined Off-Policy)+ERE algorithm, which is shown to consistently outperform SAC on Mujoco tasks.\\n\\nAlthough this paper presents interesting insights and very good empirical results, I am currently leaning towards rejection mostly due to missing some important empirical comparisons, which hopefully can be added in a revised version.\\n\\nThe first key missing comparison (IMO) is to the Inverting Gradients approach from Hausknecht & Stone (2016), which the authors know about since it is cited in the related work section. Note that in that paper, the problem of saturating squashing functions preventing proper exploration was already mentioned, although not investigated in as much depth as in this submission (\\u00ab(\\u2026) squashing functions quickly became saturated. The resulting agents take the same discrete action with the same maximum/minimum parameters each timestep \\u00bb). Their proposed Inverting Gradients technique was found to work significantly better than squashing functions, which is why I believe it should be an obvious baseline to compare to.\\n\\nThe other important experiments which I think need to be added are simply to implement the proposed normalization scheme within DDPG & TD3 to demonstrate its usefulness as a standalone improvement over existing algorithms. This would strengthen the claim that \\u00ab algorithms such as DDPG and TD3 based on the standard objective with additive noise exploration can be greatly impaired by squashing exploration \\u00bb. Without this comparison on the same benchmark, it is difficult to fully grasp the impact of this normalization.\\n\\nFinally, regarding the ERE sampling scheme, I would appreciate to see SAC+ERE as well, to (hopefully) show that it can benefit SAC too (since this second contribution is orthogonal to the SOP algorithm).\", \"minor_points\": \"\\u2022\\tI would tone down a bit the claims for \\u00ab the need to revisit the benefits of entropy maximization in DRL \\u00bb, since better exploration has always been put forward as a major benefit (\\u00ab the maximum entropy formulation provides a substantial improvement in exploration and robustness \\u00bb, as written in \\u00ab Soft Actor-Critic Algorithms and Applications \\u00bb). To me, what this submission shows is essentially that naive implementation of additive noise exploration in e.g. DDPG is very bad for exploration, more than uncovering some novel properties of SAC.\\n\\u2022\\tBelow eq. 
1: \\u00ab the optimal policy is deterministic \\u00bb => should be replaced with \\u00ab there exists an optimal policy that is deterministic \\u00bb\\n\\u2022\\t\\u00ab principle contribution \\u00bb => principal\\n\\u2022\\tThe normalization scheme does not appear in Alg. 1\\n\\u2022\\tIn Alg. 1 there are a Q_phi,i and a Q_phi,1 that should probably be Q_phi_i and Q_phi_1\\n\\u2022\\tThe results from section E in the Appendix should be mentioned in the main text\\n\\u2022\\tIn Fig. 4f the y axis\\u2019 label is a bit clipped\", \"update_based_on_new_revision\": \"thank you for adding more results. From what I can see, it is difficult to conclude on the benefits of SOP over IG, which I find really problematic. It seems to me that the most impactful result is related to the improvements brought by the ERE sampling scheme, which could probably be worth a paper on its own (by showing its benefits over a wider range of algorithms, e.g. TD3 & DQN+variants), but this would be a different paper. As a result I am sticking to \\\"Weak Reject\\\".\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"# Summary\\nThe paper identifies a problem with TD3 related to action clipping. The authors notice that SAC alleviates this problem by means of entropy regularization. Given the insight that action clipping is crucial, the authors propose an alternative approach to avoid action clipping in TD3, which is empirically shown to yield the same results as SAC. Surprisingly, with this improvement, even several parts from TD3 can be removed, such as delayed policy updates and the target policy parameters. In addition, a straightforward-to-implement experience replay scheme is proposed, that emphasizes recent experiences, which propels the proposed algorithm to achieve state-of-the-art results on MuJoCo.\\n\\n# Decision\", \"this_is_a_great_paper\": \"accept. The proposed Streamlined Off Policy (SOP) algorithm is thoroughly evaluated, ablation studies performed, code made available. Nevertheless, there are a few suggestions below that may further improve the paper.\\n\\n# Suggestions\\n1) It is said that entropy regularization leads to action not being saturated in SAC. I feel that this causal relation is very indirect. Maybe SAC with entropy just discovers better policies that do not go crazy between extremes? For example, if you would leave the entropy term but remove tanh saturation from SAC, don't you think you would also get a bang-bang policy? Adding such an ablation study could further strengthen the argument that entropy leads to no constraint violation, if it turns out true.\\n\\n2) The Emphasizing Recent Experience (ERE) replay scheme seems reminiscent of sampling according to a distribution exponentially decaying into the past. It is said that physically shrinking the allowed sampling range by dropping old experiences is better because then very old experiences cannot be used at all. It would be interesting to see a comparison to sampling according to exponential distribution from the replay queue.\\n\\n# AFTER REBUTTAL\\nTaking into account the concerns of other reviewers and the newly added evaluations, I lower my score to weak accept. Since now it seems that ERE is quite crucial, the argument of SPO outperforming SAC becomes weaker. Therefore, the authors should tone down the claims of outperforming SAC. Nevertheless, I still find the contribution of the paper valuable and think that it should be accepted, albeit with the aforementioned modifications in the camera-ready version.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors investigate the role of entropy maximization in SAC and show that entropy regularization does not do what is usually thought: in the examples they investigate, where the output of the policy network needs to be squashed to fit in the action space domain, squashing would result in having only action at the boundaries, but entropy regularization maintains some intermediate values, hence exploration. From this insight, the authors replace entropy regularization by a simpler normalization process and show equivalent performance with their simpler Streamlined Off-Policy (SOP) algorithm. Then they introduce a second \\\"Emphasizing Recent Experience\\\" mechanism and show that SOP+ERE performs better than SAC.\\n\\nA good point for the paper is that the entropy regularization study is very nice, more papers in the field should show similar detailed analyses of internal processes. But the paper suffers from a few serious weaknesses:\\n\\n- The TD3 mechanism goes beyond the Double Q-learning (or DDQN) mechanism of Van Hasselt et al: it takes the min over two critics. This should be explained properly.\\n- the title, abstract and introduction insist more on SOP, but performance improvement seem to result more from ERE. If this is possible, studying the performance of SAC + ERE would disambiguate the relative contribution of both mechanisms.\\n\\nAbout gradient squashing issues, the authors main mention de gradient inverter idea from this paper:\\n\\n@article{hausknecht2015deep,\\n title={Deep reinforcement learning in parameterized action space},\\n author={Hausknecht, Matthew and Stone, Peter},\\n journal={arXiv preprint arXiv:1511.04143},\\n year={2015}\\n}\\n\\nThe authors should also probably also cite (and read the latest arxiv version of):\\n@inproceedings{ahmed2019understanding,\\n title={Understanding the impact of entropy on policy optimization},\\n author={Ahmed, Zafarali and Le Roux, Nicolas and Norouzi, Mohammad and Schuurmans, Dale},\\n booktitle={International Conference on Machine Learning},\\n pages={151--160},\\n year={2019}\\n}\", \"more_local_points\": [\"\\\"without performing a careful hyper-parameter search\\\": so how did you choose these hyper-parameters? I see what you mean, but this is a very vague and slippery statement.\", \"I do not find the 23*4 images in Appendix B much useful\", \"Fig 3 seems to be repeated in Fig 4. Can't you just remove Fig 3?\"]}"
]
} |
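The exchange in the record above debates two replay-sampling schemes: ERE's hard shrinking window over recent transitions versus Review #2's exponentially decaying sampling over the whole buffer. The sketch below contrasts the two in Python; it is our own illustration rather than the paper's code, and the schedule exponent and the default values of `eta`, `c_min`, `decay`, and `batch_size` are assumptions chosen for readability.

```python
import numpy as np

def ere_minibatch_indices(buffer_len, k, num_updates, eta=0.996,
                          c_min=5000, batch_size=256):
    # Emphasizing Recent Experience: for the k-th of num_updates mini-batch
    # updates (k = 1..num_updates), sample uniformly from only the c_k most
    # recent transitions; c_k shrinks with k, so recent data is reused more.
    c_k = max(int(buffer_len * eta ** (k * 1000.0 / num_updates)), c_min)
    c_k = min(c_k, buffer_len)
    return np.random.randint(buffer_len - c_k, buffer_len, size=batch_size)

def exponential_minibatch_indices(buffer_len, decay=1e-4, batch_size=256):
    # Alternative raised by Review #2: keep the whole buffer reachable, but
    # sample each transition with probability decaying exponentially in its
    # age, so very old experiences become rare instead of impossible.
    ages = np.arange(buffer_len, dtype=np.float64)[::-1]  # age 0 = newest
    probs = np.exp(-decay * ages)
    probs /= probs.sum()
    return np.random.choice(buffer_len, size=batch_size, p=probs)
```

The practical difference, as the review notes, is exactly the tail behavior: ERE's window makes old transitions unreachable, while the exponential scheme only down-weights them.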
SkxV7kHKvr | TWIN GRAPH CONVOLUTIONAL NETWORKS: GCN WITH DUAL GRAPH SUPPORT FOR SEMI-SUPERVISED LEARNING | [
"Feng Shi",
"Yizhou Zhao",
"Ziheng Xu",
"Tianyang Liu",
"Song-Chun Zhu"
] | Graph Neural Networks, as a combination of Graph Signal Processing and Deep Convolutional Networks, show great power in pattern recognition in non-Euclidean domains. In this paper, we propose a new method to deploy two pipelines based on the duality of a graph to improve accuracy. By exploring the primal graph and its dual graph, where nodes and edges can be treated as one another, we have exploited the benefits of both vertex features and edge features. As a result, we have arrived at a framework that has great potential in both semi-supervised and unsupervised learning. | [
"Graph",
"Neural Networks",
"Deep Learning",
"semi-supervised learning"
] | Reject | https://openreview.net/pdf?id=SkxV7kHKvr | https://openreview.net/forum?id=SkxV7kHKvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BIO1gYTya",
"BJgD85xqqB",
"SJemyeYW5r",
"S1xQZ57pKr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727889,
1572633166564,
1572077530657,
1571793402783
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1612/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1612/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1612/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All three reviewers are consistently negative on this paper. Thus a reject is recommended.\", \"title\": \"Paper Decision\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper utilizes two pipelines based on the duality of a graph and the primal graph for semi-supervise learning. More specifically, the authors transform the primal graph into its dual form and build a two-pipelines architecture to learn on the primal graph and dual graph together. The goal of including a dual pipeline is to use the predictions on the dual node to affect the predictions of the primal nodes.\", \"i_decided_to_give_a_weak_reject_to_this_paper_for_the_following_shortcomings\": \"1. Novelty is not enough. Dual graphs are explored utilized to graph neural networks in some related works (such as [1]). This paper utilizes dual graphs to affect or assist the prediction on the primal nodes, which is very similar to the methods used in [1]. \\n\\n2. Experiments can not prove the effectiveness of this method very well. The experiments only conduct on 3 datasets and the analysis of the experimental results is not convincing and more details should be included. For example, if GCN(double-pipeline) performs better than GCN, this can somehow support the effectiveness of a double-pipeline that includes a dual graph. However, the results of TwinGCN can not support the effectiveness of regularization by KL-divergence. I think more experiments and analyses about why this method works should be explored in Section Experiments.\\n\\n3. The organization should be improved. In general, when introducing a model, it is better to introduce the prediction part of the model first. For example, I can get the idea of how to train the DualGCN by reading the Section Method. However, I have no idea about how the authors make a prediction on the trained DualGCN, how to transform the prediction of the dual graph to the prediction of primal nodes. I think these details are significant for this paper. Also, there are several incomplete sentences in this paper.\\n\\nOverall, this idea that utilizes the dual graph to assist the primal graph learning is good and can be explored in the future. For the current version, I give a weak reject for the above reasons.\\n\\n\\n\\n\\nReference\\n[1] Chen et al. Supervised community detection with line graph neural networks.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper describes a new dual method for graph convolutional networks that combines the features from the graph and it's dual, in two pipelines. The paper builds on the architecture as in GCN and in addition to the dual pipelines, one from the graph and other it's dual, employs KL divergence to achieve the final prediction.\\n\\nThe paper leaves inadequate explanation on the results, where the proposed TwinGCN comes short in 2 of 3 methods compared to other methods in Table 1, which cannot be ignored considering the slow convergence and marginal improvements, compared to GCN with double pipeline in Table 3. This leaves the premise of the authors on adding improvements to the learning ability by bringing in features from dual graph on shaky grounds.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes two graph convolutional network for semi-supervised node classification. The model is composed of two GCNs, one on the primal graph and one on the dual graph. The paper is well written and easy to follow. However, the novelty and contribution are rather limited, and the performance improvement of the proposed method is rather limited. Following are the detailed comments:\\n\\n1. The novelty and contribution of the paper are rather limited. The idea of dual graph has been exploited by [1]. The proposed method is simple combination of prime and dual without convincing explanation on why primal dual graph can improve the performance. The authors may need to give more explanations on why such combination can improve the performance.\\n\\n2. The performance improvement of the proposed method is marginal. In fact, the proposed method doesn\\u2019t outperform DGCN\\n\\n3. Experiments need to be improved. The authors didn\\u2019t compare with GNNs that also adopts primal and dual graphs. The authors should consider comparing the proposed method with DPGCN in [1].\\n\\n[1] Monti, Federico, et al. \\\"Dual-primal graph convolutional networks.\\\" arXiv preprint arXiv:1806.00770 (2018).\"}"
]
} |
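Both reviews of this record hinge on the primal/dual construction: the dual (line) graph turns each primal edge into a node, so edge features can be fed through a second GCN pipeline. A minimal sketch of that transformation using networkx's `line_graph` follows; it illustrates the duality the reviews discuss and is not the submission's code.

```python
import networkx as nx

def primal_and_dual(edge_list):
    # Build a primal graph G and its dual (line) graph D: every edge of G
    # becomes a node of D, and two nodes of D are adjacent iff the
    # corresponding primal edges share an endpoint.
    G = nx.Graph(edge_list)
    D = nx.line_graph(G)
    return G, D

G, D = primal_and_dual([(0, 1), (1, 2), (2, 0), (2, 3)])
print(sorted(D.nodes()))           # [(0, 1), (0, 2), (1, 2), (2, 3)]
print(D.has_edge((0, 1), (1, 2)))  # True: both primal edges touch node 1
```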
HJemQJBKDr | Continual Density Ratio Estimation (CDRE): A new method for evaluating generative models in continual learning | [
"Yu Chen",
"Song Liu",
"Tom Diethe",
"Peter Flach"
] | We propose a new method, Continual Density Ratio Estimation (CDRE), which can estimate density ratios between a target distribution of real samples and a distribution of samples generated by a model, while the model is changing over time and the data of the target distribution is not available after a certain time point. This method perfectly fits the setting of continual learning, in which one model is supposed to learn different tasks sequentially and the most crucial restriction is that the model has no or very limited access to the data of all learned tasks. Through CDRE, we can evaluate generative models in continual learning using f-divergences. To the best of our knowledge, there is no existing method that can evaluate generative models under the setting of continual learning without storing real samples from the target distribution. | [
"density ratio estimation",
"continual learning",
"evaluation",
"generative model",
"f divergence"
] | Reject | https://openreview.net/pdf?id=HJemQJBKDr | https://openreview.net/forum?id=HJemQJBKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"F9q68n0sGi",
"SkeJFie2jr",
"Bkg4EhDsoS",
"Hye2JPl5jB",
"HyexPrlciS",
"HJxn_QeciH",
"rJevdma6Fr",
"Skl_FzIaYS",
"Bye8oOS6Yr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727856,
1573813111078,
1573776428079,
1573680867931,
1573680471877,
1573679987883,
1571832686798,
1571803775794,
1571801246123
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1611/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1611/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1611/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1611/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1611/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1611/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1611/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1611/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper seems technically correct and has some novelty, but the relevance of the paper is questionable. Considering the selectiveness of ICLR, I cannot recommend the paper for acceptance at this point.\", \"in_more_detail\": \"the authors propose a technique for estimating density rations between a target distribution of real samples and a distribution of samples generated by the model, without storing samples. The method seems to be technically well executed and verified. However, there was major concerns among multiple reviewers that the addressed problem does not seem relevant to the ICLR community. The question addressed seemed artificial, and it was not considered realistic (by R2 and also by R1 in the confidential discussion). R3 also expressed doubts at the usefulness of the method.\\n\\nFurthermore, some doubts were expressed regarding clarity (although opinions were mixed on that) and on the justification of the modification of the VAE objective to the continual setting.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for replying to our response\", \"comment\": \"Dear Reviewer,\\n\\nWe really appreciate your feedback on our response. We understand your concern, but we also would like to point out that generative models are not only including unsupervised approaches (like GANs or VAEs) but also including many probabilistic latent variable models that can be supervised approaches whilst we can draw samples from it. Regarding the scope of health care, it's usually difficult to obtain a large dataset for a specific problem and requires some explanation of the results, in such cases probabilistic modeling is often a better choice than deep learning methods. We show experiments on GANs and image data because they are widely used bench-mark tasks in continual learning, however, our method can be applied to any generative models as long as we can draw samples from the model.\"}",
"{\"title\": \"Thank you for the response\", \"comment\": \"Dear authors\\n\\nWe thank the author for the response.\\n\\n>There can be diverse reasons in the real world of limiting access to the raw data after a model is trained on it. For example, researchers of a hospital may have trained a model for one type of disease and the raw data of patients cannot be shared with other institutions, if they want to collaborate with another institution to enable the model capable of a similar type of disease as well, the model can only be incrementally trained on new data without sharing the previous data. Sharing the model is a lot less sensitive than directly sharing data points, though the trained model still contains information of the original data. If this is still not allowed, we may be able to use some techniques such as importance sampling to estimate the results on the original data set by another publishable data set (i.e. E_{q(x)}[p(x)/q(x) L(x)], where L(x) is the loss function of the model). Besides the privacy issue, a limited cost budget can be another cause of such a problem, such as the data storage cost is quite high, or the data is not available for free after its copyright has expired. We have added the examples in the introduction section of the revision. \\n\\nI am still not sure whether this is truly a realistic case. Basically, if we really need to solve healthcare problems, the best way would be to build a database and use supervised algorithm (of course, it may be difficult). Otherwise, the machine learning method is basically useless (as reviewer #2 pointed out, this sounds a problem for machine learning method). It would be great that authors can demonstrate this in the paper. Then, the paper is very interesting and important.\\n\\nOverall, I still like the density-ratio formulation itself. However, I am not sure the application of the approach is useful at this point.\"}",
"{\"title\": \"Response to R2\", \"comment\": \"First of all, we respectfully disagree with that generative models in continual learning are unrealistic. They certainly have realistic applications. \\u201cContinual learning (CL) is the ability of a model to learn continually from a stream of data, building on what was learned previously, hence exhibiting positive transfer, as well as being able to remember previously seen tasks.\\u201d This definition is from the workshop of continual learning in NeurIPS 2018, which has pointed out two advantages of continual learning: 1) enabling positive transferring of knowledge; 2). preventing from forgetting previously learned knowledge. Such abilities are mimicking human being\\u2019s learning abilities in the real world and can be beneficial to not only classifiers but also generative models. For example, the numerous applications of GANs can be much more powerful if they succeed in continual learning. Imagining we train a generative model to generate one type pf sounds, it would be more attractive if the model can continuously learn to generate new sounds without retraining on all learned sounds.\\n\\nSecond, generative modeling in continual learning is not less well-received at all. Actually, the most popular works for continual learning ([1,2,3,4]), including methods of parameter regularization and incrementally growing up model architectures, can be applied to generative models as well. For example, in some specific works ([5,6,7]) of GANs in continual learning, authors compare their method with EWC [1] too. Another branch of methods in continual learning is likelihood regularization, among which generative modeling itself is an important direction [8]. \\n\\nThird, we have shown f-divergences can be valid measures of generative models in the experiment results (Sec. 4) by comparing with other commonly used measures. Please also refer to the response to the last question from R1 for the variance issues. \\n\\n[1]. Kirkpatrick, James, et al. \\\"Overcoming catastrophic forgetting in neural networks.\\\" Proceedings of the national academy of sciences 114.13 (2017): 3521-3526. \\n[2]. Zenke, Friedemann, Ben Poole, and Surya Ganguli. \\\"Continual learning through synaptic intelligence.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017. \\n[3]. Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning. In International Conference on Learning Representations, 2018. \\n[4]. Schwarz, Jonathan, et al. \\\"Progress & Compress: A scalable framework for continual learning.\\\" International Conference on Machine Learning. 2018. \\n[5]. Wu, Chenshen, et al. \\\"Memory replay GANs: Learning to generate new categories without forgetting.\\\" Advances In Neural Information Processing Systems. 2018. \\n[6]. Ostapenko, Oleksiy, et al. \\\"Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019. \\n[7]. Lesort, Timoth\\u00e9e, et al. \\\"Generative models from the perspective of continual learning.\\\" 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. \\n[8]. Shin, Hanul, et al. \\\"Continual learning with deep generative replay.\\\" Advances in Neural Information Processing Systems. 2017.\"}",
"{\"title\": \"Response to R3\", \"comment\": \"Thanks for reviewing our paper and giving valuable comments. Please find our response to your concerns in the following:\\n\\n\\\"1. ... In the introduction, authors describe that we cannot obtain data points due to privacy or limited cost budget. More specifically, if it is a privacy issue, we may not be able to use the model trained by the private data as well. Also, could you give me a couple of examples of the limited cost budget case? \\\" \\n\\nThere can be diverse reasons in the real world of limiting access to the raw data after a model is trained on it. For example, researchers of a hospital may have trained a model for one type of disease and the raw data of patients cannot be shared with other institutions, if they want to collaborate with another institution to enable the model capable of a similar type of disease as well, the model can only be incrementally trained on new data without sharing the previous data. Sharing the model is a lot less sensitive than directly sharing data points, though the trained model still contains information of the original data. If this is still not allowed, we may be able to use some techniques such as importance sampling to estimate the results on the original data set by another publishable data set (i.e. E_{q(x)}[p(x)/q(x) L(x)], where L(x) is the loss function of the model). Besides the privacy issue, a limited cost budget can be another cause of such a problem, such as the data storage cost is quite high, or the data is not available for free after its copyright has expired. We have added the examples in the introduction section of the revision. \\n\\n\\\"2. In this paper, authors employed the log-linear model. If we use another model, performance can be changed?\\\" \\n\\nYes, if using other types of models in the ratio estimator, the performance can be different. We also tried ratio estimators defined as in f-GAN [1] and found they are less robust when the difference of two distributions is significant. The performance can also be affected by different usage of the estimated ratios. As we apply it to estimate f-divergences and many of the popular members of f-divergences based on log-ratios, the log-linear model is a suitable choice. On the other hand, if estimating Pearson \\\\chi^2 divergence which based on square loss of ratios, the linear model suggested in [2] may be better. We have put some discussion in Sec.3 after Eq. 10 in the revision. \\n\\n[1]. Nowozin, Sebastian, Botond Cseke, and Ryota Tomioka. \\\"f-GAN: Training generative neural samplers using variational divergence minimization.\\\" Advances in neural information processing systems. 2016. \\n[2]. Kanamori, Takafumi, Shohei Hido, and Masashi Sugiyama. \\\"A least-squares approach to direct importance estimation.\\\" Journal of Machine Learning Research 10.Jul (2009): 1391-1445.\"}",
"{\"title\": \"Response to R1\", \"comment\": \"Thanks for reviewing our paper and giving valuable comments. Please find our response to your concerns in the following:\\n\\n\\\"- The beginning of section 3 (CDRE in continual learning), I found it difficult to understand why the model q needs to be updated (indexed by t) while p(x) is not dependent on t...\\\"\\n\\nYes, you are correct, the beginning of Sec. 3 is an introduction to continual density ratio estimation in a general form, where we only consider one target distribution throughout the whole time. p(x) represents the density function of the target distribution and thus it is independent on t. In contrast, q_t(x) represents the density function of samples generated by a model at t, which we assume it changes while t changes. We have added the explanation at the beginning of Sec. 3 in the revision. \\n\\n\\\"- The Lagrange multiplier and the bias / variance statements need elaboration...\\\"\\n\\nThe density ratio estimator of KLIEP is an asymptotically unbiased estimator when the constraint is satisfied. However, as we replace the hard constraint by a soft constraint, the larger $\\\\lambda_c$ makes the estimator with soft constraint closer to the unbiased one, which leads to smaller bias. The bias is getting less when increasing the lambda, and as a tradeoff, the variance starts to increase. We have added the discussion into the paragraph below Eq. 10 in the revision. \\n\\n\\\"- In the second part of section 3,..., it is no longer reasonable to use the symbol r_t in equation 12 which was initially defined in equation 5. \\\" \\n\\nThanks for pointing out this issue, we should adjust the notations in equation 5 to the setting of continual learning first. We have corrected Eq. 11-14 in the revision. \\n\\n\\\"- A loss for continual VAE ... here the KL is between VAE's approximate posteriors, which alone is not sufficient for keeping the information of previous tasks. \\\"\\n\\nSorry for the confusion of our notations, we have corrected the formulation in Eq.15 in the revision. \\nIn terms of adjusting the objective of VAE in VCL, we have tried the proposed objective in VCL. However, the encoder of VAE is task-specific in the experiments of VCL, which is computationally costly for a preprocessing component. We tried sharing both encoder and decoder of VAE in VCL across MNIST tasks, it generates very similar latent code z for different digits which cannot preserve the difference between two distributions and causes the estimated f-divergences always small. We chose the current form of the objective due to its simplicity and effectiveness, nonetheless, it is flexible to deploy other methods for feature generation of CDRE. We have added more discussion in the revision. \\n\\n\\\"- There's lack of analysis / interpretation of results for section 4.1... \\\" \\n\\nWe compare f-divergences and a few commonly used measures of generative models by a few toy experiments in section 4.1 because there is no prior work discussing evaluating generative models by f-divergences. We show that f-divergences may provide different rankings with FID, KID because it may pay more attention to different parts of density mass, which can be helpful for understanding the experiment results in the later sections. We have moved this part to the appendix in the revision so that the structure of the paper can be clearer. 
\\n\\n\\\"- Throughout section 4.2 - 4.3, it is not explained what is the source of variance in the experiment results.\\\"\", \"there_are_two_major_sources_of_the_variance_of_the_estimated_f_divergences\": \"1). The ratio estimator is a neural network which has no assumption about data distributions and trained by stochastic gradient descent, thus different initializations generated by different random seeds may cause larger variance of the results comparing with FID and KID (FID assumes data distributions are Gaussians and KID fits the first three moments by a polynomial kernel). \\n2). In the formulation of the ratio estimator, we use finite samples to estimate the expectations, which can be another source of variance, especially when the overlapping mass of the two distribution is sparse. This is demonstrated in Fig.3 & 4, the variance increases while the model distribution getting further to the raw data distribution. In this sense, the variance can also be a criterion for evaluating generative models. \\nWe have added the discussion into Sec. 4 in the revision.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, authors propose a continual learning for density-ratio estimation. The formulation of the CDRE Eq.(5)is quite intuitive, and it makes sense. Then, a log-linear model is employed for a density-ratio model, and it is estimated by using a density-ratio estimation algorithm. Through experiments, the authors show that the proposed algorithm outperforms existing methods.\\n\\nThe paper is clearly written and easy to read. The density-ratio estimation algorithm for continual learning is new and interesting.\", \"detailed_comments\": \"1. I am pretty new to the continual learning. The formulation of CDRE is interesting. However, I am still not that certain whether the setup is realistic. In the introduction, authors describe that we cannot obtain data points due to privacy or limited cost budget. More specifically, if it is a privacy issue, we may not be able to use the model trained by the private data as well. Also, could you give me a couple of examples of the limited cost budget case?\\n\\n2. In this paper, authors employed the log-linear model. If we use another model, performance can be changed?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"######### Updated Review ###########\\n\\nI'd like to thank the author(s) for their rebuttal. However, I am still on the same boat with R1 and recommend rejection for this submission. \\n\\n\\n################################\\n\\n\\nThis submission seeks to evaluate generative models in a continual learning setup without storing real samples from the target distribution. The main technique the author(s) have used is the likelihood-ratio trick. I question the scope of this paper, as this is not a topic of general interest to the community. Additionally, the density ratio estimation technique is fairly standard. I vote to reject this submission for the lack of highlights and relevant potential applications. \\n\\nMy main argument for rejection. \\nWhile continual learning is a trendy topic in the AI community, it's less well-received in the context of generative modeling, probably for the lack of real applications. Such works, including this one, fail to address any real challenge, as the hypothesized scenario is unrealistic. For example, I am not convinced of the significance of using f-div to evaluate model performance. And since importance sampling is notorious for its variance issues (the essential mathematical tool used in this model), the estimate is not expected to be reliable, say subsequent tasks q_t and q_{t-1} differ somehow. This submission feels more like playing a game with the rules defined by the author(s), not driven by practical considerations.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This submission proposes a continual density ratio estimation method and suggested its application to evaluate a continually learned generative model without requiring the previous training data. The basis of the continual density estimation is based on a recursive relationship between the density ratio at step t and that at t - 1. Continually estimating the density ratio without storing the raw data is an interesting topic and could be useful for continual learning. To my knowledge, I have not seen this in earlier publications.\", \"However, I give reject to this paper because of the following reason:\", \"The writing of this paper is not easy to follow.\", \"The beginning of section 3 (CDRE in continual learning), I found it difficult to understand why the model q needs to be updated (indexed by t) while p(x) is not dependent on t. As far as I know, under the continual learning setting the data distribution p(x) is also conditioned on t. I interpret it as a general introduction on how density ratio could be estimated continually.\", \"The Lagrange multiplier and the bias / variance statements need elaboration, I don't understand how it is affecting the bias and variance.\", \"In the second part of section 3, the continual learning setting is introduced (in equation 11), however, it is no longer reasonable to use the symbol r_t in equation 12 which was initially defined in equation 5.\", \"A loss for continual VAE is proposed in seciton Feature generation for CDRE, however, the p(x) is again independent of t. And I'm also suspicious that equation 13 is the correct way of adjusting VAE's objective with VCL. In VCL, the KL divergence is on the parameter distribution, which could help prevent forgetting, however, here the KL is between VAE's approximate posteriors, which alone is not sufficient for keeping the information of previous tasks.\", \"There's lack of analysis / interpretation of results for section 4.1, e.g. what is the motivation of the experiments and what is the conclusion.\", \"Through out section 4.2 - 4.3, it is not explained what is the source of variance in the experiment results.\"]}"
]
} |
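The CDRE discussion above revolves around KLIEP-style density ratio estimation with a log-linear model, where the hard normalisation constraint E_q[r] = 1 is replaced by a soft penalty weighted by a multiplier (the bias/variance trade-off in the response to R1). As a rough, self-contained sketch, with Gaussian kernel features and hyper-parameters of our own choosing (the paper's actual estimator is a neural network trained per task), it might look like this:

```python
import numpy as np

def fit_log_linear_ratio(x_p, x_q, centers=None, sigma=1.0, lam=10.0,
                         iters=2000, lr=0.01):
    # Log-linear ratio model r(x) = exp(theta . k(x)) over Gaussian kernel
    # features, fitted by gradient ascent on the KLIEP-style objective
    #   E_p[log r(x)] - (lam / 2) * (E_q[r(x)] - 1)^2,
    # i.e. the hard constraint E_q[r] = 1 is softened, so a larger lam
    # means lower bias of the estimator (at the price of higher variance).
    if centers is None:
        centers = x_p[: min(50, len(x_p))]
    feats = lambda x: np.exp(
        -((x[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * sigma ** 2))
    Kp, Kq = feats(x_p), feats(x_q)
    theta = np.zeros(len(centers))
    for _ in range(iters):
        r_q = np.exp(Kq @ theta)  # ratio evaluated on samples from q
        grad = Kp.mean(0) - lam * (r_q.mean() - 1.0) * (r_q[:, None] * Kq).mean(0)
        theta += lr * grad
    return lambda x: np.exp(feats(x) @ theta)
```

The returned callable evaluates the estimated ratio p/q at new points (2-D arrays of shape `(n, d)`); an f-divergence estimate then follows by averaging f of the ratio over samples, in the spirit of the paper's evaluation.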
BylQm1HKvB | CONTRIBUTION OF INTERNAL REFLECTION IN LANGUAGE EMERGENCE WITH AN UNDER-RESTRICTED SITUATION | [
"Kense Todo",
"Masayuki Yamamura"
] | Owing to language emergence, human beings have been able to understand the intentions of others, generate common concepts, and extend new concepts. Artificial intelligence researchers have not only predicted words and sentences statistically in machine learning, but also created language systems through communication between machines themselves. However, strong constraints are imposed in current studies (supervised signals and rewards exist, or the concepts are fixed to a single point), thus hindering the emergence of real-world languages. In this study, we improved on Batali (1998) and Choi et al. (2018)’s research and attempted language emergence under low-constraint conditions resembling human language generation. We incorporated the bias that exists in humans into the system as an “internal reflection function”. Irrespective of the function, messages corresponding to the label could be generated. However, through qualitative and quantitative analysis, we confirmed that the internal reflection function caused “overlearning” and a different structuring of message patterns. This result suggests that the internal reflection function performs effectively in creating a grounded language from raw images in an under-restricted situation such as human language generation. | [
"Language emergence",
"Conceptual grounding",
"Reflection",
"Cognitive bias"
] | Reject | https://openreview.net/pdf?id=BylQm1HKvB | https://openreview.net/forum?id=BylQm1HKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CGRpO1wco0",
"ryl4RZvKiH",
"ryl3mZvYiH",
"SyemxWPFjr",
"B1gfgBHicr",
"H1lU1caoFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798727824,
1573642700429,
1573642532170,
1573642475171,
1572717802312,
1571703262333
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1610/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1610/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1610/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1610/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1610/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper is very different from most ICLR submissions, and appears to be addressing interesting themes. However the paper seems poorly written, and generally unclear. The motivation, task, method and evaluation are all unclear. I recommend that the authors add explicit definitions, equations, algorithm boxes, and more examples to make their paper clearer.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Authors' response to the reviewers' comments.\", \"comment\": \"We thank you for your detailed review and helpful comments. We address your concerns as follows.\\n\\n-There are problems with clarity, particularly after page 4 (Sections 3.2 and 3.3)\\n\\nThank you for your comment. Please see our 2nd and 4th explanations for Reviewer 1. In particular, we have supplemented the internal reflection function section with explanations of the algorithm (p. 4,\\u00a73.3).\"}",
"{\"title\": \"Authors' response to the reviewers' comments.(part 2)\", \"comment\": \"Question\\n- The internal reflection function \\n\\nWe agree that this point requires clarification, and have added the specific algorithm to the method (p. 4,\\u00a73.3). \\u201ccomparison result\\u201d is an ambiguous word. We changed the term to \\u201cdegree of comparison similarity\\u201d. The degree of comparison similarity shows the similarity between the messages using \\u201cgestalt pattern matching\\u201d. We have also added the reference: John W. Ratcliff and David Metzener, Pattern Matching: The Gestalt Approach, Dr. Dobb\\u2019s Journal, page 46, July 1988.\\n\\n\\n- A formal definition of \\\"dream\\u201d.\\n\\nThe reviewer's comment is correct. To clarify, we have added the following text to the MODEL ARCHITECTURE and OBVERTER TECHNIQUE(p. 3,\\u00a73.2, lines 32-33): Dream is defined as an output image reconstructed by a message. It is not an output image reconstructed from the VAE input (the output image is expressed as Reconstructed Image).\\n\\n- A Jaccard similarity coefficient\\n\\nThe reviewer's comment is correct. To clarify, we have added the text to the evaluate method (p. 5,\\u00a73.4, line 1-12 ).\\n\\n[2]Choi, E., Lazaridou, A., & de Freitas, N. Compositional obverter communication learning from raw visual input. ICLR 2018\\n[3]Havrylov, S., & Titov, I. (2017). Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. In Advances in neural information processing systems (pp. 2149-2159).\\n[4]Lazaridou, A., Peysakhovich, A., & Baroni, M. Multi-agent cooperation and the emergence of (natural) language. ICLR 2017\"}",
"{\"title\": \"Authors' response to the reviewers' comments.(part 1)\", \"comment\": \"We thank you for your detailed review and helpful comments. We address your concerns as follows.\\n\\n- There doesn't seem to be a proper baseline in any of the experiments conducted.\\n\\nIn current language emergence research, our sense is that there is no baseline as a common indicator because the model structure depends on the purpose. Therefore, we collected methods to find out the characteristics of the languages used in related research. We selected the ones that fitted the contents of this research.\\nWhile, to confirm the effectiveness of the internal reflection function, a comparison was made based on the evaluation axis for the presence or absence of internal reflection. Like this, to evaluate the validity of the architecture, there is a study in language emergence research that evaluates architecture by controlling and disabling functions[1].\\n\\n- The writing generally is unclear, full of ungrammatical sentences and extremely hard to read.\\n\\nWe have incorporated your comments, so we have clarified you pointed out.\\n\\n - \\u00a73.2: \\\"A language system can emerge by a pure machine, excluding the human viewpoint.\\\"(p. 4, lines 4-5)\\nThe proposed architecture enables language emergence by agent interaction without using human-prepared supervised signals and rewards.\\n\\n - \\u00a73.2: \\\"The structure, characteristics, and limitations of the messages generated by the machine can be determined.\\\"(p. 4, lines 6-7)\\nWe can explore the structure and characteristics of messages organized by agent interaction.\\n\\n - \\u00a73.2: \\\"The agents as a whole have a movement to create a system in which languages is unsupervised and one conceptual pact is formed.\\\"(p. 4, lines 22-23)\\nAn agent set takes an emergent behavior that forms a conceptual pact without supervised signals.\\n\\n - \\u00a73.3: \\\"Receive an image and generate a message based on self-knowledge.\\\"\\nPlease see our 4th explanation.\\n\\n - \\u00a73.4: \\\"This formal rule was from the human perspective\\\".(p. 5, lines 41)\\nThe analyzed formal rules such as pattern and key symbol were evaluated from a human criterion.\\n\\n- The evaluation section is very rambling and contains unnecessary parts that need not be included.\\nWe appreciate the reviewer's concerns on this point. However, we consider our original text correct. We surveyed the evaluation method in related researches and selected indicators that can analyze the qualitative changes in language emergence by internal reflection. We consider that the features of the message, grammatical analysis, and the accuracy of the image reconstructed by messages were basic indicators for confirming the emergent language features in related researches. Thus, we would like to retain the original text.\\n\\n - e.g.1: Analyze formal rules (\\u00a74.4.1 and Table 3)\\nWe obtained one trained instance from each learning object and analyzed formal rules of messages. By analyzing the structural differences in the formal rules for each instance, we are able to know the necessary symbol conditions and symbol patterns to generate a concept. This aspect is a necessary condition for any language to be considered compositional. On the other hand, We agree that different training will produce different messages. 
However, in other language emergence research, in addition to the quantitative characteristics of the message, they analyzed the language system in the instance to evaluate specific differences and compositional nature[2-4]. For this reason, we believe that to describe the generated language system would be more appropriate.\\n\\n\\n - e.g.2: Convergence of mean squared error loss between GRU output and latent space z (\\u00a74.1)\\nOur sense is that just confirming the generated messages is not enough to know the relationship between concepts and messages. If messages have formal rules but are not grounded in the concept, the language is considered meaningless. Therefore, we need an evaluation index that confirms the correspondence between messages and concepts. MSE Loss is an index that evaluates the validity of the ground relationship between the generated message and the corresponding label.\\n\\n[1] Das, A., Kottur, S., Moura, J. M., Lee, S., & Batra, D. (2017). Learning cooperative visual dialog agents with deep reinforcement learning. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2951-2960).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This seems to be an ambitious paper that claims to replicate the phenomenon of \\u201cspontaneous\\u201d learning of compositional language (as in Choi 2018) under relaxed constraints. It extends the two-agent description game, which agents swap between Teacher and Student roles (\\u201cobverter technique\\u201d) into a multi-agent game in which messages can be rejected.\\n\\nThere are problems with clarity, particularly after page 4 (Sections 3.2 and 3.3), starting with the paragraph about how their architecture differs from the preexisting architectures. The lack of line numbers in the submission makes it impractical to give detailed feedback.\\n\\nOverall I have had a lot of difficulty understanding the proposed method, and would have needed more time (or more background knowledge) in order to properly evaluate this paper.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes a framework for emergent communication system using internal reflection, similar to obverter technique in Batali 1998 and Choi et al., 2018, while not requiring explicit supervision. Despite the nice idea, the dire quality of writing makes it extremely difficult if not impossible to understand the proposed model and the results. Also, some of the experimental results seem unnecessary. In Table 3., for example, (1) in \\u00a74.4.1 the authors merely list the rules without giving an overall conclusion. (2) I'm not certain what's the significant of the results in Table 3., as these are messages from one trained instance, and I'm sure a different training run will yield different messages.\", \"Although I am strongly leaning towards reject, I am willing to discuss with my co-reviewers if they have a different opinion.\", \"Cons\", \"There doesn't seem to be a proper baseline in any of the experiments conducted.\", \"The writing generally is unclear, full of ungrammatical sentences and extremely hard to read: e.g. see examples below.\", \"\\u00a73.2: \\\"A language system can emerge by a pure machine, excluding the human viewpoint.\\\"\", \"\\u00a73.2: \\\"The structure, characteristics, and limitations of the messages generated by the machine can be determined.\\\"\", \"\\u00a73.2: \\\"The agents as a whole have a movement to create a system in which languages is unsupervised and one conceptual pact is formed.\\\"\", \"\\u00a73.3: \\\"Receive an image and generate a message based on self-knowledge.\\\"\", \"\\u00a73.4: \\\"This formal rule was from the human perspective\\\".\", \"The evaluation section is very rambling and contains unnecessary parts that need not be included, e.g. convergence of mean squared error loss between GRU output and latent space z.\", \"Questions\", \"The internal reflection function part, the core of this paper, is not properly explained until the end. For example, in \\u00a73.3, \\\"when the comparison result is lower than the threshold value\\\": what is the \\\"comparison result\\\"? How does one compare a message and another message? Please elaborate on this further.\", \"In the evaluation section \\u00a73.4, the authors keep referring to \\\"dream\\\": \\\"correct answer rate of Dream\\\", \\\"We entered the message into an agent to evaluate the reconstructed Dream\\\". But the authors don't provide a formal definition of \\\"dream\\\" except providing a citation to Ha and Schmidhuber, 2018. Are the readers expected to know what this means?\", \"In \\u00a73.4, \\\"Jaccard coefficient were applied to test the message similarity between labels\\\". How does one apply Jaccard index to compute similarity between labels? This seems an important part of evaluation but not explained at all.\"]}"
]
} |
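Two evaluation tools recur in the exchange above: gestalt pattern matching for the "degree of comparison similarity" between two messages (the Ratcliff & Metzener reference added by the authors) and the Jaccard coefficient between the message sets of two labels. Python's difflib implements the former, so a minimal sketch, with toy messages of our own invention, is:

```python
from difflib import SequenceMatcher

def gestalt_similarity(msg_a, msg_b):
    # Ratcliff/Obershelp "gestalt pattern matching"; difflib's ratio() is
    # an implementation of exactly this algorithm.
    return SequenceMatcher(None, msg_a, msg_b).ratio()

def jaccard(messages_a, messages_b):
    # Jaccard coefficient between the message sets of two labels:
    # |A intersect B| / |A union B|.
    a, b = set(messages_a), set(messages_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

print(gestalt_similarity("abcad", "abcbd"))          # 0.8
print(jaccard({"abcad", "abcbd"}, {"abcbd", "dd"}))  # 0.333...
```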
Hklz71rYvS | Kernelized Wasserstein Natural Gradient | [
"M Arbel",
"A Gretton",
"W Li",
"G Montufar"
] | Many machine learning problems can be expressed as the optimization of some cost functional over a parametric family of probability distributions. It is often beneficial to solve such optimization problems using natural gradient methods. These methods are invariant to the parametrization of the family, and thus can yield more effective optimization. Unfortunately, computing the natural gradient is challenging as it requires inverting a high dimensional matrix at each iteration. We propose a general framework to approximate the natural gradient for the Wasserstein metric, by leveraging a dual formulation of the metric restricted to a Reproducing Kernel Hilbert Space. Our approach leads to an estimator for gradient direction that can trade-off accuracy and computational cost, with theoretical guarantees. We verify its accuracy on simple examples, and show the advantage of using such an estimator in classification tasks on \texttt{Cifar10} and \texttt{Cifar100} empirically. | [
"kernel methods",
"natural gradient",
"information geometry",
"Wasserstein metric"
] | Accept (Spotlight) | https://openreview.net/pdf?id=Hklz71rYvS | https://openreview.net/forum?id=Hklz71rYvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"gSjkJ6cYDSU",
"4VHp235QJ",
"AEPFs0kn1_",
"HyxyYxijoS",
"rJlR9JssoH",
"rye1IkoosB",
"Bkli_CcsjH",
"HylJnxCfoH",
"HJgT19_foS",
"BkxQyEJzsS",
"r1eCca4-sH",
"S1xohUaAFS",
"r1xIbmXaKr",
"S1gMw5s2YH"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1605584328807,
1576909577361,
1576798727796,
1573789814995,
1573789590076,
1573789510633,
1573789298986,
1573212326770,
1573190117054,
1573151706760,
1573109142491,
1571899059330,
1571791613811,
1571760730505
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"~XY_Tian1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1609/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1609/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1609/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1609/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1609/Authors"
],
[
"~XY_Tian1"
],
[
"ICLR.cc/2020/Conference/Paper1609/Authors"
],
[
"~XY_Tian1"
],
[
"ICLR.cc/2020/Conference/Paper1609/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1609/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1609/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"This is not a constructive comment\", \"comment\": \"Much more useful would be to discuss with the authors here. Attacking others is generally not a good practice. Instead why not provide some constructive criticism or start a discussion? Further, when giving criticism, there should be support for your claims. I hope you edit your comment.\"}",
"{\"title\": \"Does the proposed method has any advantage? REAL OR FAKE RESULTS?\", \"comment\": \"The method is shown to outperform various other optimizers on a neural net optimization problem that's artificially made ill-conditioned?\\n\\nREALLY? SGD with Momentum or ADAM perform much better than the proposed method. I do not know if the presented results are real or FAKE.\\n\\nI could not believe how this paper was reviewed!\"}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This is a very interesting paper which extends natural gradient to output space metrics other than the Fisher-Rao metric (which is motivated by approximating KL divergence). It includes substantial mathematical and algorithmic insight. The method is shown to outperform various other optimizers on a neural net optimization problem that's artificially made ill-conditioned; while it's not clear how practically meaningful this setting is, it seems like a good way to study optimization. I think this paper will be of interest to a lot of researchers and could open up new research directions, so I recommend acceptance as an Oral.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for your comments. In the revision we've posted, we addressed most of them. Here is a summary:\", \"1__models_with_no_density\": \"we replaced the gaussian model by a model consisting of hyper-spheres in R^d parametrized by their radii and centers. These do not admit a density in R^d and do not share the same support, thus the Fisher natural gradient is not defined.\", \"2__comparison_with_the_fisher_gradient\": \"The KFAC and eKFAC method estimate the fisher natural gradient, and we compare the proposed method with those in terms of accuracy Figure 3 and timing Figure 6. That being said, we have not yet implemented the estimator for the Fisher gradient suggested by the variational formulation of Proposition 2: this is an interesting topic for future work.\", \"3__assumptions\": \"we state the assumptions briefly in the main text and provide a discussion for when assumption (D) holds: It is a mild assumption that is satisfied for instance in the case where the gradient is an empirical mean of iid samples with finite variance. This result from Chebychev\\u2019s inequality: we mention this in the main text and further discuss it in Remark 1 of appendix A.2.\\nThe remaining assumptions are also mild, in that the kernel can be chosen to satisfy them while the assumption on the implicit model is often satisfied in many cases: especially in the case of deep networks with ReLU non-linearity.\\n\\n4- Relaxation of equation in (9) ( (11) in the revised version). We make the connection more explicit by first provide the full variational expression for the natural gradient in Prop3, so that the difference is simply restricting the optimization and adding the two penalization terms.We also include the following comment to clarify the purpose of the regularization terms:\\n\\u201cThe first regularization term makes the problem strongly convex in u, while the second term makes the problem strongly concave in f when \\\\lambda > 0. When \\\\lambda=0, the problem is still concave in f. This allows to use a version of the minimax theorem (Ekeland and Temam,1999)\\u201d to exchange the order of the supremum and minimum which also holds true when \\\\lambda=0.\\nThe result of prop 3 (prop 4 in the revised version) holds without further assumptions since the problem is strongly convex in u and concave in f.\", \"5__wasserstein_vs_fisher\": \"both allow to get optimization trajectories that are invariant to parametrization, as discussed in Prop 1. From this point of view, there is no reason to prefer one over the other. The only difference is that the Wasserstein can be used in settings where the Fisher cannot be used. This is the case in the hyper-sphere model, for which the Wasserstein natural gradient can be estimated accurately, as illustrated in figure 1.\\n\\n6- A possible use case of the wasserstein would be for learning the policy of an actor- critic in reinforcement learning. In [Schulman et al. 2015] it was shown that using the Fisher natural gradient can improve the training of a policy when its density is available. Recently, [Tang and Agrawal 2019] considered a new class of policies that are parametrized implicitly. These new category of policies were motivated by the success of implicit generative models. In this case, however, the density is not available explicitly and might not be well defined. Using the Wasserstein natural gradient could lead to similar improvements as observed for in [Schulman et al. 
2015] for the case when the Fisher natural gradient can be used. We leave this as a future research direction.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for the useful feedback about the high level intuition and the connection with the work in AISTATS 2019.\\n\\n1- We adapted section 2.1 to first present the general perturbative approach for any divergence of distance, we then discuss the particular case of the wasserstein in more detail in Appendix B.3 as well as the connection with the Negative sobolev distance. We highlight the fact that a kernel method was proposed for the negative sobolev distance in the introduction and at the end of section 2.3 and further comment on it after equation 14. \\nin Appendix B.3, we also discuss the difference between the Negative Sobolev distance and the wasserstein metric. In particular, we show that for models with disjoint support - namely dirac distributions - the negative sobolev distance is always infinite while the wasserstein metric will still be finite. We conclude that under some additional regularity assumptions, the wasserstein metric can be obtained as a limit of the Negative Sobolev distance.\", \"2__fisher_gradient\": \"Indeed a similar variational form for the fisher gradient holds as made more explicit in Proposition 2 of the revised version. One can also use an implicit model in the case of the Fisher gradient. However, if the model doesn\\u2019t admit a density, the variational expression will be infinite. As an illustration, this can be done in the case of the normal and log-normal distributions considered in Figure 4, but not in the case of the hyper-sphere (figure 1). We did not yet implement the variational form for the Fisher, however - this will be an interesting topic for future work. On the other hand, KFAC and eKFAC implement an approximation to the Fisher gradient to which we compare our method on cifar10 an cifar100.\", \"3__the_constraint\": \"we corrected this in the revised version. The constraint is not needed in the case of the wasserstein distance because the objective function appearing in the variational formulation only depends on derivatives of f.\", \"4__diagonal_conditioning\": \"We run an ablation study to analyse the contribution of the diagonal scaling alone. Figure 7 (red) shows that using the same diagonal conditioning D instead of KWNG doesn\\u2019t match the performance of the proposed method: (76% test accuracy vs 90% for KWNG ). However, it slightly improves on to second best method which was SGD with Momentum (73% test accuracy) Figure 3.\\nWhen setting the Diagonal term to identity, KWNG is not as effective and its performance drops to 60% which is the same performance as plain SGD (Figure 3). This is consistent with the discussion in section 3.3 about the choice of damping which is particularly important in the ill-conditioned case where the hessian is far from being isotropic. We also note that this behavior is not specific to KWNG, and affects many stochastic second order methods, as discussed in [Martens and Sutskever (section 8.2)].\", \"5__batch_normalization\": \"all experiments used batch-normalization. While we found it helpful in general, it didn\\u2019t overcome the ill-conditioning case, so the preconditioning was still required.\\nWhile spectral normalization ensures the highest eigenvalues are less than one, in principle it doesn\\u2019t affect the conditioning of the weights. 
However, we haven\u2019t tried it in this work and leave it for future work.\", \"6__timing\": \"Using the same setting (batch-normalization and gradient clipping by norm), the cost per iteration is almost 4.7 times larger, which corresponds approximately to the cost of the additional 5 backward passes required by the algorithm to compute the Nystrom approximation of the natural gradient: 0.61s vs 0.13s. Figure 6 (appendix D.1) compares the evolution of the error per second; it is favorable for KWNG in the case of ill-conditioned models and is comparable with the other methods.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for your comments.\\nIt is indeed the case that the considered estimation problem is hard in general, especially as the dimension of the model increases. However, when the model enjoys some regularity properties, the statistical estimation becomes easier. The proposed estimator allows to exploit the regularity of model, whenever possible, to provide a relatively cheap and yet accurate approximation of the natural gradient as shown in the well-specified case (Thm 14) and ill-specified case (Thm 7). \\nIn the paper, the final estimator results from several approximation steps, each one of those steps results in an additional error that needs to be quantified, this is precisely what the two theorems achieve.\", \"1__choice_of__the_kernel\": \"It defines the rkhs and thus depends on the smoothness of the problem. For the problems considered, we found that the rational quadratic kernel worked as well as the gaussian kernel (Figure 7 ).\", \"2__choice_of_the_bandwidth\": \"we propose a heuristic for choosing the bandwidth in section 3.3 which is of the order of the mean distance between samples and basis points. We found that it worked well in practice and essentially avoided saturation effects due to exponentiation. In addition we included a figure 5 shows that the estimator is accurate in a wide range of bandwidths for the 3 synthetic data considered:(normal, log-normal, and uniform on a hyper-sphere). The range where the estimator is accurate depends however on the chosen model: hyper-sphere needs small bandwidth while normal and log-normal requires large bandwidths.\", \"3__invariance_property\": \"Proposition 1 explicitly explains what it is.\", \"4___adaptive_learning_methods\": \"we changed the sentence to: \\u201cUsing adaptive step sizes can help when the principal directions of curvature are aligned with the coordinates of the vector parameters. Otherwise, an additional rotation of the basis is needed to achieve this alignment.\\u201d\\nAdaptive learning methods are indeed powerful optimizers when combined with a suitable parametrization of the model. They perform a diagonal scaling of the gradient to adjust the step-size for each dimension. When the principal directions of curvature matches the coordinates of the gradient this approach would be effective, however these principal direction need not be aligned with the coordinates of the gradient in general. In this case the rotation of the gradient is also needed to first align it with the principal directions of curvature. This doesn\\u2019t prevent from using adaptive methods as they can still help, but using a non-diagonal conditioning (rotation + scaling) could be more effective than (scaling) especially when the curvature varies a lot from one direction to another. \\n\\n5- Equation 1 and 2 are correct, the minus sign compensate for another minus sign that appears after solving the minimization problem. Having the minus sign inside would work as well.\", \"6__distributional_gradient\": \"when the model has a density that is smooth, the Distributional gradient defined in Def 3. admits an expression in terms of the gradient of the density. We added Proposition 12 in appendix C.1 to make it more rigorous. This is a consequence of the general result on derivatives under the integral.\"}",
"{\"title\": \"New revision\", \"comment\": \"We thank all reviewers for the careful review and helpful feedback. We have implemented the suggested clarifications in the revised version (details below), and have run the following additional experiments:\\n\\n1- Sensitivity analysis to the bandwidth of the kernel on the synthetic data in Figure 5 of appendix D.2..\\n2- Comparison between two different choices of the kernel on cifar10 (gaussian vs rational quadratic) in Figure 7 of appendix D.1 \\n3- Comparison with Adam on Cifar10 in Figure 3 of the main text.\\n4- Timing comparison on Cifar10 between the compared methods in Figure 6 of appendix D.1.\\n5- Sensitivity to the choice of the diagonal regularization term D and a comparison with a baseline using only a diagonal preconditioning with matrix D all in Figure 7 of appendix D.1.\\n6- An additional synthetic dataset consisting of hyper-spheres parametrized by their center and radius in Figure 1 of the main text. This model doesn\\u2019t admit a density and is therefore more illustrative for the cases when the Wasserstein natural gradient can be used while the fisher is not well-defined.\", \"in_addition_to_that_we_made_the_following_changes_to_the_revised_version\": \"1- A high level introduction of the natural gradient using a perturbative approach in section 2.1.\\n2- A result showing the invariance to parameterization of natural gradient in the continuous-time limit in Proposition 1.\\n3- A short discussion with the connection with the Negative Sobolev distance and the work of Mroueh 2019 at the end of section 2.2 + a more detailed discussion in Appendix B.3 \\n4- A discussion in section 3.3 about a more stable version of the proposed estimator, which we used in the experiments and which holds when lambda = 0 (ridgeless regression).\\n5- A modification of proposition 4 to cover the case where lambda =0\\n6- A short discussion in section 3.3 about a heuristic for the choice of the bandwidth of the kernel.\\n7- We also deferred the \\u2018well specified\\u2019 case in section 3.4 to the appendix due to space constraints.\\n\\nFinally, we now better highlight the fact that we have implemented a comparison with the Fisher natural gradient on Cifar10 and Cifar100, using the KFAC and eKFAC optimizers, which compute an approximation to the Fisher gradient. However, we did not yet implement the estimator of the Fisher gradient suggested by the variational expression of proposition 2, which is an interesting topic for future work.\\n\\nWe would also like to bring to the attention of the reviewers that the proposed algorithm is competitive in terms of computational cost and overall time as shown in Figure 6 of Appendix D.1.\"}",
"{\"title\": \"Code to reproduce the results\", \"comment\": \"We're sorry to hear you had issues implementing the results. We have posted the code to obtain the figures at the anonymised link: https://www.dropbox.com/sh/2tquwlji582lk5x/AAB8L6p8ZnBF67CscKTIg3kea?dl=0.\\nYou should be able to obtain the results of our method with this code. if you still have difficulties obtaining the figures with our code, then please share your code with us on the openreview site, and we'll do our best to help.\\nSince submission, we have improved our results with better parameter choice, so the results will be improved over those posted in the figures.\"}",
"{\"title\": \"Thank you for your reply! But I believe my previous judgement is correct!\", \"comment\": \"Thank you for clarifying your algorithm. I implemented exact the same thing. But the performance is very bad. As the authors mentioned about the ill-conditioned cases? Why did not compare with ADAM and RMSPROP? SGD with appropriate hyperparameter can even solve such problem.\\n\\nAfter testing the proposed Wasserstein Natural Gradient (NGD), I found it is very problematic. The reported results almost did not compare with any popular methods.\"}",
"{\"title\": \"Reply\", \"comment\": [\"Hi XY, thank you for your interest in our paper, and for your work in checking the reproducibility of our results. We are aware that there are some subtleties in the implementation, and we\\u2019d be happy to help you get the method running. Very importantly, there is a version of the algorithm which is more stable numerically and involves first performing an svd on CC^T = US^t and then pre-multiplying T by S^{\\\\dagger}U^tT, this leads to a simplified expression which enjoys a better conditioning. We will be giving the code of our core algorithm in a separate review reply comment. We suggest that you compare this with your code, and run a sanity check that this works as expected to reproduce the Figure 1 result. If so, then try Figures 3 and 4. We will shortly be posting the anonymised code to run the entire fig. 3 and 4 experiments to the rebuttal thread, as soon as we have it ready. Meanwhile, we\\u2019re happy to answer any further questions.\", \"Regarding SGD+Momentum+WD, are the performances that you mention achieved for the well conditioned case (c,d) or for the ill-conditioned one (a,b), on which dataset? For the well conditioned case SGD+momentum achieves the best performance on cifar10 (93.5 test accuracy) which we think matches the performance usually reported for the network used here, Resnet18. The same holds for cifar100. In the ill-conditioned case, we noticed a drop in performance as shown in (a,b), we think it is due to the WD and expect that a smaller weight decay would improve the performance as it is the case for SGD+Momentum (green plot, 72% test accuracy) where the weight decay is set to 0. Is this consistent with the results you have? Could you please tell us what settings you used, and we\\u2019d be happy to investigate further.\", \"Thank you for suggesting Imagenet as an additional dataset. There is an important consideration to take into account which is the dimensionality of the logits. In the case of Imagenet it is 1000 which makes it much higher dimensional compared to cifar10 or cifar100. We think that the proposed estimator would require many more samples to get accurate estimations in the case of imagenet. The theory predicts that the number of basis points M should scale linearly with the output dimension of the network (Theorem 5). While, Theorem 5 is an asymptotic result in (5), a finite sample size bound in N (Proposition 12) also shows that N is required to be greater than a certain threshold that grows linearly with the output dimension. This is also confirmed in the experiments: In figure 1, the error increases as the dimension of the model increases. In Figure 3 and 4, there is a drop in performance as we move from cifar10 to cifar100 dataset. On the other hand, the performance on cifar100 increases when the batch-size is increased from 128 to 512.\", \"More generally, if the model has already a good conditioning one should expect little benefit in using second order methods which are generally more sensitive and require additional parameters like the damping.\"]}",
"{\"title\": \"The algorithm's performance is much worse than SGD, Adam, and any common stochastic optimization algorithm\", \"comment\": \"Initially, this paper was very interesting to me. I tried to implement the algorithms and apply the proposed algorithms to train other models. I cannot reproduce figures 3 and 4. I wish the others and authors can check these results carefully. Also, these results are very misleading, SGD + Momentum + WD with proper learning rate decay can get much better results than the reported ones. Moreover, such algorithms should also be tested on the ImageNet. I tried it, but the performance is poor compared with any common stochastic optimization algorithm.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Natural gradient has been proven effective in many statistical learning algorithms. A well-known difficulty in using natural gradient is that it is tedious to compute the Fisher matrix (if one is using Fisher-Rao metric) and the Wasserstein information matrix (if one is using Wasserstein metric). It's important to be able to estimate natural gradient in a practical way, and there have been a few papers looking at this problem but mostly for the case with a Fisher-Rao metric. This paper takes a different and general approach to approximate natural gradient by leveraging the dual formulation for the metric restricted to the Reproducing Kernel Hilbert Space. Some theoretical guarantees of the proposed method is established together to some experimental study.\\n\\nI find this work interesting with some important merit, as it tackles an important problem in statistical learning. My main concern, however, is the problems related to RKHS from a practical point of view. For example, solving optimization problem (11) is difficult and the paper makes a range of further approximations to be able to arrive at an approximate solution. Also, selecting the kernel and its bandwidth is crucial in practice. From a practical point of view, I suspect that more evidence is needed to justify if the proposed method can really offer a method of choice. \\n\\nHaving said that, I believe this paper provides an important first (and alternative) step towards an important problem. The paper is also well written and well structured. I have a few further comments below\\n1) In the abstract and introduction, the invariant property of natural gradient is mentioned several times without a detailed explanation why/what it is. Adding a brief explanation of this property is appreciated.\\n2) The sentence on line 8 in Introduction reads \\\".. It can be not alleviated by using adaptive step size...\\\". This is when the authors are talking about the adaptive learning methods. Is this a too strong comment about the adaptive learning methods? Can the authors know for sure that these methods cannot be used here?\\n3) Equations (1) and (2): Are they correct? should the minus sign just be in front of the first term \\\\mathcal{M}_t(u) only?\\n4) Page 4, first line after Def 3: \\\"one covers the usual gradient \\\\nabla\\\\rho_\\\\theta(x)\\\". It is not very clear (to me) how to get this. Can the authors please elaborate more on this?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper is not easy to follow and the high level intuition how the method works is not well explained. \\n\\nIt would be easier for the reader, to motivate the natural wasserstein descent from how one defines natural Fisher descent , where one seeks a first order approximation of $KL(p_{\\\\theta},p_{\\\\theta+ \\\\epsilon u})$ as we perturb in the parameter space and this well known that this epsilon $u^{\\\\top}F u$.\", \"summary_of_the_paper\": \"The paper provides a way to estimate the natural Wasserstein gradient using Kernel estimators. The idea is neat and novel. Natural Wasserstein Gradient similar to the so called natural fisher gradients preconditions the gradient using a matrix that uses the local curvature of the manifold of the parametric distribution. \\n\\nAuthors give variational forms of the Fisher information matrix of an explicit model , using the variational form of the chi squared or the Fisher Rao divergence. Similarly authors give a variational form of the wasserstein natural gradient . Let theta be the parameter of the parametric implicit model, theta in R^q. For a descent direction $u$, the variational form is obtained via finding an objective S, $\\\\sup_{f\\\\in C^c_{\\\\infty}} S(f, u ) = u^{\\\\top}G_{W}u$, where $G_{W}$ is a form of \\\"Wasserstein information matrix\\\".\\n\\nAuthors then propose to learn the function f in an RKHS and propose to find the descent direction by solving\\n $\\\\min_{u} <u, \\\\nabla_{\\\\theta} Loss (p_{\\\\theta})> + \\\\sup_{f\\\\in RKHS} S(f, u ) + r(u)- \\\\lambda ||f||^2_{rkhs}$\\nwhere r(u) is a quadratic regularizer on u. \\n\\nThe sup problem has a closed form solution and can be approximated using Nystrom approximation and randomization on dimensions. The problem in u has also a closed form solution , and one used u as the proxy to the natural descent. \\n\\nAuthors under some assumption show that the estimated natural W gradient in RKHS is concentrated around the true one. 
\\n\\nExperiments on synthetic data and on classification on CIFAR 10 and CIFAR 100 show that the preconditioning of the gradients that the method offers allows faster convergence for both well-conditioned and ill-conditioned initializations of the weights of the neural network.\", \"hence_natural_gradient_descent_is\": \"$\\min_{u} <u, \\nabla_{\\theta}Loss(p_{\\theta})> + KL(p_{\\theta},p_{\\theta+ u}) \\approx \\min_{u} <u, \\nabla_{\\theta}Loss(p_{\\theta})> + u^{\\top}F u$\", \"now_for_the_wasserstein_distance_one_has_also_similarly\": \"$\\min_{u} <u, \\nabla_{\\theta}Loss(p_{\\theta})> + W^2_2(p_{\\theta},p_{\\theta+ u})$\", \"and_it_is_known_that_as_epsilon_goes_to_zero_we_have\": \"$W^2_2(p_{\\theta},p_{\\theta+ \\epsilon u}) = ||p_{\\theta} - p_{\\theta+ \\epsilon u}||^2_{H^{-1}(p_{\\theta})}+o(\\epsilon^2) = \\sup_{f} \\int f (p_{\\theta} - p_{\\theta+\\epsilon u}) - \\frac{1}{2} \\mathbb{E}_{p_{\\theta}}||\\nabla_x f(x)||^2+o(\\epsilon^2) $\", \"now_replacing_with_the_implicit_model_as_epsilon_goes_to_zero_we_get_the_expression_given_in_the_paper_using_a_simple_taylor_expansion\": \"$= \\sup_{f} \\int <\\nabla_{\\theta} h_{\\theta}^{\\top}\\nabla_x f(h_{\\theta}),u > d\\nu - \\frac{1}{2}\\mathbb{E}_{p_{\\theta}}||\\nabla_x f(x)||^2$\\n\\nIn a sense, the paper is proposing to linearize $W^2_2$ around the perturbation in the parameter space of the implicit model, and this can be done using $||.||_{H^{-1}(q)}$, as pointed out and used in many recent works. Then the paper proposes to approximate $||.||_{H^{-1}(q)}$ in an RKHS, which was already proposed by Mroueh et al. in Sobolev Descent (AISTATS 2019). \\n\\nWe encourage the authors to lay out the derivations from this point of view at the beginning, which will make the paper easier to digest. The expression in Equation 7 seems mysterious and pulled out of a hat, but it is easier to understand by going through the perturbation analysis usually done on KL for the Fisher natural gradient, doing it here too starting from the linearization of $W_2$ with $||.||_{H^{-1}(q)}$, and then showing how to approximate it in an RKHS as already proposed in the literature in Mroueh et al., Sobolev Descent. \\n\\n\\nI carefully read the proofs of Propositions 1, 2, and 3. I did not fully read the proofs of the concentration of the estimator, but they seem sensible as they follow usual bounding strategies in this context.\", \"questions\": [\"There is nothing special about the Wasserstein natural gradient flow variational form and the implicit model; one can apply the same to the variational form of the Fisher, which would probably be more efficient. It would be great to baseline this one.\", \"The constraint $\\int f(x)p_{\\theta}(x)dx=0$ is not imposed in the kernelized version?\", \"The method is somewhat disappointing since it seems that the preconditioning that the Wasserstein gradient gives is not enough, and $r(u)=u^{\\top}D u$ is needed, where D is a diagonal matrix that depends on T. Have you tried $D=Identity$? It might be that the scaling of the gradients comes only from that $D^{-1}$.\", \"Can you give timings for computing each gradient update and how it compares to regular SGD or a diagonal approximation of the Fisher natural gradient?\", \"Does one need a preconditioned gradient if the network is self-normalized (e.g., batch norm or spectral norm)?\"], \"overall_assessment\": \"This is good theoretical work with provable guarantees. The computational complexity of each gradient estimate is large, which makes the method not very appealing in practice.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose an approximate of the natural gradient under Wasserstein metric when optimizing some cost function over a parametric family of probability distributions. The authors leverage the dual formulation and restrict the feasible space to a RKHS. The authors show a trade-off between accuracy and computational cost with theoretical guarantees for the proposed method, and empirically verify it for classification tasks.\\n\\nThe motivation of the natural gradient is well-motivated. Although the choice of Wasserstein metric is sound, especially for models that do not admit a density, it seems that there are no supporting experiments for this choice over Fisher Information metric. In general, the writing is fine. The flow idea is clear. However, the content is quite dense. Some assumptions just pump out without careful judgment. The idea to restrict into RKHS and use low-rank approach is interesting to approximate for the natural gradient under Wasserstein metric. Overall, I lean to the acceptance side.\", \"below_are_some_of_my_concerns\": \"1) It seems that the natural gradient under Wasserstein metric is well-motivated for models which do not admit a density (to compare with the natural gradient under Fisher information metric). However, it seems that there is no supporting experiments about it yet. For models in the experiments, it is better to show a comparison between natural gradient under Wasserstein metric and Fisher information metric w.r.t. time consumption and accuracy.\\n\\n2) In proposition 3 and theorem 5, they require some assumptions. The authors should place those assumptions into the main text instead of only putting it in the appendix, and should give more discussions about those assumptions. Especially, for assumption (D), why one can have this assumption? It seems that this assumption (D) has a strong influence to the complexity in Theorem 5? More detail discussion is required.\\n\\n3) For the relaxation in Equation (9), it seems that the authors do not simply add some regularization terms. How does it relate to the original Equation (7)? What is the meaning of the 3rd term in Equation (9)? and how's about the 2nd term?\\n\\n4) For the experiments, the authors evaluate the multivariate normal model and the multivariate log-normal model which are very special cases under Wasseserstein information matrix where one can compute in closed-form. The authors should show some general models, especially models which do not admit a density. For the experiments in Section 4.2, the authors should add the natural baseline: natural gradient under Fisher information metric. It is unclear to me why one needs natural gradient under Wasserstein metric over Fisher information metric for this setup? What is the benefit to use natural gradient under Wasserstein metric?\"}"
]
} |
rygGQyrFvH | The Curious Case of Neural Text Degeneration | [
"Ari Holtzman",
"Jan Buys",
"Li Du",
"Maxwell Forbes",
"Yejin Choi"
] | Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops.
To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass.
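A minimal sketch of the core truncation step, in NumPy-style code (the function and variable names here are illustrative only, not taken from any released implementation):

```python
import numpy as np

def nucleus_sample(probs, p=0.95, rng=None):
    """Sample one token id from the smallest set of most-probable tokens
    whose cumulative probability mass is at least p (the nucleus)."""
    probs = np.asarray(probs)  # 1-D array of next-token probabilities summing to 1
    rng = rng if rng is not None else np.random.default_rng()
    order = np.argsort(probs)[::-1]                    # token ids, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1   # smallest prefix with mass >= p
    nucleus = order[:cutoff]
    weights = probs[nucleus] / probs[nucleus].sum()    # renormalize within the nucleus
    return int(rng.choice(nucleus, p=weights))
```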
To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text. | [
"generation",
"text",
"NLG",
"NLP",
"natural language",
"natural language generation",
"language model",
"neural",
"neural language model"
] | Accept (Poster) | https://openreview.net/pdf?id=rygGQyrFvH | https://openreview.net/forum?id=rygGQyrFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"uQ67MdQiX6",
"rkelnCk3sH",
"SJx4wSfjor",
"BygK6is5jH",
"BJeDcioqjH",
"B1ldSoo5jS",
"SygdmsocjB",
"Hyxdbsj9jB",
"S1lugevycS",
"Skxc3_RpFB",
"rJeVL4b6KS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727766,
1573809831660,
1573754204145,
1573727169154,
1573727118903,
1573727039961,
1573727008506,
1573726976032,
1571938288512,
1571838129541,
1571783755732
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1608/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1608/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1608/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1608/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1608/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1608/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1608/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1608/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1608/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1608/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents nucleus sampling, a sampling method that truncates the tail of a probability distribution and samples from a dynamic nucleus containing the majority of the probability mass. Likelihood and human evaluations show that the proposed method is a better alternative to a standard sampling method and top-k sampling.\\n\\nThis is a well-written paper and I think the proposed sampling method will be useful in language modeling. All reviewers agree that the paper addresses an important problem. \\n\\nTwo reviewers have concerns regarding the technical contribution of the paper (i.e., nucleus sampling is a straightforward extension of top-k sampling), and whether it is enough for publications at a venue such as ICLR. R2 suggests to have a better theoretical framework for nucleus sampling. I think these are valid concerns. However, given the potential widespread application of the proposed method and the strong empirical results, I recommend to accept the paper.\\n\\nAlso, a minor comment, I think there is something wrong with your style file (e.g., the bottom margin appears too large compared to other submissions).\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Follow-up and Initial Stochastic Beam Search Results\", \"comment\": \"Thank you for your quick reply and engagement with our response.\", \"we_appreciate_your_acknowledgement_of_the_empirical_analyses_performed\": \"Our perspective is that such analyses are vital to understanding the current landscape of generation, and that the analysis of methods, metrics, and models are key to studying text generation more rigorously.\\n\\nIt is true that picking the highest scoring sample generated by stochastic beam search may help to alleviate the incoherence caused by sampling from the tail. We have run stochastic beam search with a beam size of 4 (we did not have enough compute to run larger beam sizes within the given time window) and found the numbers to be very close to pure sampling:\\n\\n 1) The perplexity of the language model on text generated by stochastic beam search is 21.19, very close to pure sampling (22.73) and much higher than the perplexity of human text at 13.08. \\n 2) Self-BLEU4 is 0.30 matching the human distribution, where pure sampling was slightly too diverse at 0.28.\\n 3) The Zipf Coefficient is 0.92, slightly lower than human text (0.93) where pure sampling matched the human distribution.\\n 4) As you suggested, repetition is lower using stochastic beam search, with only 0.06% of generations ending in a repetition loop. However, this actually underestimates repetition in naturally occurring human text at 0.18%.\\n 5) HUSE requires human labels, which we could not obtain due to the limited time window, but which we will include in the final version.\\n\\nThese initial numbers, especially the high perplexity, suggest that the issue of the incoherence of pure sampling generations is still present in stochastic beam search. In the final paper we will also include multiple beam sizes for a comprehensive comparison.\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"I have carefully read all the responses from the authors. Considering the merits of the empirical analyses performed, I have raised my score from Weak Reject to Weak Accept.\", \"about_stochastic_beam_search\": \"I agree that stochastic beam search also samples from the original distribution. However, due to the nature of the beam search, it will remove the issues of the incoherence caused by independently sampling each word from the original distribution. It should avoid generating repetitive sentences. This baseline should be compared and discussed.\"}",
"{\"title\": \"Response (2/2)\", \"comment\": \"-- Comparing different versions of GPT-2 -- (Re: Minor Issue 3)\\n\\nRadford et al. also released high-quality generations from the smaller GPT-2 model - our focus is on the decoding strategy used rather than the model choice. We have updated Figure 1\\u2019s caption to make this clear. The full GPT-2 model was not publically available at the time of submission.\\n\\nReferences\\n\\nAmmanabrolu et al., 2019. \\\"Guided Neural Language Generation for Automated Storytelling.\\\" Proceedings of the Second Workshop on Storytelling.\\n\\nAnonymous, 2019. \\\"Neural text generation with unlikelihood training.\\\" https://openreview.net/forum?id=SJeYe0NtvH\"}",
"{\"title\": \"Response (1/2)\", \"comment\": \"Thank you for your positive overall assessment. We gladly respond to your concerns and provide clarifications that we hope will clear up some potential misunderstandings:\\n\\n\\n-- Evaluating Open-Ended Generation -- (Re: Con 1)\\n\\nEvaluating open-ended generation is a hard problem for which there are currently only partial solutions, but we think this should encourage rather than discourage further work towards proper evaluation. Developing better models and better evaluation criteria go hand in hand\\u2014while we propose several criteria in the paper, we do not believe that any one of them is sufficient to use directly as a training criteria.\\n\\n\\n-- Why Use Cross-Entropy Loss? -- (Re: Con 1)\\n\\nLarge language models such as GPT-2 are the best currently available models for general purpose text generation. While it is possible that training criteria other than cross-entropy could result in a better model, most other currently available criteria are not differentiable and not as scalable. In practice other training criteria such as GANs have been shown to lead to worse generation quality than cross-entropy training (see references in 2.1).\\n\\n\\n-- Why Sampling is Necessary for Good Generation -- (Re: Con 1)\\n\\nTo generate text from large language models we believe that some form of sampling is required precisely because in open-ended generation maximum probability texts do not match the human distribution of text. Indeed, we show that to match the human distribution (in terms of perplexity) it is actually better to perform truncated sampling than pure sampling (at least with current models).\\n\\n\\n-- Why Compare to Beam Search? -- (Re: Con 2)\\n\\nThe reason for comparing to beam search is that it has indeed been used in recent conditional open-ended generation work (Ammanabrolu et al., 2019, Anonymous, 2019). Furthermore, we are interested in finding the best method to generate high quality text with the same diversity of vocabulary as human text, rather than generating a diverse set of samples. Beam search could reasonably be a way to achieve that, although in practice we show that the quality of text that it generates is deficient.\\n\\n\\n-- Why is the Tail of the Distribution Considered Unreliable? -- (Re: Con 3)\\n\\nOur hypothesis is that the (relative) probability estimates of words within the tail are inaccurate (either too high or too low), rather than the overall p(tail). Pure sampling generates text which does not match the human distribution (as measured by perplexity) because when semantically inappropriate words (whose probability estimates are presumably too high) are sampled from the tail, that throws the sampled sequence off the correct distribution, leading to incoherence in practice. Therefore, lacking a better underlying model, the best solution is not to sample from the tail.\\n\\n\\n-- Novelty of Analysis -- (Re: Con 4)\\n\\nFirstly, in this paper we offer automatic metrics for choosing either p or k \\u2014 analysis lacking from previous work \\u2014 which enables us to show that higher values of k should be used. We think, conceptually, that having a dynamic k fixed by p is better than having a dynamic p fixed by k (as explained in section 3.2 and figure 4). Qualitatively the exact choice of p between 0.9 and 0.99 appears to make relatively little difference in generation quality. 
For top-k sampling, coherence deteriorates when k is too large, and in practice (through small-scale expert evaluations) we found it hard to find a value of k that performs well on our automatic metrics while being as coherent as text generated by Nucleus sampling. To clarify this point, we will add an expert evaluation in the final version.\\n\\n-- Perplexities of Human Text -- (Re: Question 1)\\n\\nWe report the perplexities of the original model on text produced by each method. The column with \\\"human\\\" perplexity is the perplexity of the original model on the human-written continuations in our experimental setup.\\n\\n\\n-- Scope of the paper -- (Re: Minor Issue 1)\", \"to_clarify\": \"we don't wish to claim that Nucleus Sampling is something other than a heuristic or that it \\\"solves\\\" open-ended generation. We do aim to give a better understanding of the problem of open-ended generation, the various methods that have been proposed to address it, and ways to evaluate them, rather than just comparing Nucleus Sampling to other generation strategies.\\n\\n-- Open-ended vs directed generation -- (Re: Minor Issue 2)\\n\\nThe distinction between open-ended and directed generation is not the same as the distinction between conditional and unconditional generation\\u2014indeed we use a conditional setting for open-ended generation in this paper (section 4.1). The distinction is that in open-ended generation there is much more uncertainty in the conditional distributions, which means that in practice decoding methods that work for directed generation, where the output is close to a direct transformation of the input (and therefore has low uncertainty), do not work for open-ended generation.\"}",
"{\"title\": \"Response (1/1)\", \"comment\": \"Thank you for your positive assessment.\\n\\n-- Theoretical Grounding --\\n\\nWhile we don't have a theoretical proof of Nucleus Sampling, our paper does provide strong empirical evidence to justify truncated sampling in general and Nucleus Sampling in particular. Our most principled justification lies in analyzing the perplexity of generated text, which shows that, to match the perplexity of human written text, some form of truncated sampling has to be performed and that empirically this is correlated with generation quality. We suspect that this may be due to current large language models not fitting the underlying distribution optimally, but addressing that lies outside the scope of this paper. \\n\\n\\n-- Novelty and Insight --\\n\\nIn terms of novelty, we would like to highlight three main points. First, we provide insight into why truncation is necessary and how best to truncate the distribution of neural language models, analysis not performed in the papers that introduced Top-k sampling (only introduced last year) where it was described as a detail of decoding. Second, we provide the first side-by-side empirical analysis on how the quality of language generated by different LM decoding methods compares. Identifying the weaknesses and missing inductive biases of these methods will aid future work grappling with the theoretical implications of different methods. Finally, despite being \\\"just another way of truncating the distribution\\\", Nucleus Sampling provides a practical solution for generating high-quality text in various applications that mimics the human distribution more faithfully than competing methods.\"}",
"{\"title\": \"Response (2/2)\", \"comment\": \"-- Comparison to Stochastic Beam Search -- (Re: Con 5)\\n\\nStochastic beam search (Gumble-top-k Beam Search) was proposed with a different motivation -- obtaining (pure) samples from the original distribution in parallel. The Gumble-top-k method was proposed to make beam search stochastic without truncating the distribution (as in top-k sampling or standard beam search). Our empirical findings, however, suggest that neural language models are unreliable estimators of the tail of the vocabulary distribution. Thus we intentionally truncate the search process to the head distribution and show that this produces higher quality generations that are closer to the human distribution of language. As shown in our experiments with pure sampling, sampling from the full distribution produces text that is more incoherent than decoding methods that use truncation.\\n\\n\\n---\\n\\nIn conclusion, the contribution of this paper lies as much in its technical analysis of the problem of text generation with large language models as in proposing a particular method that is robust and works well in practice.\"}",
"{\"title\": \"Response (1/2)\", \"comment\": \"Thank you for your positive comments on our paper's motivation, presentation and evaluation. We gladly respond to your concerns:\\n\\n-- Novel Insights on Top-k Sampling and Beyond -- (Re: Con 1) \\n\\nWe would like to emphasize that a primary contribution of this paper is the analysis into why truncation (such as Top-k sampling or Nucleus sampling) works and why it is necessary at all, analysis not performed in the papers that introduced Top-k sampling (introduced only as of last year), where it was described as a minor detail of decoding. We provide the first side-by-side empirical analysis and insight into the quality of language generated by current LM decoding methods to show that the tail of the distribution is unreliable. Identifying the weaknesses and missing inductive biases of these methods will aid future work grappling with the effects of different training and decoding methods. \\n\\n-- Comparison with Top-k Sampling -- (Re: Cons 1 & 2)\\n\\nWhen predicting the next word in a sequence, there will usually be a set of words that are plausible continuations, and a (usually much larger) set of words that are implausible (based on grammar or semantics). Nucleus Sampling captures the intuition that the size of the set of plausible next tokens will vary across different contexts, and can be approximated based on the probability distribution, rather than assuming a fixed-sized shortlist, as is the case with top-k sampling.\\n\\nTop-k sampling with a large value of k will cover most plausible next tokens, but also in some cases include inappropriate candidates that Nucleus Sampling would have excluded. Renormalization will increase those inappropriate candidates' probability of being sampled, which can degrade generation quality. This is further motivated in section 3.2 and Figure 4. \\n\\nImportantly, in this paper we provide extensive analysis showing why we need to perform truncated sampling to generate high-quality text from current large language models, which the papers proposing top-k sampling did not provide. Our analysis, for example, enables us to show quantitatively that top-k sampling works better with larger values of k than commonly used.\\n\\n-- Issues with Using Large k in Top-k Sampling -- (Re: Con 4)\\n\\nIt is true that with a large enough k, Top-k can theoretically produce any sentence Nucleus Sampling can. In fact, pure sampling subsumes both Top-k and Nucleus Sampling in this sense. The problem, however, is that when using Top-k sampling with k large enough to generate sentences produced by Nucleus Sampling, it is also capable of generating other sentences with low coherence.\\n\\n-- Ease of Choosing Decoding Hyper-parameters -- (Re: Con 2)\\n\\nWe believe that the choice of p is more intuitive than the choice of k because it more directly relates to the intuition that the sets of plausible and implausible candidate token can be captured by the head and the tail of the probability distribution. Equally importantly, in this paper we offer automatic metrics for choosing either p or k that were lacking from previous work. Qualitatively the exact choice of p between 0.9 and 0.99 appears to make relatively little difference in generation quality. For top-k sampling, coherence deteriorates when k is too large, and in practice (through small-scale expert evaluations) we found it hard to find a value of k that performs well on our automatic metrics while being as coherent as text generated by Nucleus sampling. 
To clarify this point, we will add an expert evaluation in the final version.\\n\\n\\n-- Underlying Uncertainty in Natural Language -- (Re: Con 3)\\n\\nWe agree that there is still room for improvement in underlying language models for generating long-form text. However, there will always be uncertainty in what to say next, because there is real underlying uncertainty in language itself: it is extremely unlikely that language models can achieve perplexities in the neighborhood of 1.5, which is the perplexity we show greedy and beam search generations have.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In the domain of language models, the paper introduces a new heuristic sampling method called top-p sampling, or nucleus sampling (NS). It is a variant of top-k sampling where the smallest k is selected to ensure the combined likelihood is no less than p. The paper centers on claiming and showing that the generated samples are of higher quality and more diverse than common alternatives such as beam search, pure sampling, top-k sampling, and low-temperature sampling.\\n\\nWhile overall I think the proposed method is sound as an alternative to other heuristics such as beam search, I have reservations on the presentation and arguments made in the paper.\", \"pros\": \"1. NS is sound as a heuristic sampling method.\\n2. The paper contains many interesting experimental observations and I speculate that some of them will find future uses. For example, the selection of parameter values (not just for NS, also for top-k) and the nontrivial perplexity of generated text.\", \"cons\": \"1. The ultimate performance measure (open-ended generation) of \\u201chigh quality\\u201d and \\u201cdiversity\\u201d is very vague. It seems that the authors end up doing is to evaluate by high self-BLEU, HUSE, few repetitions, and perplexity. Furthermore, it is unclear why one _should_ train with cross-entropy (trying to match the distributions) and then rely on the sampling procedure to fulfill these desiderata (See also Min1 and Min2).\\n2. The comparison with beam search (BS) is not well motivated. BS is devised to find the maximal sentence and it is not stochastic. It seems out of place in the context of generating a \\u201cdiverse\\u201d set of samples.\\n3. The arguments in the comparison with pure sampling is vague and sometimes misplaced. The key argument seems to hinge on the idea that the low likelihood tail is of \\u201clow confidence.\\u201d But this claim is problematic. If the estimate is wrong on the low probability tail, then so is the estimate on p(head) = 1-p(tail) by virtue of p being a probability measure. \\n4. The arguments in the comparison with top-k is vague and sometimes misplaced. The main argument against top-k is the \\u201c[d]ifficulty in choosing a suitable value of k\\u201d but the same can be said for choosing p. After all, top-k and top-p (NS) can be thought of as a variant of each other (by dynamically choosing k or p respectively). Moreover, in Figure 5, a selection for k value is suggested. I agree that this value might not _appear_ as intuitive as p, and maybe other works have chosen a smaller k than they should have, but similarly, people might intuitively choose too high a value for p (Figure 5). \\n\\nPossible mistakes/typos:\\n1. (2), \\u201c>=\\u201c -> \\u2265.\\n2. Figure 7, the human self-BLEU4 < human self-BLEU5 and that seems wrong, especially when all other bars show the opposite ordering.\\n3. In References, \\u201cAngela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In ACL, 2018a\\u201d is duplicated.\\n4. In References, the citation of \\u201cUnifying human and statistical evaluation for natural language generation\\u201d is from NAACL 2019, not \\u201c2018.\\u201d\\n5. 
In References, the first names are shown as initials in \\u201cSparse forward-backward using minimum divergence beams for fast training of conditional random fields.\\u201d\", \"questions\": \"1. In Table 1, how is Human perplexity estimated?\", \"minor_issues\": \"1. Partly due to what the authors position NS to solve, i.e. open-ended generation, the core arguments is not as precise or rigorous as it could have been in my opinion. I feel that focusing on comparing NS to other heuristics as a heuristic might make the text appeal to a wider audience and the discussion more precise. \\n2. The distinction drawn between open-ended generation and directed generation is unpersuasive to me. In the context of language modeling, the former is to approximate a distribution (over an extended alphabet) whereas the latter is to approximate a conditional distribution (given the input). However, the most common formulation to solve the former is to decompose the distribution into a product of conditional distributions (1).\\n3. The caption in Figure 1 draws a misleading comparison. The \\u201cadmirable\\u201d generation (presumably referring to the OpenAI blog post) was from the full GPT-2 model, not the initially released GPT-2-117M.\\n\\nPlease point out my misunderstanding directly. I am open to acknowledging them and revising my assessment.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper is motivated by an observation that maximization-based decoding approaches such as beam search can lead to incoherent and repetitive sentences when open-ended long-form text generation based on neural language model such as GPT-2 is performed. To solve the problem, this paper proposes a sampling method called Nucleus Sampling. Similar to Top-k sampling, Nucleus Sampling truncates the probability distribution of the words in the vocabulary. Instead of re-normalizing the probabilities for the top-k words, Nucleus Sampling re-normalizes the original probabilities for the words with values above a pre-chosen threshold p. Some quantitative and qualitative results show that the proposed sampling method can generate long-form texts with some nice properties.\", \"pros\": \"The problem addressed in this paper is highly interesting, and the proposed method is simple and intuitive. The paper is well motivated and the method is clearly presented.\\n\\nExtensive quantitative and qualitative experiments are conducted to compare different sampling methods.\", \"cons\": \"1) Although the raised problem in this paper is interesting, the proposed Nucleus Sampling seems to be a trivial variant of Top-k sampling. With a reasonably large k suitable for different practical problems in question, it is unclear that Nucleus Sampling produces significant advantages over commonly used Top-k sampling. \\n\\n2) The argued difficulty in choosing k in Top-k sampling is not that different from that of choosing the threshold p in Nucleus Sampling.\\n\\n3) In section 4.3, the argument that natural language rarely remains in a high-probability zone is questionable. This happens only because our current neural language models are not well-specified for generating long texts and modeling long-range contexts. \\n\\n4) In section 6.2, the qualitative comparison between Nucleus Sampling and Top-k sampling might be caused by randomness. With a large k, there is no technical barrier that prevents Top-k sampling from generating the sentences produced by Nucleus Sampling.\\n\\n5) A recent stochastic beam search method based on Gumbel-max-k (Kool, Hoof, and Welling, ICML 2019) should be discussed and compared. \\n\\nIn summary, although the studied problem in this paper is highly interesting, the proposed Nucleus Sampling is not technically significant compared to Top-k sampling.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Contributions:\\n\\nThis paper studies an important problem, i.e., how to find a good decoding strategy for open-ended text generation. To this end, the authors provide a deep analysis of the most common decoding methods, and propose Nucleus Sampling, a very simple yet effective method to generate higher-quality text. Compared with top-k sampling, the key idea behind the proposed method is to sample from the dynamic nucleus of tokens containing the majority of the probability mass. Experiments demonstrate that nucleus sampling is an effective decoding strategy in practice.\", \"strengths\": \"(1) Writing & Clarity: The proposed method is well motivated, the paper is carefully written, and clearly presented. I enjoyed reading the paper. \\n\\n(2) Experiments: The experiments are also carefully designed. Both quantitative and human evaluation are provided. Quality examples are also shown.\", \"weaknesses\": \"(1) Novelty: The biggest concern that I have is its technical novelty. The proposed method is effective, but it acts more like a useful trick. Also, no theoretical justification is provided, but only some intuitions. So, I would say the novelty is indeed limited. However, given the comprehensive evaluation, and high writing quality, I lean to accept this paper due to its empirical contribution. It seems that this nucleus sampling method can be applied in a wide range of text generation applications. \\n\\n\\n** Minor **\", \"typo\": \"In the line below Eqn. (2), \\\"x \\\\in V^{(k)}\\\" => \\\"x \\\\in V^{(p)}\\\", same typo in Eqn. (3).\"}"
]
} |
HkeZQJBKDB | Universal approximations of permutation invariant/equivariant functions by deep neural networks | [
"Akiyoshi Sannai",
"Yuuki Takai",
"Matthieu Cordonnier"
] | In this paper, we develop a theory about the relationship between $G$-invariant/equivariant functions and deep neural networks for a finite group $G$. In particular, for a given $G$-invariant/equivariant function, we construct a universal approximator by a deep neural network whose layers are equipped with $G$-actions and whose affine transformations are $G$-equivariant/invariant. Using representation theory, we show that this approximator has exponentially fewer free parameters than usual models. | [
"finite group",
"invariant",
"equivariant",
"neural networks"
] | Reject | https://openreview.net/pdf?id=HkeZQJBKDB | https://openreview.net/forum?id=HkeZQJBKDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"xYRNUWTG6",
"BklO8pH9oB",
"BkeHJTrqir",
"HklCuhrcsS",
"rklrHsfJ5B",
"HJxLzccRYS",
"SJlwDP3aYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727738,
1573702992321,
1573702876908,
1573702774358,
1571920701427,
1571887630127,
1571829598812
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1607/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1607/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1607/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1607/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1607/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1607/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The article studies universal approximation for the restricted class of equivariant functions, which can have a smaller number of free parameters. The reviewers found the topic important and also that the approach has merits. However, they pointed out that the article is very hard to read and that more intuitions, a clearer comparison with existing work, and connections to practice would be important. The responses did clarify some of the differences to previous works. However, there was no revision addressing the main concerns.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank you for your constructive comments.\\n\\n> For me, something that was not clear was how exactly this work is different from Zaheer et al., (2017). It would be nice if these differences were spelled out for me.\\n\\nIn the paper of Zaheer et al, the group is the symmetric group $S_n$ and the action deals only with permutation. In that situation, we proposed an invariant or equiavariant DNN model with $ aI + b11^T$ as an affine map between the middle layers, and experimented with various tasks to demonstrate its performance. The paper essentially shows that the universal approximation theorem for invariant functions. However, it was not mentioned whether any equivariant function could be represented in their model. Our result means that any equivariant function can be expressed, allowing a little different affine transformations. In particular, applying our results as a substitution action for $S_n$, the affine transformation from the input layer to the first hidden layer is different from $ aI + b11^T $. In that sense, it is different from the model of Zaheer et al. If there is an opportunity to modify, we will write down what affine transformations will be made and add them to the final version for clarity of comparison.\\n\\n> It was also not clear why it was necessary to present the contribution that with each symmetry you introduce you reduce the number of parameters exponentially? I guess the intuition behind this is clear, but I was wondering why this is necessary to spell out with a Theorem (2.3 I believe). What exactly doe this result imply?\\n\\nAs you can see from the curse of dimensionality, the reduction of parameters is closely related to the number of samples required to perform the actual task, and neural networks with symmetry has fewer parameters than usual. Showing them gives the intuition that the required number of samples is reduced when we adapt it to the actual task. We are not yet able to mathematically formulate this, but we believe that the result of reducing the number of parameters is worth writing in the paper.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank you for your definitive comments.\\n\\nThe crucial point of this paper is exactly what you pointed out. Our original motivation was to prove a universal approximation theorem with DNN with affine transformation which commutative with group action. The proof of our universal approximation theorem for general finite groups is with very complicated notations, and is very similar to the proof for the symmetric group described in the paper. So we decided to write around it. \\nAs you mentioned, the author understands that it is most important to be able to reduce the discussion of equivariant vector-valued functions to a discussion of few invariant functions. Therefore, we think that we should have described in detail how to write a $G$-equivariant function for a general finite group $G$ with an invariant function for some stabilizer groups. If allowed, we would like to revise the final version to emphasize it.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank you for your constructive comments.\\n\\n> - Giving an application and the corresponding construction where the increased generality w.r.t. the graph networks of (Keriven & Peyr\\u00e9 2019)\\n- In the case of a graph network already addressed by (Keriven & Peyr\\u00e9 2019) , how would this construction apply and how would the constructed network compare?\\n\\nThey prove the universal approximation theorem by graph neural networks. This is because the adjacency matrix of the graph corresponds to rank 2 tensor which is an element of $R^{n ^ 2}$, and the isomorphism classes of the graph correspond to the equivalence classes with a special action of the symmetric group $S_n$. This results in the problem of the universal approximation theorem of invariant and equivariant functions for that action of $S_n$ on $R^{n ^ 2}$. On the other hand, we are giving results for more general actions on vector spaces. In particular, if we apply our results using their actions on tensors, we can obtain a universal approximation theorem as an array of networks of tensors with the action of stabilizer groups of $S_n$. In their results, higher order tensor $R^{n^k}$ with $S_n$-action needs to be considered in the middle layer. This corresponds to considering a hypergraph with edge size $k$. On the other hand, in our method, the result is obtained if the acting group is allowed to be small. We think that one of the important questions is how to interpret the equivalence class of $R^{n^2}$ in the action of the subgroup of $S_n$ as a graph.\\nAlso, it should be noted that the structure of the approximator is different from those of us. While they think of a kind of algebra and construct approximator using it, we regard equivariant vector-valued functions as an array of invariant functions and plug-in an approximator for invariant functions. As Reviewer #1 pointed out, this is an essential advantage of our results. Our method of construction also provides a framework for adapting what can be calculated for invariant functions to equivariant functions for other problems such as generalization errors, and is considered to be expansible.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a universal approximation theorem for functions invariant and equivariant to finite group actions. It constructs the approximations using fully-connected deep neural networks with ReLU activations. It proves a bound on the number of parameters in the build equivariant model. The proof structure uses a decompostion of G-equivariant functions into Stab(1)-invariant functions, which are represented by a particular network.\\n\\nThe proof structure is reasonably well written, however more context and examples could help a reader unfamiliar with the literature. This could be achieved by proposing applications of the construction of the approximating network to a few concrete applications where equivariant representations are needed.\\n\\nAs-is the paper is difficult for a newcomer to the field, compared e.g. with (Maron et al. 2019b) and (Keriven & Peyr\\u00e9 2019) which have clearer expositions. On the other hand, the equivariant representation problem is given in more generality, stemming from finite group actions represented by actions of permutation groups.\\n\\nMy current assessment is \\u201cborderline\\u201d and might change in light of author responses and reviews from reviewers with more experience in this field to judge the significance of the results.\\n\\nI think it is necessary to do a more in-depth comparison with respect to the existing work of (Keriven & Peyr\\u00e9 2019).\\nIn this regard, I think the following could be done:\\n- Giving an application and the corresponding construction where the increased generality w.r.t. the graph networks of (Keriven & Peyr\\u00e9 2019)\\n- In the case of a graph network already addressed by (Keriven & Peyr\\u00e9 2019) , how would this construction apply and how would the constructed network compare?\\n\\nWhat do the authors call \\\"usual models\\\" in the discussion of Theorem 2.3? I assume this means models that do not exploit the equivariance of the function, but this could be more explicit.\", \"typos\": \"p.5, in the discussion of Theorem 2.1: \\\"> Then, by ... we may assume\\\"\\np.8, Prop 4.1 Proof: n devides M\\np.11: Prop A.1 Proof: can be realized as a permutation action on R^n: R should be in \\\\mathbb\\np.17: \\u201cane\\u201d instead of \\u201cand\\u201d\\n\\n***\\nUpdated review\\n\\nI have the impression all 3 reviewers have had some trouble going through the main messages of the paper. Although - at least in my case - this might be partly due to limited in-depth expertise, this seems to indicates some deeper shortcomings in the writing, as well as the insights and intuitions offered by the paper, in order to be accessible to a larger audience.\\n\\nI still believe the approach may have merits, however I do not recommend acceptance of the paper at its current state. In my opinion, the following points should be considered for a future resubmission:\\n* A friendlier introduction to the matter with more intuitions and examples where the method has interest and distinguishes itself from existing papers, e.g. (Maron et al. 
2019b), (Keriven & Peyr\\u00e9 2019) (Review #2, review #3)\\n* At least 1 example (and possibly more) where the proposed method leads to a practical architecture (Review #1)\\n* A clearer explanation of the particular role played by Sn w.r.t. general finite groups in the construction (Review #1)\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a universal approximator for functions equivariant to finite group action. The idea is to draw a bijection between such equivariant universal approximators and those for functions that are \\u201cinvariant\\u201d to the stabilizer subgroup of the output indices. Using existing results for designing universal invariant approximators, the paper then seems to suggest universal equivariant approximators in the form of neural networks.\\n\\nWhile this is an important topic and the paper -- to the extent that I could follow -- seems to be technically sound, I found the paper very hard to read in part due to numerous grammatical errors.\\n\\nAnother issue is that I don\\u2019t see why the symmetric group is treated separately from the general finite groups. If in the end, the goal is to plug in \\u201ca\\u201d universal G-invariant function, the paper could leave those details out and focus on clarifying its proposed bijection. Could you please comment? Also is there a setting in which this setup leads to a practical architecture?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"*Paper summary*\\n\\nThe authors develop a universal approximation theorem for neural networks that are symmetric with respect to the symmetric group (permutations). They also formally show that the number of free parameters is to train an equivariant network is smaller that the number in a non-equivariant network, leading to better sample complexity.\\n\\n*Paper decision* \\n\\nI have decided to give this paper a weak reject. The contributions are clear and the paper is written well enough for publication. That said, the significance of the contribution is not clear to me, since there are other papers in the literature doing the same.\\n\\nI must admit that the subject material is out of my region of expertise, so my judgement may be a little miscalibrated.\\n\\n*Supporting arguments and questions for the authors* \\n\\nThe paper is clearly written by people who have a firm grasp of their subject. The contributions are well laid out and the following proofs are clearly placed in the paper. In terms of constructive criticism, I think it would be helpful to readers to give an intuition behind why the contributions are necessary and to add a sense of the motivation behind them. This would open up the paper to a broader audience.\\n\\nFor me, something that was not clear was how exactly this work is different from Zaheer et al., (2017). It would be nice if these differences were spelled out for me. It was also not clear why it was necessary to present the contribution that with each symmetry you introduce you reduce the number of parameters exponentially? I guess the intuition behind this is clear, but I was wondering why this is necessary to spell out with a Theorem (2.3 I believe). What exactly doe this result imply?\\n\\nIn terms of clarity, the preliminaries section is well written and quite clear. I enjoyed the summary. That said, it is quite advanced, and it would be useful to point more novice readers to elementary texts, where they could brush up on their group theory. Furthermore, the grammar in places is a bit tenuous, but on the whole the writing is understandable.\"}"
]
} |
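Editor's note: the $aI + b11^T$ affine map from Zaheer et al. that the author responses above contrast with their own construction is easy to state concretely. Below is a minimal numpy sketch of this permutation-equivariant layer together with a check of its equivariance; the parameter values are illustrative.

```python
import numpy as np

def equivariant_affine(x, a, b, c=0.0):
    """(aI + b 11^T) x + c 1 : an affine map R^n -> R^n that commutes
    with every permutation of the coordinates (Zaheer et al., 2017)."""
    return a * x + b * x.sum() + c

rng = np.random.default_rng(0)
x = rng.normal(size=6)
perm = rng.permutation(6)

# Permuting the input permutes the output identically: f(Px) = P f(x).
y = equivariant_affine(x, a=0.7, b=0.1, c=0.05)
assert np.allclose(equivariant_affine(x[perm], 0.7, 0.1, 0.05), y[perm])
```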
HkxWXkStDB | Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation | [
"Raphael Gontijo Lopes",
"Dong Yin",
"Ben Poole",
"Justin Gilmer",
"Ekin D. Cubuk"
] | Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions. While architectural advances have led to improved accuracy, building robust models remains challenging, involving major changes in training procedure and datasets. Prior work has argued that there is an inherent trade-off between robustness and accuracy, as exemplified by standard data augmentation techniques such as Cutout, which improves clean accuracy but not robustness, and additive Gaussian noise, which improves robustness but hurts accuracy. We introduce Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image. Models trained with Patch Gaussian achieve state of the art on the CIFAR-10 and ImageNet Common Corruptions benchmarks while also maintaining accuracy on clean data. We find that this augmentation leads to reduced sensitivity to high frequency noise (similar to Gaussian) while retaining the ability to take advantage of relevant high frequency information in the image (similar to Cutout). We show it can be used in conjunction with other regularization methods and data augmentation policies such as AutoAugment. Finally, we find that the idea of restricting perturbations to patches can also be useful in the context of adversarial learning, yielding models without the loss in accuracy that is found with unconstrained adversarial training. | [
"Data Augmentation",
"Out-of-distribution",
"Robustness",
"Generalization",
"Computer Vision",
"Corruption"
] | Reject | https://openreview.net/pdf?id=HkxWXkStDB | https://openreview.net/forum?id=HkxWXkStDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"OpO-ASrrsz",
"N5TpEAKcyI",
"BJxQWxB5sr",
"HyeLRDxYor",
"BJgEIn_vjS",
"HJlhgh_wjH",
"rylqOtOvoB",
"BkeUdm9QcB",
"H1xPl92pKB",
"HylUyxwTKH"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1577817600138,
1576798727708,
1573699579037,
1573615566022,
1573518412373,
1573518324446,
1573517681945,
1572213614263,
1571830254628,
1571807198390
],
"note_signatures": [
[
"~Dan_Hendrycks1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1606/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1606/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1606/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1606/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1606/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1606/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1606/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1606/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"RE: Why not submerge the image in Gaussian noise?\", \"comment\": \"> But, considering CIFAR image size is only 32x32, a patch of size 25 is quite large, how much is the method different from plain whole image Gaussian then?\\n\\nIf the patch were 32x32, or that the whole image was noisy, then the convnet would never see an image with usual local image statistics. Consequently, clean data becomes out-of-distribution or unforeseen during test time, which is not desirable. With a patch size strictly smaller than the whole image, the network can learn how to respond to noisy images and also usual images.\"}",
"{\"decision\": \"Reject\", \"comment\": \"The paper in its current form was just not well enough received by the reviewers to warrant an acceptance rating. It seems this work may have promise and the authors are encouraged to continue with this line of work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Regarding experiments\", \"comment\": \"> these experiments only shows the empirical behavior of Patch Gaussian over baselines on some sample datasets\\n\\nWe stress that ImageNet and CIFAR are the most studied vision datasets, and for robustness they are the only datasets with standardized benchmarks (ImageNet-C and CIFAR-10-C). \\n\\n> I wish we can design an experiments such that we can see the sensitivity with high frequency directly affected by the patch.\\n> When increasing noise, moving from 0 to total cut out. How the behavior progress?\\n\\nWhile we do not have time to complete this full set of experiments before the rebuttal deadline, we have analyzed the impact of noise and patch size on fourier sensitivity and have added these results to Figure 13 in the Appendix. They demonstrate that the intuition conveyed in the paper around the analysis of the fourier sensitivity plots is accurate and depends on these hyperparameters.\\n\\nSpecifically, for patch size 16, stdev=1.0 (Fig 13, center), the Patch Gaussian model demonstrates sensitivity very similar to that reported in Figure 4. The main difference being that the smaller patch size (16 here vs 25 in Figure 4) makes the model slightly more sensitive to high frequencies. This makes sense since smaller patch size moves the model further away from a Gaussian-trained one.\\n\\nWhen we make the scale smaller (patch size 16, stdev 0.3, left), less information is corrupted in the patch, which moves the model farther from the one trained with Cutout (and therefore closer to a Gaussian-trained one). This can be seen in the increased invariance to high frequencies at the first layer, which is reflected in invariance at test error as well.\\n\\nIf we, instead, make the scale larger (patch size 16, stdev 2.0, right), we move the model closer to the one trained with Cutout. Notice the higher intensity red in the first layer plot, indicating higher sensitivity to high-frequency features. We also see this sensitivity reflected in the test error, which matches the behavior for Cutout-trained models.\\n\\nThis confirms that the frequency-based analysis of the models is accurate and reflects changes in hyper-parameters of Patch Gaussian. We also note that this methodology for studying model robustness is not novel to our paper, and has previously been validated and published in NeurIPS 2019.\"}",
"{\"title\": \"Regarding experiments\", \"comment\": \"I appreciate the experiments that the authors presented in the paper and summarize in the response. However, these experiments only shows the empirical performance of Patch Gaussian over baselines on some sample datasets. I wish we can design more systematic experiments to know how Patch Gaussian works, such as if we can see the Fourier analysis (Fig 4) directly affected by the patch params.\", \"for_example\": \"1. When increasing patch size from 1 to image size, how does the sensitivity to high frequency changes. Is it monotonous or have a particular shape? \\n2. When increasing noise intensity, moving from 0 to total cut out. How does the behavior progress?\\nIf we have these, then at least empirically, we can claim that Gaussian or Cutout are specific case of Patch Gaussian and have a better understanding of the problem and solutions. Hence it would be more assured and easy for the practitioners to find a good operating point in a new setting.\\nTable 5 and graphs in the supplemental list the choices, but the affect of them only shows on performance (which can be biased to dataset), not behaviour which is more insightful.\\n\\nOverall, I buy that this may be a good practice that is useful practically. However, I am not convinced that the authors have fulfil the due diligence on proving the correctness and general behaviour of such technique.\"}",
"{\"title\": \"Response\", \"comment\": \"> Although possibly useful practically\\n\\nWe thank the reviewer for pointing out the practical applications of our method. Indeed, because it is so simple, \\u201cthe approach could become one of the standard mechanisms for data augmentation in the toolset of a practical ML engineer,\\u201d as R1 puts it.\\n\\n> this proposal lacks theoretical base on how and why it would be better\\n\\nWe grant that our work started from an empirical observation. However, we provided an experimental analysis to gain a better understanding of why it works. In particular, Section 5.1 shows that Patch Gaussian seems to allow high-frequency information through at lower layers, but still encourages relatively lower test error sensitivity at high frequencies. Indeed, when we measure accuracy on images filtered with a high-pass filter, we see that Patch Gaussian models can maintain accuracy in a similar way to the baseline and to Cutout, where Gaussian fails to. See Figure 5 for full results.\\n\\nR1 and R2 agree that our Fourier-theoretic analysis is intuitive. In addition, many practically useful techniques, such as Cutout, do not have completely rigorous mathematical analysis.\\n\\n> The experiments are rather limited to support the claim\", \"we_show_extensive_experiments_highlighting_how_patch_gaussian_is_the_only_method_that_retains_the_benefits_of_cutout_and_gaussian\": [\"We characterize a trade-off between robustness and accuracy among two standard data augmentations - Cutout and Gaussian (Section 2.1). Specifically, Cutout improves accuracy on clean test data. Despite this, we find it does not lead to increased robustness. Conversely, training with higher sigma of Gaussian can lead to increased robustness to Gaussian noise, but it also leads to decreased accuracy on clean data. Therefore, any robustness gains are offset by poor overall performance.\", \"We show that our method (Patch Gaussian) allows us to interpolate between the two augmentations above (Section 3.1), and to overcome the observed trade-off, yielding models that are robust to unseen corruptions, while also maintaining clean accuracy (Figure 1, Section 4.1). In doing so, it achieves a new state of the art in the Common Corruptions benchmark on CIFAR-C and ImageNet-C. (Section 4.2), which highlights that simple methods such as ours are competitive with complex training schemes designed for robustness.\", \"We demonstrate that Patch Gaussian can be combined with other regularization strategies (Section 4.3) and data augmentation policies (Section 4.4), and can improve COCO object detection performance as well (Section 4.5).\", \"We perform a frequency-based analysis of models trained with Patch Gaussian and find that they can better leverage high-frequency information in lower layers, while not being too sensitive to them at later ones (Section 5.1)\", \"We are open to suggestions of further experiment proposals that could convince the reviewer of this.\"]}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the thoughtful comments. We provide some answers to the concerns raised below:\\n\\n> I\\u2019m a bit concerned about the significance of the work though. The method is a straight-forward combination of existing methods, so methodologically the novelty is kind of limited.\\n\\nWe agree that the method presented is very simple. However, we\\u2019d like to emphasize that this was done by design. In showing that such a simple method can be competitive with state-of-the-art methods in the robustness literature, we show that complex training schemes may not be necessary for training models robust to unseen distributions. This is, we believe, where the significance of the work stems. Indeed, R1 mentioned that our method \\u201ccould become one of the standard mechanisms for data augmentation in the toolset of a practical ML engineer,\\u201d especially since it\\u2019s so easy to try.\\n\\n> I\\u2019m expecting more insights from the analysis of the results, to gain more understanding of why it works so well.\\n\\nIn Section 5.1, we provide an extensive frequency-based analysis and discussion of why Patch Gaussian works well: Patch Gaussian seems to allow high-frequency information through at lower layers, but still encourages relatively lower test error sensitivity at high frequencies. Indeed, when we measure accuracy on images filtered with a high-pass filter, we see that Patch Gaussian models can maintain accuracy in a similar way to the baseline and to Cutout, where Gaussian fails to. See Figure 5 for full results.\\n\\nWe will re-word this section to clarify these insights to future readers. \\n\\n> A few examples/pictures of success cases (when the method works) and failure cases (when the method doesn\\u2019t work), may help readers (I\\u2019m not an expert) to better understand the approach and get more intuitions?\\n\\nWe thank the reviewer for the suggestion. We have not examined this but we hope to include it in camera-ready. In particular, we expect that images with higher Brightness will be among the most common errors, since Patch Gaussian slightly increases error (mCE 0.592) in these corruptions with respect to the Baseline (mCE 0.582). (see Table 7 in Appendix).\\n\\n> It\\u2019s obvious that Gaussian filter blocks high-frequency components, and Cutout keeps some original parts of the image which allow high-freq details to be captured\\n\\nWe agree with the reviewer that these insights make intuitive sense. Our work provides a quantitative evaluation of this phenomenon to confirm this intuition. Further, through rigorous frequency-based sensitivity analysis we show that Patch Gaussian is able to retain both the high frequency sensitivity of Cutout and robustness gains of Gaussian augmentation.\\n\\n> a patch of size 25 is quite large, how much is the method different from plain whole image Gaussian then?\\n\\nWe remind the reviewer that, while the center of the patch needs to be inside the image, the edges can be outside. This means that, with a patch of size 25, 39.55% of the space is covered in expectation for an image of size 32. Depending on the location of the patch, 16.50% the space is covered (minimum) and other 61.04% is covered (maximum). \\n\\nIn addition, our experimental results clearly show that patch Gaussian performs significantly differently from adding Gaussian noise to the whole image. 
For example, as shown in Table 1 in our paper, for a Resnet-50 model on ImageNet(-C), Patch Gaussian gets a clean test accuracy of 76% and mCE of 0.714, whereas Gaussian data augmentation gets a clean test accuracy of 75.6% and mCE of 0.739.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the positive comments and helpful summary of our contributions. In particular, we appreciate the summary of the insights demonstrated with the frequency-based analysis (Section 5.1). We hope to incorporate a version of this summary in the camera-ready version as we believe it will be valuable to future readers.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a novel data augmentations approach that improves the robustness of a model on the CIFAR-10 and ImageNet Common Corruptions benchmarks while maintaining training accuracy on clean data. To achieve this, the paper proposes a rather simple augmentation mechanism that is inspired by CutOut (DeVries & Taylor 2017) and Gaussian (Grandvalet & Kanu, 1997): adding Gaussian noise to random patches in the image. This simple approach is shown to work surprisingly well on the corruption benchmarks. It seems reasonable that while adding Gaussian noise makes the model robust to high frequency noise, since Gaussian noise is not added everywhere, the model is able to exploit high frequency signal when available in the input. The paper is reasonably well written and the experimental validation is convincing.\\n\\nOverall, the approach could become one of the standard mechanisms for data augmentation in the toolset of a practical ML engineer.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a data augmentation method that interpolates between two existing methods (Cutout and Gaussian), for training robust models towards Gaussian and naturally occurring corruptions. The method is shown to improve robustness without sacrificing accuracy on clean data.\", \"pros\": \"The proposed method, despite being simple, seems to empirically work well in terms of the mCE criterion evaluated in the experiments. This does support the authors\\u2019 claim that current methods haven\\u2019t reached the robustness/accuracy tradeoff boundary yet.\", \"cons\": \"I\\u2019m a bit concerned about the significance of the work though. The method is a straight-forward combination of existing methods, so methodologically the novelty is kind of limited. Hence, I\\u2019m expecting more insights from the analysis of the results, to gain more understanding of why it works so well. However, the presentation of the experiments just seems to aim for the best numbers one can get (I\\u2019m not certain how significant the numbers are to this field though). A few examples/pictures of success cases (when the method works) and failure cases (when the method doesn\\u2019t work), may help readers (I\\u2019m not an expert) to better understand the approach and get more intuitions? The frequency analysis seems quite intuitive. It\\u2019s obvious that Gaussian filter blocks high-frequency components, and Cutout keeps some original parts of the image which allow high-freq details to be captured. But, considering CIFAR image size is only 32x32, a patch of size 25 is quite large, how much is the method different from plain whole image Gaussian then?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a hybrid approach for adding noise to training images of an image classification model. Instead of either cutting out a patch or adding gaussian noise, the authors propose to adding a patch of gaussian noise to the images. Although possibly useful practically, this proposal lacks theoretical base on how and why it would be better, besides the claim that hopefully the combination will combine the benefit and subtract the weakness. The experiments are rather limitted to support the claim.\"}"
]
} |
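Editor's note: Patch Gaussian itself is simple enough to state in a few lines. The following is a minimal sketch consistent with the descriptions in the record above (patch center uniform over the image, edges may extend outside, noise added inside the patch, pixels clipped to [0, 1]); variable names are illustrative. The final lines reproduce the expected-coverage figure of ~39.55% quoted by the authors for a size-25 patch on a 32x32 image.

```python
import numpy as np

def patch_gaussian(img, patch_size=25, sigma=1.0, rng=None):
    """Add N(0, sigma^2) noise inside one square patch of a [0, 1] float image."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape[:2]
    cy, cx = int(rng.integers(h)), int(rng.integers(w))  # center lies inside the image
    half = patch_size // 2
    y0, y1 = max(cy - half, 0), min(cy + half + 1, h)    # patch clipped at the borders
    x0, x1 = max(cx - half, 0), min(cx + half + 1, w)
    out = img.copy()
    out[y0:y1, x0:x1] += rng.normal(0.0, sigma, size=out[y0:y1, x0:x1].shape)
    return np.clip(out, 0.0, 1.0)

# Expected fraction of a 32x32 image covered by a size-25 patch (center uniform):
cover_1d = sum(min(c + 13, 32) - max(c - 12, 0) for c in range(32)) / (32 * 32)
print(cover_1d ** 2)  # ~0.3955, matching the 39.55% figure quoted above
```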
SkgbmyHFDS | What Can Learned Intrinsic Rewards Capture? | [
"Zeyu Zheng",
"Junhyuk Oh",
"Matteo Hessel",
"Zhongwen Xu",
"Manuel Kroiss",
"Hado van Hasselt",
"David Silver",
"Satinder Singh"
] | Reinforcement learning agents can include different components, such as policies, value functions, state representations, and environment models. Any or all of these can be the loci of knowledge, i.e., structures where knowledge, whether given or learned, can be deposited and reused. Regardless of its composition, the objective of an agent is to behave so as to maximise the sum of suitable scalar functions of state: the rewards. As far as the learning algorithm is concerned, these rewards are typically given and immutable. In this paper we instead consider the proposition that the reward function itself may be a good locus of knowledge. This is consistent with a common use, in the literature, of hand-designed intrinsic rewards to improve the learning dynamics of an agent. We adopt a multi-lifetime setting of the Optimal Rewards Framework, and investigate how meta-learning can be used to find good reward functions in a data-driven way. To this end, we propose to meta-learn an intrinsic reward function that allows agents to maximise their extrinsic rewards accumulated until the end of their lifetimes. This long-term lifetime objective allows our learned intrinsic reward to generate systematic multi-episode exploratory behaviour. Through proof-of-concept experiments, we elucidate interesting forms of knowledge that may be captured by a suitably trained intrinsic reward, such as the usefulness of exploring uncertain states and rewards. | [
"reinforcement learning",
"deep reinforcement learning",
"intrinsic movitation"
] | Reject | https://openreview.net/pdf?id=SkgbmyHFDS | https://openreview.net/forum?id=SkgbmyHFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"rOcr1VHdj",
"Hkg-jyQioH",
"SJltPW1ioB",
"SyglkD0diB",
"HJxsOotOiH",
"SJgBqjLujS",
"SJgRvi8djr",
"HkgqriU_iS",
"rygaes8ujB",
"HJlH6qc0YS",
"BJxcM8b5FB",
"HkeUwwNKYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727679,
1573756824837,
1573740897132,
1573607128344,
1573587826611,
1573575564973,
1573575526131,
1573575489530,
1573575412580,
1571887805247,
1571587602461,
1571534685761
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1605/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1605/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1605/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1605/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1605/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1605/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1605/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1605/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1605/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1605/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1605/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors present a metalearning-based approach to learning intrinsic rewards that improve RL performance across distributions of problems. This is essentially a more computationally efficient approach to approaches suggested by Singh (2009/10). The reviewers agreed that the core idea was good, if a bit incremental, but were also concerned about the similarity to the Singh et al. work, the simplicity of the toy domains tested, and comparison to relevant methods. The reviewers felt that the authors addressed their main concerns and significantly improved the paper; however the similarity to Singh et al. remains, and thus the concerns about incrementalism. Thus, I recommend this paper for rejection at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"appreciate yet another experiment; authors performed a significant amount of extra work; support acceptance\", \"comment\": \"With a brief check, the authors' explanation that the proposed method outperforms ICM due do uncertainty of the reward for each object seems to be a very convincing explanation as current hand-designed intrinsic rewards typically struggle in uncertain environments. Due to this year's changed scoring scale, I am unable to increase my score, but I believe the paper is now a solid accept and would be worth 7/10 by previous year scale. The authors have performed a significant amount of extra work and seem to have engaged constructively and promptly with all reviewers during the rebuttal period.\"}",
"{\"title\": \"Thank you very much for your additional additional comments.\", \"comment\": \"We have incorporated empirical work on ICM as suggested by R2 and discussed below. Since we have not yet seen engagement from R1 in the rebuttal process, any consideration on your part of (further) raising the score would be very much appreciated; this would allow the paper to be considered for acceptance if you would want to encourage that outcome. Thanks.\\n\\n--------- Response to additional comments from R2 ---------\\nThank you very much for the additional constructive comments.\\n\\n# Regarding comparison to hand-designed intrinsic rewards on current tasks\\nWe added ICM (Pathak\\u201917) as an additional baseline (see Figure 3 and Section 4). Our method outperforms ICM on all four domains. We found that ICM does not explore effectively in our domains because the inverse model does not capture the uncertainty of the reward of each object, because actions can be predicted just from the agent\\u2019s movement. \\nRegarding comparison to Ostrovsky'17, both Bellemare\\u201916 and Ostrovsky'17 are designed to approximate count-based exploration methods using a density model p(x). Thus, we believe that they the count-based exploration baseline in our paper, which uses the true state-visit counts, captures a comparison to those methods.\\n\\n# Regarding comparison to approaches for changing action spaces\\nIt would be indeed interesting to investigate deeper how the learned intrinsic rewards can be used to adapt to changing action spaces. We would like to leave it as future work given the time constraint. Thank you for suggesting an interesting future direction. \\n\\n# Regarding how to use the learned intrinsic rewards\\nIn this paper, we show several possibilities of using the learned intrinsic rewards, i.e., training new agents with different action spaces or new agents using different learning algorithms. More possibilities include being combined with policy transfer methods to further improve fast adaptation performance, generalising to unseen tasks, etc. We hope our work can inspire more research towards this direction.\\n\\n--------- Response to additional comments from R3 ---------\\nThank you very much for increasing the score. We address the questions below and incorporated some of the suggestions in the revision.\\n\\n# Regarding degraded performance on Key-Box\\nWe added the \\u201cnear-optimal\\u201d curve on the key-box domain in Figure 3 - thank you for pointing this out. We found that the near-optimal performance is also worse compared to Random ABC due to the variance when sampling rewards for each object (30 randomly sampled environments are used during evaluation), though they should be the same in expectation. We noticed that some objects are sometimes not reachable due to the randomised locations, which may also explain the slight degradation. We observed that the agent has learned the same qualitative \\u201cexplore-and-then-exploit\\u201d behaviour as well as navigating to the correct object.\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thanks for the response!\\n\\nI especially liked the increased focus on the intrinsic reward as allowing us to separate what to do from how to do it, and showing that this allows for generalization across action spaces, learning algorithms, etc. This is a good point that I hadn't thought about before.\\n\\nWith the new experiment with the key-box experiment, what reward would the optimal policy get? I would assume that it should be the same as in Random ABC, but the learned policies do significantly worse, which might mean that there are problems scaling up the method. (Nonetheless, it still seems to get close to RL^2 and is better than MAML, so probably this is normal.)\\n\\n> Regarding your question about episode 3, the colors represent the return for each trajectory not per-step reward. Therefore, the agent would not gain more rewards by moving back and forth. \\n\\nOh, that makes much more sense.\\n\\n> Regarding your question about episode 1, we conjecture that it is more optimal for the intrinsic reward to encourage the agent to commit to one particular object (either A or C) at the beginning of training. Otherwise, if the reward is equal for A and C, it would take more time for a \\u201crandomly-initialised\\u201d policy to learn to collect any of them, because going towards both objects are encouraged (and they are placed in the opposite positions).\\n\\nThis also makes more sense now that I realize the colors were just about the trajectories -- probably the rewards are positive if going towards A and negative when going away from A, which incidentally makes B be ~zero reward while C is negative.\\n\\nMostly due to the explanation of the significance of learned intrinsic rewards, I'm changing to a weak accept. (I still have some qualms about scalability, though the new key-box environments has addressed that somewhat.)\"}",
"{\"title\": \"Thanks for the thorough response!\", \"comment\": \"The authors addressed my original concerns and performed a significant amount of additional work to strengthen the experimental results. In particular, I appreciate the new findings that 1) the method is able to generalize to a different RL algorithm, which seems close to impossible with MAML-like approaches, 2) the method outperforms MAML and RL^2 on generalization to new action spaces, and 3) it also outperforms MAML overall, which could perhaps be stressed more in the paper :). I believe that the paper is much improved after the update and I am now entirely convinced the paper should be accepted.\\n\\n--------------------------------------------------------------------\\nAdditional comments\\n\\nIn general, I am under the impression that in relation to RL^2 this paper represents a more constrained approach, which adapts slower as it is less expressive, but should also generalize better. This opens the following next questions (and perhaps more of them), answering which would strengthen the paper even more. \\n- How does the method compare to hand-designed intrinsic rewards like Ostrovsky'17, Pathak'17 on the current tasks (as well as on the tasks suggested in my original review)?\\n- Does the method improve over approaches designed for changing action spaces like Devin'16 or Chandak'18?\\n- More generally, if the rewards can be a locus of knowledge, how can this knowledge be used? One could perhaps think of some kind of distillation or guided policy search mechanism, although this is just a raw thought I have. \\n\\nChandak et al., Reinforcement Learning with a Dynamic Action Set, 2018\\nDevin et al., Learning Modular Neural Network Policies for Multi-Task and-Robot Transfer, 2016\"}",
"{\"title\": \"Response to R3\", \"comment\": \"Thank you very much for constructive comments. We address the questions below and reflected some of the suggestions in the revision (see the common response above).\\n\\n# Regarding \\u201cthe goal is to find out whether reward functions can be loci of knowledge\\u201d\\nWe clarify that our goal is not just finding out whether it is possible to store knowledge into rewards. In fact, we acknowledge in the introduction that existing hand-designed rewards already show that they can be a locus of knowledge. Instead, our goal is to find out 1) whether it is feasible to capture knowledge in reward functions in a data-driven from the agent\\u2019s own experience rather than hand-designing them, 2) what kind of knowledge can be captured when they are \\u201clearned\\u201d rather than \\u201chand-designed\\u201d, and 3) to show that reward knowledge can generalise to new dynamics and new learning algorithms. We clarified this in the revision. \\n\\n# Regarding the benefits of learning intrinsic rewards in comparison to other methods\\nIn Section 5, we added a comparison to RL^2 and MAML and added one more experiment demonstrating that the intrinsic rewards learned from actor-critic agents can generalise to a different kind of learning agents, i.e. Q-learning agents. Please see the common response for details.\\n\\n# Regarding more complex domains\\nWe revised the paper with a new version of the key-box domain, where the map is a 9x9 grid world and objects are randomly placed for each episode. Due to the random placement, there are more than 3 billion distinct states. We acknowledge that this number is still tiny in comparison to domains with high-dimensional visual observations, but this shows that our method can scale up to larger domains, where it is infeasible to fully enumerate the entire state space.\\n\\n# Regarding questions about Figure 5\\nRegarding your question about episode 1, we conjecture that it is more optimal for the intrinsic reward to encourage the agent to commit to one particular object (either A or C) at the beginning of training. Otherwise, if the reward is equal for A and C, it would take more time for a \\u201crandomly-initialised\\u201d policy to learn to collect any of them, because going towards both objects are encouraged (and they are placed in the opposite positions).\\nRegarding your question about episode 3, the colors represent the return for each trajectory not per-step reward. Therefore, the agent would not gain more rewards by moving back and forth. Also, it is important to note that the intrinsic reward is a function of the agent\\u2019s history. So, it is very likely that the intrinsic reward would penalise if the agent keeps going back and forth without proper exploration/exploitation, which would be an interesting analysis to be done.\"}",
"{\"title\": \"Response to R2\", \"comment\": \"Thank you very much for constructive comments. We address the questions below and reflected some of the suggestions in the revision (see the common response above).\\n\\n# Regarding comparison to other meta-learning methods\\nWe added a comparison to two meta-learning methods (RL^2 and MAML). Please see the details in the common response (see Section 5).\\n\\n# Regarding comparison to hand-designed intrinsic rewards on hard exploration problems\\nThe goal of this paper is to show that interesting kinds of \\u201cwhat\\u201d knowledge can be captured by learned intrinsic rewards such as exploring uncertainty and provide in-depth analysis of the approach. We would like to explore scaling to hard exploration tasks like Montezuma\\u2019s Revenge as future work.\\n\\n# Regarding comparison to hand-designed intrinsic rewards on out-of-distribution tasks\\nWe demonstrated that the intrinsic reward can interpolate successfully within the same task distribution. However, it is unclear whether it can extrapolate to out-of-distribution tasks, as the neural network representation should successfully handle extrapolation, which is an active research topic in deep learning (e.g., disentangled representation). We believe that more research including representation learning is needed to learn intrinsic rewards that can generalise well to out-of-distribution tasks. We would like to investigate in this direction in the future. \\n\\n# Regarding missing references\\nWe added missing references mentioned by the reviewer in the revision.\"}",
"{\"title\": \"Response to R1\", \"comment\": \"Thank you very much for constructive comments. We address the questions below and reflected some of the suggestions in the revision (see the common response above).\\n\\n# Regarding the non-stationary learning problem and theoretical guarantee\\nAs the reviewer pointed out, the problem is indeed non-stationary from the memoryless policy\\u2019s perspective. However, we can also view the combination of the intrinsic reward function and the policy as a joint lifetime-history-based policy parameterised by $\\\\eta$ and $\\\\theta$ (see derivation in Appendix A). From this perspective, the overall learning problem can be formulated as an MDP with history as state (recall, we use RNNs for the intrinsic reward function). We revised the paper to make this point clear. (see Section 3.4)\\n\\n# Regarding systematic investigation of the learned intrinsic rewards\\nWe showed that the intrinsic reward captures quite different but appropriate knowledge by varying reward functions in ABC domain (i.e., Fixed ABC in Figure 10 \\u2192 Random ABC \\u2192 Non-stationary ABC). We agree that further systematic investigation could help and would appreciate if the reviewer makes a concrete suggestion on this. \\n\\n# Regarding what we learned beyond previous work\\nWe revised the abstract to further highlight our contribution. Specifically, we learned the following beyond previous work as follows. (1) It is possible to learn good reward functions via gradient-based meta-learning, which is much more scalable than exhaustive search (prior work). (2) The meta-learned reward functions can capture interesting kinds of ``what'' knowledge, which includes long-term exploration and exploitation. (3) Because of the indirectness of this form of knowledge the learned reward functions can generalise to other kinds of agents and to changes in the dynamics of the environment.\\n\\n# Regarding mismatch between the learning objective and the experimental results\\nThe objective for training the intrinsic reward function is to maximise cumulative lifetime rewards. By looking at the area-under-the-curve in our evaluation results, we can observe lifetime rewards. Thus, we believe that the evaluation curves show both metrics (i.e., episodic return and lifetime return). Our paper also acknowledges that the baseline reward functions are task-independent (Section 4). \\n\\n# Regarding missing architecture details and overloaded notations\\nWe added some missing details about the lifetime value function architecture and revised the notations in the revised paper.\"}",
"{\"title\": \"Common Response\", \"comment\": \"Thank you very much for constructive comments. We address some common questions below.\\n\\n# Paper revision\\nWe significantly revised the paper based on the reviewers\\u2019 suggestions as follows. \\n- We added a comparison to RL^2 and MAML in Section 5 and Figure 8 and an in-depth discussion about the difference between our approach and policy transfer approaches.\\n- To further highlight the difference between our approach and policy transfer methods, we added one more experiment demonstrating that the intrinsic rewards learned from actor-critic agents can generalise to different types of learning agents, i.e. Q-learning agents.\\n- To address the concern regarding the small state spaces, we replaced the Key-Box domain with a more challenging version, where objects are randomly placed for each episode in a larger map (5x5 to 9x9). This results in a significantly larger state space due to its combinatorial nature. (see Figure 2c and Figure 3)\\n- We addressed several writing issues including missing references, missing architecture details, overloaded notations, etc.\\n\\n# Regarding comparison to MAML and RL^2\\nWe added a comparison to RL^2 and MAML (see Section 5 and Figure 8 and Figure 9). We emphasise that the goal of our method and the goals of RL^2 and MAML are different. To summarise, our method performs better than MAML but learns slowly compared to RL^2 (with similar asymptotic performance). However, we also show that these policy transfer methods generalise poorly to unseen action-environment interfaces (or not capable of generalisation in most cases), whereas the intrinsic reward can successfully generalise. This highlights the difference between \\u201cwhat to do\\u201d knowledge captured by intrinsic rewards and \\u201chow to do\\u201d knowledge captured by policy transfer methods. We would appreciate if the reviewers take a look at the revised paper for more in-depth discussion (Section 5). \\n\\n# Regarding significance\\nAlthough some techniques used in our paper have connections to existing meta-learning work, our work is the first that proposes to learn useful intrinsic rewards across multiple lifetimes and across multiple tasks using a scalable meta-gradient method. More importantly, we believe that our empirical findings about interesting kinds of knowledge captured by intrinsic rewards (e.g., long-term exploration based on curiosity) and how they generalise to unseen agent-environment interfaces are new and worth more discussion, as intrinsic rewards have been receiving a lot of attention in recent years.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"(Originally my score was a weak reject.)\\n\\nThis paper aims to study whether a learned reward function can serve as a locus of knowledge about the environment, that can be used to accelerate training of new agents. The authors create an algorithm that learns an intrinsic reward function, that when used to train a new agent over a \\u201clifetime\\u201d (which consists of multiple episodes), leads to the best cumulative reward over the lifetime. As a result, the learned intrinsic reward is incentivized to quickly \\u201cteach\\u201d the agent when and where to explore to find out as-yet unknown information, and then exploit that information once there is no more to be had. Experiments on gridworlds demonstrate that these learned intrinsic rewards: 1. switch between early exploration and later exploitation, 2. explore only for information that is relevant for optimal behavior, 3. capture invariant causal relationships, and 4. can anticipate and adapt to changes in the extrinsic reward within a lifetime.\", \"i_very_much_appreciated_the_design_of_the_environments_to_test_for_specific_properties_within_the_learning_algorithm\": \"I think these experiments provide a very useful conceptual analysis of what learned intrinsic rewards can do.\\n\\nMy main qualm with the paper is with its significance -- the authors claim that the goal is to find out whether reward functions can be loci of knowledge, but we already know the answer is yes: the whole point of reward shaping is to improve training dynamics by building in knowledge into the reward function. It is not a surprise that learned reward functions can be loci of knowledge if our hand-designed reward functions already do so.\\n\\nTo me, the more interesting aspect of this paper is how much benefit we can get by learning intrinsic reward functions, relative to other ways of improving training dynamics. The authors do show that by allowing the intrinsic reward to be recurrent (and so dependent on past episodes), it is able to first incentivize exploration and later exploitation, which standard reward shaping cannot do (since usually reward shaping still maintains the assumption that the reward is a function of the state). However, given this motivation, it would be important to see comparisons between the proposed method of learning intrinsic rewards, and other methods for fast adaptation in the literature, such as MAML, which as I understand also has many of the properties highlighted in this paper.\", \"ideally_there_would_also_be_experiments_on_more_complex_environments\": \"the environments in the paper have 104, 25, and 49 states. If we in the ABC environments if you count \\u201cwhether or not reward(object) is known\\u201d as part of the state, that multiplies it by 2^3 = 8 giving 200 and 392 states, if you then further add the ordering of r(A), r(B), and r(C), that multiplies by a factor of 3! = 6 giving 1200 and 2352 states. These environments are excellent for demonstrating the properties of learned intrinsic rewards and I am glad the authors have done these experiments and analyzed the results. 
However, given that the paper aims to scale the optimal reward problem, it would have been useful to see examples where the state space cannot be fully enumerated to evaluate scalability.\", \"questions\": \"In Figure 5, in episode 1, why is the learned intrinsic reward heavily penalizing the path to C, but not penalizing the path to B? In the initial episode, the intrinsic reward should only know that B is to be avoided; it doesn\\u2019t yet know whether A or C is the better object. I would expect the learned intrinsic reward to put similar positive rewards on the path to C and the path to A, and negative reward on the path to B. (It is slightly more likely that C is the best object. This probably changes things slightly, but not significantly.)\\n\\nAlso in Figure 5, by episode 3, shouldn\\u2019t the final states (A or C) have intrinsic rewards of larger magnitude? Otherwise the agent can go back and forth on the path to collect lots of intrinsic reward without terminating the episode, even though this wouldn\\u2019t get extrinsic reward.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a meta-learning approach to learn reward functions for reinforcement learning agents. It defines an algorithm to optimize an intrinsic reward function for a distribution of tasks in order to maximise the agent\\u2019s lifetime rewards. The properties of this reward function and meta-learning algorithm are investigated through a number of proof-of-concept experiments. \\n\\nThe meta-learning algorithm and the corresponding empirical investigation are the main contributions of the paper. The algorithm seems to be similar to previous meta-learning approaches, but differs by introducing a lifetime value function. While I thought the paper raises some interesting possibilities, I am currently leaning towards rejection. The proposed algorithm does not seem like a major innovation over cited previous work. The empirical evaluation provides a number of proof-of-concept ideas, but no in depth investigation of the properties of the approach. The theoretical properties of the approach are barely discussed.\", \"detailed_remarks\": [\"The main addition to the meta-learning algorithm is the lifetime value function. The authors mention multiple times that this is crucial to learning, but the properties of this value function are not really investigated or discussed in depth:\", \"The authors mention that the value function must take into account changing future policies, but do not discuss this further. The value function update seems to be a standard on-policy TD update with the lifetime return and the complete history as input. The policy for this value function, however, is still a standard policy with only state as input (but it will be non-stationary over the agent lifetime). It would be good to discuss this learning problem in more detail.\", \"The algorithm uses an n-step return. Is this important? What effect does n have on learning?\", \"Another issue which I would have liked being discussed in more detail is the non-stationarity of the learning problem in general. Most of the approaches discussed in related work (e.g. shaping) are aimed at learning/designing more informative reward functions. These reward functions still fit in the MDP framework, however, and map from states and actions to rewards. In the case of shaping approaches guarantees can be given that this does not alter the learning problem. The intrinsic reward functions used in this paper map the full life-time history of the agent to rewards. While this is a richer framework that can express more complicated tasks (like exploration over multiple episodes), it also invalidates many of the basic assumptions of reinforcement learning. The rewards are now no longer Markovian when only observing the current state. Moreover, the reward function will change over time. To what extent does this require non-stationary / history-based policy and value function learning to solve these issues? While some of these issues also apply to count based exploration strategies, (Strehl and Littman,2008 ) provided results that the exploration bonuses result a Bellman Equation that accounts for uncertainties. 
No real guarantees seem to exist here.\", \"The empirical contribution focuses on trying to answer a number of questions regarding the properties of the learnt intrinsic rewards. I found these questions to be very broad, while the answers are mostly anecdotal evidence through proof-of-concept examples. These examples do show potential benefits of meta-learning intrinsic rewards, but I was somewhat disappointed that there was no more systematic investigation. For example, questions like \\u2018how does the distribution of tasks affect intrinsic rewards\\u2019 or \\u2018does intrinsic reward generalise\\u2019 are not really answered by providing metrics of performance or generalisation in controlled experiments, but by providing some example cases. Several of these questions (including optimising exploration and dealing with non-stationarity) also seem to have been investigated to some extent in the original Optimal reward papers (Singh, 2009/2010). It would be good to clearly indicate what we have learned beyond these previous results.\", \"There seems to be a bit of a mismatch between the learning objective for intrinsic rewards in the optimal reward framework and the results shown in the experiments. The learning objective aims to optimise lifetime rewards for a distribution of tasks. Most of the experiments seem to analyse episodic reward performance and compare against single-task (or task agnostic) methods.\"], \"minor_comments\": [\"The architecture / parameterization of the lifetime value function does not seem to be defined anywhere. Given that it takes histories as input I assume this is another RNN?\", \"There seems to be some small overloading in the notation with \\\\eta occasionally being used to denote the parameters of the reward function r_eta or the reward function itself.\"]}",
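As an aside for readers, the n-step target the review asks about can be written down compactly. The sketch below is a generic n-step bootstrapped target computed against a lifetime value estimate (one that does not reset at episode boundaries); it illustrates the standard construction, not the paper's code, and all names are hypothetical.

```python
import numpy as np

def n_step_lifetime_targets(rewards, values, n, gamma=0.99):
    """n-step bootstrapped targets against a lifetime value function.

    G_t = r_t + gamma * r_{t+1} + ... + gamma^{n-1} * r_{t+n-1} + gamma^n * V(h_{t+n}).
    values[t] estimates the lifetime value of the history up to time t; unlike an
    episodic value function, nothing here resets at episode boundaries.
    """
    T = len(rewards)
    targets = np.zeros(T)
    for t in range(T):
        G, discount = 0.0, 1.0
        for k in range(t, min(t + n, T)):
            G += discount * rewards[k]
            discount *= gamma
        if t + n < T:
            G += discount * values[t + n]  # bootstrap from the lifetime value estimate
        targets[t] = G
    return targets

# Toy usage: a 10-step slice of a lifetime with sparse rewards.
rewards = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 1], dtype=float)
values = np.linspace(0.5, 1.5, 10)
print(n_step_lifetime_targets(rewards, values, n=3))
```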
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"Summary\", \"The paper evaluates the intrinsic reward as a way of storing information about episodes. It adopts the optimal intrinsic reward setting (Singh'09), and extends its recent policy gradient implementation, LIRPG, to lifetime settings. The task in the lifetime setting is to learn an intrinsic reward such that when trained with it, the agent maximizes its total return over its lifetime. A lifetime is defined as a sequence of episodes, where the agent does not have memory of previous episodes, however, the function computing the intrinsic reward does. In proof-of-concept experiments, the paper demonstrates that the learned intrinsic reward captures properties of several gridworld environments and induces meaningful behavior in the agent, successfully transferring information from previous episodes. Interestingly, a state-based reward function also generalizes to agents with perturbed action spaces, showing that this way of storing information is agnostic to the agent\\u2019s action space.\", \"Decision\", \"The paper proposal is interesting and adequately evaluated, however, the impact of the paper might be limited by its limited technical novelty and lack of comparisons to strong baselines. I recommend marginal accept.\", \"Pros\", \"The paper is well-motivated.\", \"The paper is well-written and the method is clearly explained. The literature review is thorough.\", \"The experimental evaluation demonstrates several interesting and potentially promising phenomena.\", \"Cons\", \"The novelty of the paper is limited as it is a somewhat straightforward extension of prior work.\", \"The impact of the paper is hard to judge as the experimental evaluation does not focus on potential usecases.\", \"Questions. Here, I will focus on scientific questions, answering which would significantly improve the quality of the paper.\", \"The biggest drawback of the paper is that the proposed method has an unfair advantage as it has a way of transmitting information across episodes, which the baselines do not (as stated on the bottom of page 5). While the findings of this paper are interesting, it is unclear how it compares to methods that have memory of previous episodes, such as agents with non-episodic recurrent policies, or meta-learning agents such as Duan\\u201916, Finn\\u201917. Is it possible that the proposed method e.g. scales better than recurrent policies due to compact representations or provides better generalization to things like action space changes?\", \"How does the method compare to hand-designed intrinsic rewards on hard exploration games (such as montezuma\\u2019s revenge or pitfall Atari games)? Since it can only learn to explore on games that it previously successfully solved, it is possible that a hand-designed intrinsic reward such as RND (Burda\\u201919) would perform better on these hard games. On the other hand, it is possible that the method will in fact perform better on these games due to more directed exploration.\", \"How does the method compare to hand-designed intrinsic reward on out-of-distribution tasks? Intuitively, the method should perform the worse the further from the training distribution the task is, while the hand-designed rewards will always perform similarly. 
However, what is the extent to which the proposed method generalizes? It is possible that this method would be very useful in practice if it generalized well.\", \"Other potentially related work.\", \"Xu\\u201918, Learning to Explore with Meta-Policy Gradient, is a relevant work that proposes a meta-learning framework for training an exploration policy.\", \"Metz\\u201919, Meta-Learning Update Rules for Unsupervised Representation Learning, is a conceptually relevant work that proposes to meta-learn loss functions for unsupervised learning (and there is more recent related work on this topic too).\"]}"
]
} |
B1xgQkrYwS | On Iterative Neural Network Pruning, Reinitialization, and the Similarity of Masks | [
"Michela Paganini",
"Jessica Forde"
] | We examine how recently documented, fundamental phenomena in deep learning models subject to pruning are affected by changes in the pruning procedure. Specifically, we analyze differences in the connectivity structure and learning dynamics of pruned models found through a set of common iterative pruning techniques, to address questions of uniqueness of trainable, high-sparsity sub-networks, and their dependence on the chosen pruning method. In convolutional layers, we document the emergence of structure induced by magnitude-based unstructured pruning in conjunction with weight rewinding that resembles the effects of structured pruning. We also show empirical evidence that weight stability can be automatically achieved through apposite pruning techniques. | [
"Pruning",
"Lottery Tickets",
"Science of Deep Learning",
"Experimental Deep Learning",
"Empirical Study"
] | Reject | https://openreview.net/pdf?id=B1xgQkrYwS | https://openreview.net/forum?id=B1xgQkrYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"21Gokiocin",
"BJlW4p93iH",
"B1llSFchjH",
"S1etmY5hoB",
"rkeO1YchjH",
"BJeo-bA4qB",
"BJeL9xKqtB",
"r1e9ZDlfYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727640,
1573854504945,
1573853495645,
1573853473128,
1573853407797,
1572294914605,
1571618957880,
1571059457943
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1604/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1604/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1604/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1604/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1604/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1604/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1604/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This is an observational work with experiments for comparing iterative pruning methods.\", \"i_agree_with_the_main_concerns_of_all_reviewers\": \"(a) Experimental setups are of too small-scale or with easy datasets, so hard to believe they would generalize for other settings, e.g., large-scale residual networks. This aspect is very important as this is an observational paper.\\n(b) The main take-home contribution/message is weak considering the high-standard of ICLR.\\n\\nHence, I recommend rejection. \\n\\nI would encourage the authors to consider the above concerns as it could yield a valuable contribution.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Empirical Scientific Contributions in Machine Learning\", \"comment\": \"Dear area chairs, reviewers, and readers,\\n\\nThank you all for taking the time to read our contribution. We have answered concerns and questions at the individual reviewer level.\\n\\nOn the whole, the authors would like to argue, from our point of view, that the point of science is not to always be directly and immediately \\\"useful\\\" (in a SoTA sense) and that that might not be the correct lens to apply when assessing the merits or shortcomings of this work. While we strongly believe many of our contributions to be, indeed, useful (for example, that lottery tickets emerge from different strategies, or that there exists evidence of structure forming when unstructured pruning is used with rewinding -- which can inform ML engineers with inference-time concerns and hardware constraints in their choices of pruning strategy) we encourage the reviewers to consider the fact that purely observational work with well documented experiments (such as those presented in this work) alone constitute a valuable scientific contribution worthy of discussion at a conference, and capable of sparking new development in the research community. We are a long way from developing a principled understanding of deep emergent phenomena, and we believe this empirical work can successfully complement much of the theoretical work being carried out in this area.\"}",
"{\"title\": \"Rebuttal from the Authors\", \"comment\": \"Thank you for reading our contribution.\\n\\nWe would like to invite the reviewer to consider the strength and extent of experimentation performed in this paper by moving beyond the assessment of the choice of dataset and model. While we agree that one shouldn't claim that a proposed architecture achieves SoTA on \\\"solved\\\" (?) problems like MNIST, we would like to point out that the goal of this work is, in fact, _not_ to propose a model modification and use MNIST as a simple testbed to validate the performance (which we agree would be inconclusive). We maintain that MNIST is still an excellent dataset to study learning dynamics, weight co-adaptation, and deep phenomena in neural networks that we, as a field, are still far from understanding. We don't believe it advantageous to study these fundamental properties of neural networks (seen as physical objects with complex, perhaps chaotic dynamics) in large, complicated regimes and architectures when the simplest of cases (like MNIST with LeNet) is still just as poorly understood from the standpoint of the research being conducted in this paper (which, again, is not performance-oriented, with performance, instead, being a very well studied and thoroughly investigated property of this specific task).\\n\\nWe have performed a sensible search over pruning approaches of interest, and have documented nearly every decision along the way. We would like to encourage the reviewer to reconsider this point \\u2014 scientific understanding of deep learning in its fundamental form will need exhaustive experimental observation, and observational experiments are not definitionally weak.\\n\\nWe believe the contributions main points are made very clear in section 1.1, titled contributions. We believe some of these are *directly* useful, such as #5 and #6, where #6 may help guide us towards designing stability induced procedures that may help with lottery tickets in larger models. \\n\\nFinally, observations on the small model-small dataset regime are incredibly important if we are to understand the minimum setting for these methods and approaches to work. We aim to shed light on some as-of-yet undocumented behaviors, and now the natural next step is to consider why they do or do not work in the large scale setting. We strongly emphasize that the purpose of this work is understanding, and we encourage the reviewer to reconsider the value of scientific, observational work rather than work that seeks to add modifications.\"}",
"{\"title\": \"Rebuttal from the Authors\", \"comment\": \"First of all, thank you for your comments.\\n\\nWe would like to offer our point of view for why we disagree with the notion that the contributions and observations presented here are not interesting to the field. We agree that perhaps these approaches cannot directly be utilized at the moment to help reach SoTA on a given task. This utilitarian way of evaluating the contribution is at odds with the stated goal of the paper, which is to simply advance fundamental knowledge in the subdomain of science of deep learning. Many of the findings in this paper directly go to address major open questions around the nature and emergence of lottery tickets, including observations #1 and #2, which we therefore deem to be interesting and relevant to the field (or at least to those doing research in this sub-field). Objections to the absence of these studies have been raised in the community in the past to challenge the lottery ticket hypothesis itself. To the best of the authors knowledge, a thorough study of structure characterization of lottery tickets emerging from a multitude of pruning methods is itself of interest to better begin to understand more about this emergent behavior and move towards principled approaches to lottery ticket discovery.\\n\\nIn addition, we disagree that observations on small models are not significant. If we are to understand the dynamics of what is happening in pruned models, under the lottery ticket hypothesis or any other hypothesis, we need to remove factors of variation introduced by SoTA seeking architectures. Even in the case where dynamics discovered in small networks do not apply to a large, say, ResNeXt or NasNet, that alone is interesting future work and important to understand and document. We do agree that confirmatory experiments in larger more complex domains would be a useful extension of this work, but not a necessary one to make these empirical discoveries worthwhile.\\nWhile we agree that it is non-trivial to extend lottery tickets to larger models (as is well documented in the literature) we believe that understanding why and when lottery tickets emerge in smaller models will help us better apply them to larger models in the future.\\n\\nAs per your direct comments, we have improved the description of Fig. 5. \\nThe caption on Figure 7 already contains all the necessary information to decipher what the axes in the subplots represent (the numerical values are not important and the axes could be entirely removed in favor of simply showing the qualitative trend).\"}",
"{\"title\": \"Rebuttal from the Authors\", \"comment\": \"We would like to thank the reviewer for their helpful and detailed comments!\\n\\nOverall, we thank the reviewer for considering the merits of purely observational work by itself \\u2014 this is critical for moving with a scientific basis of understanding. Organizationally we believe that by stating the observations up front in Sec 1.1, we are able to lay out the story of our observations in the manner by which they were uncovered, an ordering and structure we feel to be more natural (related to point 3).\\n\\nWe will answer and respond to specific questions/comments:\\n\\n(1) No method produces identical layer-wise masks to another, unless the layer is too small to be pruned at all (see conv1 for structured pruning). In all other cases, the line at distance = 0 is the baseline, and it's shown for sanity check. \\nWe believe that the graphical form is useful from an evolution standpoint \\u2014 we note the curvature for the Jaccard distance when plotted against the pruning iteration is directly insightful. We agree that perhaps this visualization is not perfect, but, when representing it in a table, we found the information even harder to process and engage with, without immediate visual assistance.\\nAs the captions and plot labels clearly state, the Jaccard distance is computed between masks, not weights.\\nThe ordering of training samples is fixed (on top of the initialization). We tried to be as thorough as possible to control for any type of confounding factors and sources of variability that would not be directly caused by the effect we were trying to measure, i.e. the role of the pruning method.\\nAs per the lottery ticket procedure, and as we had hinted at in Sec.3.2, lottery tickets are searched for using rewinding to the initial weight values. Again, at first, to be able to focus in on the role of the choice of pruning technique, we conducted our experiments without varying any of the other knobs (the choice of reinitialization strategy being one of them). We did explore that dimension of variation as well, though, in the experiments in Appendix A. We did realize, thanks to your question, that Sec.3.2 was not entirely clear about this point, so we slightly modified the language there.\\nIn Figure 5, columns 2, 3, and 4 all use the same pruning technique (the difference here is only the reinitialization technique). The meaningful comparison is between column 1 and 2. Indeed, in the case of this specific seed and due to the very small size of the conv1 layer in LeNet, the masks do end up looking similar. For other seeds, instead, for example, although unstructured pruning continues to show structured-like patterns, the channels that end up getting pruned are _not_ the same ones that L1-structured pruning prunes. The per-layer distances are even more striking in larger, non-convolutional layers.\\n\\n(2) Yes, lines for pruned weights terminated where they are pruned; we now state that more clearly. The point of this section (\\\"we empirically find a correlation between weight stability and performance\\\") is made clearer when also considering the performance plot in Fig 1. We have added a note that makes this point clearer, also encouraging the reader to look at Fig 1 for a reminder. 
Regarding your later points, although we note that the evolution looks quite tangled in 6(a), in fact the magnitude of each weight is not changing drastically from iteration to iteration and, more importantly, there doesn't seem to be much crossing from negative to positive, which had been identified in previous work as potentially key to the formation of lottery tickets. The same holds for 7(a), as from iteration to iteration there is little noise. We include a definition of stability and show the results you hint at in Figure 8 (see y-label). If the reviewers believe that Figure 8 is sufficient to illustrate the point and Figs 6-7 only confuse the reader, we'd be happy to remove them.\n\n(5) We present results on LeNet+MNIST for ease of interpretation \u2014 many of the phenomena we document here are difficult to reason about in larger models (though we agree that this will be very important in the future!). Our extended results (contained in the appendix) confirm some critical observations on larger-scale models. We believe that large experiments would detract from the main points of the work at this time, and are welcome future work. It is known, as stated in the text, that lottery tickets are harder to find in larger domains and require the introduction of tricks that would introduce confounders in our experiments and invalidate the experimental setup.", \"notes\": [\"The axes for Fig 1 and 2 use the \\\"logit\\\" scale setting in matplotlib. We found this to be the most appealing representation for the data we were plotting.\", \"Scaling by the std deviation in this case does not change the comparative argument between methods, and we believe it would make the metric/value harder to reason about.\"]}",
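To make the metric in this exchange concrete, here is a minimal sketch of the Jaccard distance between two pruning masks, matching the convention discussed above (distance 0 for identical masks, used as the sanity-check baseline in the figures). The boolean keep-mask convention and all names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def jaccard_distance(mask_a, mask_b):
    """Jaccard distance between two boolean pruning masks of the same shape.

    A mask entry is True where the weight is kept (unpruned). Distance 0 means
    the two methods kept exactly the same weights; 1 means no overlap at all.
    """
    a, b = mask_a.astype(bool).ravel(), mask_b.astype(bool).ravel()
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # both masks prune everything; treat them as identical
    intersection = np.logical_and(a, b).sum()
    return 1.0 - intersection / union

# Toy usage: two hypothetical masks for the same layer at equal sparsity.
rng = np.random.default_rng(0)
m1 = rng.random((6, 5)) > 0.5
m2 = rng.random((6, 5)) > 0.5
print(jaccard_distance(m1, m1))  # 0.0, the baseline discussed above
print(jaccard_distance(m1, m2))  # > 0 when the kept sets differ
```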
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"*Summary*\\nThis paper compares network pruning masks learned via different iterative pruning methods. Experiments on LeNet + MNIST show (a) different methods can achieve similar accuracy, (b) pruned sub-networks may differ significantly despite identical initialization, (c) weight reinitialization between pruning iterations yields more structured convolutional layer pruning than not reinitializing, and (d) pruning methods may differ in the stability of weights over pruning iterations.\\n\\n*Rating*\\nThere are interesting bits of data in this paper, but the overall story is somewhat muddled and some inferences seem to be insufficiently supported by data (1-2 below). In addition, the text would benefit from better organization and presentation (3-4 below) and replications on other datasets and architectures (5 below). As a result, my rating is currently weak reject.\\n\\n(1) *Overlap in pruned sub-networks*: In the middle of Sec. 4, Fig 3-5 examine the similarity of pruning masks between methods. It seems clear from several of the plots that multiple methods produce identical layer-wise masks, e.g. Fig 3(a), while others show a wide variance. The overlap in lines makes this difficult to assess at times: perhaps a table would communicate it better? Also, are Fig 3-4 depicting the Jaccard distance between masks of unpruned or pruned weights? Is the ordering of training samples fixed in addition to network initialization? Is reinitialization used between iterations? Also, Fig 5 seems to contradict the conclusion that methods tend to learn different masks, since the structures are noticeably similar.\\n\\n(2) *Weight stability during pruning*: It is difficult to discern a conclusion in Sec 5. First, a clarification on the figures: are lines for pruned weights terminated where they are pruned? If so, this would be helpful to state. The 4th paragraph claims, \\\"we empirically find a correlation between weight stability and performance\\\", but this is not at all obvious from Figures 6-7. I'm not sure what a more stable evolution looks like. Hybrid is shown to be accurate in Fig 1, but the conv. weights in 6(a) are a spaghetti tangle and the FC weights in 7(a) are constantly increasing in magnitude. Perhaps a mathematical formulation for stability (perhaps based on average standard deviation of each weight's values over training) with a table of values for each method/layer would help to clarify.\\n\\n(3) *Organization*: Since the paper has many intertwined observations, a better organization would be helpful. Consider mirroring the structure of Sec 1.1 in a combined Sec. 4-5 with clear paragraph headers summarizing each conclusion.\\n\\n(4) *Presentation*: Figure is too small throughout to read from a printed copy (or even on a screen without significant zooming). Several results could be presented with less ambiguity in tabular form, as noted above.\\n\\n(5) *Replications*: The paper presents results only a single set of experiments using the MNIST dataset with the LeNet architecture. While this isn't a fatal issue, it is a significant weakness.\\n\\n*Notes*\", \"fig_1_and_2\": \"What spacing is used for the x- and y- axes?\", \"fig_8\": \"Perhaps scale vertically by the standard deviation of the weights?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper study the lottery ticket hypothesis by observing the properties of lottery tickets. In particular, the authors tested several different pruning techniques by varying evaluation criteria (L_1, L_2, L_-\\\\infty and random) and pruning structures (structured, unstructured and hybrid). The authors perform experiments mainly on LeNet with the MNIST dataset and analyze the observations.\\n\\nOverall, I think that the observations presented in the paper are not significant due to the following reasons.\\n\\nFirst, the paper consists of the list of observations but how the observations extend to is not clearly described. There are no guidelines how to utilize the observations in future research (e.g., how they can be used for verifying the lottery ticket hypothesis or how they affect to existing pruning techniques) while some observations might be trivial or not very interesting (e.g., contribution 1 and contribution 2) for me.\\n\\nSecond, the observations are only presented for LeNet and MNIST and it is non-trivial whether they extend to large scale models. The authors present VGG11 and AlexNet results in Appendix but they are not large enough to verify their hypothesis for practice. The authors mentioned that larger models are not their subject, but this significantly reduces the confidence of the observations.\", \"other_comments\": \"I think that Figure 5 is not well described. Explicitly noting the meaning of color in the figure would be better.\\n\\nTexts in Figure 7 are too small to read.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"There are major problems with this paper. It is concerned with the examination of pruning experiments for a LeNet on the MNIST dataset. I fail to see how anything useful can be derived from this, as MNIST is a completely trivial dataset and LeNet is a very old, small architecture which does not at all resemble the massive overparameterised models that we care about.\\n\\nFrom a narrative perspective, I am not sure what the key point is, what should the reader take home? What should they take account of when performing network pruning?\\n\\nIn terms of presentation, some of the figures are unreadable (figure 4). Figure 15 looks like noise. The writing is good however, if a bit grandiloquent.\\n\\nI dislike writing short reviews, but I fear this paper falls too far short of ICLR standard.\", \"pros\": [\"Well written\"], \"cons\": [\"Experiments are weak\", \"Unclear narrative; what's the one key message?\", \"I have to give this paper a reject as the experiments conducted are far too weak, and there is little evidence anything found here will, say, generalise to a ResNet/DenseNet on ImageNet.\"]}"
]
} |
BkgeQ1BYwS | Implicit Generative Modeling for Efficient Exploration | [
"Neale Ratzlaff",
"Qinxun Bai",
"Li Fuxin",
"Wei Xu"
] | Efficient exploration remains a challenging problem in reinforcement learning, especially for those tasks where rewards from environments are sparse. A commonly used approach for exploring such environments is to introduce some "intrinsic" reward. In this work, we focus on model uncertainty estimation as an intrinsic reward for efficient exploration. In particular, we introduce an implicit generative modeling approach to estimate a Bayesian uncertainty of the agent's belief of the environment dynamics. Each random draw from our generative model is a neural network that instantiates the dynamic function, hence multiple draws would approximate the posterior, and the variance in the future prediction based on this posterior is used as an intrinsic reward for exploration. We design a training algorithm for our generative model based on the amortized Stein Variational Gradient Descent. In experiments, we compare our implementation with state-of-the-art intrinsic reward-based exploration approaches, including two recent approaches based on an ensemble of dynamic models. In challenging exploration tasks, our implicit generative model consistently outperforms competing approaches regarding data efficiency in exploration. | [
"Reinforcement Learning",
"Exploration",
"Intrinsic Reward",
"Implicit Generative Models"
] | Reject | https://openreview.net/pdf?id=BkgeQ1BYwS | https://openreview.net/forum?id=BkgeQ1BYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DSAGURM1td",
"rkeGci93oB",
"B1ggPycusr",
"rkluCCKdsH",
"Bke0uCKuiS",
"rJgBYahg5r",
"HkggAYgJcS",
"HJexYyOstB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727609,
1573854090333,
1573588823683,
1573588687868,
1573588597842,
1572027773296,
1571912135616,
1571680119765
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1603/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1603/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1603/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1603/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1603/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1603/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1603/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"There is insufficient support to recommend accepting this paper. The authors provided detailed responses, but the reviewers unanimously kept their recommendation as reject. The novelty and significance of the main contribution was not made sufficiently clear, given the context of related work. Critically, the experimental evaluation was not considered to be convincing, lacking detailed explanation and justification, and a sufficiently thorough comparison to strong baselines, The submitted reviews should help the authors improve their paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of revisions to the paper\", \"comment\": \"We would like to thank the reviewers for the helpful comments and suggestions. We have replied to each of the reviewer's comments, and now we have incorporated much of the suggested revisions, including both additional discussion and experimental results. The revisions to the paper are as follows:\\n\\n`1) Added results with 5 random seeds to all methods and experiments. We believe these results are faithful to the methods we compare against. \\n2) Added state visitation charts to the Ant Maze experiment in section 4.2.2, figure 5.\\n * The figures show how the agent moves through the maze throughout the duration of each episode. \\n * Multiple figures (c-f) show how this develops with more training. \\n3) We added results from (Pathak et al. 2017) and (Pathak et al. 2019) on the toy NChain task to Appendix A.2.1. \\n4) Improved explanation of robot manipulation task, explaining why the task is difficult and work examining.\\n5) Added or expanded the discussion of some related works, notably (Gregor et al. 2016), (Houthooft et al. 2016), and (Fortunato et al. 2017). \\n6) Finally, we incorporated the minor changes suggested by the reviewers.\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you for taking the time to review our paper. Below, we answer each question asked by the reviewer, and we will add further discussion/content to the paper.\", \"q1\": \"Why aren\\u2019t external rewards considered?\\n\\nWe study the exploration in the task-agnostic exploration setting, from this perspective, all unseen states are equally valuable to explore, since they all may correspond to a goal state for a potential task. For future work, we will focus this exploration using extrinsic rewards, but it is important to know first if the intrinsic reward is enough to cause the agent to adapt to environmental difficulties.\", \"q2\": \"Why not compare against NoisyNets [4]?\\n\\n[4] is functionally very different from our method; adding noise to the model parameters doesn't preclude the use of other exploration methods. As we explore model-based approaches to efficient exploration, we believe comparing with more similar approaches is more informative. We could always add NoisyNets to any of the methods in our paper.\", \"q3\": \"Why not compare against VIME [5]?\\n\\nWe believe VIME is a principled method that's worth comparing against. VIME uses BNNs to maintain a probabilistic model of the environment dynamics. The predictions of the model are used to estimate compression improvement (a form of Bayesian information gain), and therefore the novelty of states. We believe that this paradigm and method are well represented in [2], which we compare against. Furthermore, [2] is more closely related to our method (by using an ensemble of probabilistic models), and thus the comparison is more informative.\\n\\n\\n1) How were the hyperparameters chosen?\\n\\nWe used the hyperparameters from the code provided by the authors of [2]. As such, the choice of hyperparameters is not biased towards our method.\\n\\n\\n2) Random Seeds.\\n\\nWe will run more trials with different seeds and add them to the paper.\\n\\n\\n3) Equation 1. \\n\\nIn equation 1 we state the variance of predictions given by an ensemble of models. We use this variance to be a measure of the model\\u2019s uncertainty\\n\\n\\n4) Other baselines for chain task.\\n\\nWe will run [2] and [3] on the chain task and update the paper.\\n\\n\\n5) Error bars on Acrobot experiment. \\n\\nWe will include error bars on the Acrobot experiment and update the paper.\\n\\n\\n6) Ant Maze: how is the percentage of the maze explored calculated?\\n\\nWe follow [2] and present the percentage of the maze explored during the entire run. We will add a plot showing state visitation behavior by episode for the maze.\\n\\n\\n7) Closeness to [2] in the robotic manipulation task. \\n\\nAs stated above, the hyperparameter selection was taken from the code provided by the authors, likely putting our method at a disadvantage instead. With regard to long-horizon behavior vs [2], the experiment is testing for efficient exploration in a difficult environment, not for time-to-complete. In difficult environments where the reward is not present, the most relevant task for the agent is to quickly explore the environment. Our experiments focus on this and show that our method is effective.\\n\\n8) How do we interpret the agent's performance on the robotic hand experiments?\\n\\nWe characterize a state as a rotation of the held block, without considering the positions of the joints. 
We discretize the possible block rotations into 512 possible states for this task, meaning that each state is a 45-degree increment in the x, y, z directions. The best agent (ours) explores approximately half the space. This is due to the much higher state/action dimensionality of this environment, and, to a greater degree than in Ant Maze, states are not uniformly accessible.\n\nIn the task-agnostic setting, we do not consider the effect on downstream policies. Such a question is left for future work. We examine how our intrinsic reward motivates agents to overcome environmental difficulties, such as the need to learn a skill before encountering novel states. \n\n[1] Gangwani, Tanmay, Qiang Liu, and Jian Peng. \"Learning Self-Imitating Diverse Policies\". International conference on learning representations. 2019.\n[2] Shyam, Pranav, Wojciech Ja\u015bkowski, and Faustino Gomez. \"Model-Based Active Exploration.\" International conference on machine learning. 2019.\n[3] Pathak, Deepak, Dhiraj Gandhi, and Abhinav Gupta. \"Self-Supervised Exploration via Disagreement.\" International conference on machine learning. 2019.\n[4] Fortunato, Meire, et al. \"Noisy Networks for Exploration.\" arXiv preprint arXiv:1706.10295 (2017).\n[5] Houthooft, Rein, et al. \"Vime: Variational information maximizing exploration.\" Advances in Neural Information Processing Systems. 2016.\"}",
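For concreteness, here is a minimal sketch of the variance-based uncertainty discussed in point 3 above: the intrinsic reward is the predictive variance across dynamics models sampled from the generator, reduced here by summing over state dimensions. The exact reduction in the paper's equation 1 may differ, and the linear "sampled models" below are purely illustrative stand-ins for networks drawn from the generator.

```python
import numpy as np

def variance_intrinsic_reward(models, state, action):
    """Disagreement-style intrinsic reward: predictive variance across sampled
    dynamics models, summed over state dimensions."""
    preds = np.stack([f(state, action) for f in models])  # (n_models, state_dim)
    return float(preds.var(axis=0).sum())

# Toy usage with hypothetical linear dynamics models "sampled" from a generator.
rng = np.random.default_rng(0)
state_dim, action_dim, n_models = 3, 2, 8

def sample_dynamics_model():
    # Each draw gets its own (A, B), mimicking one network sampled from the generator.
    A = rng.normal(scale=0.5, size=(state_dim, state_dim))
    B = rng.normal(scale=0.5, size=(state_dim, action_dim))
    return lambda s, a: A @ s + B @ a

models = [sample_dynamics_model() for _ in range(n_models)]
s, a = np.ones(state_dim), np.ones(action_dim)
print(variance_intrinsic_reward(models, s, a))  # larger where the models disagree
```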
"{\"title\": \"Thank you for the comments\", \"comment\": \"Thank you for taking the time to review our paper. We clarify some points below, and we will add further discussion to the paper.\", \"q\": \"There is no comparison to the stochastic Atari environments used in [2].\\n\\nWe think that the experiments proposed in [1] are sensible (chain and ant) with regard to assessing efficient exploration i.e. in less than hundreds of millions of steps, and so we continued this line of work. Because we studied efficient exploration, we avoided large scale experiments like Atari, in favor of simpler, difficult tasks like navigation and manipulation. In contrast to Atari environments, performance on our tasks adds to intuitions about the agent\\u2019s behavior with our model. \\n\\n[1] Shyam, Pranav, Wojciech Ja\\u015bkowski, and Faustino Gomez. \\\"Model-based active exploration.\\\" International conference on machine learning. 2019.\\n[2] Pathak, Deepak, Dhiraj Gandhi, and Abhinav Gupta. \\\"Self-Supervised Exploration via Disagreement.\\\" International conference on machine learning. 2019.\"}",
"{\"title\": \"Thank you for the comments\", \"comment\": \"Thank you for taking the time to review our paper. We clarify some points below, and we will add further discussion to the paper.\", \"q\": \"What are the benefits of using the Amortized SVGD framework?\\n\\nAs we state in section 1, using Amortized SVGD allows us to learn a more flexible approximate posterior than would be possible with (stochastic) VI. Particle-based variational inference allows us to avoid assuming a certain parametric form of the posterior (so that the KL divergence is known analytically) or performing MCMC. With SVGD we can compute the true gradient of the KL divergence between our samples and the posterior, instead of just optimizing a lower bound (ELBO) [6]. Further, amortizing SVGD by using a generator increases flexibility since we are not tied to using a fixed number of particles, but can sample any number of models directly.\\n\\n[1] Burda, Yuri, et al. \\\"Large-Scale Study of Curiosity-Driven Learning\\\". International conference on learning representations. 2019.\\n[2] Eysenbach, Benjamin, et al. \\\"Diversity is all you need: Learning skills without a reward function.\\\" International conference on learning representations. 2019.\\n[3] Gregor, Karol, Danilo Jimenez Rezende, and Daan Wierstra. \\\"Variational intrinsic control.\\\" International conference on learning representations. 2017.\\n[4] Pathak, Deepak, Dhiraj Gandhi, and Abhinav Gupta. \\\"Self-Supervised Exploration via Disagreement.\\\" International conference on machine learning. 2019.\\n[5] Shyam, Pranav, Wojciech Ja\\u015bkowski, and Faustino Gomez. \\\"Model-based active exploration.\\\" International conference on machine learning. 2019.\\n[6] Liu, Qiang, and Dilin Wang. \\\"Stein variational gradient descent: A general purpose bayesian inference algorithm.\\\" Advances in neural information processing systems. 2016.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThis paper introduces a new intrinsic reward for aiding exploration. This one\\nis based on learning a distribution on parameters for a neural network which\\nrepresents the dynamic function. The variance the predictions from this\\ndynamic function serves as the intrinsic reward. Results are compared against\\nseveral current state-of-the-art approaches.\", \"feedback\": \"Using uncertainty as an instrinc reward to guide exploration is a\\nvery active area of research and it would have been more helpful\\nto say how this work differs from Burda et al, Eysenbach et al,\\nGregor et al. 1. The underlying algorithms are all very similar\\nand differ in only small and subtle ways. The main difference\\nwith this work and Pathak et al. seems to be that the variance is\\nall coming from one particular conditional distribution rather than\\nan ensemble of models, but in Pathak et al it is also a distribution\\nover models.\\n\\nAmortized SVGD is used instead of regular SVI in this work, but\\nit is never articulated why to problem benefits from using that\\nframework. This paper would greatly benefit from some explanation.\\nIt is mentioned as a novel aspect of the work, but never really\\njustified at all.\\n\\nThe experimental results and convincing and do show a substantial\\nimprovements over similar approaches in domains in ant maze\\nnavigation and robot hand.\\n\\n[1] Gregor, Karol, Danilo Jimenez Rezende, and Daan\\nWierstra. \\\"Variational intrinsic control.\\\" arXiv preprint\", \"arxiv\": \"1611.07507 (2016).\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Update: I thank the authors for their rebuttal. Having read the other reviews I still stand by my assessment and agree with the other reviewers that the empirical validation should stronger, adding more baselines and conducting experiments on the same environments as your main competitors for fair comparison.\\n\\nSummary\\nThis paper proposes a Bayesian approach for modeling the agent's uncertainty about forward predictions in the environment, that is, given a state and action how likely the next state is. The uncertainty is then used to define an intrinsic reward function. The paper has sufficient technical depth. However, I am disappointed by the comparison to prior work.\\n\\nStrengths\\nInteresting non-parametric approach to estimating uncertainty in the agent's forward dynamics model\\nClearly written paper with sufficient technical depth\\nWell structured discussion of related work\\n\\nWeaknesses\\nMy main problem with the paper is a missing fair comparison to prior work. The two main contenders are MAX by Shyam et al 2019 and the Disagreement approach by Pathak et al 2019. Comparing the results on AntMaze presented here with those in Shyam I see that MAX only gets to high 80s in terms of maze exploration rate, while in their paper it is in the 90s. In comparison to Pathak, as far as I understand, a different robotic manipulation task was used (HandManipulateBlock here in comparison to the grasp and push objects on a table task by Pathak). Moreover, there are no experiments comparing the proposed approach to the stochastic Atari environments investigated in Pathak et al 2019. I understand this would require dealing with discrete action spaces, but I don't see why this would be infeasible. Overall, I believe this makes it hard to draw conclusions with respect to Shyam et al and Pathak et al and adding these missing comparisons would strengthen the paper substantially in my view.\\n\\nMinor\", \"p3\": \"\\\"Let f denote the dynamics model\\\" \\u2013 I believe it would be good to mention the signature of this function (it can be inferred from Figure 1, but it would be nice to make this explicit).\\nQuestions to Authors\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Update: I thank the authors for their response. I believe the paper has been improved by the additional baselines, number of seeds, clarifications to related work and qualitative analysis of the results. I have increased my score to 3 since I still have some concerns. I strongly believe the baselines should be tuned just as much as the proposed approach on the tasks used for evaluation. The baselines were not evaluated on the same environments in the original papers, so there is not much reason to believe those parameters are optimal for other tasks. Moreover, the current draft still lacks comparisons against stronger exploration methods such as Pseudo-Counts (Ostrovski et al. 2017, Bellemare et al. 2016) or Random Network Distillation (Burda et al 2018).\", \"summary\": \"This paper proposes the use of a generative model to estimate a Bayesian uncertainty of the agent\\u2019s belief of the environment dynamics. They use draws from the generative model to approximate the posterior of the transition dynamics function. They use the uncertainty in the output of the dynamics model as intrinsic reward.\", \"main_comments\": \"I vote for rejecting this paper because I believe the experimental section has some design flaws, the choice of tasks used for evaluation is questionable, relevant baselines are missing, the intrinsic reward formulation requires more motivation, and overall the empirical results are not convincing (at least not for the scope that the paper sets out for in the introduction). \\n\\nWhile the authors motivate the use of the proposed intrinsic reward for learning to solve tasks in sparse reward environments, the experiments do not include Moreover, some of the tasks used for evaluation do not have very sparse reward (e.g. acrobot but potentially others too). Without understanding how this intrinsic reward helps to solve certain tasks, it is difficult to assess its effectiveness. While state coverage is important, the end goal is solving tasks and it would be useful to understand how this intrinsic reward affects learning when extrinsic reward is also used. Some types of intrinsic motivation can actually hurt performance when used in combination with extrinsic reward on certain tasks. \\n\\nI am not sure why the authors chose to not compare against VIME (https://arxiv.org/pdf/1605.09674.pdf) and NoisyNetworks (https://arxiv.org/pdf/1706.10295.pdf) which are quite powerful exploration methods and also quite strongly related to their our method (e.g. more so than ICM).\\n\\nOther Questions / Comments:\\n\\n1. You mention that you use the same hyperparameters for all models. How did you select the HPs to be used? I am concerned this leads to an unfair comparison given that different models may work better for different sets of HPs. A better approach would be to do HP searches for each model and select the best set for each.\\n2. Using only 3 seeds does not seem to be enough for robust conclusions. Some of your results are rather close \\n3. How did you derive equation (1)? Please provide more explanations, at least in the appendix.\\n4. Why is Figure 3 missing the other baselines: ICM & Disagreement? Please include for completeness\\n5. 
Please include the variance across the seeds in Figure 4 (b). \n6. How is the percentage of the explored maze computed for Figure 5? Is that across the entire training or within one episode? What is the learned behavior of the agents? I believe a heatmap with state visitation would be useful to better understand how the learned behaviors differ within an episode. E.g., within an episode, do the agents learn to go as far as possible from the initial location and then explore that \u201cless explored\u201d area or do they quasi-uniformly visit the states they\u2019ve already seen during previous episodes? \n7. In Figure 6 (b), there doesn\u2019t seem to be a significant difference between your model and the MAX one. What happens if you train them for longer? Does MAX achieve the same or even better exploration performance than your model? I\u2019m concerned this small difference may be due to poor tuning of HPs for the baselines rather than algorithmic differences.\n8. For the robotic hand experiments, can you provide some intuition about what the number of explored rotations means and how it relates to a good policy? What is the number of rotations needed to solve certain tasks? What kinds of rotations do they explore -- are some of them more useful than others for manipulating certain objects? This would add context and help readers understand what those numbers mean in practice in terms of behavior and relevance to learning good / optimal policies.\"}"
]
} |
r1l1myStwr | Continuous Meta-Learning without Tasks | [
"James Harrison",
"Apoorva Sharma",
"Chelsea Finn",
"Marco Pavone"
] | Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks. However, the meta-learning literature thus far has focused on the task segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task. In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task. We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme. The framework allows both training and testing directly on time series data without segmenting it into discrete tasks. We demonstrate the utility of this approach on a nonlinear meta-regression benchmark as well as two meta-image-classification benchmarks. | [
"Meta-learning",
"Continual learning",
"changepoint detection",
"Bayesian learning"
] | Reject | https://openreview.net/pdf?id=r1l1myStwr | https://openreview.net/forum?id=r1l1myStwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BZuJlWXyND",
"HkxCfJoOsS",
"HkxnkyodjS",
"S1xXa0quiB",
"B1lUKCcOiS",
"ByeuiSECKB",
"HyxY4QkRYr",
"B1eczZEItS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727578,
1573592853769,
1573592803713,
1573592762914,
1573592702311,
1571861920444,
1571840817092,
1571336465701
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1602/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1602/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1602/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1602/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1602/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1602/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1602/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"In this paper the authors view meta-learning under a general, less studied viewpoint, which does not make the typical assumption that task segmentation is provided. In this context, change-point analysis is used as a tool to complement meta-learning in this expanded domain.\\n \\nThe expansion of meta-learning in this more general and often more practical context is significant and the paper is generally well written. However, considering this particular (non)segmentation setting is not an entirely novel idea; for example the reviewers have already pointed out [1] (which the authors agreed to discuss), but also [2] is another relevant work. The authors are highly encouraged to incorporate results, or at least a discussion, with respect to at least [2]. It seems likely that inferring boundaries could be more powerful, but it is important to better motivate this for a final paper. \\n \\nMoreover, the paper could be strengthened by significantly expanding the discussion about practical usefulness of the approach. R3 provides a suggestion towards this direction, that is, to explore the performance in a situation where task segmentation is truly unavailable. \\n \\n[1] Rahaf et el. \\\"Task-Free Continual Learning\\\". \\n[2] Riemer et al. \\\"Learning to learn without forgetting by maximizing transfer and minimizing interference\\\".\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to All Reviewers\", \"comment\": \"We thank all the reviewers for their helpful responses. We have chosen to place a handful of points that are relevant to all reviewers in a general comment.\\n\\n\\u2014 Increased performance and better uncertainty quantification in experiments\\nPreviously, results were reported for a single model, tested on 200 episodes of length 400. Statistics were reported over those 200 trials. We believe that this did not capture possible variation between models, and so we now train models with multiple random seeds (3 for sinusoid, 5 for Rainbow MNIST and miniImageNet) and report mean performance and 95% confidence intervals across mean performance within each model in Figure 2. \\n\\nWe optimized the hyperparameters and training of each experiment. As a result, performance for all models has improved substantially, especially for high hazard rates. The training details are provided in the appendix. The effect of MOCA training (versus oracle) and testing is also now more clearly visualized (Figure 3), and this is discussed in the paper. \\n\\n\\u2014 Computational complexity of the BOCPD changepoint estimation approach\\nWe have empirically characterized the computational complexity of MOCA, and a plot of run time per iteration at test versus iteration is available in the appendix. We find that after 25000 iterations, the time per iteration is still only approximately 7 milliseconds. Moreover, there are a wide variety of hypothesis pruning approaches (see e.g. [1,2] in addition to the original BOCPD paper [3]). \\n\\nWhile hypothesis pruning is directly applicable to MOCA at test time, using these methods during training without breaking differentiability remains an open problem, and we defer this to future work. We present a hyperparameter evaluation of the performance of models versus the length of the training sequence (referred to as T in the paper) and found that there were diminishing marginal returns around the T=1/(hazard rate) mark (Figure 4). \\n\\n[1] Saat\\u00e7i, Yunus, Ryan D. Turner, and Carl Edward Rasmussen. \\\"Gaussian Process Change Point Models.\\\" ICML. 2010.\\n\\n[2] Wilson, Robert C., Matthew R. Nassar, and Joshua I. Gold. \\\"Bayesian online learning of the hazard rate in change-point problems.\\\" Neural computation. 2010.\\n\\n[3] Adams, Ryan Prescott, and David JC MacKay. \\\"Bayesian online changepoint detection.\\\" arXiv:0710.3742. 2007.\"}",
"{\"title\": \"Response to Review 2\", \"comment\": [\"Thank you for your helpful comments. We have addressed your comments as follows:\", \"We have clarified the discussion of the run length prior in Section 2, and how this relates to the hazard rate\", \"We have moved to using y in the BOCPD background\", \"We have simplified equation 4\", \"We now refer to the meta-learning model as the UPM, to better match Adams and Mackay (2007)\", \"We have changed the writing in the discussion of PCOC in Section 5 to be more clear (in particular, changing the q distribution to a Cat(...) distribution)\", \"All of these changes are now reflected in the paper.\", \"Finally, we have included a discussion on the computational complexity of BOCPD in our comment above.\"]}",
"{\"title\": \"Response to Review 1\", \"comment\": \"Thank you for the helpful and constructive feedback. Our revised paper addresses your comments, and in particular:\\n\\n\\u2014 Hazard rate \\nYes, the hazard rate in the experiments is that same as in equation 1. We have clarified this in Section 2. \\n\\n\\u2014 Other changepoint models\\nWhile other models could conceivably be used instead of the BOCPD algorithm, this approach has numerous features that we believe make it the ideal candidate for use within MOCA:\\n\\nFirst, BOCPD is an online changepoint detection algorithm, which we require at test-time as the system must reason about possible changepoints as it observes a stream of data. Following the meta-learning paradigm of training through the test-time algorithm, we use BOCPD at train time as well to avoid possible performance degradation from moving to a different changepoint detection scheme at test time.\\n\\nBecause it is Bayesian, and maintains a belief over run length as opposed to a point estimate of changepoints, BOCPD is differentiable. This differentiability is essential for training the underlying meta-learning model, as gradients on the observed losses must be backpropagated through the changepoint detection scheme to the underlying meta-learning model. Additionally, many existing Bayesian changepoint detection algorithms (e.g. [1], [2]) do not directly provide closed-form posterior densities, and instead only offer samples from the associated posterior. BOCPD\\u2019s closed-form posterior is essential for the MOCA framework, as it provides a training objective for minimizing negative log likelihood. \\n\\n\\u2014 Efficiency of online performance\\nWe have evaluated this, and the results are available in the appendix. We have found that after 25000 iterations, the time per episode is still only approximately 7 milliseconds. For true online operation (over potentially millions of data points), there are many possible approaches to hypothesis pruning that reduce computational complexity to constant time. These are discussed in our reply to all reviewers, above. \\n\\n\\u2014 Sensitivity to T\\nwe have evaluated the sensitivity of the model performance to the length of the train sequence (T), available in Figure 4. We found that for T greater than approximately 100 (which is 1/hazard for the experiment), there is little to no additional training performance improvement.\\n\\nSimilarly to training other sequence models, training on long sequences is potentially problematic for several reasons. First, long sequences require large amount of memory utilization, especially when computing gradients during training. Second, backpropagating through long sequences potentially results in exploding or vanishing gradients. For these reasons combined with our empirical evaluation, we believe training on relatively short sequences is justified.\\n\\n\\u2014 Citation of Aljundi et al., 2019\\nThank you for this very helpful reference. We have added a discussion of this excellent work in the related work section. Their paper, as with ours, aims to remove the notion of task segmentation from continual learning. However, it does not focus on meta-learning models as the underlying model. In our work, learning occurs on two time scales: we slowly learn prior parameters, which aid in rapid fine-tuning within each task. We do not, in this work, directly address the question of continuous learning of the prior (and features). Instead, we focus on training on a task-unsegmented time series. 
Thus, while the data generation setting is similar, the learning objective is distinct. Indeed, a promising avenue of future work is combining these problem statements and approaches.\\n\\n[1] Barry, Daniel, and John A. Hartigan. \\\"A Bayesian analysis for change point problems.\\\" Journal of the American Statistical Association. 1993. \\n\\n[2] Carlin, Bradley P., Alan E. Gelfand, and Adrian FM Smith. \\\"Hierarchical Bayesian analysis of changepoint problems.\\\" Journal of the Royal Statistical Society: Series C (Applied Statistics). 1992.\"}",
"{\"title\": \"Response to Review 3\", \"comment\": \"We thank the reviewer for the comments. First, we have discussed the growing computational complexity of the changepoint estimation in our comment to all the reviewers, above.\", \"other_responses_to_your_comments\": \"\\u2014 Task segmentation for experiments \\nWe have chosen experiments that are standard benchmarks in meta-learning and few shot learning, to provide the best possible comparison with that literature. Moreover, we have deliberately chosen experiments for which task segmentation is available so that we may compare to our \\u201coracle\\u201d model, and explicitly quantify the performance degradation when this information is withheld from the learner. Critically, we find that MOCA, without task-segmentation information, performs nearly just as well as an oracle model trained with task-segmentation, demonstrating the practicality of task-free meta-learning. \\n\\n\\u2014 Figure 1 and Figure 3 interpretability\\nThank you for noting this. In Figure 1, the red points correspond to data points drawn from the task that the time series is currently in (they correspond to the visualized sinusoid). The green points correspond to old tasks. The model does not have access to the information of which points belong to the current task, and which belong to a previous task. Thus, in the best case, the model would ignore green points and compute the best possible posterior based on the red points. We have modified the caption of each figure to clarify this.\\n\\nWe have clarified the discussion of the hazard throughout the paper. \\n\\n\\u2014 Meta-learning model used\\nIn our regression experiments, we use ALPaCA [1], a Bayesian, optimization-based meta-learning approach that is conceptually similar to MAML. In classification, we use PCOC, a Bayesian meta-learning approach that draws similarities to prototypical networks, which is also presented in the paper. Both are described in Section 5, and a more detailed discussion of PCOC is provided in the appendix. \\n\\n\\u2014 High hazard rate performance\\nAs the hazard rate increases, the number of labeled examples available within each class (for a given task) becomes smaller. For example, for a hazard rate of 0.2, in Rainbow MNIST, each class will in expectation have fewer than one example. Thus, as the hazard rate increases to 1, the data approaches being sampled iid from the union of all tasks, which exactly corresponds to the \\u201ctrain on everything\\u201d setting. In particular, the \\u201ctrain on everything\\u201d model treats the time series data as if they were iid draws from some training dataset, without any meta-learning or fine-tuning. Thus, we expect the performance of the MOCA and sliding windows models to at best be equivalent to the \\u201ctrain on everything\\u201d baseline as the hazard rate approaches 1. \\n\\nGenerally, we note that we do not consider the common \\u201ck-shot, n-way\\u201d problem setting, but instead consider a constant stream of individual data points. In our setting, there is no concept of a meta-test training set; there is no conditioning data at test time. Instead, a model must use the previously observed data (with no indication of the task to which it belongs) to meta-learn. \\n\\n\\u2014 MiniImageNet super-classes\\nThis is discussed in detail in appendix B. Briefly: our problem setting is a streaming classification task, in which individual images are presented sequentially, for a full time series. 
As such, two possible scenarios arise: either we know all possible image labels, or we have to infer the set of possible image labels online. In this work, we consider the first case. Thus, we assume the labels are known before test time. Stepping back, we can see that we have knowledge of the space of possible labels (and training examples) but do not have test-time context data. Thus, we wish to learn a prior for each super-class that can be adjusted at test time to sit a specific task. \\n\\nThis modification to the standard few shot classification problem setting required our modified version of miniImageNet. We wish to emphasize that this is not the only possible problem statement: there are many other versions that are plausible. \\n\\n[1] James Harrison, Apoorva Sharma, and Marco Pavone. Meta-learning priors for efficient online Bayesian regression. Workshop on the Algorithmic Foundations of Robotics (WAFR), 2018.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a solution for using meta-learning methods without the need of determining the task segmentation a priori. Authors identify that the task segmented setting is very similar to the problem of change-point detection (CPD) and particularly, they connect the generative model of the meta-learning approach to the Bayesian recursive method of Adams and MacKay 2007. The presentation of the meta-learning problem is well done and I understand its importance within the difficulties for modelling new unobserved tasks. The notation and description of the generative model included in the problem statement section is clearly understandable for any reader, this is a very positive point. They demonstrate a deep comprehension of the BOCPD model of Adams, and its extension for the meta-learning approach is original to me. Lastly, the presentation of the MOCA meta-learning algorithm is useful for reproducibility and I see it completely applicable to other scenarios. There is a noticeable effort of describing the solution for both regression and classification problems and the empirical results conclude a positive performance of MOCA.\\n\\nOverall, I consider that the paper is well-written with thorough explanations. Of course, there are some details that could be improved (I will comment this later). If done, I would be willing to increase my score. The contribution of the paper is significant to meta-learning models and I think it will help to spread the Adams\\u2019 model to other type of problems.\\n\\nThere is an important contribution in the paper that I have to mention and it is also relevant for the future application of the Adams model. If one reads the original BOCPD paper, in particular Eq (1), where the predictive posterior p(x_t+1|x_{1:t}) is defined from a marginalisation over the run length values, it is noticeable that this equation is not used in the final recursion of the CPD method. This is because the posterior p(r_t|x_{1:t}) is sufficient for determining if there is a CP on the t time-step or not. So the predictive posterior is never used in practice (in the original paper). When I first read Adams 2007, this detail was clear to me. Surprisingly, I find that the authors have find a practical use of this equation (Eq. (7)) and it is in the main core of the meta-learning algorithm, this is fantastic.\", \"the_details_i_think_should_be_improved_are\": [\"The conditional run-length prior p(r_t|r_{t-1}) barely appears and without any detailed description is not obvious for non-familiar readers with the Bayesian CPD approach.\", \"The first time I read the manuscript, the use of z_t in the BOCPD presentation made me feel a bit lost. Why not use x or y? At least specify that is a toy variable for the explanation. Later on, authors change again to the x,y notation.\", \"Equations 3 and 4 are too similar, this looks a bit repetitive. Why not reusing one of them or say the change from one term to another?\", \"If reading the literature of the BOCPD and posterior extensions, the likelihood model of the detector is often referred as the underlying predictive model (UPM). 
Using a similar term would help for orienting familiar readers into the solution.\", \"As authors should have noted on their experiments, as one makes the run-length higher, the number of parameters \\\\eta[r_t] increases. A clear state on how this is solved would help.\", \"In the PCOC subsection, I find the definition of y \\\\sim q(y) a bit weird. Using Cat(_) or Multinomial likelihood notation would be a bit better.\"]}",
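The BOCPD recursion this review highlights is compact enough to sketch. Below is a minimal, hypothetical illustration (ours, not the authors' code) of Eq. (1) of Adams and MacKay (2007): the predictive density marginalizes per-run-length predictions over the run-length posterior, with `upm_logpdf` standing in for the underlying predictive model (the meta-learning model in MOCA).

```python
import numpy as np

def predictive_log_prob(x_next, run_length_log_post, upm_logpdf):
    """log p(x_{t+1} | x_{1:t}) = logsumexp_r [ log p(r_t | x_{1:t}) + log p(x_{t+1} | r_t) ]."""
    per_hypothesis = np.array([upm_logpdf(x_next, r)
                               for r in range(len(run_length_log_post))])
    joint = run_length_log_post + per_hypothesis   # log of posterior-weighted terms
    m = joint.max()                                # stable log-sum-exp
    return m + np.log(np.exp(joint - m).sum())

# Toy usage: a Gaussian UPM whose mean depends on the run length (hypothetical).
log_post = np.log(np.array([0.6, 0.3, 0.1]))       # p(r_t | x_{1:t}) over r in {0, 1, 2}
upm = lambda x, r: -0.5 * (x - r) ** 2 - 0.5 * np.log(2 * np.pi)
print(predictive_log_prob(0.5, log_post, upm))
```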
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper pushes meta-learning towards task-unsegmented settings. Different from the traditional offline meta-learning phase with explicit task segmentation, MOCA adopts a Bayesian changepoint estimation scheme for task change detection. The setting is novel and deserves research in-depth, and the idea is easy to understand. The proposed method can learn the meta-learning model and changepoint detection model simultaneously. Besides, the MOCA framework is not designed specifically for one algorithm and can be easily combined with other meta-learning models.\\n\\nHowever, I got some questions about this paper:\", \"q1\": \"Is the \\u2018Hazard\\u2019 in the experiment the same to $\\\\lambda$ in eq.1? I think notations should be consistent if they are the same.\", \"q2\": \"Can other changepoint models be compared in the experiment? I found many in the related work.\", \"q3\": \"The running times in Figure 2 should be reported to demonstrate the efficiency of MOCA, since the method is proposing online streams, efficiency should be promised for quickly processing.\", \"q4\": \"What is \\u2018T\\u2019 in Algorithm 1? Are experiment results sensitive to it? Experiments about this should be conducted and reported.\\n\\nLastly, I think [1] should be cited as related work about continual learning for proposing task-free continual learning, which is very similar to the setting in this paper.\\n[1] Rahaf Aljundi, Klaas Kelchtermans, Tinne Tuytelaars. Task-Free Continual Learning. CVPR 2019\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper considers the meta-learning in the task un-segmented setting and apply bayesian online change point detection with meta-learning. The task un-segmented is claimed to exist in real applications and the paper explains the idea in a clear way.\", \"my_major_concerns_and_questions_are_the_following\": \"1) In Eq(4), it requires a computation of normalization constant that needs to sum over support of r_t. The support gets larger and larger with the length of sequence increases over time. Does this method scale well with long sequence?\\n\\n2) The part I feel most confused are the experiments. The paper tries to solve a problem in the task un-segmented setting, but why there is no such setting in experiments that the meta-training set cannot be segmented to different tasks?\\n\\n3) The Figure 1 is hard to read. What do the red and green points in the left panel mean? The hazard rate is not defined before the experiment which makes Figure 3 also hard to understand. \\n\\n4) What is the meta-learning algorithm used in the experiements, MAML, NP or RNN-based methods? It claims MOCA can be used with any meta-learning algorithms and how does it show in experiments?\\n\\n5) In Rainbow MNIST, why in high hazard rates, all models perform comparable to \\u201ctrain on everything\\u201d? Does it mean meta-learning does not work in this configuration and why? Does \\u201ctrain on everything\\u201d includes fine-tuning on the meta-test training set?\\n\\n6) In Mini-IMAGENET, what does it mean by \\\"we associate each class with a semantic label that is consistent between tasks\\u201d? Why it needs to form \\u201csuper-class\\u201d? \\n\\nI think the proposed method can be useful in the task un-segmented setting. But before the above-mention questions are solved and the experimental section gets more clear, I will give a conservative rating. \\n\\n\\n######################\", \"post_rebuttal_review\": \"Thanks for the authors' feedback and it resolves some confusion parts in the paper. Though the author claims the reason of experiments setting, I believe it is necessary to have one experiment where task-segmentation is impossible and compare with standard sequential learning methods in that setting. Otherwise it remains a question whether the proposed method works only when the problem basically has task-segmentation. And if the problem has task-segmentation, why not using traditional meta-learning methods, as shown as oracle methods with better performance in paper? \\nSince the major contribution of the paper is providing a meta-learning method to work in the problems where task-segmentation is unavailable, not having an experiment in this setting (withholding segmentation information does not exactly fall in this setting because it adds a condition that the task-segmentation information is originally accessible) is a major reason for my current evaluation. I recognize the careful design of the MOCA and agree with some positive points raised by other reviewers. I would not be bothered if the paper is accepted while I tend to maintain the current rating because of the above-mentioned concern.\"}"
]
} |
rJlk71rYvH | Counterfactual Regularization for Model-Based Reinforcement Learning | [
"Lawrence Neal",
"Li Fuxin",
"Xiaoli Fern"
] | In sequential tasks, planning-based agents have a number of advantages over model-free agents, including sample efficiency and interpretability. Recurrent action-conditional latent dynamics models trained from pixel-level observations have been shown to predict future observations conditioned on agent actions accurately enough for planning in some pixel-based control tasks. Typically, models of this type are trained to reconstruct sequences of ground-truth observations, given ground-truth actions. However, an action-conditional model can take input actions and states other than the ground truth, to generate predictions of unobserved counterfactual states. Because counterfactual state predictions are generated by differentiable networks, relationships among counterfactual states can be included in a training objective. We explore the possibilities of counterfactual regularization terms applicable during training of action-conditional sequence models. We evaluate their effect on pixel-level prediction accuracy and model-based agent performance, and we show that counterfactual regularization improves the performance of model-based agents in test-time environments that differ from training. | [
"Counterfactual",
"Model-Based Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=rJlk71rYvH | https://openreview.net/forum?id=rJlk71rYvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Aqpw4UFnyx",
"ryeNrBz3jB",
"Hylr1WGlcH",
"rylvNBMCFS",
"rkgu8hhqYr"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727546,
1573819708084,
1571983580704,
1571853615400,
1571634255683
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1601/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1601/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1601/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1601/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"I agree with the reviewers that this paper has serious limitations in the experimental evaluation.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General Response\", \"comment\": \"We would like to thank the reviewers for taking the time to review the paper and for their insightful feedback.\\n\\nRegarding environment dependency, we agree with the view that our proposed regularizations are environment-dependent; state and action spaces as well as an agent's ability to control the environment may vary.\\nWe do find some sensitivity to training hyperparameters, as well as planning horizon, and consistent with the reviewers' intuitions we would expect some level of task-specific hyperparameter tuning to be required to achieve optimal results for most tasks.\\n\\nRegarding the case of action-control regularization in an environment where an agent's action may not always have an impact on the observed state, we would note that the regularization enforces a difference between learned latent states, not necessarily between observations.\\nIt is possible in principle for a transition model to learn to produce different latent states given different agent actions, even if the latent states produce the same observations.\\n\\nRegarding the relationship between the planning algorithm used in our experiments and PlaNet, MPC in table 2 is not exactly equivalent to PlaNet. Although our model is similar in structure to Hafner et al.[1], their planning approach is based on the Cross Entropy Method [2] while we use a simpler deterministic search (select the action leading to maximum predicted reward over a fixed horizon).\\n\\nRegarding the details of the generalization test environment, the maximum score in every version of the task is 8.0, and this has been clarified in section 4.\\n\\nRegarding some of the related work mentioned by reviewer #3, we agree on the relevance of the literature on auxiliary tasks and have updated the related work section accordingly.\\n\\n[1] Hafner, Danijar, et al. \\\"Learning latent dynamics for planning from pixels.\\\" arXiv preprint arXiv:1811.04551 (2018).\\n[2] Chua, Kurtland, et al. \\\"Deep reinforcement learning in a handful of trials using probabilistic dynamics models.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a method called \\\"counterfactual regularization\\\" whereby the dynamics/transition model is encourage to not have degeneracies where the actions don't influence the state transitions. Concretely, this is done by, for every state, computing the maximum deviation of Transition model under a different action than the action taken in the history, and encourage that deviation to be as large as possible. If that maximum deviation is 0, then all actions lead to the same next state. Empirical results show reasonable improvements in the StarIntruders task.\\n\\nMy biggest complaint (and the only one barring me from supporting acceptance) is that I don't see the body of results as scientifically solid. Example of additional results that I would find much more convincing are:\\n\\n-- Experiments on more than one environment. Currently, this paper should be judged solely for its empirical improvements, because there is little formal analysis or rigorous derivations. But it's hard to judge that based on only one experiment.\\n\\n-- Deeper investigation into the effects of counterfactual regularization, including its interaction with learning disentangled representations. Right now, there is no investigation, just a numerical score of reward attained. This does not lead to much scientific insight.\\n\\n-- Exploration of the limitations of the approach. Personally, I think this approach is reasonable for video games with a few discrete actions, but quickly runs into problems for more complex action spaces.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper considers regularization based on \\\"counterfactual\\\" trajectories.\\nNamely, it suggests two losses, action-control and disentanglement regularization.\\nIt experimentally evaluates the benefits of such regularization in the StarIntruders environment.\\n\\nThe paper is well written and explained.\", \"issues\": \"1) Authors evaluated the two suggested regularizations in separate. \\nI would like to also see numbers from a combination of these.\\n\\n2) I think the related work is missing a large line of work on \\\"auxiliary tasks\\\".\\nIt seems to me that this paper would exactly fit within that scope?\\n\\n3) My main issue is the evaluation.\\nThe evaluation is done on a in-house game and compares to very few methods.\\nFor a paper that has very little theory and thus most of the value is in the empirical evaluation, I think that is a problem.\\nIf authors opted for example for Space Invaders (they do say it is similar) or simply more games, one would have many more existing numbers to compare against.\", \"minor_issues\": \"1) The first regularization - action control regularization is motivated by the idea that there is always an action that changes the state. While true for most environments, this does not hold in general.\", \"summary\": \"Overall, this paper has potential but I don not believe is good enough - I suggest a reject.\\nThe main problem is that the idea is relatively simple, there is no theory and thus the crucial piece of the paper has to be the empirical evaluation.\\nAnd the evaluation only compares to a single method with no regularization, no auxiliary tasks and reports only experiments on a single game.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents regularization techniques for model based reinforcement learning which attempt to build counterfactual reasoning into the model. In particular, they present auxiliary loss terms which can be used in \\\"what if\\\" scenarios where the actual state is unknown. Given certain assumptions, they show that this added regularization can improve generalization to unseen problem settings. Specifically they propose two forms of regularization: (1) enforcing that for different actions the predicted next state should be different (action-control) and (2) enforcing that when certain parts of the low dimensional state are perturbed, over a model rollout the perturbation should only affect the perturbed parts of the state, essentially encouraging the latent space features to be independent (disentanglement).\\n\\nOverall the idea is well motivated - incorporating counterfactual reasoning into model based RL has potential to to improve generalization. Also, while the assumptions needed for the regularization to be correct are not always true, they do seem to hold in many cases. Lastly, the results do seem to indicate that generalization is slightly improved when using the proposed forms of regularization.\", \"my_criticisms_are\": \"(1) As mentioned in the paper Action-Control assumes that at every single timestep the agent has potential to change the state. However there may be settings where the agent can always change state, but only a small component of the state. In these cases the states should be quite similar. For example a robot only moving a single object when the state consists of many objects. Also as mentioned in the paper Disentanglement will not work in stochastic environments. One concern I have is that since different environments can violate the assumptions to varying degrees, it seems like actually using the regularization and picking the correct hyperparameter to weight it will be very challenging. \\n\\n(2) The current results are only demonstrated in a single, custom environment. Additionally performance is shown on only 2 test tasks, and in all cases in Table 2 it is unclear how to interpret the reward. Does this performance constitute completing the task? What is the best possible cumulative reward in this case? The performance improvement seems small, but it is difficult to judge without knowing the details of the task.\\n\\nI think the paper would be significantly improved by (1) adding experiments in more environments, especially standard model based RL environments where the performance of many existing methods is known and (2) adding comparisons to other forms of model regularization, for example using an ensemble of models. My current rating is Weak Accept.\", \"some_other_questions\": [\"In Table 2 does MPC amount to PlaNet?\", \"How sensitive are the current numbers to planning parameters (horizon, num samples)?\", \"Can you provide error bars for the numbers in the tables?\", \"______________________________________________\", \"After author responses and closer examination of the paper I have some additional concerns about experimental details. Changing my score from 'Weak Accept' to 'Weak Reject'\"]}"
]
} |
r1xCMyBtPS | Multilingual Alignment of Contextual Word Representations | [
"Steven Cao",
"Nikita Kitaev",
"Dan Klein"
] | We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek. Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer. Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and word pairs written in different scripts, that are corrected by the alignment procedure. These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models. | [
"multilingual",
"natural language processing",
"embedding alignment",
"BERT",
"word embeddings",
"transfer"
] | Accept (Poster) | https://openreview.net/pdf?id=r1xCMyBtPS | https://openreview.net/forum?id=r1xCMyBtPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ASh91iFZ1",
"rylhOp_sjH",
"ryxX7adssB",
"S1gW5lfuoH",
"rJe8h1ePjH",
"ByghNKkSiB",
"rye0GYyHsH",
"SklXRuyrjr",
"S1x3sOyHiS",
"Hkeb5VNZ5B",
"BJgZ3a56KH",
"BJlv3F9nYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727517,
1573780852073,
1573780762570,
1573556361029,
1573482413877,
1573349683560,
1573349653783,
1573349578913,
1573349540053,
1572058248759,
1571823016918,
1571756463305
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1600/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1600/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1600/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1600/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1600/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1600/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1600/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1600/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1600/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1600/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1600/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a method to improve alignments of a multilingual contextual embedding model (e.g., multilingual BERT) using parallel corpora as an anchor. The authors show the benefit of their approach in a zero-shot XNLI experiment and present a word retrieval analysis to better understand multilingual BERT.\\n\\nAll reviewers agree that this is an interesting paper with valuable contributions. The authors and reviewers have been engaged in a thorough discussion during the rebuttal period and the revised paper has addressed most of the reviewers concerns.\\n\\nI think this paper would be a good addition to ICLR so I recommend accepting this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks again for the feedback.\", \"comment\": \"Thanks again for the feedback. As an update, we have run our method on more distant languages (Chinese, Arabic, and Urdu), where we align a single BERT model on all eight languages (the five Europarl languages and the three additional ones) using 20K sentences per language. We see similar gains for the new languages, and the results are in the appendix. Interestingly, the Urdu parallel sentences come from the Quran, which is a very different domain from XNLI. Therefore, given that the Bible parallel corpus exists for 100 languages, it may be feasible to perform alignment for many languages.\\n\\n> Why do two \\\"rotation\\\" baselines show such bad results? Do you have any explanation for that?\\n\\nOur hypothesis is that rotation is not expressive enough, so it leads to better word retrieval results and German XNLI accuracy but lackluster results otherwise. Past work using rotations to align contextual word embeddings focus on dependency parsing (Schuster et al., 2019, Wang et al., 2019), which is word-level rather than sentence-level and more syntactic than semantic compared to XNLI. Therefore, perhaps there are key differences between the tasks that make rotation sufficient for one but not the other. We do think this discrepancy deserves further experimentation.\\n\\n> Also, if I'm not mistaken, it seems that performance with your method saturates after relying on >250K parallel sentence, but that is not the case for XLM. How can we leverage richer/larger parallel corpora with your method?\\n\\nWhile we do not have experiments using >250K sentences, we agree that accuracy gains seem to saturate as we increase the amount of parallel data. One axis along which we might modify the method would be to make it stricter (e.g. enforce a squared error loss on all of the layers of BERT, rather than just the last layer), or looser (e.g. encourage a shared embedding space without enforcing closeness in L2 norm, like the translation modeling objective in XLM). Our intuition is that stricter methods are more data efficient, but looser methods might be able to produce higher numbers given more data because they are less restrictive. It would be interesting to explore this hypothesis.\"}",
"{\"title\": \"Thanks again for the comments.\", \"comment\": \"Thanks again for the comments, which we address below:\\n\\n> Thanks for adding the two \\\"rotation\\\" baselines. However, it is surprising that none of them are able to consistently improve over the base mBERT model, which contradicts previous findings. I think that this deserves more discussion in the paper: either this approach does not work in the general case (which would be an important finding), or you are doing something differently that could explain these negative results.\\n\\nWe are also surprised that rotation does not produce consistent improvements in XNLI. Past work using rotations to align contextual word embeddings focus on dependency parsing (Schuster et al., 2019, Wang et al., 2019), which is word-level rather than sentence-level and more syntactic than semantic compared to XNLI. Therefore, perhaps there are key differences between the tasks that makes rotation sufficient for one but not the other. We do think this discrepancy deserves further examination but are hesitant to make strong negative claims without further experimentation.\\n\\n> Regarding Lample & Conneau (2019), you are right that asking to pretrain an XLM model from scratch would not be reasonable. However, you could have used the pretrained XLM models as your baseline, which would allow you to assess your method against both the MLM and the MLM+TLM variants. I think that using mBERT does not invalidate the paper, but I do see it as a weakness. Needless to say, achieving a new SOTA (by improving XLM) would have made the results much more convincing, showing that your improvements are orthogonal with those of previous work.\\n\\nWe agree and hope to explore the application of our method to XLM in future work. We think it may also be worthwhile to experiment with the recently released XLM-R (Conneau et al., 2019), which is similar to mBERT but uses much more data, more parameters, and more training time to achieve very high XNLI numbers.\"}",
"{\"title\": \"Thanks for the responses!\", \"comment\": \"I would like to thank the authors for a very substantial set of responses to my questions. While I'd still like to see more language pairs and additional experiments already in this work (and not as future work), I believe that the revisions and edits have made this submission stronger, and I'm fine with raising my score accordingly. After all, I believe that the community working on cross-lingual representations and xling transfer will find this work valuable, at least as a reference point.\\n\\nWhy do two \\\"rotation\\\" baselines show such bad results? Do you have any explanation for that?\\n\\nAlso, if I'm not mistaken, it seems that performance with your method saturates after relying on >250K parallel sentence, but that is not the case for XLM. How can we leverage richer/larger parallel corpora with your method?\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thanks for the answer. The authors have addressed the main concerns I raised, and the revised version of the paper looks more convincing to me.\\n\\nIf the conference had a more gradual scoring system I would have risen my score, but I am keeping it as a \\\"weak accept\\\", as I am hesitant to give it the maximum score. I think that the paper makes a valuable contribution, and I do not have any major concern after the revised version, which is the reason why I lean toward acceptance. However, I partly share the general feeling of reviewer 3 that the paper is not particularly exciting (limited novelty and not fully convincing results), although I am more positive about it and I still think (more strongly now) that it would make an interesting contribution to the conference.\\n\\nTwo comments on the authors' rebuttal and the revised version:\\n\\n- Thanks for adding the two \\\"rotation\\\" baselines. However, it is surprising that none of them are able to consistently improve over the base mBERT model, which contradicts previous findings. I think that this deserves more discussion in the paper: either this approach does not work in the general case (which would be an important finding), or you are doing something differently that could explain these negative results.\\n\\n- Regarding Lample & Conneau (2019), you are right that asking to pretrain an XLM model from scratch would not be reasonable. However, you could have used the pretrained XLM models as your baseline, which would allow you to assess your method against both the MLM and the MLM+TLM variants. I think that using mBERT does not invalidate the paper, but I do see it as a weakness. Needless to say, achieving a new SOTA (by improving XLM) would have made the results much more convincing, showing that your improvements are orthogonal with those of previous work.\"}",
"{\"title\": \"Thank you for the thorough and insightful response (part 2)\", \"comment\": \"> The paper makes some claims on novelty which 1) partially overlap with prior work, or 2) it does not cite related work while it leans on its findings ... Also, the paper should do a better job in Section 2 and cover \\\"word vector alignment\\\" in more detail\\n\\nThank you for providing these references. We have edited to manuscript to cite them accordingly and improved Section 2. But on the comment about novelty, we would like to note that (1) to the best of our knowledge, our departure from linear transformations has not been done in prior work, and (2) it is not obvious without experimental study whether results about non-contextual word vectors transfer to large pre-trained models.\\n\\n> The authors mention that it may not hold for 'contextual pre-trained models given their increased complexity'. This is very imprecise writing taking place ... In fact, the paper would contribute immensely from more precise writing: e.g., on Page 3 contextual alignment of the model f is defined as accuracy in contextual word retrieval. This reads as defining a critical concept or a task as an evaluation measure (that measures the success of that task).\\n\\nThanks for pointing this out. What we meant to say was, contextual word vectors must contain much more information than non-contextual ones, including the word, its context, syntax, and more. We hypothesize that this fact makes it much less likely that two languages will be isomorphic, because the word, context, and syntax must all be isomorphically represented in the vector space. We also agree with your comment on the definition of alignment and have modified the paper accordingly. If there are more areas in the revision that you find imprecise, we would be happy to make further changes.\\n\\n> The paper aims to \\\"better understand BERT\\u2019s multilingualism\\\", but I do not see how it contributes to our better understanding of BERT's multilingualism besides a pretty straightforward claim that it shows less multilingual potential when doing experiments with Greek and Bulgarian that use different scripts. Figure 2 and Figure 3 also do not bring anything new.\\n\\nTo the best of our knowledge, Figure 2 and Figure 3 are not known in the literature. Figure 2 provides a precise way of measuring the alignment of a contextual pre-trained model, and it shows that this evaluation measure correlates very well with downstream zero-shot transfer. As far as we know, the evaluation measure and the correlation results are new. Figure 3 uses our evaluation measure to show that BERT is better aligned when the two words have similar usage frequencies. Also, Table 4 shows differences between open-class and closed-class parts-of-speech. Given that multilingual BERT is still not very well understood, we believe that these findings provide useful insights into the model. \\n\\n> The paper seems to just state known facts without proposing new solutions on how to e.g. learn better alignments for Greek or Bulgarian.\", \"we_present_our_alignment_method_as_a_solution_to_these_deficiencies\": \"it closes the gap between Greek/Bulgarian and German/Spanish/French, open and closed-class parts-of-speech, and word pairs with different frequencies.\\n\\n> One important analysis aspect is missing from the paper: there are no experiments with more distant language pairs (the most distant language pair is English-Greek). I would like to see more experiments in this space. 
\\n\\nThis is a good point and we hope to run our method on more language pairs, especially distant ones, in the future.\\n\\n> Another experiment which would contribute to the paper is the analysis of the importance of parallel corpora size. How much does the model lose in its performance by shrinking the parallel corpus? We cannot expect having 2M sentences for so many language pairs, and, even if we do have the data, the paper does not convince me that I should use 'Aligned BERT' instead of e.g. the XLM model of Lample and Conneau.\\n\\nThank you for pointing this out; we agree that we cannot expect to have 2M sentences for so many language pairs. As mentioned in part 1 of this comment, we use 250K sentences per pair. Also, we present new experiments for 10K and 50K sentences. Given that the alignment procedure is lightweight and can be applied to any existing pre-trained model, we envision our alignment procedure to be useful for the (104 - 15) languages that BERT was trained on but not XLM. We give further advantages of our method above in our response. Finally, while we believe that our method is practically useful, the focus of our paper is to show how embedding alignment can be applied to pre-trained models, rather than to present a new state-of-the-art.\"}",
"{\"title\": \"Thank you for the thorough and insightful response (part 1)\", \"comment\": \"Thank you for your thorough and insightful response, which we have found very useful in improving the paper. We will respond to each comment in-line.\\n\\n> I am not exactly sure that the comparison between 'Aligned BERT' and the main baseline 'Aligned fastText + sentence' is completely fair. 'Aligned BERT' uses more than 2M Europarl sentences to learn the alignment, while the standard alignment methods for learning cross-lingual word embeddings (see e.g. Ruder et al.'s survey) typically rely only on 5k translation pairs or even less pairs. There is a huge difference in the strength of the bilingual signal between 2M parallel sentences and, say, 2k, word translation pairs.\\n\\nThank you for making this point; the data requirements of the method are indeed important considering that the goal is zero-shot transfer. We have run our method using 10K, 50K, and 250K sentences per language pair, and the results are in Table 2 of the revised submission. The result is that the method produces large gains even with 10K sentences. We would also like to point out that (1) the reported numbers in our original submission use 250K sentences, and (2) we use the same level of supervision for our aligned fastText method. We have edited the paper to make these two points clearer.\\n\\n> I wonder why the authors have not compared to a more suitable XLM baseline of Lample and Conneau (NeurIPS 2019; the paper has been on arXiv since January 2019) - the XLM model uses exactly the same resources as 'Aligned BERT'\\n\\nWe agree that XLM uses parallel data in a similar way to our paper and achieves impressive XNLI numbers, so we have added it as a point of comparison in Table 1. However, we would like to note our method uses much less supervision than XLM: the numbers reported in our original submission use 250k sentences, and the method also works with 10k sentences. Our method also has the advantage that it can be performed in a day with a single GPU, whereas pre-training from scratch requires compute resources that are typically available only at large companies. If new parallel data becomes available, our method can also be quickly applied to take advantage of it. It\\u2019s also the case that the XLM numbers are not completely comparable to those of BERT: their MLM model, which uses no parallel sentences, still outperforms BERT on English XNLI and overall, as shown in Table 1 of their paper. The purpose of our paper was to explore and analyze how the idea of embedding alignment can be applied to contextual pre-trained models, so we found it more meaningful to perform controlled experiments to tease out the benefits of specific techniques, with BERT as a representative pre-trained model. However, we do agree that XLM is of interest and hope to explore the application of our method to the model in future work.\\n\\n> Regarding the baselines, it is also not clear to me why the authors have not compared to previous work of Schuster et al. (2019) and Aldarmaki and Diab (2019) at least in tasks where the models can be directly compared (XNLI or non-contextual word retrieval). \\n\\nThank you for pointing this out. We agree that the paper would benefit from more comparisons to existing methods. 
Therefore, we have added two comparisons that are quick to implement and most directly comparable to our method: (1) the method from Aldarmaki and Diab (2019), which aligns sentence vectors using a linear transformation, and (2) the contemporaneous method from Wang et al. (EMNLP 2019), which aligns word pairs within parallel sentences using a linear transformation. The results suggest that a linear transformation is suboptimal for producing strong alignments, as displayed in Tables 1 and 3.\\n\\n> For the 'Aligned fastText + sentence' baseline, it would be interesting to report numbers with another (hybrid) baseline model that combines aligned fastText vectors with sentence encodings produced by multilingual BERT or some other multilingual sentence encoder (such as LASER, see Schwenk et al., 2019). Simply taking min, max, and avg vectors over all the sentence words might not be the best way to encode the sentence, and I would like to see more experiments here.\\n\\nThank you for mentioning these works. We agree that more experiments with other sentence encoders could provide more insight, which we would like to experiment with in the future. Also, we would like to note that appending the min, max, and avg was shown to be state-of-the-art for cross-lingual tasks over more complex sentence encoders, so we do believe it is a competitive method (R\\u00fcckl\\u00e9 et al., 2018). We also chose this method over other more complex sentence encoders because we wanted to ask the question, \\\"What is the best we can do with non-contextual word vectors?\\\", as a direct comparison to contextual word vectors.\"}",
"{\"title\": \"Thank you for the insightful feedback.\", \"comment\": \"Thank you for the insightful feedback. We have incorporated them into the revision, and we address the comments below in-line:\\n\\n> You are not comparing to any baseline using parallel data with contextual embeddings. You should at least compare your method to Schuster et al. (2019) and/or Aldarmaki & Diab (2019), who further align multilingual BERT in a supervised manner as you do, as well as Lample and Conneau (2019), who propose an alternative method to leverage parallel data during the training of multilingual BERT. In fact, while you do improve over multilingual BERT, your results in XNLI are far from the current state-of-the-art, and this is not even mentioned in the paper.\\n\\nThank you for this comment. We have added two comparisons that are quick to implement and most directly comparable to our method: (1) the method from Aldarmaki and Diab (2019), which aligns sentence vectors using a linear transformation, and (2) the contemporaneous method from Wang et al. (EMNLP 2019), which aligns word pairs within parallel sentences using a linear transformation. In terms of comparing to the method in Lample and Conneau (2019), we do not have the compute to pre-train from scratch. Also, the numbers in their paper are not directly comparable to BERT, given that their MLM model, which uses no parallel data, still outperforms BERT on English XNLI and overall. Nonetheless, we have included their numbers in the XNLI table to represent the state-of-the-art.\\n\\n> The \\\"contextual word retrieval\\\" task you propose is rather artificial and lacks any practical interest. It is not surprising that your proposed method is strong at it, as this is essentially how you train it (you are even using different subsets of the exact same corpus for train/test). The task is still interesting for analysis -which is in fact one of the main strengths of the paper- but it should be presented as such. Please consider restructuring your paper and moving all these results to the analysis section, where they really belong.\\n\\nThis is a good point. We completely agree and have restructured the paper accordingly.\\n\\n> I do not see the point of the \\\"non-contextual word retrieval\\\" task, when you are in fact using the context (the fact that there is only one occurrence per word type doesn't change that). This task is even more artificial than the \\\"contextual word retrieval\\\" one. Again, it can have some interest as part of the analysis (showing that the gap between aligned fasttext and aligned BERT goes down from table 1 to table 2), but presenting it as a separate task as if it had some value on its own looks wrong. From my point of view, the real \\\"non-contextual word retrieval\\\" task would be bilingual lexicon induction (i.e. dictionary induction), which is more interesting as a task (as the induced dictionaries can have practical applications) and has been widely studied in the literature.\\n\\nWe agree that it would indeed be ideal to use BLI instead of non-contextual word retrieval. However, given that BERT needs the context as well as the word itself to produce a vector, non-contextual word retrieval is an easy way to produce bilingual dictionaries with sentences attached. Of course, it is not realistic to expect the source and target sentences to be parallel as well, which makes BLI a harder task for BERT. 
But, as you mention, we do think that the task has value for analysis because it can be accomplished without any representation of context. In particular, in the original contextual word retrieval task, BERT could be outperforming fastText for two reasons: (1) it can better represent context, and (2) it is better aligned. The point of introducing this task was to reduce the contribution of (1) and more directly compare the alignment between the two models. We have modified the framing in the paper to make this intention clearer.\\n\\n> I really dislike the statement ...\\n\\nWe agree that this statement is not well-supported, so we have removed it from the paper. As described above, we have modified the framing and claims about non-contextual word retrieval.\\n\\n> BERT works at the subword level but, from what I understand, your parallel corpus (both for train/test) is aligned at the word level.\\n\\nThank you for noticing this point, which was not addressed in the paper. We handle this issue by keeping the vector for the last subword of each word, and we have added a sentence to this effect in the paper.\\n\\n> Calling \\\"fully-supervised\\\" to the \\\"translate-train\\\" system is misleading. Please simply call it \\\"translate-train\\\". I assume you want to refer to Figure 3 instead of Figure 2 in Section 5.2.\\n\\nThanks for pointing these out; we have made these changes.\"}",
"{\"title\": \"Thank you for the interesting questions.\", \"comment\": \"Thank you for the interesting questions, which we address below:\", \"1\": \"These two metric functions are quite similar because ||a - b||^2 = ||a||^2 - 2<a, b> + ||b||^2, so if the vectors remain roughly the same length, minimizing the L2 distance is similar to maximizing the inner product. To avoid blowing up the vector lengths, we would probably want to maximize the cosine similarity instead (the inner product normalized by the norm). Given that the retrieval evaluation uses a modified version of cosine similarity, optimizing this metric instead of L2 could be interesting to explore as a way to improve alignment.\", \"2\": \"This is a good point: it would be interesting to examine how our method fares under higher noise situations. Some possible ablations might be using expert-annotated word pairs or inserting fake word pairs to simulate noise in a controlled manner, which we have not tried yet. Given that we were more interested in precision over recall, we used the intersect method to produce less noisy word pairs, with the tradeoff of lower coverage. It\\u2019s possible that the method could benefit from a higher recall approach, where we have more word pairs but they are noisier. One reason to prefer higher precision is that we only use 250K sentences from the 2M sentences in Europarl, so we could instead just increase the number of sentences if we wanted more word pairs. But in a low-resource setting, we might have fewer parallel sentences, so a higher recall approach could make sense. Characterizing the method\\u2019s robustness to noise could help us find the optimal tradeoff between precision and recall, which we hope to explore in future work.\", \"3\": \"When we fine-tune on zero-shot transfer, we allow all of the weights to change, but we also use linear learning rate warmup, which might prevent some of the initial drift you are mentioning. The rest of the optimization hyperparameters are in the appendix. It\\u2019s a good point that the model could forget the alignment while fine-tuning on a downstream task, so it might be useful to maintain some sort of regularization that keeps the embeddings aligned. This approach seems worth trying and could improve accuracy.\", \"4\": \"Thank you for pointing this out. The varying English accuracy across the zero-shot models results from our method of model selection, where we select a model based on its average accuracy across the languages. Therefore, if a model has unusually high zero-shot accuracy early on in the training procedure, we might select that checkpoint even if it has low English accuracy. We have added a sentence in the paper to explain this point.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposed a pre-training method for strengthening the contextual embeddings alignment. Given parallel sentences from a different language, the authors proposed to enforce corresponding words that have a similar representation by minimizing the squared error loss. The authors also proposed to use the an regulation that prevents the learned embedding from drift too far. The authors evaluated the proposed pre-training on the contextual alignment metric and show the BERT has variable accuracy depends on the language. The proposed method improved significantly on zero-shot XNLI compares to the base model.\\n\\nThe paper is well written, and the proposed aligned loss makes sense and should augment the multi-lingual pre-training from a high level. The authors did a good job of analyzing the bert for multi-lingual. There some details may help the reader understand the paper better\", \"1\": \"Why use L2 distance as the metric function, what is the performance of using the inner product as a metric function? and what is the difference here?\", \"2\": \"The authors mentioned the word pairs are extracted from the existing method which may be noisy. I wonder is there any ablations study with respect to how the word pairs affect the pretraining?\", \"3\": \"When finetuning on zero-shot transfer, what is the finetune setting? Is there any strategy to avoid the lower layer embedding from drifting away?\", \"4\": \"In table 3, the Fully supervised Base Bert on English is close to the zero-shot setting and the base BERT model is better than Alignment bert, I wonder can the authors explain more on this?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper presents a new method to further align multilingual BERT by learning a transformation to minimize distances in a parallel corpus.\", \"I think that this is overall a solid work. Although simple, the proposed method is well-motivated, and the reported results are generally convincing. However, I think that the paper lacks an appropriate comparison with similar methods in the literature, and the separation between the real evaluation in a downstream task (XNLI) and the analysis on a rather artificial contextual word retrieval task (which favors the proposed system) is not clear enough.\", \"More concretely, these are the aspects that I think the paper could (and should) improve:\", \"You are not comparing to any baseline using parallel data with contextual embeddings. You should at least compare your method to Schuster et al. (2019) and/or Aldarmaki & Diab (2019), who further align multilingual BERT in a supervised manner as you do, as well as Lample and Conneau (2019), who propose an alternative method to leverage parallel data during the training of multilingual BERT. In fact, while you do improve over multilingual BERT, your results in XNLI are far from the current state-of-the-art, and this is not even mentioned in the paper.\", \"The \\\"contextual word retrieval\\\" task you propose is rather artificial and lacks any practical interest. It is not surprising that your proposed method is strong at it, as this is essentially how you train it (you are even using different subsets of the exact same corpus for train/test). The task is still interesting for analysis -which is in fact one of the main strengths of the paper- but it should be presented as such. Please consider restructuring your paper and moving all these results to the analysis section, where they really belong.\", \"I do not see the point of the \\\"non-contextual word retrieval\\\" task, when you are in fact using the context (the fact that there is only one occurrence per word type doesn't change that). This task is even more artificial than the \\\"contextual word retrieval\\\" one. Again, it can have some interest as part of the analysis (showing that the gap between aligned fasttext and aligned BERT goes down from table 1 to table 2), but presenting it as a separate task as if it had some value on its own looks wrong. From my point of view, the real \\\"non-contextual word retrieval\\\" task would be bilingual lexicon induction (i.e. dictionary induction), which is more interesting as a task (as the induced dictionaries can have practical applications) and has been widely studied in the literature.\", \"I really dislike the statement that contextual methods are \\\"unequivocally better than non-contextual methods for multilingual tasks\\\" on the basis of the non-contextual word retrieval results. If you want to make such a strong statement, you should at least show that your method is better than non-contextual ones in a task where the latter are known to be strong (i.e. bilingual lexicon induction, see above). However, your comparison is limited to a new task you introduce that clearly favors your own method, and in fact requires using the non-contextual methods in a non-standard way (concatenating the word embeddings with the avg/max/min sentence embeddings). 
Please either remove this statement or run a fair comparison in bilingual lexicon induction (and preferably do both).\", \"BERT works at the subword level but, from what I understand, your parallel corpus (both for train/test) is aligned at the word level. It is not clear at all how this mismatch in the tokenization is handled.\"], \"minor_details_that_did_not_influence_my_score\": [\"Calling the \\\"translate-train\\\" system \\\"fully-supervised\\\" is misleading. Please simply call it \\\"translate-train\\\".\", \"I assume you want to refer to Figure 3 instead of Figure 2 in Section 5.2\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper conducts a series of experiments on the multilingual BERT model of Devlin et al., aiming to inject stronger bilingual knowledge into the model for improved 'Aligned BERT'. The knowledge originating from parallel (Europarl) data improves the model significantly as shown on tasks such as contextual and non-contextual word retrieval as well as in zero-shot XNLI task. The paper continues the line of work on cross-lingual contextualised word embeddings, and it brings several minor contributions, but overall I do not see it as a very inspiring piece of work, and it leaves open several very important questions, in particular its relationship to prior work and some potentially stronger baselines than the ones reported in the paper, plus more experiments with more distant language pairs.\\n\\nI am not exactly sure that the comparison between 'Aligned BERT' and the main baseline 'Aligned fastText + sentence' is completely fair. 'Aligned BERT' uses more than 2M Europarl sentences to learn the alignment, while the standard alignment methods for learning cross-lingual word embeddings (see e.g. Ruder et al.'s survey) typically rely only on 5k translation pairs or even less pairs. There is a huge difference in the strength of the bilingual signal between 2M parallel sentences and, say, 2k, word translation pairs. \\n\\nThe main goal of the paper is to improve alignment of the starting multilingual BERT model, but I wonder why the authors have not compared to a more suitable XLM baseline of Lample and Conneau (NeurIPS 2019; the paper has been on arXiv since January 2019) - the XLM model uses exactly the same resources as 'Aligned BERT': parallel sentences from Europarl, while the main baseline here uses only seed dictionaries to learn the mapping. Regarding the baselines, it is also not clear to me why the authors have not compared to previous work of Schuster et al. (2019) and Aldarmaki and Diab (2019) at least in tasks where the models can be directly compared (XNLI or non-contextual word retrieval). Also, another non-contextual model which is worth trying is a joint model which relies on parallel sentences (similar to Ormazabal et al., ACL-19).\\n\\nFor the 'Aligned fastText + sentence' baseline, it would be interesting to report numbers with another (hybrid) baseline model that combines aligned fastText vectors with sentence encodings produced by multilingual BERT or some other multilingual sentence encoder (such as LASER, see Schwenk et al., 2019). Simply taking min, max, and avg vectors over all the sentence words might not be the best way to encode the sentence, and I would like to see more experiments here.\\n\\nThe paper makes some claims on novelty which 1) partially overlap with prior work, or 2) it does not cite related work while it leans on its findings. For instance on Page 4, the authors claim that their \\\"(...) 
alignment method departs from prior work, in which each non-English language is rotated to match the English embedding space through individual learned matrices.\\\" However, there is at least one previous paper (Heyman et al., NAACL 2019) which did the same thing as the authors and showed that departing from learning projections only to English leads to more robust multilingual embeddings. Further, also on Page 4, the authors discuss that the assumption on learning good rotation matrices relies on the assumption of rough/approximate isomorphism, without citing a body of related work that actually investigated this assumption, such as the work of Sogaard et al. (ACL 2018). Also, the paper should do a better job in Section 2 and cover \\\"word vector alignment\\\" in more detail (e.g., a good starting point might be Ruder et al.'s survey paper on cross-lingual word embeddings).\\n\\nThe assumption of rough/approximate isomorphism is problematic also for non-contextual cross-lingual embeddings in settings with more distant language pairs. The authors mention that it may not hold for 'contextual pre-trained models given their increased complexity'. This is very imprecise writing, imho: 1) it is not clear why it should not hold in the case of contextual pre-trained models (at least for similar languages). Are there any properties of the contextual models that invalidate that assumption? It is also not exactly shown why contextual pre-trained models have increased complexity compared to e.g. fastText. How does one measure that 'model complexity' in objective terms? In fact, the paper would benefit immensely from more precise writing: e.g., on Page 3 the contextual alignment of the model f is defined as accuracy in contextual word retrieval. This reads as defining a critical concept or a task as an evaluation measure (that measures the success of that task). In the Introduction, the paper aims to \\\"better understand BERT\\u2019s multilingualism\\\", but I do not see how it contributes to our better understanding of BERT's multilingualism besides a pretty straightforward claim that it shows less multilingual potential when doing experiments with Greek and Bulgarian, which use different scripts. Figure 2 and Figure 3 also do not bring anything new - the paper seems to just state known facts without proposing new solutions on how to e.g. learn better alignments for Greek or Bulgarian.\", \"one_important_analysis_aspect_is_missing_from_the_paper\": \"there are no experiments with more distant language pairs (the most distant language pair is English-Greek). I would like to see more experiments in this space. Another experiment which would contribute to the paper is an analysis of the importance of parallel corpus size. How much does the model lose in its performance by shrinking the parallel corpus? We cannot expect to have 2M sentences for so many language pairs, and, even if we do have the data, the paper does not convince me that I should use 'Aligned BERT' instead of e.g. the XLM model of Lample and Conneau.\", \"minor_remarks\": \"As a variant of the contextual word retrieval, have the authors tested if a correct target language sentence can be retrieved by looking only at the context of the source language word? 
This would provide some insight into the importance of modeling context via BERT versus via simple context averaging.\\n\\nRegarding the analysis of closed-class versus open-class word performance, the difference in performance can be due to mere frequency: closed-class word types are very scarce, but their corpus frequency is quite high, which also leads to learning better representations in the first place, as well as better alignments later on.\"}"
]
} |
B1xRGkHYDS | A bi-diffusion based layer-wise sampling method for deep learning in large graphs | [
"Yu He",
"Shiyang Wen",
"Wenjin Wu",
"Yan Zhang",
"Siran Yang",
"Yuan Wei",
"Di Zhang",
"Guojie Song",
"Wei Lin",
"Liang Wang",
"Bo Zheng"
] | The Graph Convolutional Network (GCN) and its variants are powerful models for graph representation learning and have recently achieved great success on many graph-based applications. However, most of them target shallow models (e.g. 2 layers) on relatively small graphs. Although many acceleration methods have recently been developed for GCN training, it remains a severe challenge to scale GCN-like models to larger graphs and deeper layers due to the over-expansion of neighborhoods across layers. In this paper, to address the above challenge, we propose a novel layer-wise sampling strategy, which samples the nodes layer by layer conditionally based on the factors of the bi-directional diffusion between layers. In this way, we restrict the time complexity to be linear in the number of layers, and construct a mini-batch of nodes with high local bi-directional influence (correlation). Further, we apply the self-attention mechanism to flexibly learn suitable weights for the sampled nodes, which allows the model to incorporate both the first-order and higher-order proximities during a single layer propagation process without extra recursive propagation or skip connections. Extensive experiments on three large benchmark graphs demonstrate the effectiveness and efficiency of the proposed model. | [
"Layerwise Sampling",
"Graph Neural Networks",
"Attention Mechanism"
] | Reject | https://openreview.net/pdf?id=B1xRGkHYDS | https://openreview.net/forum?id=B1xRGkHYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oOM8PH7tVx",
"rkxhzovcor",
"r1gaD7lPoS",
"BkeqTflwor",
"SkxlBGeDoH",
"HyekiqqfiH",
"H1eTO6VRYS",
"BJezS00aYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727488,
1573710612491,
1573483364945,
1573483201545,
1573483064251,
1573198486713,
1571863925163,
1571839546224
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1599/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1599/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1599/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1599/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1599/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1599/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1599/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper addresses the challenge of time complexity in aggregating neighbourhood information in GCNs. As we aggregate information from larger hops (deeper neighbourhoods) the number of nodes can increases exponentially thereby increasing time complexity. To overcome this the authors propose a sampling method which samples nodes layer by layer based on bidirectional diffusion between layers. They demonstrate the effectiveness of their approach on 3 large benchmarks.\", \"while_the_ideas_presented_in_the_paper_were_interesting_the_reviewers_raised_some_concerns_which_i_have_summarised_fellow\": \"1) Novelty: The reviewers felt that the techniques presented were not very novel and is very similar to one existing work as pointed out by R4\\n2) Writing: The writing needs to be improved. The authors have already made an attempt towards this but it could be improved further\\n3) Comparisons with baselines: R4 has raised some concerns the settings/configurations used for the baseline methods. In particular, the results for the baseline methods are lower than those reported in the original papers. I have read the author's rebuttal for this but I am not completely convinced about it. I would suggest that the authors address this issue in subsequent submissions\\n\\nBased on the above reasons I recommend that the paper cannot be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Comments to the author-response\", \"comment\": [\"I appreciate the authors for considering the comments and updating the paper and reporting results for some of the additional experiments asked.\", \"On a fair comparison with a GCN base model, BS-GCN, the improvements are not that huge as reported with BS-GAT, they are ~1%. Since the results are not reported over multiple runs (different seeds, ideally different train/test sets) the significance of this improvement is not clear.\", \"Moreover, I still stand with my statement that it is not fair to report use a setting for baselines where their performance is lower than reported except for the following two cases: 1) the case where the original implementation of the baselines yield a lower performance similar to what the authors report here, i.e if the baseline results are not reproducible 2) the case where the authors convince that the baseline setting is not acceptable for certain reasons . Otherwise, I would suggest that the authors use the settings of the baseline models to be fair for the proposed model. Unless one of the following condition is the case,\"]}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thank you very much for your insightful and helpful comments on our submitted manuscript. We fully accept these valuable comments and carefully revise the manuscript one by one. The detailed revisions are reported as follows.\\n\\n1) We re-implement all the baselines by considering the following three concerns: Firstly, the original implementations of different methods have inconsistent model tricks and data preprocessing, e.g., FastGCN uses the normalization layers after each convolution layer but others not, AS-GCN applies two model designs on the large Reddit dataset and other datasets, respectively, AS-GCN also uses an extra MLP layer for better feature learning. Therefore, it may not be very fair to use the original implementations. Secondly, some original codes (e.g. AS-GCN) have poor flexibility (e.g. adjust the layers). As the convolution layers are hard-coded into the models, it is not easy to extend them from shallow layers (e.g 3 layers) to deep layers (e.g 5 layers). Lastly but most importantly, we develop a general graph deep learning system with flexible extensity, in which we implement all the baselines with a unified paradigm (in fact, most of the existing GNN models can be very easily implemented in our unified framework). We will release the system after review.\\n\\n2) We add a \\u2018Case Study\\u2019 section in the revised manuscript to disentangle the effects of the two parts of BLS-GAN. In the Case Study, we implement two variants of our BLS-GAN: BLS-GCN(using bi-diffusion based sampling, but the constant weights of GCN instead of the attention mechanism) and AS-GAN(using the adaptive sampling of AS-GCN instead of the bi-diffusion based sampling, but the learnable weights of graph attention mechanism). The results demonstrate that: a) the bi-diffusion based sampling could achieve significant gains on all the datasets, and the deeper the layers, the greater the gains; b) the attention mechanism is obviously helpful for the embedding learning but the effect may depend on the peculiarity of target dataset.\\n\\n3) We adjust the presentation of Table 3 by showing the relative speedups with the results of GCN rather than a long table with numbers, which makes it more easy to read.\\n\\n4) We carefully revise the writing of our manuscript by removing the high-level intuitions and ill-defined terms. Moreover, we further polish our manuscript, and try our best to make it easier to read.\\n\\nFinally, thanks again for your comments, and hope this response could clear your concerns.\"}",
"{\"title\": \"Official Blind Review #1\", \"comment\": \"Thank you very much for your insightful and helpful comments on our submitted manuscript. We fully accept these valuable comments and carefully revise the manuscript one by one. The detailed revisions are reported as follows.\\n\\n1) We carefully revise the writing of our manuscript by removing the high-level intuitions and ill-defined terms. Moreover, we further polish our manuscript, and try our best to make it easier to read.\\n\\n2) We adjust the presentation of Table 3 by showing the relative speedups with the results of GCN rather than a long table with numbers, which makes it more easy to read.\\n\\n3) We add a \\u2018Case Study\\u2019 section in the revised manuscript to disentangle the effects of the two parts of BLS-GAN. In the Case Study, we implement two variants of our BLS-GAN: BLS-GCN(using bi-diffusion based sampling, but the constant weights of GCN instead of the attention mechanism) and AS-GAN(using the adaptive sampling of AS-GCN instead of the bi-diffusion based sampling, but the learnable weights of graph attention mechanism). The results demonstrate that: a) the bi-diffusion based sampling could achieve significant gains on all the datasets, and the deeper the layers, the greater the gains; b) the attention mechanism is obviously helpful for the embedding learning but the effect may depend on the peculiarity of target dataset. \\n\\nThanks again for your comments, and hope this response could clear your concerns.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you very much for your comments.\\nIn the revised manuscript, we adjust the presentation of Table 3 by showing the relative speedups with the results of GCN rather than a long table with numbers, which makes it easier to read. Moreover, we further polish our manuscript, and try our best to make it more concise.\\nThanks again for your comments.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper aims at improving the computational efficiency of GCNs to effectively capture information from the larger multi-hop neighborhood. Conventionally, GCNs use information from all the neighbors up to a certain depth; in which case, with consideration of each further hop, the neighborhood size increases exponentially. To avoid the exponentially increasing memory and computational footprints of GCNs as a result of an exponential neighborhood expansion, this paper proposes a (hop) layer-wise sampling procedure that reduces the complexity to a linear factor. The sampling of nodes at a layer, \\u2018l\\u2019 is based on the transmission probabilities of the nodes at layer \\u2018l\\u2019 and their immediate neighbors sampled earlier in layer \\u2018l+1\\u2019 from both directions of diffusion. The proposed model is based on Graph ATtention Network (GAT) which is adopted here to aggregates neighborhood information only over the nodes sampled with their bi-diffusion sampler.\", \"strengths_of_the_paper\": \"The paper intuitively suggests that some of the popular sampling-based scaling approaches for GCN may not be powerful enough as they don\\u2019t consider bi-directional influences.\", \"weaknesses_of_this_paper\": [\"Novelty: The idea is incremental. The paper is similar to the layer-wise sampling model, AS-GCN where instead of the base GCN model this paper uses GAT coupled with its proposed bi-diffusion sampler.\", \"Experimental results:\", \"(a) Inconsistent baseline results: The performance of baselines reported here on standard train/test/val splits are significantly lower than the ones reported in the original papers. For ex: with FastGCN the original papers report 0.88 and 0.937 on PPI and Reddit which is ~0.03 more than what is reported here. With the case of AS-GCN, the original performance scores are superior to the proposed model in the paper, however here they are reported ~0.04 scores lower. Since the codes for all these baselines are available, it is only fair to use the original implementation; if not, it is important to replicate the original results before using a different implementation.\", \"(b) Variance and statistical significance results are missing\", \"(c) Cluster-GCN though discussed, an experimental comparison with it is missing. Reported results from Cluster-GCN paper on Reddit and PPI suggests a superior performance over BLS-GAT.\", \"(d) BLS-GCN missing. This would be a fair comparison to FastGCN and AS-GCN.\", \"(e) Experimental comparison with Jumping Neural network (Xu et al) is missing to understand how the proposed solution improves over existing solutions for over-smoothing. It would be helpful to even couple it with Fast-GCN/AS-GCN sampler to better understand the benefits of this paper.\", \"Writing:\", \"The paper is not well written. Though there are only minor grammatical mistakes, multiple sections of the paper are not clear and are hard to read because of complex sentences and long paragraphs.\", \"Some of the terminologies used are not clearly described and are not explained prior to the usage. 
Some of them are neighbor-explosion, over-expansion, the width of neighborhood expansion, local correlations, etc, In some places, over-expansion is used to refer only neighbourhood explosion or only over-smoothing and both. It will be comprehensive if it is grounded.\", \"Numerous claims/ideas put forth in this paper are abstract and intuitive. The intuitions should be backed with proper support. Some of the major concerns are:-\", \"(a) proof/arguments to show that layer-wise sampling may lead to sparse mini-batches and how does that in-turn impact over-smoothing\", \"(b) how does the proposed model avoid over-smoothing?\", \"(c) why sub-graph methods are not effective ? .. etc\", \"It is true that using a fixed neighborhood weightage function as with GCNs may not be optimal. However, the discussion made on GCN and its lack of an appropriate normalization/ neighbourhood weightage function is incorrect. GCNs aggregate information from further neighborhoods according to the respective higher-order diffusion laplacian matrix entries. You can see that by simply removing the nonlinearity+weights and recursively expanding the GCN equation.\", \"Other comments:\", \"Provide the complexity of the proposed model (GCN + sampler) and compare it with other sampling approaches.\", \"The connectivity structure + signal on the nodes of the graphs is the data that is being convolved and they are not the filters. The weights being learned are the filters.\", \"In Eqn: 2, I believe you are providing an equation for GS-GCN. In which case the fraction should be N(v)+1/ (N_s(V)+1) to match the original model/implementation for GraphSAGE (GS) paper.\", \"I think the summation in the denominator for AS-GCN following Eqn: 3 should run over V instead of V_l.\", \"It will be helpful to run the model on directed datasets to see improved benefits of bi-diffusion sampling.\", \"Need more discussion about AS-GCN, Cluster-GCN and Jumping Neural networks (Xu 2018b)\"]}",
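The reviewer's point that a GCN with nonlinearities and weight matrices removed reduces to powers of the normalized adjacency can be checked with a toy numpy sketch (a made-up 3-node graph, not an example from the paper):

```python
import numpy as np

# Toy undirected graph; X holds the node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
L = D_inv_sqrt @ A_hat @ D_inv_sqrt          # normalized diffusion operator
X = np.random.default_rng(0).normal(size=(3, 4))

# With nonlinearities and weight matrices removed, k stacked GCN layers
# collapse to L^k X: each extra layer aggregates one hop further,
# i.e. information flows according to higher-order entries of L.
H = X.copy()
for _ in range(3):
    H = L @ H
assert np.allclose(H, np.linalg.matrix_power(L, 3) @ X)
```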
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a sampling method for graph neural networks which is applicable to very large graphs (where not all nodes can be kept in memory at the same time). The method uses the transition probabilities of a random walk to construct a sampling probability of the nodes in the lower layer given the nodes in the upper layer. Since this samples nodes which can be one or multiple hops away, an attention mechanism is used to weight updates from connected nodes. Experiments show that this method is promising.\\n\\nIn its current state I would be inclined to reject this paper, but I could be convinced otherwise. The idea, although relatively straightforward, seems powerful and the experiments seem to support it. My main concern is that paper is not well written. It contains long meandering paragraphs (e.g., all of section 2 is a single paragraph) with high-level intuitions and ill-defined terms, making it hard to read. Similarly, results are badly presented. For example, table 3 should probably be given as relative speedups with the best results bold-faced rather than a long table with numbers. Illustrations would also be very helpful to provide an intuition about formulas 4, 5, and 6. Moreover, a simple ablation study is necessary (e.g., using bi-diffusion based sampling, but using constant weights instead of the attention mechanism, or vice-versa, using the attention mechanism with other types of sampling). It is currently impossible to disentangle the effects of the two parts of BLS-GAN.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper was an interesting read. The idea of this paper is to challenge the use of Laplacian matrix in GCN. Indeed, typical GCNs use the same adjacency matrix across different layers. In particular, this typically leads in Euclidean case to learning isotropic filters (because the euclidean Laplacian is isotropic). Consequently, such filters have no selectivity at all.(in the Euclidean case, that could correspond to the selectivity to orientations - no selectivity would lead to a difference of Gaussians) Furthermore, for non-sparse graphs, computing the iterations of the Laplacian matrix can require a significant computational power.\\n\\nIn order to tackle this problem, the authors introduced a diffusion factor to sample a set of nodes to build some GCN filters with finite support. At a given layer, the diffusion factors is based on the interaction with other layers of the GCN. Then, a layer-wise attention mechanism that will allow to weight the graph connectivity of the sampled nodes is used, which is supervisedly learned. Each numerical experiments lead to a significantly better accuracy, while the method trains in reasonable time. This is thus numerically convincing. Furthermore, this method is, to my knowledge, new.\\n\\nThe paper is clearly written, the numerical experiments are convincing and the authors address a difficult problem with a simple method: I'm leaning toward an \\\"Accept\\\".\", \"minor\": [\"Tables 2/3 are hard to read.\", \"The paper is 10 pages long, yet this was an interesting read.\"], \"post_discussion\": \"The other reviewers have made some good point, and thus I decided to lower my score. I still find the paper address an interesting problem.\"}"
]
} |
rJgRMkrtDr | Learning Video Representations using Contrastive Bidirectional Transformer | [
"Chen Sun",
"Fabien Baradel",
"Kevin Murphy",
"Cordelia Schmid"
] | This paper proposes a self-supervised learning approach for video features that results in significantly improved performance on downstream tasks (such as video classification, captioning and segmentation) compared to existing methods. Our method extends the BERT model for text sequences to the case of sequences of real-valued feature vectors, by replacing the softmax loss with noise contrastive estimation (NCE). We also show how to learn representations from sequences of visual features and sequences of words derived from ASR (automatic speech recognition), and show that such cross-modal training (when possible) helps even more. | [
"self-supervised learning",
"video representations",
"cross-modal learning"
] | Reject | https://openreview.net/pdf?id=rJgRMkrtDr | https://openreview.net/forum?id=rJgRMkrtDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"87Yf_X7sy",
"H1glc6Jsir",
"BJxcPa1sjB",
"B1g4yTJjsr",
"Syl7oex0cB",
"HJeDipXRtr",
"ryg9-YehtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727459,
1573744008152,
1573743969753,
1573743836235,
1572892826767,
1571859870885,
1571715329725
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1598/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1598/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1598/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1598/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1598/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1598/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies self-supervised video representations with a multi-modal learning process that the authors then use for performance on a variety of tasks. The main contribution of the paper is a successful effort to incorporate BERT-like models into vision tasks.\\n\\nReviewers acknowledged the extensive empirical evaluation and the good performance of the approach. However, they raised some concerns about the lack of clarity and the absence of analysis and interpretation of the results. The AC shares this view, and recommends rejection at this time, encouraging the authors to revise their work addressing these analysis and clarity questions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Our response\", \"comment\": \"Thank you for your positive feedback.\"}",
"{\"title\": \"Our response\", \"comment\": \"Thank you for your positive feedback.\", \"comparison_to_howto100m\": \"Although we use the HowTo100M dataset for pre-training, there are key differences to (Miech, 2019c):\\n1. Miech et al. improve text-video embedding by training on HowTo100M and show the gain by transferring it to the text to video retrieval task. In comparison, CBT focuses on learning generic visual and temporal features, with or without using text-video correspondences. We show that CBT can be transferred to various downstream tasks, such as classification, anticipation, segmentation and captioning.\\n2. Miech et al. assume the visual features to be pre-trained and fixed, while CBT can be applied for self-supervised visual representation learning, as shown in Table 1. \\n\\nDirect comparison to (Miech2019c):\\nWe evaluate our cross-modal model pre-trained on HowTo100M with the same preprocessing as in (Miech2019c), i.e. with short clips. We evaluate on the MSR-VTT clip retrieval benchmark (zero-shot settings) and can observe that we outperform their approach, see below. We will add this results to the final version of the paper if accepted. \\nHowTo100M (table 6): R@1: 7.5 R@5: 21.2 R@10: 29.6 median R: 38\", \"ours\": \"R@1: 8.3 R@5: 23.3 R@10: 33.2 median R: 30\\n\\n\\u201cThe inputs are real-valued vectors\\u201d:\\nThank you for pointing this out, we will clarify in the final version.\", \"influence_of_asr_and_punctuation\": \"The video clips and speech released by HowTo100M were preprocessed and broken into short segments. To learn long-term temporal features with the transformer, our approach requires longer input clips. Hence, we re-extract the ASR with the same algorithm as HowTo100M and run punctuation to get longer semantic coherent text segments (sentences). Furthermore, we concatenate several consecutive sentences to obtain even longer sequences of video-ASR training data.\"}",
"{\"title\": \"Our response\", \"comment\": \"Thank you for your positive feedback!\", \"in_the_following_we_summarize_our_main_contributions\": \"1. CBT objective for self-supervised visual representation learning. We study the impact of the CBT objective, which uses long temporal information as self-supervision, in Table 1. We observe that CBT outperforms alternative training objectives based on local spatial (3DRotNet) and temporal (Shuffle&Learn) objectives significantly, when they were pre-trained on the same Kinetics data with the same backbone ConvNet (Table 1 left). Our CBT method also outperforms the state of the art (Table 1 right).\\n\\n2. CBT and cross-modal objectives for temporal representation learning. In Table 2 (left) and Table 4 (left) we compare our method with VideoBERT, which applies vector quantization on visual features. We found CBT outperforms VideoBERT significantly. In Table 3 (left), we observe that the cross-modal objective further improves the performance on action anticipation tasks.\", \"choice_of_datasets\": \"The action segmentation accuracy metric, along with the baselines we compare with, were quoted from Table 3 of the COIN dataset paper. We note that their fully-supervised baseline (Tang et al. 2019) performed worse than the weakly-supervised baseline (Ding & Xu 2018). We will make the supervision type clear in the final version.\\nBesides the COIN dataset, we do provide ActivityNet evaluations in Table 2&3, along with other benchmarks such as HMDB (Table 1), UCF (Table 1), Breakfast (Table 2&3) and YouCook (Table 4). This range of datasets helps us understand the performance of CBT in diverse visual domains.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper is about a self-supervised video representation with a multi-modal learning process that the authors then use for performance on a variety of tasks. The main contribution of the paper is a successful effort to incorporate BERT-like models into vision tasks. As is detailed in the related work, the field has been inching towards this but without as much success as this paper has.\\n\\nMy main criticism of the paper is that it feels like there is everything and a bag of chips happening; It's exceptionally hard to tease apart what is the main contribution to its success. I mostly came away from the paper thinking that it was good to see an existence proof of successfully incorporating the result, but not having really understood anything more wrt why or how this works. Other than it being a good idea to have a bigger model and more varied types of gradients, it's unclear what this model does that distinguishes it from other approaches.\\n\\nOn a more specific critique level, why use COIN? And why compare on a frame accuracy metric? The comparison to Ding & Xu seems a bit odd given that they don't assume access to annotations but rather to video transcripts. There are other datasets that you could make use of here that are more applicable, like Thumos14 or ActivityNet. I understand that this is a small section, but arguably the paper would be stronger if more time was spent on the main result than on this sidebar.\\n\\nOverall, I'm giving it a weak accept because I do think that the community should be aware of this paper's result.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This is one of those papers where the number of experiments conducted to produce the results is beyond the capabilities of \\\"almost all\\\" research groups. From the paper: \\\"we use 32 Cloud TPUs. The model is trained for 2 million iterations, which takes around 2 days.\\\" However, with that being said, it's a good paper of general interest to the community.\\n\\nThe paper focuses on self-supervised learning in video, and combines two contributions. The first is using a noise contrastive estimation loss (2016) which can be used for any visual dataset. The second is a cross-modal (BERT) model that requires language and vision. A few modifications over other BERT flavours are introduced. The cross-modal BERT is not tested alone, however when added to the NCE loss function, seems to suit a range of downstream tasks from classification to anticipation and captioning. NCE alone seems to clearly produce better results over published results, however these are not compared like-to-like, as published results are used for this comparison.\\n\\nThe paper is full of technical details to reproduce the results. This makes the main novelty is actually in showing that this approach works. However, the approach is technically sound and up to my knowledge has not been attempted before.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper presents a novel method to extract cross-modal text-visual embeddings on the HowTo 100M corpus. The core idea is to extend previous work on clip-level embeddings (e.g. the max-margin ranking loss proposed for HowTo 100M) to a transformer architecture which takes into account the entire context of a video, which should lead to better learned representations and improved performance in downstream tasks. In addition, the max-margin loss is replaced by noise contrastive estimation.\", \"The paper is well written and explains the main problem well, however I do have a few questions:\", \"I do not understand the sentence \\\"However, for images and videos, the inputs are real-valued vectors.\\\" (Section 3.2) - Transformers are being used for speech recognition or speech translation - the input features are not the problem. The outputs are assumed to be discrete (in the original formulation)\", \"Why not directly compare your approach to the approach presented in (Miech, 2019c) - it would be interesting to see a direct comparison, but as far as I can tell, there is no overlap in tasks?\", \"What is the influence of adding punctuation to the ASR output, how good is it, and how good is the underlying ASR? Why did you not use the original text annotations provided by HowTo 100M, but run the audio through Google ASR (again?) It would be good to know how good the ASR is, and if adding in punctuation post-hoc works well, and how this influences your use with a pre-trained BERT model. My guess is that the BERT model will be happy as long as it sees a \\\".\\\" at the end?\", \"Also, would it be possible to compare the results of your work with some of the work in (Miech, 2019c) - it almost seems that your work avoids comparing your results to this previous work.\"]}"
]
} |
HJxnM1rFvr | HUBERT Untangles BERT to Improve Transfer across NLP Tasks | [
"Mehrad Moradshahi",
"Hamid Palangi",
"Monica S. Lam",
"Paul Smolensky",
"Jianfeng Gao"
] | We introduce HUBERT which combines the structured-representational power of Tensor-Product Representations (TPRs) and BERT, a pre-trained bidirectional transformer language model. We validate the effectiveness of our model on the GLUE benchmark and HANS dataset. We also show that there is shared structure between different NLP datasets which HUBERT, but not BERT, is able to learn and leverage. Extensive transfer-learning experiments are conducted to confirm this proposition. | [
"Tensor Product Representation",
"BERT",
"Transfer Learning",
"Neuro-Symbolic Learning"
] | Reject | https://openreview.net/pdf?id=HJxnM1rFvr | https://openreview.net/forum?id=HJxnM1rFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Y1Me_gVCgE",
"HkehmlJ2sr",
"ryxJoJ13or",
"rkla4k1hsB",
"ByxT3ARijB",
"rJgzyIYGqr",
"B1gqjnahKB",
"B1l6scnPYr",
"BJgM--mYOS",
"rJeL3yMZOS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798727428,
1573806116148,
1573805974887,
1573805876992,
1573805748916,
1572144602014,
1571769505561,
1571437221024,
1570480378063,
1569951661922
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1595/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1595/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1595/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1595/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1595/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1595/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1595/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1595/Authors"
],
[
"~Florian_Mai1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper introduces additional layers on top BERT type models for disentangling of semantic and positional information. The paper demonstrates (small) performance gains in transfer learning compared to pure BERT baseline.\\n\\nBoth reviewers and authors have engaged in a constructive discussion of the merits of the proposed method. Although the reviewers appreciate the ideas and parts of the paper the consensus among the reviewers is that the evaluation of the method is not clearcut enough to warrant publication.\\n\\nRejection is therefore recommended. Given the good ideas presented in the paper and the promising results the authors are encouraged to take the feedback into account and submit to the next ML conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We would like to thank you for the comments and feedback.\\n \\nIn this work, we propose a new model combining the power of deep neural language models such as BERT with symbolic representations such as Tensor-Product Representations. To the best of our knowledge, this is the first work that examines implicit structure learning of transformer-based models on NLP tasks and provides a different way of doing transfer learning among different corpora in GLUE. We also show that this architecture can benefit other out of distribution probing tasks by achieving 2.21% absolute improvement in accuracy on the HANS dataset.\\n \\nAs you pointed out, our main motivation is separating data-specific semantics from general sentence structure by the means of a TPR layer. This is done in an unsupervised way and thus we don\\u2019t inject any prior information on what roles or fillers should be learned.\", \"response_to_your_questions\": \"1)\\nWe have reported the implementation details including d_S, d_R, n_S, n_R values in section A.3 of the paper under the implementation details section.\\nWe performed hyper-parameter tuning for both BERT and HUBERT models.\\n As for the dimension of roles and symbols, we did grid search over these values: [10, 30, 60]\", \"we_fixed_the_number_of_roles_to_35_and_searched_among_these_values_for_the_number_of_fillers\": \"[50, 100, 150].\\nWe chose the final values according to the best performance on MNLI dev set.\\nLSTM cell's hidden dimension for LSTM models is set to BERT base model hidden dimension which is 768 and for HUBERT (LSTM) is set to \\\"dS * dR = 32 * 32\\\" to eliminate the need for a projection layer when calculating new hidden states.\\n \\n2)\\nThank you for pointing this out. We have used symbol and filler notation interchangeably throughout the paper based on the context. We have added a footnote addressing this. \\n \\n3)\\nThe top 5 rows in Tables 2 and 3 show the performance of BERT model on 5 different target tasks. The baseline accuracies are when the model is initialized randomly and trained and tested on the target task. Consequently, these accuracies are different because they are measured for different tasks. The fine-tuned accuracy corresponds to a model that is fine-tuned on MNLI and then trained and tested on the target task.\\nThis happens for the bottom 5 rows of each Table showing the result for HUBERT.\\nWe report the best performing model on the target task dev set, and thus we indeed use the best accuracy as a baseline for comparison.\\nAlthough the baseline results for HUBERT are slightly worse than BERT in Table 2, we show that HUBERT performs much better when initialized with a fine-tuned model (compared to its own baseline), whereas BERT sometimes degrades the performance after knowledge transfer.\", \"response_to_the_minor_comments\": [\"Thank you for spotting the typo. It is now fixed in the revision.\", \"Figure 1 is being referred to in the introduction and Section 4.2 (Ablation Study). However, we will move it up to page 3, so that readers can refer to it earlier.\"]}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": [\"We would like to thank you for your review. Your comments on the work are much appreciated!\", \"As correctly pointed out, our work shows improvement in transfer learning across different tasks in GLUE. Please note that fine-tuning BERT model on intermediate tasks and evaluating its transferability is a challenging problem. For example, see the following papers (one of which is from the GLUE authors): https://arxiv.org/abs/1811.01088\", \", https://arxiv.org/abs/1812.10860\", \"They report that BERT (and other transformer-based models) have inconsistent results when transferring knowledge from an intermediate task to the target task, and often impact the down-stream task results negatively. This confirms our findings in this work and supports the importance of transferability among NLP tasks when finetuned on an intermediate task.\", \"To control for the randomness in the transfer learning results we ran our experiments by fine-tuning BERT with 3 different seeds and choosing the best results among them. However, for HUBERT we used the same seed for all 7 experiments and only changed the initial weights of layers in the model. We added more information regarding experiment settings in the new section we added to the paper, Section A.2 (Implementation details) and discussed the variance of the observed results.\", \"Although the baseline results for HUBERT are slightly worse than BERT in Tables 2 and 3, it is evident that HUBERT performs much better when initialized with a fine-tuned model (compared to its own baseline), whereas BERT usually degrades the performance after knowledge transfer.\", \"We added a section in the appendix (A.4) showing the interpretation and visualization of learned roles. Please refer to the updated version of the paper.\", \"response to the minor comment:\", \"Thank you for your comment. We have now moved Fig. 1 to page 3 as you suggested.\"]}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thanks for your detailed and helpful feedback!\", \"we_address_each_of_your_comments_regarding_the_empirical_gains_in_transfer_learning_below\": \"\", \"on_more_parameters_for_other_models\": \"In our initial experiments, we performed an ablation study by inserting a TPR or LSTM layer on top of certain layers of BERT (e.g. first 2 layers) and omitting the remaining layers. In those experiments, we observed that LSTM was degrading the performance whereas TPR was improving it. However, this conclusion is not True when all 12 layers of BERT-Base are used. For example, the MNLI dev accuracy of the LSTM model with only 10 layers of BERT was 82.64%, 0.84% lower than the accuracy of just the BERT model with no LSTM heads. \\nTherefore, we observed having more parameters does not necessarily result in better accuracies especially when the added layer is not pretrained with the rest of the model.\", \"on_variance_in_the_results\": \"We controlled for the randomness in the results in Table 1 by fine-tuning BERT and HUBERT with 3 different seeds in our experiments and choosing the best results among them. For the cases in which BERT has negative gains after transfer, we observed the same trend, independent of the random seed used. For all other target tasks except SNLI, the mean value for HUBERT gains were always higher than BERT gains. We have added notes regarding the variance of the results in section A.3 (Implementation Details).\", \"on_more_budget_for_hyper_parameter_search\": \"We performed hyper-parameter tuning for both BERT and HUBERT models on the MNLI dev set.\\nAs for the dimension of roles and symbols, we did grid search over these values: [10, 30, 60]\", \"we_fixed_the_number_of_roles_to_35_and_searched_among_these_values_for_the_number_of_fillers\": \"[50, 100, 150].\\nWe additionally performed some light tuning on learning rate, temperature value, and scaling value.\\nPlease refer to section A.3 in the appendix for implementation details.\", \"on_other_contributing_factors\": \"We carried out experiments by changing the ratio of fillers and roles. Making the number of roles and symbols the same would make it difficult to interpret results presented in Tables 2 and 3, as we would no longer be able to differentiate between filler and roles properly. Having a smaller number of roles than fillers corresponds to having less number of grammatical roles than semantic concepts in language. We also ran experiments with different values of \\\\lambda (regularization term) and observed that values higher than 10e-6 will decrease the final accuracy. We thus chose \\\\lambda value to be a small value lower than this threshold. It still encourages R matrix to be orthogonal but not to the extent that it hurts performance. We also updated the regularization term to account for both over-complete and under-complete matrices in the new revision of the paper.\\n \\n \\nWe hope that the above explanations have addressed your concerns. We would be happy to provide more information regarding the experimental setup or results, should you have more questions.\"}",
"{\"title\": \"Response to all the reviewers + Summary of the revisions\", \"comment\": \"We want to thank all the reviewers for their constructive feedback and helpful comments.\\n \\nOne major concern addressed by all reviewers is the interpretability of global Role and Filler matrices. \\nTo address this, we collected the POS tags and the attention vectors over the R matrix (embeddings of individual roles) for each token in the sentence. The attention scores are the distribution over the possible roles for a specific token. For each token, we chose two roles that have the highest attention scores and represent them as a tuple. We then found the distribution of the roles chosen for a specific POS tag. Our preliminary results show that there are correlations between some of the roles and POS tags showing that the learned roles can be indicative of grammar structures.\\n \\nWe have revised our paper and have submitted the new manuscript for your review. We have highlighted the parts that have been changed for better reading. Below is a summary of changes made in the new revision:\\n \\n1) We have improved the readability of our paper and made our claims in Section 4 more clear.\\n2) We have devoted a part in Section 2 to explain the previous work on BERT and fine-tuning methods such as STILTs (https://arxiv.org/abs/1811.01088).\\n3) We have done major revision on Section 4.3 by adding more results and explanations, to make the comparison between BERT+ and HUBERT+ fairer. HUBERT+ (HUBERT fine-tuned on MNLI and then SNLI subsequently) is outperforming BERT on all challenging non-entailment cases setting a new state-of-the-art average accuracy of 63.22% which is 2.21% higher than the BERT's average accuracy.\\n4) We have added more explanations on the experiment setting and implantation details in Section A.2. This should address the concerns regarding the variance in the results by reviewer 1 and 2.\\n5) We have added a new section in Appendix (A.4) on the interpretation of the learned roles.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes an alternative way of reusing pretrained BERT for downstream tasks rather than the traditional method of fine-tuning the embeddings equivalent to the CLS token.\\n\\nFor each bert embedded token, the proposed method aims at disentangling semantic information of the word from its structural role. Authors provide two ways to provide this disentagling using LSTM or transformer blocks. with several design choices such as: * a regularization term to encourages the roles matrix to be orthogonal and hence each role carry independent information * design the roles and symbols matrices so that the number of symbols is greater than the number of roles\", \"in_evaluation_authors_design_several_experiments_to_show_that\": [\"Does transferring disentangled role & symbol embeddings improve transfer learning\", \"the effectiveness of the TPR layer on performance?\", \"Transfer beyond Glue tasks?\", \"While those experiments provide empirical gains of the design choices, authors don't show enough study to attribute those empirical gains to the presented design choices:\", \"One large claim in the paper is that empirical gains in the ability of transfer between similar tasks MNLI and GLUE is because of disentangling the semantics from the role representations. We don't know if the TPR layer really manages to do that, this could have been easily verified using for example clustering word senses of the same word.\"], \"the_empirical_gains_in_transfer_learning_can_be_simply_attributed_to\": [\"More params it seems adding an LSTM over bert embeddings already does some improvement, I would have loved to see this more exploited but it wasn't. This aligns with some recent findings that BERT is undertrained (Liu et al. 2019) https://arxiv.org/abs/1907.11692\", \"Variance in the results (authors report only results of one single run not mean and std of several runs).\", \"More budget given to hyper-parameter search for the models proposed in the paper. Hyper param budget isn't also reported in the paper.\", \"other factors, not the ones associated with the claims in the paper: for example what authors claim is an ablation study was comparing several different models together. It would have been more interesting to see for example the effect of making the # symbols = # roles or removing the orthogonality loss from the roles matrix.\"], \"conclusion\": \"The paper introduces large claims and empirical results that correlate with, however the provided experiments are not done with enough control to attribute gains to the design choices provided in the paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a layer on top of BERT which is motivated by a desire to disentangle content (meaning of the tokens) and form (structural roles of the tokens). Figure 1 shows this clearly. The paper considers two variants of the disentangling layer (TPR), one with LSTMs (figure 2) and the other with attention (figure 3). The aim in both is to obtain a decomposition of the form x(t) = S a_s(v_t) a_r(v_t) R where S and R are shared matrices of parameters and v is the output of BERT.\\n\\nThe model is well motivated and includes clear reasonable design ideas, including choosing hyper-parameters so that the number of symbols (s) is greater than the number of roles (r), and forcing only the roles to be independent (eqn 6).\", \"minor\": \"I would have preferred that figure 1 appeared earlier in page 3. This would help as the authors forgot to define v in eqn 2. One has to wait for the figure. Having said this, the paper is extremely clear in the notation and does an excellent job at defining dimensions for all the quantities of interest.\\n\\nI read the paper eagerly and with excitement until I got to the results. First, it wasn't clear to me how well motivated is the idea of fine-tuning on intermediate tasks. I understand the authors are just trying to make a point that BERT does worse than their model in this case and that this is not good for transfer, but still I find this to be artificially constructed.\\n \\n The variations in the numbers seem small and possibly attributable to other factors. For this reason, I feel the authors should have continued showing results for the other baselines from the first experiment. I would also have loved to see some visualizations for a, r, A and R in the appendix. Some visualization and anecdotal results might have helped me see that the motivation is backed up by the results. I hope the authors have the time to do this and consider the extra experiments.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a fine-tune technique to help BERT models to learn & capture form and content information on textual data (without any form of structural parsing needed). They key addition to the classic BERT model is the introduction of the R and S embeddings. R &S are supposed to learn the information in text that is traditionally represented as the structural positions and the content-bearing symbols in those positions.\\n\\nIn order to effectively learn R and S embeddings, the authors propose two possible ways to do so: LSTM (Fig 2) and 1-layer Transformer (Fig 3). The main experiments are based on 1-layer transformer HUBERT b/c from a single test in Table 1, the transformer variant appears to be working better than the LSTM variant.\", \"my_main_concern_regarding_this_paper_is_two_fold\": \"limited novelty and insignificant performance gain.\\nThe authors did a great job motivating the need for separating role and filler in the intro. However, in neither implementation of HUBERT, I do not see how the structural information (e.g., a parse tree) is directly incorporated into the learning of HUBERT.\\n\\nRegarding the performance, it seems HUBERT is gaining very little over the BERT baseline. please refer to my specific question below.\", \"questions\": \"What are the numeric values for d_S, d_R, n_S, n_R (defined under Section 3 on page 2) in experiment ? I think d_S, d_R are determined at author's discretion (just like the dimensionality of, say, the LSTM hidden layer). But how are n_S and n_R determined?\\n\\nPage 7, first paragraph: what is Filler embeddings F? F is not defined in either version the proposed HUBERT( Figure 2 or Figure 3). Did the authors mean S?\\n\\nTable 2. Why do the first 5 rows and the bottom 5 rows have different baseline Acc. ? Shouldn't we always use the best accuracy as baseline for comparison? If we look at the HUBERT Fine-tuned Acc., in many cases, they are actually worse than the best baseline acc. available. (i.e., QNLI , QQP, and SST).\", \"other_comments\": \"\", \"typo_on_page_one\": \"\\u201c[] To strengthen the generality of \\u2026.\\u201d\\nFigure 1 is never referred in main text.\"}",
"{\"comment\": \"Thank you Florian for reading our paper and for your suggestions.\", \"i_will_respond_to_each_of_the_points_you_raised_below\": \"(I will only include the questions and my response because of space limitation on Openreview)\", \"question\": \"Does transferring the BERT model parameters finetuned on one GLUE task help the other tasks in the Natural Language Understanding (NLU) benchmarks?\", \"response\": \"Our goal in this work is understanding and improving transfer learning in Natural Language tasks. We suggest and show that current SOTA models are not able to efficiently transfer knowledge learned from one task/ domain to other tasks/ domains. To alleviate this problem we propose untangling semantics and structure of the learned representations by the means of TPR and only transferring those instead of the whole model.\\nWe have multiple ideas for follow-up works. What you suggested is also an interesting topic which we would like to explore more.\\n\\nWe thank you again for spending the time reading our paper and for your insightful suggestions and comments. \\n\\nSincerely,\\nAuthors\", \"first_point\": \"In fact for the results in Table 1, we trained each model using 3 different seeds and also performed hyper-parameter tuning. We then chose the model with the best accuracy on the dev set. We, however, did not perform hyper-parameter tuning for transfer learning experiments due to time and computation resources limit.\", \"second_point\": \"In our initial experiments we performed an ablation study by inserting a TPR or LSTM layer on top of certain layers of BERT (e.g. first 2 layers) and omitting the remaining layers. In those experiments, we observed that LSTM was degrading the performance whereas TPR was improving it. However, this conclusion is not True when all 12 layers of bert-base are used in this experiment. We will revise that section again and omit \\\"adding an LSTM on top of BERT is not enough\\\".\", \"third_point\": \"Although this is true, as mentioned in the paper, the point of this experiment was not to claim better results on a specific task. It mostly serves as a sanity-check for HUBERT. We did experiment with another corpus (QQP) in our transfer learning experiments to control for that effect. We are planning to run more experiments using other tasks as a source in the future.\", \"title\": \"Response to reader comments\"}",
"{\"comment\": \"Let me try to give alternative answers to some of the questions you pose in the beginning of Section 4.\", \"question\": \"Does transferring the BERT model parameters finetuned on one GLUE task help the other tasks in the Natural Language Understanding (NLU) benchmarks?\", \"your_answer\": \"No, because the \\\"Gain\\\" column doesn't have positive results.\", \"alternative_answer\": \"Your results suggest no, but you failed to acknowledge two important papers that have looked at the same question: BERT on STILTS, https://arxiv.org/pdf/1811.01088.pdf , and Can you tell me how to get past sesame street?, https://arxiv.org/pdf/1812.10860.pdf , both finding positive effects of choosing GLUE tasks (esp. MNLI) as intermediate tasks.\\n\\n\\nI think your model is interesting, and the goal to increase the linguistic intelligence of BERT is an important one, but your results do not at all match what you claim. I think your approach to measure linguistic intelligence is suboptimal in the first place: We are not so much interested in whether the peak performance on a close-to-solved benchmark improves after training on an intermediate task - we are more interested in whether your model gets to better results more quickly, i.e., with fewer training examples. To this end, you should rather consider how Yogatama et al. define linguistic intelligence in this paper: https://arxiv.org/pdf/1901.11373.pdf . If your model can improve on this metric, it will be an important result.\", \"there_are_many_issues_with_your_analysis\": \"First, 0.7 points of improvement can easily be explained with BERT's susceptibility to random seeds. As far as I see, you didn't control for that. Second, you claim that just adding an LSTM on top of BERT is not enough, only TPRs can do the trick. But putting an LSTM on top already accounts for 0.45 points of the 0.72 improvement. How can you be sure that the additional improvement actually come from TPRs, and not just from, say, more parameters? Third, your choice of MNLI for answering this question was very selective. In fact, the results of \\\"Baseline Acc\\\" in Table 2 show essentially the results of what your report in Table 1, but on other datasets. Here, the performance of HUBERT drops considerably compared to BERT.\", \"title\": \"An alternative interpretation of HUBERT's results\"}"
]
} |
HyxnMyBKwB | The Gambler's Problem and Beyond | [
"Baoxiang Wang",
"Shuai Li",
"Jiajin Li",
"Siu On Chan"
] | We analyze the Gambler's problem, a simple reinforcement learning problem where the gambler has the chance to double or lose their bets until the target is reached. This is an early example introduced in the reinforcement learning textbook by Sutton and Barto (2018), where they mention an interesting pattern of the optimal value function with high-frequency components and repeating non-smooth points, but leave it without further investigation. We provide the exact formula for the optimal value function for both the discrete and the continuous cases. Simple as it might seem, the value function is pathological: fractal, self-similar, with derivatives taking either zero or infinity, not smooth on any interval, and not expressible in terms of elementary functions. It is in fact one of the generalized Cantor functions, and it holds a complexity that has been uncharted thus far. Our analyses could lend insights into improving value function approximation, gradient-based algorithms, and Q-learning, in real applications and implementations. | [
"the gambler's problem",
"reinforcement learning",
"fractal",
"self-similarity",
"Bellman equation"
] | Accept (Poster) | https://openreview.net/pdf?id=HyxnMyBKwB | https://openreview.net/forum?id=HyxnMyBKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"y_ONXacEL31",
"K4x7l6M_hm_",
"DzDAQvpqu-",
"DsrJqMCeO3",
"HJgFDrRioB",
"BJlURV3jjB",
"SkeF8NnojH",
"BylyIg-Kir",
"rkepo8xujr",
"HygGTf3miS",
"H1xcUBhZjH",
"BkxXYmnbsS",
"HJl_tG2WiB",
"rkecKYVccH",
"S1xIYUHRKH",
"ryeJ7a_TYB"
],
"note_type": [
"official_comment",
"comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1685116489905,
1685111366855,
1685111332006,
1576798727398,
1573803360941,
1573795022440,
1573794897510,
1573617735325,
1573549732561,
1573270202013,
1573139794425,
1573139323198,
1573139072081,
1572649346231,
1571866238402,
1571814678727
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1594/Authors"
],
[
"~Kevin_A._Wang1"
],
[
"~Kevin_A._Wang1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1594/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1594/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1594/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1594/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1594/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1594/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1594/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1594/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1594/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1594/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1594/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1594/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for pointing out the previous works!\", \"comment\": \"I agree that the writeup concludes some of the same results. We weren't able to find them during the development of our analysis. In fact, we suspected the existence of these analyses due to how \\\"vanilla\\\" the problem setting is. We thank you for directing us to a previous work.\\n\\nWe take back the words \\\"these results are not clearly pointed out in previous studies\\\" and use this comment to refer the readers to [1] for further information.\\n\\n[1] Siegrist, Kyle. \\\"How to gamble if you must.\\\" AMC 10 (2008): 12.\\n\\nI haven't got a chance to read the book, but will update the comment and references here once I finish so.\"}",
"{\"title\": \"Cite previous works?\", \"comment\": \"Hello from the future! I like this problem and your paper is great. I think the authors and reviewers at the time missed the opportunity to cite previous analyses on this problem. For example, the book \\\"Dubbins, Lester E and Savage, Leonard J. Inequalities for Stochastic Processes; How to Gamble If You Must. Dover Publications (1976).\\\"\", \"and_this_writeup_based_on_the_book\": \"https://www.maa.org/sites/default/files/pdf/joma/Volume8/Siegrist/RedBlack.pdf\\n\\nThese previous analyses arrive to some of the same results, and their existence may contradict the conclusion's statement: \\\"Despite its seeming simpleness, these results are not clearly pointed out in previous studies.\\\"\\n\\nI hope this helps anyone who wants to find more information in the future.\"}",
"{\"title\": \"Cite previous works?\", \"comment\": \"Hello from the future! I like this problem and your paper is great. I think the authors and reviewers at the time missed the opportunity to cite previous analyses on this problem. For example, the book \\\"Dubbins, Lester E and Savage, Leonard J. Inequalities for Stochastic Processes; How to Gamble If You Must. Dover Publications (1976).\\\"\", \"and_this_writeup_based_on_the_book\": \"https://www.maa.org/sites/default/files/pdf/joma/Volume8/Siegrist/RedBlack.pdf\\n\\nI hope this helps anyone who wants to find more information in the future.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies the optimal value function for the gambler's problem, and presents some interesting characterizations thereof. The paper is well written and should be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer very much. It is encouraging to see that our manuscript is now easier to follow as well the increased score. We have also fixed the typo and added the additional notes to our draft.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for the value function example, I was not aware of this.\"}",
"{\"title\": \"Adjusted score\", \"comment\": \"Thank you for tidying up the presentation and for the clarifications, the paper is easier to follow now.\\nI have revised my score accordingly. \\n\\nTypo in the sentence \\\"Otherwise an arbitrary small amount of will have a fixed probability of reaching the target 1.\\\" -> \\\"amount of capital\\\"\"}",
"{\"title\": \"Implications are not (necessarily) immediate\", \"comment\": \"Thanks for further commenting on our manuscript. The hardness of RL is indeed already well known, but it is not known by the community that it can be as complex and pathological as a Cantor function, while this is likely generated to a large family of RL problems. In fact, hardness is mostly observed empirically, such as sample complexity, approximation error, empirical convergence rate, and etc. Our paper discusses *why* the hardness happens, similar to what [1] discusses on the complexity of MDPs.\\n\\nAs an example, \\\"representation of such function must be inexact\\\" is indeed observed for years, but why it is inexact? Even it is handled by algorithms and heuristics on some famous tasks, algorithms may fail on some other tasks and new algorithms can be brought out by new insights.\\n\\nWe extend the implications and hope to bring more insights into possible future algorithms and applications. Our results are new and might be surprising: They are new things to the community and are orthogonal to what people already know. It is reasonable to believe that it will introduce ideas for future works. \\n\\nIt is very natural to expect an algorithmic approach derived by our theorems, or even a use case, but they are unlikely to be with this specific manuscript. The derivation of the theorems itself takes 20 pages, and adding more objectives to this manuscript seems too heavy from our perspective.\\n\\n[1] The Complexity of Markov Decision Processes, Papadimitriou 1987\"}",
"{\"title\": \"Extended Discussion of Implications\", \"comment\": \"Thanks for extending the discussion of the implications. However, again for an informed outsider such as the reviewer, can you actually mention explicitly what you add on top of the existing related work? The hardness of RL is already well known. The problem of finding a good approximation (and whether this can actually be done) has also been addressed. So what does your statement \\\"representation of such function must be inexact\\\" add to the state-of-the-art? Similar for the other parts of the paper.\"}",
"{\"title\": \"Additional notes on 4) and 6)\", \"comment\": \"We thank the reviewer again for providing their helpful comments. Below are additional notes on our response.\\n\\nOn 4): It is convincing and illustrative from 6:35 and 33:20 of the video [1] that the optimal value function of Mountain Car is fractal and self-similar on a substantial region of the state space. As we stated before, we believe that this property extends beyond Gambler and Mountain Car.\\n\\nOn 6): We would like to note that technically, it is *not* important to have this assumption just to solve the problem. Without this assumption, Section 4.1 (i.e. system (ABX)) will give two solutions: v(s), and f(0)=1, f(s)=1 otherwise. It is immediate to argue that the later is not the optimal value function. Then the former must be. We put this assumption to delay this \\\"f(0)=1, f(s)=1 otherwise\\\" solution to Section 4.2 only for a better organization of our manuscript.\\n\\n[1] The role of interest in prediction and control. https://youtu.be/aFXdpCDAG2g. Valliappa Chockalingam et al 2019.\"}",
"{\"title\": \"Review response\", \"comment\": \"We thank the reviewer for their detailed and encouraging review. Your review is very helpful for us to revise the manuscript. A revision has been updated in the system that reflects these changes.\\n\\nThe topic of our manuscript is indeed unconventional but very intriguing. If the presentation is the most concern on our manuscript, we believe it can be improved rather quickly with the reviewers' help to be publishable.\", \"we_have_addressed_the_raised_concerns\": \"2), 3), 5), and the typos, are addressed as pointed out.\\n\\n1) On p.2, \\\"The optimal value function presents its self-similar, fractal and non-rectifiable form\\\". It is not clear from Thm. 11 that these properties hold. Some further explanation would be helpful\\n\\nWe added a note pointing to the exact description of the self-similarity, in Corollary 13 (which is a corollary of Lemma 5's proof).\\n \\n4) p.3 \\\"the similar fractal patterns observed empirically in other reinforcement learning tasks.\\\" Certain concrete examples would be helpful here since I was not aware that this was common.\\n\\nA concrete example we know is Mountain Car (https://gym.openai.com/envs/MountainCar-v0/), where when the optimal value function v(s) is plotted on the two-dimensional state space by a heat map, high-frequency and fractal pattern can be observed. This observation in fact was heavily discussed during a seminar the I attended by the curiosity of the audience. Though this observation is empirical by plotting and zooming in, it is convincing enough that v(s) is likely fractal. When the dimension is greater than 2 we are not sure how this can be observed even empirically.\\n\\nWe have temporarily commented out this statement until more examples are founded. But intuitively, when simple problems like Gambler and Mountain Car to have this level of complexity, there suppose to be a family of MDPs that shares the same level of complexity.\\n\\n6) p.5 \\\"Otherwise an arbitrary small amount of will have a fixed probability of reaching the target 1.\\\" I am unsure what this sentence means.\\n\\nThis means if the function is nonzero when s approaches 0^+, say the limit is C, then the gambler starting with \\\\eps capital can reach the target capital of 1 with probability at least C. Then the expected capital increased from \\\\eps to at least C * 1 = C, which contradicts with the fact p > 0.5 so that expectation must decrease as the game goes.\\n\\nThis is revised and explained in our revised manuscript. \\n\\n7) Perhaps starting with the intuitive description of v(s) (the recursive form) first before presenting the closed form solution would be easier to follow.\\n\\nWe have added a sentence that points to the intuitive description before introducing the Theorem 11, so that the reader can find the intuitive description whenever they want so.\\n\\n8) Why is there no discussion/conclusion section at the end?\\n\\nWe have added a conclusion/future work section. Indeed, it is important to have the section to better position our paper and point out the possible future works.\"}",
"{\"title\": \"Review response\", \"comment\": \"We thank the reviewer for the encouraging review and for their really enjoy reading our manuscript! Indeed, it is enjoyable to explore uncharted problems and share our discovery with the community. We believe it is beneficial to have the community to be able to read our paper and also enjoy these new findings.\\n\\nWe have made a major revision to help potential readers in the community to understand the paper and get more value out of it. The presentation has been largely improved throughout the manuscript. The implication has been extended to a section that includes rigorous statements on value function approximation, which is one of the topics that may bring some insights to the community. Other results are also backed by our theorems that are explicitly pointed to. Another notable change is that more references are added along with the discussion, including papers from reinforcement learning algorithms, MDPs, and mathematics. Armed with the revision, the informed outsiders may have a better shape to understand our work and really get inspired by the new findings. Thus we believe we are on the right track to get the paper publishable.\"}",
"{\"title\": \"Review response\", \"comment\": \"We thank the reviewer for their positive and encouraging review. We are delighted to hear that our exposition is clear and well structured. A revision has been made to improve our manuscript\\n\\nWe understand that implication is important to our theoretical contributions. Therefore, an expanded implication section (Section 1.2) now discusses the three implications in detail. These implications are now backed by the new Proposition 27, 28, and Fact 15 and Theorem 19. The two new propositions do provide analysis on how a parameterized method will actually behave, by giving the error lower bound in O(1/N) and poly(ln L).\\n\\nWe want to clarify that per our Proposition 1, the pathology happens to the discrete case as well. It has the same fractal and self-similar structure although discrete MDP can be solved numerically in practice. The final part is indeed mathy, so we limit their space (Lemma 21 - 24) to be less than 1 page at the end of the analysis. In this way, they maintain the completeness of our analysis of the Bellman equation. We hope that the extended implication in our revised manuscript does provide more value to the ML community.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper gave a very detailed analysis of the value function of the gamblers problem. In this problem, the agent decides how much to bet in each step, doubling or losing the chosen amount. There is a chance of loss of greater than 0.5 and as a result the agent would like to minimize the number of steps. While one optimal policy is extremely simple (just bet the maximum), this paper shows that the value function is deceptively complicated due to its fractal structure. In particular, the value function is continuous only when there is no discounting and does not have well behaved derivatives. The exposition is clear and well structured. The author argues such behaviors have consequences for valued function based RL such as value iterations or Q-learning, since one of the simplest RL problem has a weird value function.\\n\\nThe paper presented both an interesting issue and a clear analysis, and should be accepted. The major issue is that any implications for RL seems unverified. For example, I would be very surprised if a tabular method will not solve the problem given a reasonable update order. The author also did not provide any analysis on how a parameterized method will actually behave, given that is the likely implication. So it is a bit unclear if the pathology is due to taking the continuous limit or if it is present in discrete cases as well. The final part relying on the axiom of choice is too mathy for the problem motivation (since it applies to the rather boring p=0.5 case). The paper would be more valuable to the ML community if that part is replaced with some analysis/evidence of how the issue shows up when actually running one of the solution methods.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper derives the optimal value function for the continuous and discrete versions of the Gambler's problem. It also explores certain properties of this value function and finds that it has many fractal-like qualities.\\n\\nThis paper focuses on an unconventional but intriguing topic, bringing more insight into the possible shapes of value functions. The main result, which describes the value function for the gambler's problem, seems surprisingly complex and could have some practical implications for value-learning. Unfortunately, I find the paper is difficult to follow due its organization and the presence of many typos and unclear phrases. In summary, while I believe the topic is interesting and the work seems sound, the presentation would need to be improved significantly for me to recommend acceptance.\", \"here_are_certain_points_to_be_addressed\": \"1) On p.2, \\\"The optimal value function presents its self-similar, fractal and non-rectifiable form\\\". It is not clear from Thm. 11 that these properties hold. Some further explanation would be helpful \\n2) I think it would be helpful to include some descriptions of certain concepts that may be less familiar to the reader in the appendix. E.g. the Cantor function\\n3) Fig.1 should have larger labels and dots. It is difficult to see currently.\\n4) p.3 \\\"the similar fractal patterns observed empirically in other reinforcement learning tasks.\\\" Certain concrete examples would be helpful here since I was not aware that this was common.\\n5) p.4 The paragraph on 'self-similarity' was not clear to me. What is meant by 'chaos' or 'dimension' in this context? \\n6) p.5 \\\"Otherwise an arbitrary small amount of will have a fixed probability of reaching the target 1.\\\" I am unsure what this sentence means.\\n7) Perhaps starting with the intuitive description of v(s) (the recursive form) first before presenting the closed form solution would be easier to follow.\\n8) Why is there no discussion/conclusion section at the end?\\n\\nThere are numerous typos scattered throughout the paper. Here are some from the first page:\\n- abstract: \\\"where they mention an interesting pattern of\\nthe optimal value function with high-frequency components and repeating nonsmooth\\npoints but without further investigation.\\\" Awkward phrase\\n- abstract: \\\"With the analysis,\\\" Unnecessary phrase\\n- par. 1: \\\"is described as below\\\" -> \\\"is described below\\\"\\n- \\\"which would hide its attractiveness.\\\" Awkward phrase\\n- \\\"as an representative\\\" -> \\\"as a representative\\\"\\n- \\\"family of Markov decision process\\\" -> \\\"family of Markov decision processes\\\"\\n- \\\"the amount of bet.\\\" -> \\\"the amount bet.\\\"\\n- \\\"a round of bet.\\\" -> \\\"a round of betting.\\\"\\n- \\\"action-state value function\\\" -> \\\"state-action value function\\\"\\n- \\\"n as the starting capital (n denotes the state in the discrete setting),\\\" The phrase is a bit confusing as it suggests n denotes both the discrete state and the starting capital.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper revisits the Gambler's problem. It studies a generalized formulation with continuous state and action space and shows that the optimal value function is self-similar, fractal and non-rectifiable. That is, it cannot be described by any simple analytic formula. Based on this, it also deeply analysis the discrete case.\\n\\nOverall, the paper is extremely well written. I must admit that i have really enjoyed reading it. However, the discussion of the implications falls too short. There are a number of complexity results for MDPs and POMDPS, see e.g. (Papadimitriou, Tsitsiklis 1997; Lusena et al. JAIR 2001; Lee at al NIPS 2007). Indeed, the paper does not hide this and provides references. However, for an informed outsider such as the reviewer it does argue how exactly it extends our knowledge here.\"}"
]
} |
S1esMkHYPr | GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation | [
"Chence Shi*",
"Minkai Xu*",
"Zhaocheng Zhu",
"Weinan Zhang",
"Ming Zhang",
"Jian Tang"
] | Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention. The problem is challenging since it requires not only generating chemically valid molecular structures but also optimizing their chemical properties at the same time. Inspired by the recent progress in deep generative models, in this paper we propose a flow-based autoregressive model for graph generation called GraphAF. GraphAF combines the advantages of both autoregressive and flow-based approaches and enjoys: (1) high model flexibility for data density estimation; (2) efficient parallel computation for training; (3) an iterative sampling process, which allows leveraging chemical domain knowledge for valency checking. Experimental results show that GraphAF is able to generate 68\% chemically valid molecules even without chemical knowledge rules and 100\% valid molecules with chemical rules. The training process of GraphAF is two times faster than the existing state-of-the-art approach GCPN. After fine-tuning the model for goal-directed property optimization with reinforcement learning, GraphAF achieves state-of-the-art performance on both chemical property optimization and constrained property optimization. | [
"Molecular graph generation",
"deep generative models",
"normalizing flows",
"autoregressive models"
] | Accept (Poster) | https://openreview.net/pdf?id=S1esMkHYPr | https://openreview.net/forum?id=S1esMkHYPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Igv3PyV24z",
"teE0IZ_tBr3",
"fpmxv3N5I0",
"r1gWIWfssH",
"H1gP43WoiB",
"rJgzzsWjjH",
"SkxA45WosH",
"SJg3GOZssr",
"SyxTYv-iiS",
"H1e1KNvYsS",
"H1gdnWwFsr",
"HklSxJ_diS",
"BkxrWaDOor",
"Bygv3OP_ir",
"ByeVKuvusr",
"rJediPDdsS",
"rJlKI8w_sS",
"r1e4JLwdsS",
"rklLSrw_iB",
"H1gY_VP_iB",
"ByxZqYuWcr",
"S1gjCzZAFr",
"ryxiyLlF_S"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1595918227774,
1595825880941,
1576798727366,
1573753161331,
1573751854762,
1573751562273,
1573751350085,
1573750803854,
1573750661322,
1573643383301,
1573642671856,
1573580525234,
1573580028714,
1573578926973,
1573578876325,
1573578655641,
1573578320856,
1573578204225,
1573578046387,
1573577840742,
1572075912980,
1571848914694,
1570469347388
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"~Ziqi_Chen1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1591/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Thanks for pointing it out.\", \"comment\": \"Hi, Ziqi,\\nWe are actually aware of this issue and will report the new results as soon as possible.\"}",
"{\"title\": \"Questions about \\\"Constrained Property Optimization\\\" experiment\", \"comment\": \"Hi,\\n\\nI noticed that in your experiment about constrained property optimization, your test set contains 800 molecules with the lowest penalized logP in ZINC250k. I think this test set is not the same with the test set used in JT-VAE and GCPN. Actually, the description about test set in JT-VAE is:\\n\\n\\\"To provide the greatest challenge, we selected 800 molecules with the lowest property score y(\\u00b7) from the test set.\\\"\\n\\nThe test set in the above sentence represents the test set used in [1].\\n\\nI checked the test set provided in the google drive, and found that the penalized logp values of compounds in your test set range from -62.5 to -8.2. Considering that the penalized logp values in the test set of JT-VAE and GCPN range from -11.0 to -0.5, I think it would be unfair to compare the results on this test set with the results of JT-VAE and GCPN.\\n\\n[1]: Kusner, Matt J., Brooks Paige, and Jos\\u00e9 Miguel Hern\\u00e1ndez-Lobato. \\\"Grammar variational autoencoder.\\\" arXiv preprint arXiv:1703.01925 (2017).\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"All reviewers agreed that this paper is essentially a combination of existing ideas, making it a bit incremental, but is well-executed and a good contribution. Specifically, to quote R1:\\n\\n\\\"This paper proposes a generative model architecture for molecular graph generation based on autoregressive flows. The main contribution of this paper is to combine existing techniques (auto-regressive BFS-ordered generation of graphs, normalizing flows, dequantization by Gaussian noise, fine-tuning based on reinforcement learning for molecular property optimization, and validity constrained sampling). Most of these techniques are well-established either for data generation with normalizing flows or for molecular graph generation and the novelty lies in the combination of these building blocks into a framework. ... Overall, the paper is very well written, nicely structured and addresses an important problem. The framework in its entirety is novel, but the building blocks of the proposed framework are established in prior work and the idea of using normalizing flows for graph generation has been proposed in earlier work. Nonetheless, I find the paper relevant for an ICLR audience and the quality of execution and presentation of the paper is good.\\\"\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks\", \"comment\": \"I can read the updated manuscript now. Thanks!\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your quick response! Your reviews really help improve the paper. We really appreciate it.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks a lot, this clarification really helps and my initial concern no longer holds. Thank you also for providing these two additional references and for devoting a section in your appendix to this issue. I think with these changes I feel comfortable in recommending acceptance of this paper.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks a lot for your detailed clarifications regarding my questions on the RL objective and the BFS ordering. This is indeed clear now and I think that the revised statements in the paper should avoid further confusion on this point.\"}",
"{\"title\": \"Response to the AnonReviewer1\", \"comment\": \"Thank you for your quick response! The discussion is really helpful. Looking forward to your reply. The answers to your concerns are listed below.\", \"q3\": \"As mentioned in my initial review, RealNVP and Glow use their de-quantization technique on bit-quantized image data (unless I misunderstand their technique) where simply adding noise is justified as it \\\"spreads out\\\" the discrete values. For binary or categorical data, however, I still think that this technique is problematic, as my point from my original review (adding noise can move points outside of the probability simplex) still holds even if you use uniform noise instead of Gaussian noise (apologies for this misunderstanding). GraphNVP seems to have a similar issue (this paper is currently under review at ICLR as well and one reviewer pointed out the same concern), so I think it is not valid to point to them for justification of this approach.\", \"a3\": \"Moving the points outside of the probability simplex actually does not matter as the normalizing flows are actually defined on general continuous data. The dequantization techniques allow mapping the discrete data into continuous one by adding a small noise to each dimension. By adding a noise from U[0,1), we can assure that the range of different categories will not overlap. For example, after dequantization, the value of 1-entry in the one-hot vector lie in [1,2) while the 0-entry lie in [0,1). We can also map the generated continuous data back to the discrete data by using the argmax function in the generation process.\\nTheoretically, as shown in [1] (eq3-6) and [2], training a continuous density model on uniform dequantized data can be interpreted as maximizing a lower bound on the log-likelihood for the original discrete data. This statement holds for both image data and binary/categorical data mathematically. In addition, as suggested in [2], instead of adding random uniform noise to each discrete data for dequantization, a more advanced dequantization technique is to treat the noise as hidden variables and use variational inference to infer the optimum noise added to each discrete data. We will explore this in our future work.\\n\\nWe have added a section to discuss the dequantization techniques in the appendix. We are happy to further discuss this if you still have questions.\\n\\n[1] Theis, Lucas, A\\u00e4ron van den Oord, and Matthias Bethge. \\\"A note on the evaluation of generative models.\\\" arXiv preprint arXiv:1511.01844 (2015).\\n[2] Ho, Jonathan, et al. \\\"Flow++: Improving flow-based generative models with variational dequantization and architecture design.\\\" arXiv preprint arXiv:1902.00275 (2019).\"}",
"{\"title\": \"Response to the AnonReviewer1\", \"comment\": \"Thank you for your quick response! The discussion is really helpful. Looking forward to your reply. The answers to your concerns are listed below.\", \"q1\": \"You mention that GraphNVP is not compatible with a reinforcement learning objective for fine-tuning. Is this because of the one-shot nature of the generation process that you refer to? One example that comes to my mind where a one-shot generative process is combined with an RL objective is MolGAN [1] -- maybe you can comment on this.\", \"a1\": \"Very good point! Indeed, GraphNVP is compatible with the objective used in MolGAN. However, note that the approach used in MolGAN is not based on RL and is actually based on the one-step policy gradient algorithm, which is not the classical RL. The classical RL problem is a sequential decision process, which involves a series of states and actions. This process also allows us to introduce intermediate rewards (e.g., the penalization for valency check in each step defined in Section 4.4) and final rewards. For the approach used in MolGAN---which is only one-step decision---we are only able to provide final rewards but not able to leverage the intermediate rewards (e.g., the chemical rules for valency check).\\n\\nTo avoid misunderstanding, we have removed this statement in section 2.\", \"q2\": \"How do you canonically order nodes *within* the BFS front of the BFS-ordering? To me, it seems like typical BFS-ordering only gives you a partial ordering of the nodes in a graph as nodes within the BFS front are still ordered arbitrarily. Hence you would still not have the exact likelihood unless you find a way to break the symmetry consistently within the BFS front. Please correct me if I'm wrong.\", \"a2\": \"Thanks for raising this point again. Yes, you are right that nodes within the BFS front are still ordered arbitrarily. GraphAF is trained on all possible BFS orderings. This can be done by first randomly permuting the adjacency matrix and then randomly pick a node as BFS front. By canonical order(BFS-order), we meant that all the orders of graphs we used to train GraphAF are BFS orders. We meant the exact density of each molecule under a given order (which is sampled each time that we load a batch of training graphs) can be efficiently computed by the change-of-variables formula. We have revised the section 4.2 because we think the word \\u201ccanonical\\u201d is a little bit misleading. Thanks for the suggestions!!\"}",
"{\"title\": \"Response (part 2)\", \"comment\": \"Thank you for your response.\", \"i_have_another_clarification_question_regarding_a3\": \"As mentioned in my initial review, RealNVP and Glow use their de-quantization technique on bit-quantized image data (unless I misunderstand their technique) where simply adding noise is justified as it \\\"spreads out\\\" the discrete values. For binary or categorical data, however, I still think that this technique is problematic, as my point from my original review (adding noise can move points outside of the probability simplex) still holds even if you use uniform noise instead of Gaussian noise (apologies for this misunderstanding). GraphNVP seems to have a similar issue (this paper is currently under review at ICLR as well and one reviewer pointed out the same concern), so I think it is not valid to point to them for justification of this approach.\\n\\nWould you be able to further clarify or justify your method?\"}",
"{\"title\": \"Response (part 1)\", \"comment\": \"Thank you for your response.\\n\\nI have two follow-up questions to part 1 of your response to avoid potential misunderstandings.\", \"a1\": \"You mention that GraphNVP is not compatible with a reinforcement learning objective for fine tuning. Is this because of the one-shot nature of the generation process that you refer to? One example that comes to my mind where a one-shot generative process is combined with an RL objective is MolGAN [1] -- maybe you can comment on this.\\n\\n[1] De Cao & Kipf, \\\"MolGAN: An implicit generative model for small molecular graphs\\\", 2018\", \"a2\": \"How do you canonically order nodes *within* the BFS front of the BFS-ordering? To me it seems like typical BFS-ordering only gives you a partial ordering of the nodes in a graph as nodes within the BFS front are still ordered arbitrarily. Hence you would still not have the exact likelihood unless you find a way to break the symmetry consistently within the BFS front. Please correct me if I'm wrong.\"}",
"{\"title\": \"Response to the AnonReviewer2\", \"comment\": \"Hi,\\nWe have just uploaded the paper. I believe it is all set now. \\n\\nBests\"}",
"{\"title\": \"Cannot confirm the manuscript update\", \"comment\": \"Hi,\\n\\nThank you for your answers. \\n\\nUnfortunately I cannot confirm the updated manuscript. \\nIn fact, the header info of this page says: 26 Sep 2019 (modified: 26 Sep 2019) so not updated. \\n\\nBests\"}",
"{\"title\": \"Response to the AnonReviewer1\", \"comment\": \"Thanks for your comments and suggestions. The response to some of your concerns are listed below:\", \"q1\": \"The framework in its entirety is novel, but the building blocks of the proposed framework are established in prior work and the idea of using normalizing flows for graph generation has been proposed in earlier work (see GraphNVP[4] and GNF). Nonetheless, I find the paper relevant for an ICLR audience and the quality of execution and presentation of the paper is good.\", \"a1\": \"Our work is indeed related to the two work Graph Normalizing Flows[1] (GNF, we have added the missing reference) and GraphNVP[3]. However, our work is fundamentally different from their work. GNF defines a normalizing flow from a base distribution to the hidden node representations of a pretrained Graph Autoencoders. The graph generation is done through two separate stages by first generating the node embeddings with the normalizing flows and then generate the graphs based on the generated node embeddings in the first stage. In GraphAF, we define an autoregressive flow from a base distribution to the molecular graph structures, which can be trained end-to-end. GraphNVP also defines a normalizing flow from a base distribution to the molecular graph structures. However, the generation process of GraphNVP is one-shot, which cannot effectively capture graph structures and also cannot guarantee the validity of generated molecules (only 40% validity rate). In our GraphAF, we formulate the generation process as a sequential decision process and effectively capture the subgraph structures based on graph neural networks, based on which we define a policy function to generate the nodes and edges. The sequential generation process also allows to incorporate the chemical rules. As a result, the validity of the generated molecules can be guaranteed (100% validity rate). Moreover, we can effectively optimize the properties of generated molecules by fine-tuning the policy with reinforcement learning, which is not feasible in GraphNVP.\", \"q2\": \"Order-invariance: The paper states that the \\u201cexact density of each molecule can be efficiently computed by the change-of-variables formula\\u201d. This seems to be incorrect, as the exact density is a product overall order-specific densities for all possible permutations in which the molecular graph can be represented. The change-of-variables formula does not provide an efficient way to circumvent this order-invariance issue, at least not in the way it is presented in the paper. Even when using BFS-ordered representations, the subspace of possible permutations is still typically too large to allow for efficient evaluation of the exact density. I suspect that the authors assume a canonical ordering of the graph representations, which is a strong assumption, but does not seem to be mentioned in the paper. How is the canonical ordering chosen? How is local structural symmetry broken in a consistent manner?\", \"a2\": \"Thanks for pointing this out!! Following existing work on graph generation\\u2014GraphRNN[2] and MolecularRNN[4]\\u2014we use the BFS-ordering of a graph (mentioned in Section 4.2). 
BFS-ordering has been shown very effective for graph generation in previous work, which can effectively limit the number of edge predictions made for each node and hence significantly accelerate training [2].\\n\\nAnd you\\u2019re right on that we cannot calculate the exact likelihood of a molecule, which requires calculating all the permutation of the molecule. What we mean is that we can calculate the exact likelihood of the canonical order (BFS-order) of a molecule. We\\u2019ve already revised this in the new version.\"}",
"{\"title\": \"Response to the AnonReviewer1 cont.\", \"comment\": \"\", \"q3\": [\"De-quantization: The de-quantization scheme used in this paper seems to be ill-suited for categorical variables. What motivates the use of adding Gaussian noise to categorical (one-hot encoded) variables, other than that it seems to work OK in the reported experiments? Adding Gaussian noise in this way can move these variables outside of the probability simplex \\u2014 is this a valid technique in the framework of normalizing flows? Adding Gaussian noise makes sense if the data represents quantized continuous data, e.g. bit-quantized image data, but I have concerns about the validity of using this method for categorical data (both edge type and node features are categorical in this application). Other comparable generative models for graph-structured data use a relaxed discrete distribution (concrete / Gumbel softmax), e.g. in MolGAN[3], to address this issue \\u2014 would this also be applicable here?\"], \"a3\": \"Actually, instead of Gaussian noise, we used the uniform noise (Equation 5) for de-quantization.\\nThe same techniques have also been used in other normalizing flow methods for discrete data (e.g. GraphNVP[3], RealNVP[5], Glow[6]) and also shown very effective. \\n\\nNote that Gumbel softmax and dequantization are techniques for two very different problems on discrete data. The former one is used to backpropagate the gradient through discrete variables, while dequantization is used to transform discrete data into continuous since the invertible mappings defined by normalizing flows are mainly for continuous data.\\n\\nWe hope the above response could address your concerns. Please let us know if you have other questions. We\\u2019re happy to further answer. \\n\\n\\n[1] Liu et al., Graph Normalizing Flows. arXiv 2019.05.\\n[2] You et al. GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models. ICML 2018.\\n[3] Madhawa et al., GraphNVP: An invertible flow model for generating molecular graphs. arXiv 2019.05.\\n[4] Popova et al. Molecularrnn: Generating realistic molecular graphs with optimized properties. arXiv preprint arXiv:1905.13372, 2019.\\n[5] Dinh et al., Density Estimation using Real NVP. ICLR\\u201917.\\n[6] Diederik P. Kingma, Prafulla Dhariwal. Glow: Generative Flow with Invertible 1\\u00d71 Convolutions. NIPS\\u201918.\"}",
"{\"title\": \"Response to the AnonReviewer2\", \"comment\": \"Thanks for your comments and suggestions. The response to your concerns are listed below:\", \"q1\": \"The introduction of the normalizing flows (Sec 3.1) can be expanded to reach non-expert users. Advantages of using invertible flows (against other generative models such as GANs and VAEs) are not described rigorously in the current manuscript. I also suggest citing a nice review for invertible flows appeared recently.\\nExplanations of the Sec 4.4 (+ appendix B) is simply insufficient to reproduce the experiments. More descriptions or references are required.\", \"a1\": \"Thank you for suggestions. We\\u2019ve already revised this section and also cited a new introduction and survey paper on normalizing flows [1]. Advantages of flow are briefly introduced in the introduction. We\\u2019ve also revised and extended Sec 4.4 in the revised version.\", \"q2\": \"No discussion why the combination of the autoregressive flow and the RL performs greatly, compared to baselines. Some discussions will help the community to further improve the optimization tasks in the future.\", \"a2\": \"This is a very good point!! As defined in Sec 4.4, our RL process is close to the one in previous work GCPN[2]. Therefore, the good property optimization performance is believed to come from the flexibility of flow. Compared with the GAN model used in GCPN[2], which is known to suffer from the mode collapse problem, flow is flexible at modeling complex distributions and generating diverse data (as shown in Table 2). This allows to explore a variety of molecule structures in the RL process for molecule properties optimization.\\n\\nWe hope the above response could address your concerns. \\n\\n[1] Ivan Kobyzev, Simon Prince, Marcus A. Brubaker. Normalizing Flows: Introduction and Ideas. arXiv:1908.09257. \\n[2] You et al. Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation. NeurIPS 2018.\"}",
"{\"title\": \"Response to the AnonReviewer3\", \"comment\": \"Thank you very much for your constructive comments! We have conducted more experiments according to your suggestions. The response to some of your questions are listed below:\", \"q1\": \"The empirical validation of GraphAF is contained to the single dataset - ZINC with a maximum of 38 atoms. From table 2, it seems to me every prior method works pretty well on important metrics. There is very little room for improvement. I recommend including results on QM9 and CEPDB datasets.\", \"a1\": \"Thanks for your suggestion on more datasets. The reason that we only use ZINC data set is that we followed the experiment setting in existing work including GCPN[1] and JT-VAE [2]. According to your suggestions, we conducted additional experiments on QM9 (134K molecules in total) and MOSES (1.9M molecules in total). We didn\\u2019t use CEPDB since currently we can\\u2019t access to this dataset. The results on QM9 and MOSES data set are summarized below:\\n| Data. | Valid | Valid w/o check | Uniqueness | Novelty | Reconstruction |\\n| QM9 | 100 | 67 | 94.51 | 88.83 | 100 | \\n|MOSES | 100 | 71 | 99.99 | 100 | 100 |\\nWe can see that our method can generate valid, unique, and novel molecules with different training data sets. \\n\\nOn the task of density modeling, we agree that different methods perform comparably. However, noting that GraphAF (1) can achieve 68% validity rate even without leveraging chemical domain knowledge thanks to the strong capacity of normalizing flow framework (GCPN can only reach 20%); (2) enjoys parallel training and is therefore much more efficient than existing methods. Moreover, on the more challenging and more important tasks of drug discovery\\u2014Property Optimization and Constrained Property Optimization\\u2014GraphAF achieves the state-of-the-art performance (Table 5 and Table 6).\", \"q2\": \"The model being data-agnostic, it makes sense to evaluate them on generic graph datasets - synthetic and real.\", \"a2\": \"Note that GraphAF is mainly designed for molecular graph generation. However, it is indeed very general and can be generalized to generate different types of graphs by changing the Edge-MLPs and Node-MLP functions (Equation 8). Specifically, we follow the experiment setup of GNF[3](Sec.5.2 in the original paper) and run GraphAF on two generic graph datasets: COMMUNITY-SMALL(synthetic) , EGO-SMALL(real). The results are as follows:\\nCommunity Small \\t\\t\\t\\t\\tEgo Small\\nDegree Cluster Orbit \\tDegree Cluster Orbit\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nGraphRNN 0.08 0.12 0.04\\t\\t 0.09 0.22 0.003\\nGNF\\t\\t0.20\\t 0.20 0.11\\t\\t 0.03\\t 0.10 0.001\\nGraphAF 0.18 0.20 0.02\\t\\t 0.03 \\t 0.11\\t 0.001\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nGraphRNN 0.03 0.01 0.01\\t\\t 0.04 0.05 0.06\\nGNF\\t\\t0.12\\t 0.15 0.02\\t\\t 0.01\\t 0.03 0.0008\\nGraphAF 0.06 0.10 0.015\\t\\t 0.04 \\t 0.04 \\t 0.008\\n \\nThe above results demonstrate that GraphAF, when applied to generic graphs, can consistently yield comparable or better results compared with existing state-of-the-art approaches GNF[3] and GraphRNN[4]. 
We have added these results in the revised version.\"}",
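For orientation, the Degree/Cluster/Orbit numbers above are maximum mean discrepancy (MMD) distances between statistics of generated and test graphs, following the GraphRNN evaluation protocol. Below is a rough, self-contained sketch of a squared-MMD computation; it uses a plain Gaussian kernel and a biased estimator for brevity (the actual protocol uses an EMD-based kernel), and the histogram values are made up:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared MMD between two sets of graph statistics,
    # e.g. normalized degree histograms of generated vs. test graphs.
    k = lambda A, B: np.mean([gaussian_kernel(a, b, sigma) for a in A for b in B])
    return k(X, X) + k(Y, Y) - 2.0 * k(X, Y)

# Made-up histograms; identical sets give (near) zero, lower is better.
X = [np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.4, 0.2])]
Y = [np.array([0.6, 0.3, 0.1]), np.array([0.5, 0.2, 0.3])]
print(mmd2(X, Y))
```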
"{\"title\": \"Response to the AnonReviewer3 cont.\", \"comment\": \"\", \"q3\": \"The novelty of the model is limited. The flow-based graph generative model is introduced in Graph Normalizing Flow (GNF) (NeurIPS'19, NeurIPS'18 workshop). The reversible flow is extended to whole graph in GraphNVP. Unlike GNF, GraphNVP[4] and GraphAF do away with decoder. The major difference being the sampling process - one-shot to sequential.\", \"a3\": \"Thanks for pointing out the related work. Our work is indeed related to the two work Graph Normalizing Flows (GNF) and GraphNVP[5]. However, our work is fundamentally different from the two works. GNF defines a normalizing flow from a base distribution to the hidden node representations of a pre-trained Graph Autoencoders. The graph generation is done through two separate stages by first generating the node embeddings with the normalizing flows and then generate the graphs based on the generated node embeddings in the first stage. In GraphAF, we define an autoregressive flow from a base distribution to the molecular graph structures, which can be trained end-to-end. GraphNVP also defines a normalizing flow from a base distribution to the molecular graph structures. However, the generation process of GraphNVP is one-shot, which cannot effectively capture graph structures and also cannot guarantee the validity of generated molecules (only 40% validity rate). In our GraphAF, we formulate the generation process as a sequential decision process and effectively capture the subgraph structures based on graph neural networks, based on which we define a policy function to generate the nodes and edges. The sequential generation process also allows incorporating the chemical rules. As a result, the validity of the generated molecules can be guaranteed (100% validity rate). Moreover, we can effectively optimize the properties of generated molecules by fine tuning the policy with reinforcement learning, which is not feasible in GraphNVP.\", \"clarification\": \"\", \"q1\": \"What are the inputs edge-mlp's operate on? Given the generation step is sequential, it is not clear to me why all the node embeddings H_i^L is given as input in eq (8). I also noted that the dimension of H_i^L varies with size of sub-graphs. Also note mismatch in the notation 'f' used in algorithm 1 and 'g' from the main text.\", \"a1\": \"A good point! With BFS order, the complexity of GraphAF scales linearly to the number of nodes or edges. Therefore, the size of the graph is not an issue. To scale to graphs with many node types, we can represent each node type with a low-dimensional vector instead of with a one-hot high-dimensional vector. Similar ideas have already been explored in using normalizing flows for text generation [6] (the size of vocabulary is very large).\\n\\nQ2. Moreover, GraphAF utilizes only single layer of flow i.e., eq (9). This is clearly not sufficient to model complex graphs. And in its current form it is not clear how one can extend to multi-layer flow.\", \"q2\": \"Please compare inference time.\", \"a2\": \"We only use one single layer of flow since it has already been shown very powerful for modeling molecular graph structures in different data sets. However, the framework is very general and can be easily extended to multi-layer flow for modeling complex graphs. Specifically, we can construct a T-layer flow to map molecular graph structures to base distribution (z->\\\\epsilon_T->\\\\epsilon_{T-1}->...->\\\\epsilon_1). 
In the t-th layer, we take \\\\epsilon_t as the features of nodes and edges. Since \\\\epsilon_t is continuous, we can directly input it into R-GCN (see Eq.(8)) without the dequantization process, and then perform the transformation to \\\\epsilon_{t-1} as defined in Eq.(10) .\", \"other_weakness\": \"Q1. Due to invertible flow modeling, the latent space is usually restricted to small dimension. In current case it is 9 for node feature and 3 for edge features. This drawback alongside the sequential edge generation prevents GraphAF from scaling to complex and large graphs with many labels.\"}",
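To make the T-layer construction described in a2 concrete, here is a minimal numeric sketch (our own, not the paper's code) of inverting a stack of affine flow layers and accumulating the log-determinant for exact density evaluation. In GraphAF the per-layer (mu, alpha) would come from R-GCN-conditioned MLPs; fixed arrays are used here so the sketch runs standalone:

```python
import numpy as np

def inverse_and_logdet(z, layers):
    # Each layer applies z = eps * exp(alpha) + mu; invert layer by layer,
    # accumulating log|det d(eps)/d(z)| = -sum(alpha) per layer.
    eps, log_det = z, 0.0
    for mu, alpha in reversed(layers):
        eps = (eps - mu) * np.exp(-alpha)
        log_det -= np.sum(alpha)
    return eps, log_det

layers = [(np.zeros(3), 0.1 * np.ones(3)),
          (0.5 * np.ones(3), -0.2 * np.ones(3))]
eps, log_det = inverse_and_logdet(np.array([0.3, -0.1, 0.7]), layers)
# Exact log-density under a standard normal base distribution:
log_p = -0.5 * np.sum(eps ** 2 + np.log(2.0 * np.pi)) + log_det
print(log_p)
```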
"{\"title\": \"Response to the AnonReviewer3 cont.\", \"comment\": \"\", \"q3\": \"The encoder modeling in GraphAF also shares similarities with Variational graph auto-encoder. Instead of constraining latent distribution using KL divergence, GraphAF maximizes graph likelihood to enforce base distribution.\", \"a3\": \"In general, normalizing flows are indeed related to variational auto-encoders, both of which tried to explicitly model the data density and aim to maximize the data likelihood. However, flow-based methods are fundamentally different from VAE in the following perspectives: (1) flow-based methods define an invertible mapping between the latent space and observation space; (2) flow-based methods allow to calculate the exact likelihood while VAE methods can only optimize a lower bound.\\n\\nWe hope the above responses address your concerns. Please let us know if you have other questions. We\\u2019re happy to further answer the questions. \\n\\n[1] You et al. Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation. NeurIPS 2018.\\n[2] Jin et al. Junction Tree Variational Autoencoder for Molecular Graph Generation. ICML 2018.\\n[3] Liu et al., Graph Normalizing Flows. arXiv 2019.05.\\n[4] You et al. GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models. ICML 2018.\\n[5] Madhawa et al., GraphNVP: An invertible flow model for generating molecular graphs. arXiv 2019.05.\\n[6] Zachary M. Ziegler, Alexander M. Rush. Latent Normalizing Flows for Discrete Sequences. ICML\\u201919.\"}",
"{\"title\": \"Response to all the reviewers and area chair\", \"comment\": \"We would like first to thank all the reviewers for your constructive reviews. We\\u2019ve revised the paper according to your reviews. Specifically, we have made the following changes:\\n1. We conduct additional experiments on another two molecule data sets QM9 and MOSES and two generic graph data sets in Section 5.2. The results are available at Table 3, 4. Results show that our proposed method GraphAF can still get state-of-the-art or competitive results on these data sets. We also present some generated examples of generic graph in the appendix.\\n2. We expand the description of normalizing flows to make the paper more self-contained. A citation of the survey paper on normalizing flows is also given for reference. We also explained the RL process in Section 4.4 in more details. \\n3. We discuss the difference between our work and existing work including Graph Normalizing Flows and GraphNVP in more detail in Section 2.\\n4. We give a detailed explanation on why GraphAF + RL pipeline works well on the tasks of property optimization in Section 5.2. \\n5. We revise the statement of \\u201ccalculating the exact density of each molecule\\u201d with GraphAF to \\u201ccalculating the exact density of each molecule under a given order\\u201d.\\n6. We added a section to discuss the dequantization techniques in the appendix.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"# Post Rebuttal\\n\\nThe authors have partially and satisfactorily addressed my concerns. In line of this I am raising my score to Weak Accept.\\n\\nThis paper proposes a new molecular graph generative model (GraphAF) which fuses the best of two worlds of generative networks - reversible flow and autoregressive mode. Such integration enjoys a) faster training due to parallel computation b) molecular validity checker during inference supported by sequential sampling process and c) exact likelihood maximisation due to invertible encoder. In lieu of such advantages, the model trains two times faster than the existing state-of-the-art and generates 100% valid molecules when trained on ZINC dataset. Further, it also demonstrates that additionally if the chemical properties are optimised during training with reinforcement learning policy then GraphAF outperforms all the prior works.\\n\\nAlthough the paper presents an interesting fusion of different generative models, in its current form it leans towards rejection due to the following factors:\\n1) The empirical validation of GraphAF is contained to single dataset - ZINC with a maximum of 38 atoms. From the table 2, it seems to me every prior method works pretty well on important metrics. There is very little room for improvement. I recommend including results on QM9 and CEPDB datasets. \\n2) The model being data-agnostic, it makes sense to evaluate them on generic graph datasets - synthetic and real.\\n3) The novelty of the model is limited. The flow-based graph generative model is introduced in Graph Normalizing Flow (GNF) (NeurIPS'19, NeurIPS'18 workshop). The reversible flow is extended to whole graph in GraphNVP. Unlike GNF, GraphNVP and GraphAF do away with decoder. The major difference being the sampling process - one-shot to sequential.\\nI am willing to improve my rating given that some of this points are addressed.\", \"clarification\": \"1. What are the inputs edge-mlp's operate on ? Given the generation step is sequential, it is not clear to me why all the node embeddings H_i^L is given as input in eq (8). I also noted that the dimension of H_i^L varies with size of sub-graphs. Also note mismatch in the notation 'f' used in algorithm 1 and 'g' from the main text. \\n2. Please compare inference time.\", \"other_weakness\": \"1. Due to invertible flow modeling, the latent space is usually restricted to small dimension. In current case it is 9 for node feature and 3 for edge features. This drawback alongside the sequential edge generation prevents GraphAF from scaling to complex and large graphs with many labels.\\n2. Moreover, GraphAF utilizes only single layer of flow i.e., eq (9). This is clearly not sufficient to model complex graphs. And in its current form it is not clear how one can extend to multi-layer flow.\\n3. The encoder modeling in GraphAF also shares similarity with Variational graph auto-encoder. Instead of constraining latent distribution using KL divergence, GraphAF maximizes graph likelihood to enforce base distribution.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new invertible-flow based graph generation model.\\nThe main difference from the previous flow-based generation model (GraphNVP) is the choice of flow mappings (coupling flow --> autoregressive flow). \\nThe authors formulate conditional probabilities of iterative node/edge generation. Iterative sampling scheme naturally allows incorporating validity checks in each sampling step, assuring 100% validity in graph generation. The paper also proposes an implementation of molecule lead optimization combined with a RL framework. The experimental results show superiority in terms of valid graph generation and property optimization. \\n\\n\\nOverall, the paper is written well. I feel no difficulty in understanding the main idea and the equations in the paper.\\n \\nIntroduction of the normalizing flows (Sec 3.1) can be expanded to reach non-expert users. Advantages of using invertible flows (against other generative models such as GANs and VAEs) are not described rigorously in the current manuscript. I also suggest citing a nice review for invertible flows appeared recently: \\n\\nIvan Kobyzev, Simon Prince, and Marcus A Brubaker, ``Normalizing Flows: Introduction and Ideas'', arXiv: 1908.09257, 2019.\\n\\n\\nExplanations of the Sec 4.4 (+ appendix B) is simply insufficient to reproduce the experiments. More descriptions or references are required. \\n\\n\\nExperimental results seem promising. A better validity score than the previous flow model illustrates the efficacy of the autoregressive flow against the coupling flow. \\nThe performance on the property optimization (Table 3) seems brilliant. However, there is no discussion why the combination of the autoregressive flow and the RL performs greatly, compared to baselines. Some discussions will help the community to further improve the optimization tasks in the future. \\n\\n+ Overall, a good paper. well written, easy to understand. \\n+ A new variant of the invertible-flow based graph generation model. The novelty lies in the iterative generation process, naturally combined with the autoregressive flow. \\n+ Superior to the one-shot flow baseline (GraphNVP) even if additional validity checks are omitted (Table 2)\\n+ Good performances in property optimizations (Table 3, 4) \\n- The explanation for RL process is simply insufficient for reproduction.\\n-- No discussions about reasons why GraphAF+RL performs great in property optimization.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a generative model architecture for molecular graph generation based on autoregressive flows. The main contribution of this paper is to combine existing techniques (auto-regressive BFS-ordered generation of graphs, normalizing flows, dequantization by Gaussian noise, fine-tuning based on reinforcement learning for molecular property optimization, and validity constrained sampling). Most of these techniques are well-established either for data generation with normalizing flows or for molecular graph generation and the novelty lies in the combination of these building blocks into a framework. Training can be carried out in parallel over the sequential generation process, as no hidden states with sequential dependency are assumed (unlike a regular RNN). Experimental validation is carried out on a standard ZINC molecule generation benchmark (graphs with up to 48 nodes) and the reported metrics are competitive with recent related work.\\n\\nOverall, the paper is very well written, nicely structured and addresses an important problem. The framework in its entirety is novel, but the building blocks of the proposed framework are established in prior work and the idea of using normalizing flows for graph generation has been proposed in earlier work (see [1] and [2]). Nonetheless, I find the paper relevant for an ICLR audience and the quality of execution and presentation of the paper is good.\\n\\nI have two major (technical) concerns with the flow-based formulation used in the paper with regards to order-invariance and the utilized de-quantization scheme.\\n* Order-invariance: The paper states that the \\u201cexact density of each molecule can be efficiently computed by the change-of-variables formula\\u201d. This seems to be incorrect, as the exact density is a product over all order-specific densities for all possible permutations in which the molecular graph can be represented. The change-of-variables formula does not provide an efficient way to circumvent this order-invariance issue, at least not in the way it is presented in the paper. Even when using BFS-ordered representations, the subspace of possible permutations is still typically too large to allow for efficient evaluation of the exact density. I suspect that the authors assume a canonical ordering of the graph representations, which is a strong assumption, but does not seem to be mentioned in the paper. How is the canonical ordering chosen? How is local structural symmetry broken in a consistent manner?\\n* De-quantization: The de-quantization scheme used in this paper seems to be ill-suited for categorical variables. What motivates the use of adding Gaussian noise to categorical (one-hot encoded) variables, other than that it seems to work OK in the reported experiments? Adding Gaussian noise in this way can move these variables outside of the probability simplex \\u2014 is this a valid technique in the framework of normalizing flows? Adding Gaussian noise makes sense if the data represents quantized continuous data, e.g. bit-quantized image data, but I have concerns about the validity of using this method for categorical data (both edge type and node features are categorical in this application). 
Other comparable generative models for graph-structured data use a relaxed discrete distribution (concrete / Gumbel softmax), e.g. in MolGAN [De Cao & Kipf (2018)], to address this issue \\u2014 would this also be applicable here?\\n\\nI think that these two issues will have to be addressed before this paper can be considered for publication, and I recommend a weak reject at this point.\\n\\n[1] Madhawa et al., GraphNVP: An invertible flow model for generating molecular graphs. (2019)\\n[2] Liu et al., Graph Normalizing Flows. (2019) \\u2014 not cited\", \"update\": \"My two main technical concerns have been addressed in the rebuttal and I think that the revised version of the paper can be accepted to ICLR (my comment w.r.t. novelty still holds and hence I recommend 'weak accept').\"}"
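To make the two options in this exchange concrete, here is a small sketch contrasting Gaussian dequantization of a one-hot code (which, as the reviewer notes, can leave the probability simplex) with a Gumbel-softmax/concrete relaxation (which stays on it). The noise scale and temperature below are assumed values, not taken from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_dequantize(one_hot, sigma=0.6):
    # Real-valued noise added to the discrete code; entries may go negative
    # or exceed 1, i.e. the sample can leave the probability simplex.
    return one_hot + sigma * rng.normal(size=one_hot.shape)

def gumbel_softmax(logits, tau=1.0):
    # Concrete / Gumbel-softmax relaxation: samples are positive and sum to 1.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = np.exp((logits + g) / tau)
    return y / y.sum()

one_hot = np.eye(4)[2]
print(gaussian_dequantize(one_hot))            # may fall outside the simplex
print(gumbel_softmax(np.log(one_hot + 1e-6)))  # stays on the simplex
```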
]
} |
r1lczkHKPr | Off-policy Multi-step Q-learning | [
"Gabriel Kalweit",
"Maria Huegle",
"Joschka Boedecker"
] | In the past few years, off-policy reinforcement learning methods have shown promising results in their application for robot control. Deep Q-learning, however, still suffers from poor data-efficiency which is limiting with regard to real-world applications. We follow the idea of multi-step TD-learning to enhance data-efficiency while remaining off-policy by proposing two novel Temporal-Difference formulations: (1) Truncated Q-functions which represent the return for the first n steps of a policy rollout and (2) Shifted Q-functions, acting as the farsighted return after this truncated rollout. We prove that the combination of these short- and long-term predictions is a representation of the full return, leading to the Composite Q-learning algorithm. We show the efficacy of Composite Q-learning in the tabular case and compare our approach in the function-approximation setting with TD3, Model-based Value Expansion and TD3(Delta), which we introduce as an off-policy variant of TD(Delta). We show on three simulated robot tasks that Composite TD3 outperforms TD3 as well as state-of-the-art off-policy multi-step approaches in terms of data-efficiency. | [
"Multi-step Learning",
"Off-policy Learning",
"Q-learning"
] | Reject | https://openreview.net/pdf?id=r1lczkHKPr | https://openreview.net/forum?id=r1lczkHKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"W_ekdLMvLN",
"BJgr-K_hjH",
"BJlJWPPhjS",
"S1xhsuI2jB",
"HJlGOtH2ir",
"B1x5KM73jB",
"r1gEACM2iH",
"r1gpMq6sjr",
"SyxsEmLioS",
"BygikvHjjr",
"SylVwkXisH",
"rJlDnDzooB",
"Byx7x8zojS",
"HygzprGoiB",
"HklysSGssr",
"HkgVopXatr",
"HyxxejAhtS",
"BJlkunYoKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727336,
1573845245497,
1573840631420,
1573836964132,
1573833066272,
1573823105520,
1573822156263,
1573800468885,
1573770035330,
1573766882968,
1573756764364,
1573754798834,
1573754347020,
1573754297956,
1573754263099,
1571794332057,
1571773160470,
1571687527075
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1590/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1590/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1590/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1590/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1590/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose TD updates for Truncated Q-functions and Shifted Q-functions, reflecting short- and long-term predictions, respectively. They show that they can be combined to form an estimate of the full-return, leading to a Composite Q-learning algorithm. They claim to demonstrated improved data-efficiency in the tabular setting and on three simulated robot tasks.\\n\\nAll of the reviewers found the ideas in the paper interesting, however, based on the issues raised by Reviewer 3, everyone agreed that substantial revisions to the paper are necessary to properly incorporate the new results. As a result, I am recommending rejection for this submission at this time. I encourage the authors to incorporate the feedback from the reviewers, and believe that after that is done, the paper will be a strong submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Re: Re: Re: Re: Re: Re: Response to Reviewer #3\", \"comment\": \"Thank you for the revision- these indeed paint a more complete picture of the algorithm. I think it might still be worthwhile to justify the choice of fixing the step size of 1e-3 for the values representing the full return, as with many TD methods, step size sensitivity can considerably vary. Acknowledging the exact decomposition of one-step Q-learning though, a quick argument for this choice would be that composite Q-learning could never do worse than one-step Q-learning if it can be set to match it exactly. :)\\n\\nWhile most of this has been about the tabular domain, this is where things are conceptually clear and can be quickly, and reasonably confidently, teased apart to (1) justify the choice of the larger step size used in Figure 2, and (2) make a case for what might be happening in the deep RL setting (or what to try next!). For example, drawing intuition from the tabular results, it may be possible that the architectural choice of outputting the value function components components at earlier hidden layers are interpretable as implicitly running things at different time scales- changes to earlier hidden layers might take longer for something further down the network to adapt. However, this is harder to verify due to other dependencies like hidden layer sizes, activation functions, optimizer, etc.\"}",
"{\"title\": \"Re: Re: Re: Re: Re: Re: Response to Reviewer #3\", \"comment\": \"We updated Section 5.1 and included a new evaluation of the learning rates of the Truncated Q-functions. We again would like the reviewer for the fruitful discussions.\"}",
"{\"title\": \"Re: Re: Re: Re: Re: Re: Response to Reviewer #3\", \"comment\": \"We are grateful for the very much non-standard and constructive efforts of the reviewer.\\n\\n\\\"Thank you for generating the additional figure. I apologize for any miscommunication, but when suggesting that the faster shifting is why it is being sped up, I wasn't referring to \\\"Shifted Q-learning\\\" where one is learning the shifted target alone without any interplay in the composition. I was referring to faster shifting *within* composite Q-learning, the green curves in the figure, and not the purple ones. As a side note, I think the figure should perhaps show \\\"Composite Q-learning (10\\u22123)\\\" or \\\"Q-learning\\\" with dashed lines, as \\\"Composite Q-learning (10\\u22123)\\\" is not visible from being directly beneath Q-learning.\\\"\\n\\nWe will change this for the camera-ready.\\n\\n\\\"Like the new figure suggests with the green lines in comparison with the yellow line, as only the step size of the shifted action-values is being varied in composite Q-learning, the faster shifting *within* composite Q-learning is what's speeding it up. Something which can further attribute the benefits to the faster shifting is that if one were to use a larger step size for the truncated values (while keeping the step sizes of the composed values and shifted values at 1e-3), it doesn't have nearly as large an improvement, and sometimes does *worse* from plateauing at a poor steady-state error!\\\"\\n\\nWe thank the reviewer for the input. We are currently evaluating the counterpart, changing the learning rates for the Truncated Q-functions while keeping the learning rates for the full Q-estimate and the Shifted Q-functions fixed. Since the deadline of the rebuttal is coming close, we will provide the results in the camera-ready latest.\\n\\n\\\"Enabling and using this larger \\\"internal\\\" step size for the shifting component (within the interplay) appears to be the key benefit of the composition, a positive result supported by your own results and figures. This result currently exists in the text as an acknowledgement that a step size of 1e-2 was used for the shifted values. The text lacks an explanation for why one might want to use a larger step size for this component, and why it can tolerate using a larger step size in general, as a larger step size for the truncated values don't provide nearly as much benefit. With the new figure, some results are there for supporting this choice, but the text should discuss and emphasize this!\\\"\\n\\nWe agree and these points were exactly what we tried to illustrate with our experiments as well. The benefit of the interplay of the components depends on having the two decoupled parts learned on two time scales via different learning rates or generalization (as in the deep RL experiments). We are happy to see the large common ground of understanding of the problems and will try to add further clarifications in the camera-ready version.\"}",
"{\"title\": \"Re: Re: Re: Re: Re: Response to Reviewer #3\", \"comment\": \"\\\"We added an analysis of the learning rate for the Shifted Q-functions in Composite Q-learning and Shifted Q-learning to the discussion of the tabular setting. We denote by \\\"Shifted Q-learning\\\" a definition of the Q-target, where the long-term value is shifted by one time step (i.e. no approximate n-step return). One can see that shifting alone does not lead to faster convergence, even when setting the learning rate of the Shifted Q-function to 1. In fact, shifting the value in time is slowing down convergence.\\\"\\n\\nThank you for generating the additional figure. I apologize for any miscommunication, but when suggesting that the faster shifting is why it is being sped up, I wasn't referring to \\\"Shifted Q-learning\\\" where one is learning the shifted target alone without any interplay in the composition. I was referring to faster shifting *within* composite Q-learning, the green curves in the figure, and not the purple ones. As a side note, I think the figure should perhaps show \\\"Composite Q-learning (10\\u22123)\\\" or \\\"Q-learning\\\" with dashed lines, as \\\"Composite Q-learning (10\\u22123)\\\" is not visible from being directly beneath Q-learning.\\n\\n\\\"We hope that this experiment convinces the reviewer that the faster convergence can only be explained by the interplay of Shifted and Truncated Q-functions.\\\"\\n\\nLike the new figure suggests with the green lines in comparison with the yellow line, as only the step size of the shifted action-values is being varied in composite Q-learning, the faster shifting *within* composite Q-learning is what's speeding it up. Something which can further attribute the benefits to the faster shifting is that if one were to use a larger step size for the truncated values (while keeping the step sizes of the composed values and shifted values at 1e-3), it doesn't have nearly as large an improvement, and sometimes does *worse* from plateauing at a poor steady-state error!\\n\\n\\\"Composite Q-learning (10^\\u22123)\\\" is also using the composition, but from the exact equivalence when using the same step size for all value functions within the composition, the composition and its interplay does not readily improve any data efficiency or decrease the bias in the estimate. This is what is meant by the composition alone not being responsible for speeding things up, and not readily taking advantage of truncated action-values converging quicker. The bias isn't readily being decreased because the truncated values have less-biased targets, but because an internal component is using a larger step size, which decreases bias quicker than a smaller step size, depending on the variability in the target. The result using 1e-3 for all components contradicts that the benefit is \\\"only due to the combination of short- and long-term predictions,\\\" but is in support that it is due to a combination of slow and fast timescale predictions within the composition. Put another way, an intuition might be that faster shifting is what lets one compose it with the truncated action-values earlier than usual, and from then on accelerates via closed-loop feedback. \\n\\nEnabling and using this larger \\\"internal\\\" step size for the shifting component (within the interplay) appears to be the key benefit of the composition, a positive result supported by your own results and figures. 
This result currently exists in the text as an acknowledgement that a step size of 1e-2 was used for the shifted values. The text lacks an explanation for why one might want to use a larger step size for this component, and why it can tolerate using a larger step size in general, as a larger step size for the truncated values don't provide nearly as much benefit. With the new figure, some results are there for supporting this choice, but the text should discuss and emphasize this!\"}",
"{\"title\": \"Re: Thanks\", \"comment\": \"\\\"I appreciate the revisions to the main paper, and think these improve the overall quality.\\\"\\n\\nWe thank the reviewer for the positive feedback.\\n\\n\\\"After looking through the concerns raised by reviewer one, I'm much less certain of the actual novelty of the contributions.\\\"\\n\\nAs explained in detail in the discussion with Reviewer 3, the original TD-paper indeed includes a general description of predictions for a fixed horizon via TD, however we were the first to precisely formalize said predictions in an off-policy setting. We acknowledge the work of De Asis et al. in the current revision. We would like to point out, however, that we presented initial results in a workshop paper of ours at the RSS 2019 Workshop of Combining Learning and Reasoning \\u2013 Towards Human-Level Robot Intelligence (https://sites.google.com/view/rss19-learning-and-reasoning) in June 2019, prior to the upload of De Asis et al. The workshop paper is uploaded on their website and also not mentioned in the paper of De Asis et al. The anonymous workshop paper can be found at: https://gofile.io/?c=2Omxmi\\n\\nFurthermore, in our definition of Truncated Q-functions, the action-values are w.r.t. to the full return, which is only possible due to the completion by the Shifted Q-function and a major difference to prior work. The main contribution of the paper is the analysis of the interplay between short- and long-term predictions.\\n\\n\\\"There are also lingering concerns over the claims and experiments run in this paper. Specifically, I had made the assumption you had optimized the parameters for the motivating markov chain and given reviewer one's observations using the code provided this result is much less meaningful than I had originally thought. I had in fact missed that the shifted values were using a separate learning rate, which I agree with reviewer one should be of import in the main paper.\\\"\\n\\nWe added an analysis of different learning rates for the Shifted Q-functions in Composite Q-learning and Shifted Q-learning in Section 5. \\\"Shifted Q-learning\\\" denotes a definition of the Q-target, where the long-term value is shifted by one time step (i.e. no approximate n-step return). Shifting the value in time alone is slowing down convergence and can be at most as fast as vanilla Q-learning (with a learning rate of 1.0). We hope that this experiment convincingly shows that the speed up is only due to the combination of short- and long-term predictions.\\n\\nFurthermore, we added initial results of a shallow architecture of the Q-network (no multi-layered structure) for the Walker2d-v2 environment to the appendix.\"}",
"{\"title\": \"Re: Re: Re: Re: Response to Reviewer #3\", \"comment\": \"\\\"From setting the step size of the shifted value functions to 1e-3, it can be shown that the faster shifting *is* why it is being sped up. The provided code can be run with a step size of 1e-2 for the shifted value functions (what's presented in the paper), and 1e-3 for the shifted value functions, and see that that change is what's making it learn faster. My guess as to why would be that because shifted values treat the immediate reward as 0, they only have to average out the variability in the next state when estimating the expectation, and thus can tolerate operating at a quicker timescale.\\n\\nThe composition is what lets the full return take advantage of this faster shifting, but the composition alone when using 1e-3 for *every* value function does not speed it up. Allowing for this flexibility is a real benefit of the method even in a simple setting, but it needs to be shown with a more focused analysis (either theoretical or empirical).\\\"\\n\\nWe would like to thank the reviewer for the effort in reviewing this paper and the fruitful discussion and suggestions.\\n\\nWe added an analysis of the learning rate for the Shifted Q-functions in Composite Q-learning and Shifted Q-learning to the discussion of the tabular setting. We denote by \\\"Shifted Q-learning\\\" a definition of the Q-target, where the long-term value is shifted by one time step (i.e. no approximate n-step return). One can see that shifting alone does not lead to faster convergence, even when setting the learning rate of the Shifted Q-function to 1. In fact, shifting the value in time is slowing down convergence. We hope that this experiment convinces the reviewer that the faster convergence can only be explained by the interplay of Shifted and Truncated Q-functions.\\n\\n\\\"I appreciate this revision, but a concern is that because these are the conditions where one could expect a benefit, this paper should focus its analysis on this- the part about expecting a benefit from the shifted values being set to a higher value isn't discussed anywhere.\\\"\\n\\nBesides the new evaluation in Section 5.1, we further added an initial comparison of our architecture to a shallow network in the appendix.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you for the response.\\n\\nI appreciate the revisions to the main paper, and think these improve the overall quality.\\n\\nAfter looking through the concerns raised by reviewer one, I'm much less certain of the actual novelty of the contributions. There are also lingering concerns over the claims and experiments run in this paper. Specifically, I had made the assumption you had optimized the parameters for the motivating markov chain and given reviewer one's observations using the code provided this result is much less meaningful than I had originally thought. I had in fact missed that the shifted values were using a separate learning rate, which I agree with reviewer one should be of import in the main paper.\\n\\nI'm still thinking on how the other reviews effects my overall opinion of this paper. I believe the idea is worth thinking about, and could be natural auxiliary tasks (as mentioned by reviewer 1). I also think this may need some more careful study to understand what components are improving performance, and there needs to be a more complete parameter study.\"}",
"{\"title\": \"Re: Re: Re: Response to Reviewer #3\", \"comment\": \"\\\"Shifting alone would not lead to a speed up... There still is a significant difference in convergence for the given fixed learning rate which can only be explained by the combination of Truncated and Shifted Q-values\\\"\\n\\nFrom setting the step size of the shifted value functions to 1e-3, it can be shown that the faster shifting *is* why it is being sped up. The provided code can be run with a step size of 1e-2 for the shifted value functions (what's presented in the paper), and 1e-3 for the shifted value functions, and see that that change is what's making it learn faster. My guess as to why would be that because shifted values treat the immediate reward as 0, they only have to average out the variability in the next state when estimating the expectation, and thus can tolerate operating at a quicker timescale.\\n\\nThe composition is what lets the full return take advantage of this faster shifting, but the composition alone when using 1e-3 for *every* value function does not speed it up. Allowing for this flexibility is a real benefit of the method even in a simple setting, but it needs to be shown with a more focused analysis (either theoretical or empirical).\\n\\n\\\"With the difference of being grounded by the unbiased immediate reward as a target for Tr_0. The targets in Composite Q-learning represent a sum of partial sums of length n, each with a lower bias (according to the lower row in Fig. 4) -- in contrast to vanilla Q-learning which bootstraps from the long-term prediction for every time step.\\\"\\n\\nBeyond the 1-step horizon, these are biased estimates of the n-step sums of rewards (which accumulate bias along the bootstrapping chain), and will further add in the bias of the shifted values. Without parameter sharing, and with the same step size for all value functions, these not only add up to be equally biased to running one-step Q-learning, but add up to the exact same update as one-step Q-learning (i.e., as if one were bootstrapping from the long-term prediction on every time step).\\n\\n\\\"The approach, however, does have the limitation, that one can only expect a benefit, if there is generalization among states (as in the TD3-experiments with the given multi-layered architecture) or if the learning rate for the Shifted Q-function can be set to a higher value. This is indeed acknowledged in the current revision.\\\"\\n\\nI appreciate this revision, but a concern is that because these are the conditions where one could expect a benefit, this paper should focus its analysis on this- the part about expecting a benefit from the shifted values being set to a higher value isn't discussed anywhere. As a side note that may be of interest, beyond expecting benefits from generalization, another possibility in the deep RL setting that's reasonably acknowledged in the literature is the representation learning benefits from predicting many relevant outputs to a task [1].\\n\\n[1] https://arxiv.org/abs/1611.05397\"}",
"{\"title\": \"Re: Re: Response to Reviewer #3\", \"comment\": \"\\\"Thank you for the clarification, and the timeline. Despite that, Sutton's original TD paper still describes the overall procedure of the consecutive bootstrapping for estimating these quantities. Of note, De Asis et al. (2019) still provide analysis motivating the use of truncating the horizon with function approximation. The completion of the return is indeed a major difference, but what makes it possibly lose the theoretical benefits of the truncated values.\\\"\\n\\nIt does in a very general manner, which is now acknowledged in related work. The benefits of bootstrapping from different greedy policies for different horizons, as in De Asis et al., come at the cost of not necessarily being optimal w.r.t. the complete task. De Asis et al. dismiss this drawback by stating that approximations always suffer from impreciseness: \\\"For a final horizon H << infinity, there may be concerns about suboptimal control. We explore this empirically in Section 5. For now, we note that optimality is never guaranteed when values are approximated.\\\" [1]\\n\\n\\\"As to why this is not readily a fair comparison, because composite Q-learning exactly decomposes one-step Q-learning, it's left to justify that the extra information available to the agent can be used to learn quicker. One can use the code that's provided and find a larger step size for vanilla Q-learning which makes it outperform *every method* presented in the figure, that it's not definitively shown that the composition is helping- I do believe that it can be shown, but the results as presented do not show this.\\\"\\n\\nThe main difficulty of the given MDP is simply the horizon and it is designed to be that way on purpose -- to rule out any other source of difference except for the horizon. Even though we do understand the point being made, the full Q-values are still updated with the same learning rate for all approaches, regardless of their given targets. Shifting alone would not lead to a speed up. This comes only due to the combination with the Truncated Q-function. The step size could be set to almost 1 for the given MDP and all approaches, which is due to the simplicity of the problem. There still is a significant difference in convergence for the given fixed learning rate which can only be explained by the combination of Truncated and Shifted Q-values. The approach, however, does have the limitation, that one can only expect a benefit, if there is generalization among states (as in the TD3-experiments with the given multi-layered architecture) or if the learning rate for the Shifted Q-function can be set to a higher value. This is indeed acknowledged in the current revision.\\n\\n\\\"The lower TD errors for the truncated horizons are not indicative of the bias of the action-values corresponding to the complete return, which is what's being used for decision making. [...] It does approximate an n-step return, in a sense that *one-step* Q-learning approximates an *infinite-step* return.\\\"\\n\\nWith the difference of being grounded by the unbiased immediate reward as a target for Tr_0. The targets in Composite Q-learning represent a sum of partial sums of length n, each with a lower bias (according to the lower row in Fig. 4) -- in contrast to vanilla Q-learning which bootstraps from the long-term prediction for every time step.\\n\\n[1] Fixed-Horizon Temporal Difference Methods for Stable Reinforcement Learning, De Asis et al., 2019. 
https://arxiv.org/abs/1909.03906\"}",
"{\"title\": \"Re: Response to Reviewer #3\", \"comment\": \"\\\"Furthermore, the formulation in FHTD has a small yet critical difference to the truncated formulation in our submission. The maximizing action in FHTD is according to the truncated value-function of the former step, not w.r.t. the full return as in our work. Taking the full return is only possible due to the completion based on the Shifted Q-function and is a major difference to prior work. As suggested by the reviewer, however, we added a more precise formulation of the contributions in the abstract, in the introduction and in related work.\\\"\\n\\nThank you for the clarification, and the timeline. Despite that, Sutton's original TD paper still describes the overall procedure of the consecutive bootstrapping for estimating these quantities. Of note, De Asis et al. (2019) still provide analysis motivating the use of truncating the horizon with function approximation. The completion of the return is indeed a major difference, but what makes it possibly lose the theoretical benefits of the truncated values.\\n\\n\\\"Issue 3) Shifting the value function to overcome the necessity of a model indeed imposes a bottleneck which has to be tackled by either generalization (as in the TD3 case) or by adjusting the learning rate. We would like to point out that the full value function is still updated with the same step size to its given target in all approaches. The Shifted Q-function will always be updated slower than the true value function which is why we still believe this to be a fair comparison. We added an explanation in Sections 4 and 5.\\\"\\n\\nAs to why this is not readily a fair comparison, because composite Q-learning exactly decomposes one-step Q-learning, it's left to justify that the extra information available to the agent can be used to learn quicker. One can use the code that's provided and find a larger step size for vanilla Q-learning which makes it outperform *every method* presented in the figure, that it's not definitively shown that the composition is helping- I do believe that it can be shown, but the results as presented do not show this.\\n\\n\\\"Issue 4) We would like to refer to the lower row of Fig. 4, where we compare the different TD-errors over time. Please note, that the TD-errors for the truncated Q-functions are lower across the whole of training and decrease with shorter horizons (being the least for Tr_0). Therefore, the targets for some horizon h, bootstrapping from horizon h-1, are less biased -- grounded by the target of Tr_0, which has zero bias. While we can expect the Shifted Q-approximation to be similarly biased as the full Q-approximation, the first part of the target for the full Q-estimation, which has an even higher weight due to discounting, is less biased. We can therefore consider Composite Q-learning a bias reduction technique. The price is an increase in variance which is the reason for our novel regularization technique.\\\"\\n\\nThe lower TD errors for the truncated horizons are not indicative of the bias of the action-values corresponding to the complete return, which is what's being used for decision making. They have a different target they are estimating, and while those action-values themselves have lower-biased estimators, they *exactly* decompose the bias of the estimate for the complete return. As such, the bias of the complete return has not been decreased. 
If one were to run the provided code under the same random seeds, with the same step size across all value functions, it can be seen that the curves are *exactly* equivalent (apart from perhaps floating point errors).\\n\\n\\\"Suggestion 3) We thank the reviewer for the suggestion and consider the hyperparameter optimization of learning rates as an extension for the camera-ready version. However, our main focus was not on maximum performance, but on the analysis of the structure of Q-functions.\\\"\\n\\nWhen one is claiming that an algorithm is better than another, then hyperparameter optimization can't really be avoided. However, it's less about optimization and more so that such analysis of the structure requires analyzing how hyperparameters interplay. Part of this suggestion is that the analysis of the structure seems to be missing, as it failed to acknowledge (1) the exact decomposition of one-step TD, and (2) justifying the use of a larger step size for the shifted action-values.\\n\\n\\\"Suggestion 4) We agree that the single term \\\"Multi-step\\\" alone usually refers to an unbiased sum of real consecutive rewards in the literature. Since our approach includes off-policy approximations of n-step returns within target calculation for Q-learning (greedy target policy), we argue that Composite Q-learning belongs to this area of research -- also with respect to the reasons outlined in detail above.\\\"\\n\\nIt does approximate an n-step return, in a sense that *one-step* Q-learning approximates an *infinite-step* return. This is misleading as one can still use actual multi-step TD methods to estimate the truncated and shifted values.\"}",
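The equivalence claim in this thread is easy to check numerically. Below is a hedged toy reconstruction (ours, not the authors' code): a single-action chain with synchronous sweeps, truncated values Tr_h, shifted values Sh_j with Sh_0 pinned to Q, and the composed target r + gamma * (Tr_{n-2} + Sh_{n-1}) at the successor state. The index bookkeeping follows this discussion and is not necessarily the paper's exact formulation. With one shared step size, the composed estimate tracks one-step Q-learning to floating-point precision; raising only alpha_sh reproduces the speed-up debated above:

```python
import numpy as np

N, n, gamma = 6, 3, 0.9
alpha_q = alpha_tr = alpha_sh = 0.1    # raise alpha_sh alone to mimic the 1e-2 choice

r = np.zeros(N); r[N - 2] = 1.0        # reward on the transition into the last state
nxt = lambda v: np.append(v[1:], 0.0)  # successor value; terminal value is zero

Q1 = np.zeros(N)       # plain one-step Q-learning baseline
Q = np.zeros(N)        # composite estimate of the full return
Tr = np.zeros((n, N))  # truncated values Tr_0 .. Tr_{n-1}
Sh = np.zeros((n, N))  # shifted values; Sh[0] is pinned to Q each sweep

for _ in range(300):
    Sh[0] = Q
    new_Tr, new_Sh = Tr.copy(), Sh.copy()
    new_Tr[0] += alpha_tr * (r - Tr[0])
    for h in range(1, n):
        new_Tr[h] += alpha_tr * (r + gamma * nxt(Tr[h - 1]) - Tr[h])
        new_Sh[h] += alpha_sh * (gamma * nxt(Sh[h - 1]) - Sh[h])
    Q += alpha_q * (r + gamma * (nxt(Tr[n - 2]) + nxt(Sh[n - 1])) - Q)
    Q1 += alpha_q * (r + gamma * nxt(Q1) - Q1)
    Tr, Sh = new_Tr, new_Sh

print(np.max(np.abs(Q - Q1)))  # near machine precision with one shared step size
```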
"{\"title\": \"Overview of changes in the new revision\", \"comment\": \"We would like to thank all reviewers for the constructive feedback and for their efforts in reviewing this paper.\\n\\nWe uploaded a new revision including most suggestions of the reviewers.\", \"the_main_changes_are\": \"1) A more precise formulation of the contributions\\n2) A more detailed explanation for the results in the tabular setting and average performance over multiple runs\\n3) Variance measures for the results in Tables 2 and 3\\n4) More experiments with different settings of the regularization weight and a shallow Q-network architecture\\n5) A new evaluation of different learning rates for the Shifted Q-functions in the tabular setting\\n6) A new evaluation of different learning rates for the Truncated Q-functions in the tabular setting\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"First of all, we would like to thank the reviewer for the extensive and valuable feedback.\\n\\nSuggestion 1) While we acknowledge the work of De Asis et al., we would like to mention that we presented initial results in a workshop paper of ours at the RSS 2019 Workshop of Combining Learning and Reasoning \\u2013 Towards Human-Level Robot Intelligence (https://sites.google.com/view/rss19-learning-and-reasoning) in June 2019, prior to the upload of De Asis et al. The workshop paper is uploaded on their website and also not mentioned in the paper of De Asis et al. The anonymous workshop paper can be found at: https://gofile.io/?c=2Omxmi\\n\\nFurthermore, the formulation in FHTD has a small yet critical difference to the truncated formulation in our submission. The maximizing action in FHTD is according to the truncated value-function of the former step, not w.r.t. the full return as in our work. Taking the full return is only possible due to the completion based on the Shifted Q-function and is a major difference to prior work. As suggested by the reviewer, however, we added a more precise formulation of the contributions in the abstract, in the introduction and in related work.\\n\\nSuggestion 2) We added a first comparison between our architecture and a shallow Composite Q-network for the Walker2d-v2 environment in the appendix. We will add results for the other environments in the camera-ready version.\\n\\nIssue 3) Shifting the value function to overcome the necessity of a model indeed imposes a bottleneck which has to be tackled by either generalization (as in the TD3 case) or by adjusting the learning rate. We would like to point out that the full value function is still updated with the same step size to its given target in all approaches. The Shifted Q-function will always be updated slower than the true value function which is why we still believe this to be a fair comparison. We added an explanation in Sections 4 and 5.\\n\\nIssue 4) We would like to refer to the lower row of Fig. 4, where we compare the different TD-errors over time. Please note, that the TD-errors for the truncated Q-functions are lower across the whole of training and decrease with shorter horizons (being the least for Tr_0). Therefore, the targets for some horizon h, bootstrapping from horizon h-1, are less biased -- grounded by the target of Tr_0, which has zero bias. While we can expect the Shifted Q-approximation to be similarly biased as the full Q-approximation, the first part of the target for the full Q-estimation, which has an even higher weight due to discounting, is less biased. We can therefore consider Composite Q-learning a bias reduction technique. The price is an increase in variance which is the reason for our novel regularization technique.\\n\\nSuggestion 3) We thank the reviewer for the suggestion and consider the hyperparameter optimization of learning rates as an extension for the camera-ready version. However, our main focus was not on maximum performance, but on the analysis of the structure of Q-functions.\\n\\nSuggestion 4) We agree that the single term \\\"Multi-step\\\" alone usually refers to an unbiased sum of real consecutive rewards in the literature. Since our approach includes off-policy approximations of n-step returns within target calculation for Q-learning (greedy target policy), we argue that Composite Q-learning belongs to this area of research -- also with respect to the reasons outlined in detail above.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We would like to thank the reviewer for the detailed comments. We included most suggestions in the new revision.\", \"c1\": \"We now average over 10 runs and provide two standard deviations and the results of a significance test. The results are highly significant. The submission is updated accordingly.\", \"c2\": \"The lack of clarity was unfortunate. We updated this section in the new version.\", \"s3\": \"We added an additional plot for the Shifted and Truncated Q-values. Since the main difficulty in this task is the temporal horizon, there is no meaningful difference between states.\", \"q4\": \"We added the default settings of TD3 to the text. While we agree that hyperparameter optimization is indeed very important to get to the full potential of an algorithm, the underlying algorithm in this paper is, in all cases, the same: TD3. The main difference between the approaches lies in the target calculation for Q-learning. Since we wanted to evaluate the influence of the structure of Q-functions on data-efficiency, we did not change crucial hyperparameters such as the learning rate or target updates, since this would lead to another source of potential differences. We assumed hyperparameters for TD3 to be optimized already as we took the settings of the original paper (the same holds for the discount-factor schedule in TD(Delta) or the rollout horizon of MVE-TD3), which we then used for evaluation. However, we evaluated the performance of the baselines for the extended capacity we had to use for the Composite Q-network in the appendix.\", \"s5\": \"Yes, our code is based on the original implementation of TD3 and we acknowleged this in the code submission. We now included a remark also in the text.\", \"q6\": \"We now provide variance measures in the tables and further included individual comparisons in the appendix.\", \"c7\": \"We updated the submission accordingly.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We appreciate the constructive feedback and detailed suggestions. We included most of them in the new revision.\\n\\n1) \\\"It would be useful to include error measurements (perhaps the standard error over runs) to Table 2 to see the statistical significance of those results.\\\"\\n\\nWe included variance measures in Table 2 and 3.\\n\\n2) \\\"Running a larger number of parameter settings would help to establish a clear pattern, or running each parameter setting for more independent runs could have allowed more significant results.\\\"\\n\\nWe included boxplots w.r.t. the area under the learning curve to give a better visualization of the variances. We further added two more settings of the regularization weight.\\n\\n3) \\\"For the experiment in Figure 2, why not include multi-step Q-learning with importance sampling corrections on the later steps?\\\"\\n\\nWithin an off-policy learning regime based on deterministic policies, it is unclear how to include importance sampling in a multi-step setting. In the most naive way, the importance sampling weight can either become 0 or 1 and should be mostly 0 in the later course of learning as the target policy progresses. We therefore did not add importance sampling as a baseline here.\\n\\n4) \\\"Additionally, the caption does not well explain what the four green lines at the top of the plot represent. It was difficult to interpret the plot on the first pass of the paper because of this omission. Regardless, I find the results in Figure 2 to be otherwise intriguing.\\\"\\n\\nWe thank the reviewer for the positive feedback and updated the submission accordingly.\\n\\n5) \\\"Finally, the choice of meta-parameters in this paper could negatively impact results in favor of the competitor algorithms.\\\"\\n\\nWe would like to thank the reviewer for the suggestion and agree that hyperparameter optimization could be of great use here. Within the scope of the paper, however, we were not aiming at maximum performance. We wanted to analyze the influence of the structure of Q-functions within target calculation. We therefore kept crucial parameters, such as the target update and the learning rate, the same, since it would be even harder to distinguish the influence of the different methodological choices.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes the Composite Q-learning algorithm, which combines the algorithmic ideas of using compositional TD methods to truncate the horizon of the return, as well as shift a return in time. They claim that this approach will improve the method's data efficiency relative to standard Q-learning. They demonstrate its performance relative to Q-learning in a tabular domain, as well as in deep RL domains which use the compositional idea as an off-policy critic.\\n\\nOverall, the paper has interesting algorithmic ideas, but there are critical issues in the evaluation and resulting claims being made. Based on this, I am recommending rejection of the paper. I do think there is value in the compositional idea, but for different reasons outlined in the suggestions.\", \"issues\": \"1) The truncation of the horizon is not a novel TD formulation, as claimed in the paper. This algorithm is described in the original TD paper (Sutton, 1988) as \\\"prediction by a fixed interval.\\\" Sutton's group further has a recent paper following up on the fixed-horizon TD (FHTD) idea (De Asis et al., 2019), introducing an off-policy control variant of it.\\n\\n2) Based on Theorem 1 of the TD(\\\\Delta) paper (Romoff et al., 2019), as well as the sample-complexity arguments from the FHTD paper, this compositional algorithm is *exactly* equivalent to standard TD in the tabular setting (and function approximation if value functions don't share parameters), update for update, assuming that: (1) each value function is initialized identically, and (2) the same step size is used for each value function. An intuition for why is because the accuracy of the shifted action-values depends on the accuracy of the standard TD estimate, and the TD errors can be shown to exactly decompose that of standard TD. Under this, there is no ready improvement in data efficiency due to the fixed-horizon value functions converging quicker.\\n\\n3) The results in the tabular setting seem to contradict what I described in Issue 2, because compositional Q-learning as presented did converge quicker than standard Q-learning. However, this is misleading in that the other methods used a step size of 1e-3, but the step size of the shifted value functions used, without explanation, a larger step size of 1e-2. The reason for the improved performance is that these values had a step size an order of magnitude larger than the remaining ones, and if one were to use the same step size across all value functions, it would have matched Q-learning exactly. This exact decomposition is supported by how the fixed-horizon value estimates follow Q-learning's curves exactly for the first h - 1 updates (and will converge to Q-learning's curve if h approaches infinity), and can further be verified by running the provided code with a step size of 1e-3 for the shifted value functions. 
Without acknowledging the equivalence when using a consistent step size across value functions, as well as sweeping over step sizes for each method, the results don't present a fair comparison and significantly misrepresent compositional TD methods.\\n\\n4) On this observation that it is an exact decomposition of TD, it is particularly an exact decomposition of *one-step* TD, as one-step TD errors are used in the fixed-horizon and shifted value function estimates. This makes it equally biased to a one-step method, and is inconsistent with the use of \\\"multi-step\\\" learning in the literature where information across several time steps is included in the estimate of the return. Truncating and shifting things in time can be contextualized as a form of time-dependent discounting, and adjusting the discount rate isn't generally viewed as performing multi-step TD.\\n\\n5) Based on the above, the benefit in the deep RL setting is not convincingly due to what is claimed (as parameters are shared, and a consistent step size is used in the optimizer). Some possible reasons might include the architectural choices in how the network represented the decomposition, as well as the representation learning benefits of predicting many relevant outputs to a task.\", \"suggestions\": \"1) The precise novelty of the work can be clarified, as the fixed-horizon TD formulation dates back to Sutton (1988), and has been extensively studied in De Asis et al. (2019). As far as I'm aware, there's novelty in the idea of shifting value functions, reconstructing the full return from decomposed value functions, and introducing a penalty to the loss based on inconsistencies in the value estimates.\\n\\n2) The motivation and claims of the paper should be revised, as the claimed data efficiency from fixed-horizon values converging quicker isn't readily true. The resulting deep RL results may need more careful experiments to tease apart why the composition might be helping. For example, it might be useful to compare a different neural network architecture, like having all of the compositional components as outputs from the same, final hidden layer (in comparison with outputting them from intermediate hidden layers).\\n\\n3) The tabular example needs to be re-worked to ensure a fair comparison between each algorithm. For example, the curves can be presented under the best step size (in terms of some metric, like area under the curve) for each algorithm. While there is an exact equivalence to standard one-step TD methods, a real benefit of the approach is that strictly more information is present to the agent, and the flexibility of being able to use separate step sizes for each value function can be favorable if it can be shown to be better after fairly tuning each algorithm. Shifting the focus toward showing that certain types of value functions are less sensitive to step sizes or work better operating at different time-scales from other components (because this seems to be what's actually happening in the results) would be a huge plus for this.\\n\\n4) Because it is using one-step TD errors to estimate each of these components, and is equally biased to one-step TD, it isn't really a multi-step method. I think it would be better to emphasize the compositional aspect and its increased flexibility, than frame it as a multi-step off-policy method.\\n\\n----------\", \"post_rebuttal\": \"I think the additional results post-discussion are good, and are on the right track of the claimed goal of analyzing the algorithm. 
However, the new results might be contradictory to some of the claims made earlier in the paper, and so a more involved revision seems to be needed. I do believe the algorithm has promise for the reasons teased apart in our discussion, and encourage the authors to improve their paper with these results.\", \"to_detail_a_few_things\": \"1) The new results, which now empirically demonstrate the exact equivalence with one-step Q-learning, contradicts some claims about improved data efficiency due to truncated value-functions converging quicker. While meta-parameter selection isn't the focus, if the choice of meta-parameter is what can make it differ from vanilla Q-learning, and is the key explanation for the improvements, then the analysis should focus on this.\\n\\n2) Mention of the equivalence only comes up in the experimental results, when it's a key property of the algorithm. If analysis of the composition is the paper's focus, acknowledging this property is foundational to any analysis of the method. It could have been shown analytically following the algorithm's derivation, and would have better justified some of the choices made in the experiments.\\n\\n3) Being equivalent to running *one-step* Q-learning still makes the \\\"multi-step\\\" learning emphasis appear incorrect, especially when the algorithm can trivially be extended to use actual multi-step TD methods. The title seems to come from interpreting what the composite values represent, but the horizon isn't what makes a method multi-step, and the compositional components add up to exactly one-step Q-learning's update.\", \"minor\": \"1) Arguably one of the most prevalent explanations in the deep RL literature for why one might expect improvements is the multi-task/auxiliary task hypothesis (Jaderberg et al., 2016).\"}",
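To make the fixed-horizon construction in the review above concrete, here is a minimal tabular sketch in the spirit of Sutton (1988) and De Asis et al. (2019); the variable names are illustrative and none of this code comes from the paper under review:

```python
import numpy as np

def fhtd_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One fixed-horizon TD update for every horizon h = 1..H.

    V has shape (H + 1, n_states); V[0] is held at zero. V[h][s]
    estimates the expected discounted return over exactly h steps,
    bootstrapping a single step from the (h - 1)-horizon estimate.
    """
    for h in range(1, V.shape[0]):
        td_error = r + gamma * V[h - 1][s_next] - V[h][s]
        V[h][s] += alpha * td_error
    return V
```

Under identical initialization and one shared step size alpha, the review's equivalence argument says the top-horizon estimate tracks ordinary one-step TD update for update; using a larger step size for only some components is exactly the confound the review points at.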
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"*Synopsis*:\\n This paper proposes to split the value function into two separately learned components (a short-term truncated value function, and a long-term shifted value function) suggesting the short term truncated returns should learn faster as compared to the tail of the returns. They provide temporal difference formulations for a truncated value function and shifted value function, enabling efficient learning of the two components. They also provide derivations of other similar approaches to the off-policy case. Finally, they compare their algorithm to several approaches on a subset of the MuJoCo tasks, and a novel tabular domain.\", \"main_contributions\": \"- An algorithm, Composite Q-learning, which decomposes the value function into a short-term truncated portion and a long-term shifted portion.\\n - Derivation of prior art for off-policy.\\n\\n *Review*\\n The paper is generally well written (some suggestions for improved readability can be found below), and provides some nice algorithms for the community. I especially appreciate the author's willingness to derive off-policy variants of related algorithms to compared, as opposed to relegating this to future work which is the typical case. The theory for the truncated and shifted value functions also seems correct at a light check. Overall, I am recommending this paper for a weak accept as I have some concerns over the experimental results that I would like clarified. (specifically C1, Q4, and Q6).\\n\\n\\n [Q]uestions/[C]larifications/[S]uggestions:\", \"c1\": \"For the tabular domain, are the reported results over multiple runs? If not, I think it would be worthwhile to do some more runs and provide a significance test.\", \"c2\": \"It would be beneficial to add some indication what the true value for state s_0 is in the plot (either with a horizontal dotted) for each of the methods (i.e. I would expect Tr0 to converge to a different value compared with composite Q-learning). Also, I'm unsure if you appropriately specified what Tr_0, Tr_1, ... are in the text. I might be missing this, but I think it should be more clear.\", \"s3\": \"It might be interesting to look at the value of the shifted Q-function for this domain. Also, in the appendix I think it would be worthwhile to include the results for all of the states in the MDP (or a representative subset).\", \"q4\": \"What are the default settings for TD3 and how were they set? This is an important detail to include, even if you believe they are well accepted in the field. This will make it easier to reproduce your experiments for future work. I think it seriously harms the paper by not tuning the algorithms appropriately.\", \"s5\": \"It seems as if you are using an open source implementation of TD3, if this is the case you should state this and give a link to the implementation (if you implemented yourself disregard this)\", \"q6\": \"How significant are the results in figure 4, say for Walker2d? From what I understand about IQR, significance is measured based on overlap of the medians with the competing IQRs. 
For example, if we look at Walker2d much of the Composite TD3 median learning curve is within the IQR of TD3(\\\\Delta) and there are many points where TD3(\\\\Delta) is also in the IQR of the Composite TD3. I think portions are significant, but it is hard to appreciate from this plot. What might be useful to get a better sense of the data is to include error bars for the results presented in table 2 and table 3. I think table 3 could also benefit with box plots for each of the domains, just to make the comparison easier.\", \"c7\": \"I think the claim \\\"We also showed that composite TD3 is able to achieve state-of-the-art data-efficiency compared...\\\" is a bit strong, especially given the needed clarifications on the significance of the results and how you set hyperparameters. I would urge the authors to soften this claim, and instead say you provide evidence of composite q-learning's data efficiency as compared to other methods.\\n\\n\\n *Other comments not taken into consideration in the review*\\n\\n - It was quite difficult to read sections 2 and 3 given how dense they are. I would recommend splitting these sections into multiple paragraphs to make the sections more readable.\\n\\n-----------\\nPost discussion/rebuttal:\\n\\nAfter reviewing the comments from other reviewers and the discussion with R3, I'm inclined to think this paper could use a bit more work. I think the idea is still interesting and worth pursuing, but given some of the new observations and experiments run the paper needs to make more changes than I would find reasonable for acceptance. \\n\\nThanks again for your hard work, and I look forward to seeing this in a future conference.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary\\n\\nThis paper introduces a new Q-learning formalism that helps reduce the bias of single step bootstrapping in Q-learning by learning multiple single step bootstrapping Q functions in parallel. This is accomplished by composing multiple n-step returns, showing that a recursive definition of n-step returns allows each return to be learned using only a single step of bootstrapping instead of at most n steps of bootstrapping. The paper solves the problem of the n-step fixed horizon by additionally composing a gamma discounted Q function that is shifted by n. In the end, the Q function used for behavior still predicts the same values as vanilla Q-learning, but with significantly less bias without a large increase in variance.\\n\\nReview\\n\\nI find this paper to be novel and insightful, the proposed algorithm is well supported theoretically and reasonably well supported empirically. I appreciated the careful demonstration on the smaller MDP with tabular features, showing the effects of multi-step Q-learning and clearly demonstrating the bias due to not truly using an off-policy formulation. I find that the demonstrations on the larger environments appear promising and suggest that composite multi-step Q-learning is a promising direction.\\n\\nThe error bars in the larger demonstrations, Figure 4, make it difficult to distinguish any meaningful differences between the algorithms. I appreciate that results are averaged over 11 runs, fortunately far more than seems to be standard at the moment, but still the amount of variance makes it difficult to say anything statistically. Table 2, then shows a reduction of the results but without mention of variance. It would be useful to include error measurements (perhaps the standard error over runs) to Table 2 to see the statistical significance of those results. Based on Figure 4, my guess is that there is negligible difference statistically.\\n\\nThe parameter sensitivity curves for the Walker2d domain also demonstrate that it is difficult to say anything meaningful about each parameter choice. Running a larger number of parameter settings would help to establish a clear pattern, or running each parameter setting for more independent runs could have allowed more significant results. The variance exhibited by one value in the regularization sensitivity curve is alone extremely interesting; perhaps using a different visualization that allowed more clear comparisons of the variance over independent runs would further motivate the utility of the regularization parameter. I think these results are interesting, but as presented do not sufficiently highlight the differences between the proposed algorithm and its competitors.\\n\\nFor the experiment in Figure 2, why not include multi-step Q-learning with importance sampling corrections on the later steps? I believe this would have fixed the bias issue, though clearly would be a tradeoff for high variance. I think this would make for a more convincing argument. Additionally, the caption does not well explain what the four green lines at the top of the plot represent. It was difficult to interpret the plot on the first pass of the paper because of this omission. 
Regardless, I find the results in Figure 2 to be otherwise intriguing.\\n\\nFinally, the choice of meta-parameters in this paper could negatively impact results in favor of the competitor algorithms. By choosing to fix meta-parameters based on the defaults of a competitor, this could be harmfully biasing the proposed algorithm by preventing it from choosing a better stepsize. In fact, I would suspect that the proposed algorithm would exhibit lower variance updates than TD3, meaning it could potentially take advantage of higher stepsizes. This omission makes the claims of this paper weaker than they could possibly be, leaving a slight hole in the research.\\n\\n---------\", \"edit_after_discussion_and_rebuttal_phase\": \"I read the in-depth discussion between the authors and R3 and looked at the edits to the draft. I agree with the other reviewers on the basis of understanding the importance of meta-parameter selection. During the initial review, I found the ideas of the paper interesting enough to largely out-weigh the importance of a careful meta-parameter study. After R3's demonstration that there were indeed flaws with the results under the current meta-parameter selections, I think the best course of action would be to reject the paper in its current form.\\n\\nI still strongly believe there is a place in the literature for this paper, so I hope to see this paper again at the next conference.\"}"
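A compact sketch of the composition described in the summary above: a truncated n-step head plus a time-shifted tail reassembled into a full-horizon value. The names are ours, and placing the gamma^n factor outside the shifted estimate (as here) is one of several possible modeling choices:

```python
def composite_q(q_trunc, q_shift, s, a, n, gamma=0.99):
    """Full-horizon action-value from two single-step-bootstrapped parts.

    q_trunc(s, a): return over the first n steps, learnable via the
        recursion Q_k(s, a) = E[r + gamma * Q_{k-1}(s', a')].
    q_shift(s, a): discounted return accruing from step n onward.
    Neither component ever bootstraps more than one step at a time.
    """
    return q_trunc(s, a) + gamma ** n * q_shift(s, a)
```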
]
} |
H1e5GJBtDr | Axial Attention in Multidimensional Transformers | [
"Jonathan Ho",
"Nal Kalchbrenner",
"Dirk Weissenborn",
"Tim Salimans"
] | Self-attention effectively captures large receptive fields with high information bandwidth, but its computational resource requirements grow quadratically with the number of points over which attention is performed. For data arranged as large multidimensional tensors, such as images and videos, the quadratic growth makes self-attention prohibitively expensive. These tensors often have thousands of positions that one wishes to capture and proposed attentional alternatives either limit the resulting receptive field or require custom subroutines. We propose Axial Attention, a simple generalization of self-attention that naturally aligns with the multiple dimensions of the tensors in both the encoding and the decoding settings. The Axial Transformer uses axial self-attention layers and a shift operation to efficiently build large and full receptive fields. Notably the proposed structure of the layers allows for the vast majority of the context to be computed in parallel during decoding without introducing any independence assumptions. This semi-parallel structure goes a long way to making decoding from even a very large Axial Transformer broadly applicable. We demonstrate state-of-the-art results for the Axial Transformer on the ImageNet-32 and ImageNet-64 image benchmarks as well as on the BAIR Robotic Pushing video benchmark. We open source the implementation of Axial Transformers. | [
"self-attention",
"transformer",
"images",
"videos"
] | Reject | https://openreview.net/pdf?id=H1e5GJBtDr | https://openreview.net/forum?id=H1e5GJBtDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QJEcfi_vQn",
"HkgnTM9hir",
"S1l61fO7jS",
"SyeFnbO7jB",
"r1lycyCMsH",
"SkgThTazjS",
"H1ezBTaziH",
"Bkp7J9Mor",
"BylznUt3qr",
"ryeuXv839B",
"S1lPeozCYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727307,
1573851843621,
1573253605235,
1573253552842,
1573212039352,
1573211572598,
1573211450273,
1573195556517,
1572800169595,
1572788000102,
1571855086829
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1589/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1589/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1589/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1589/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1589/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1589/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1589/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1589/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1589/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1589/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a self-attention-based autoregressive model called Axial Transformers for images and other data organized as high dimensional tensors. The Axial Attention is applied within each axis of the data to accelerate the processing.\\n\\nMost of the authors claim that main idea behind Axial Attention is widely applicable, which can be used in many core vision tasks, such as detection and classification. However, the revision fails to provide more application for Axial attention.\\n\\nOverall, the idea behind this paper is interesting but more convincing experimental results are needed.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": [\"Dear reviewers, thank you for your comments. We have uploaded a revised version of our paper incorporating your feedback. Specifically:\", \"We are now more explicit about the scope of our paper and its intended contribution. Our work is about autoregressive modeling for images and other data organized as multidimensional tensors -- it falls in the same line of work and scope as papers such as Pixel Recurrent Neural Networks (van den Oord et al 2016), Image Transformer (Parmar et al 2018), Subscale Pixel Networks (Menick and Kalchbrenner 2019), and many others.\", \"We have included improved results on video modeling (1.29 bits/dim on BAIR robot pushing).\", \"We have included an ablation study using a baseline architecture for our image model. Specifically, we replace the inner decoder with an LSTM. We find that the results on ImageNet 32x32 and 64x64 are slightly worse by 0.01 and 0.02 bits/dim, respectively, and also that training time is slower than our original model. See Section 4.1 in the revised paper for the full discussion.\", \"We have included discussion on relationship with other attention proposals in the computer vision literature, such as CCNet. Our contribution and emphasis is on uses of masked axial attention and how to combine it in a way that leads to a valid autoregressive image model (the dependencies between outputs and inputs must obey the raster scan order so that it defines a valid probabilistic model), whereas other works do not employ masking and are not focused on defining an autoregressive model.\", \"We have also increased our emphasis of the semi-parallel sampling aspect of our model, that is unique among autoregressive image and video models.\", \"All in all we believe the paper, proposed methods, and open source code will be very useful to the generative image modeling community, and we ask you to consider this when making your final decision.\"]}",
"{\"title\": \"based on this discussion -- changing rating to reject\", \"comment\": \"see title.\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for your response.\\n\\nImage modeling attention-based architectures is a very narrow scope indeed. If this is truly the scope, I vote to reject the paper. My concerns are as follows:\\n\\nImage modeling is a very broad task. It is currently unclear whether attention-based architectures will be superior to other methods. What is the reason to limit to only this specific subset of image modeling? It seems arbitrary. If image modeling is the true task, a full list of prior work on image modeling should be included and compared with.\\nTransformers are used in many applications beyond image modeling as well. It seems as if the proposed attention mechanism could deliver significant gains in these areas. Is there a reason to focus only on generative image modeling versus other popular CV or NLP tasks as mentioned before? This would be a powerful paper if gains were shown on a wide variety of tasks, with minimal modification to the underlying method (as claimed). As it stands, the scope is too narrow\"}",
"{\"title\": \"Intended scope of paper falls exclusively within generative modelling (here too)\", \"comment\": \"Thanks for your remarks. Some of your major points (2 and to some extent 1) concern the scope of the notions that we introduce, specifically axial attention. Please note that the intended scope is only axial attention within multidimensional transformers, that is within generative modelling of multidimensional data such as images and videos. We are aiming at making this very clear in the paper. Please see our related remarks to the other reviewers.\"}",
"{\"title\": \"Intended scope of paper falls exclusively within generative modelling\", \"comment\": \"Thank you for remarks. Since one of your major objections is at its core the same objection as that by reviewer #1, please see comment above. We want to treat our paper as a new architecture for image (and video) generation and we are making this clear in the text.\"}",
"{\"title\": \"Intended scope of paper falls exclusively within generative modelling\", \"comment\": [\"Thank you for your comments. We would like to point outright that the intended scope and focus of the paper is exclusively generative models of images with an extension to videos. Some aspects of the paper make this clear:\", \"The title centrally includes \\\"multidimensional transformers\\\" that are only generative models indeed with an encoder part and a decoder part (like the original transformer for language).\", \"Our main contribution is the Axial Transformer architecture itself, i.e. how to easily apply (masked) axial attention to multi-dimensional transformers by using a number of additional features: reordering of RGB channels, shifting operation for the rows, shallow and hence faster strict autoregressive decoder, no need for custom kernels.\", \"The thorough and exclusive comparison with previous image modelling attention-based architectures.\", \"However, we also realize now that some sentences in the paper may hint at axial attention as a stand-alone operation to be used beyond generative modelling. Showing this is beyond the scope of our paper and we are working to make this clear and rephrase the relevant passages and subsections.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a novel approach to deal with the computational problems of self-attention without introducing independence assumptions. The proposed approach is simple, easy to understand, and easy to implement.\\n\\nHowever, evaluation for this paper is severely lacking. As it is, there is not enough information provided to adequately assess the proposed method's strengths in practice. The following should be added:\\n\\nEvaluation on a variety of different tasks, such as image segmentation, temporally consistent object detection, object tracking, etc. Why are the evaluations limited to generative modeling? To prove the generality of the method (as claimed), it needs to be applied to various tasks.\\nRuntime (in inference) comparisons for each of the datasets and for each of the baselines. Additionally, a theoretical analysis for runtime in terms of the size of the input should be given (the column in Table 1 should have runtimes for each method clearly specified, and this should be done for each dataset and baseline)\\nAblation study. What is the baseline architecture used without axial attention? There is only comparison to previous work which may have used a different architecture.\\n\\nIf these concerns are thoroughly addressed, I would be happy to increase my score.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"This paper claims to propose a new approach to solve the computational problems of self-attention. However, the paper mainly focuses on adapting Transformer for image generation, which has far less applications. The whole paper needs to be rewritten to make their target and contribution clearer.\\n\\n1. The authors overclaim that they provide a new approach for accelerating self-attention. However, they only adapted Transformer for image generation. In fact, Transformer does not equal to self-attention. Currently, two directional self-attention like Bert has much wider applications compared with Transformer like sequential self-attention. \\n\\n2. For a paper claim to improve self-attention, they should show its effectiveness on a broad range of tasks, with comprehensive experimental evaluation. However, authors mainly reported the image generation on several datasets. \\n\\nOverall, the authors need to rewrite the paper. They should either show more applications with the proposed self-attention approach or treat it as a new approach for image generation.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"It is known that the standard self-attention method is computationally expensive and cost a significantly large amount of storage when the number of points to be attended is large.\\n\\nThis paper attempts to solve this problem and proposed the Axial Attention method. It is claimed to be able to save an O(N^(d-1)/d) factor of resources over standard self-attention.\\n\\nThe proposed method looks novel to me, but some of the related works are missing and the experiment session is insufficient. \\n\\n1) The author should at least include the following works which also aim to reduce the cost of self-attention. Since the author did not mention these works which also focus on solving the same problem, It is hard for me to judge if the proposed method is better than existing works.\\n[a] CCNet: Criss-Cross Attention for Semantic Segmentation\\n[b] A^2-Nets: Double Attention Networks\\n\\n2) self-attention has shown its effectiveness on a broad range of computer vision tasks, including image generation, detection, segmentation, and classification. I do not get why the proposed method is only benchmarked for generative models. Is it because the proposed method cannot be adopted on other popular CV tasks, such as detection, segmentation, and classification? Extra experiments should be included if the proposed method is not only designed for generative models.\\n\\n3) The ablation study is missing. The author directly compared its own method with other existing methods that are implemented and trained with different hyperparameters. It is hard to know which indeed benefits the accuracy gain and how significant is the proposed method. \\n\\n4) In table 2 and 3, I do not see a clear advantage of the proposed method over the SOTA methods.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes axial attention as an alternative of self-attention for data arranged as large multidimensional tensors, which costs too much computational resource since the complexity of traditional self-attention is quadratic in order to capture long-range dependencies for full receptive fields. The axial attention is applied within each axis of the data separately while keeping information along other axes independent. Therefore, for a d-dimensional tensor with N = S^d, axial attention saves O(N^{(d\\u22121)/d}) computation over standard attention. The proposed axial attention can be used within standard Transformer layers in a straightforward manner to produce Axial Transformer layers, without changing the basic building blocks of traditional Transformer architecture. The authors did experiments on two standard datasets for generative image and video models: down-sampled ImageNet and BAIR Robot Pushing, and they claim that their proposed method matches or outperforms the state-of-the-art on ImageNet-32 and ImageNet-64 image benchmarks and sets a significant new state-of-the-art on the BAIR Robot Pushing video benchmark.\", \"reasons_to_accept\": \"1.\\tSimple, easy-to-implement yet effective approach to adapt self-attention to large multidimensional data, which can save considerable computation for efficiency, while still have competitive performance.\\n2.\\tClear writing, with sufficient but not redundant introduction of background knowledge and explanation of both the advantages and drawbacks of existing models (too large computational complexity on high-dimensional data).\", \"suggestions_for_improvement\": \"1.\\tIt would be better if the authors can provide more analysis or case study to show the reason why Axial attention (Axial Transformer) can reach good performance even if it omits considerable operations compared to traditional Transformers, or to show why the attention operations within axis are important instead of attention operations between axis. \\n2.\\tDefinition of \\u201caxis\\u201d should be more clear in section 3 (there could be some ambiguities of \\u201caxis\\u201d).\"}"
]
} |
BkgOM1rKvr | The Surprising Behavior Of Graph Neural Networks | [
"Vivek Kothari",
"Catherine Tong",
"Nicholas Lane"
] | We highlight a lack of understanding of the behaviour of Graph Neural Networks (GNNs) in various topological contexts. We present 4 experimental studies which counter-intuitively demonstrate that the performance of GNNs is weakly dependent on the topology, sensitive to structural noise and the modality (attributes or edges) of information, and degraded by strong coupling between nodal attributes and structure. We draw on the empirical results to recommend reporting of topological context in GNN evaluation and propose a simple (attribute-structure) decoupling method to improve GNN performance. | [
"Graph Neural Networks",
"Graph Toplogy",
"Noise",
"Attributed Networks"
] | Reject | https://openreview.net/pdf?id=BkgOM1rKvr | https://openreview.net/forum?id=BkgOM1rKvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jj1Izn74N",
"Hke2YVqnir",
"BylKNV53jH",
"HJl0Hm52jS",
"H1xRWXq3sr",
"H1lbDwNCtS",
"rkxfWpoaYH",
"ryeDNdsitr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727274,
1573852291546,
1573852208650,
1573851973919,
1573851909810,
1571862360699,
1571826938114,
1571694638962
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1586/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1586/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1586/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper empirically investigates the behaviour of graph neural networks, as a function of topology, structural noise, and coupling between nodal attributes and structure. While the paper is interesting, reviewers in general felt that the presentation lacked clarity and aspects of the experiments were hard to interpret. The authors are encouraged to continue with this work, accounting for reviewer comments in subsequent versions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Overall improvements in this version\", \"comment\": \"We thank everyone for their insight and reviews. They allowed to further our conclusion and ideas.\\n\\nOur paper is an empirical study that attempts to a highlight a gap in our understanding as researchers of the behavior of convolutional graph neural networks. It does not present a consolidated theoretical model of their behavior. We explore GNN\\u2019s ability to operate across domains and scratch the surface of GNN\\u2019s claim to utilize the topology of Real-world complex networks.\\n\\nThese experiments serve to highlight integral aspects of the impact of a network\\u2019s topology and the distribution of node attributes to a model\\u2019s performance. They are not comprehensive and representative all possible scenarios, but rather highlight important places where our intuitive understanding differs from experimental behavior. Scenarios which occur in the real world and where the application of these models may have unforeseen consequences. \\n\\nWe have improved the explanations, clarifying several procedures and providing the reasoning behind our choices. We have also augmented the appendices with extra diagrams and data where possible, and would be happy to share the datasets and other material after the paper is deanonymized.\"}",
"{\"title\": \"Incorporating feedback into the paper\", \"comment\": \"Thank you for your detailed comments. They were most helpful and we\\u2019ve incorporated all of them into our draft.\", \"we_have_explained_the_reasons_behind_the_selection_of_certain_features_and_datasets_more_clearly\": [\"Certain features can only be calculated at a graph level and others at node level (e.g. betweenness is calculated for each node); and\", \"We\\u2019ve selected datasets which span different types of connectivity scenarios (e.g. well connected with no isolated nodes vs. several disconnected components).\"], \"we_have_augmented_the_text_with_appendices_that_show\": [\"Non-averaged figures for section 3; and\", \"Topological Features of the perturbed datasets.\"], \"we_have_also_significantly_redrafted_the_paper_to_increase_clarity\": [\"The method section (section 6)\", \"In section 3, we redrafted this section to clarify the experimental results to get rid of any inconsistencies in the text. We have further supported the results realting isolated nodes by drawing the readers attention to table 5 in the appendix - which provides a more detailed description of the results.\", \"In section 6, we have redrafted with a strong emphasis on explaining our assumptions. For instance, while somewhat a hyperparameter, the percentile threshold of 60*density(G) was chosen only to retain the most similar pairs and minimizing noise in the augmented edge set. This is now reflected in the text.\"]}",
"{\"title\": \"Incorporating feedback into this version\", \"comment\": \"Thank you for your discerning review. We hope to have addressed most of your concerns:\\n-We have added an explanation of different aspects of a complex networks in the dataset\\u2019s section\\n-We have incorporated more comprehensive figure captions\\n-The discussion and conclusion sections present the improvement our experiments form the basis of and go over \\u201cwhat\\u2019s next\\u201d i.e. future work.\\n-One of the limitations of the work is the similarity function/or distance metric used. We have provided more details about the choices of function that were used such as the reasons behind selecting cosine metrics, and highlighted the limitation in our discussion.\"}",
"{\"title\": \"Incorporating feedback into the paper\", \"comment\": \"We thank the reviewer for their insight and comments.\\n \\nThe reviewer astutely points out that the paper looks primarily at convolutional and an attention based model (according to the taxonomy in https://arxiv.org/pdf/1901.00596.pdf). We\\u2019ve scoped the paper to explicitly mention this both by changing the title and reiterating it within the methods section. Expanding the paper to include additional models is in our timeline for future exploration.\\nWe also agree that the paper is an empirical one; and one of its primary goals is to emphasize the lack of a single theory to explain this behaviour of convolutional graph neural networks. We\\u2019ve more explicitly stated its empirical nature and expanded the discussion to better describe the possible causes behind the experiments.\\nIn relation to the issue of post-justification, this paper\\u2019s main purpose is to explore unexpected behaviours for GNN\\u2019s and as such is less an explication of an overall theory, and more a first step in better mapping and analysing these behaviours. The appearance of results being intuitive stems from the paper\\u2019s primary purpose in contrasting this intuitive understanding with experimental proof.\\nFinally we\\u2019ve significantly revised the writing and hope this will improve clarity of content and highlight latent points.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This\\u00a0work empirically\\u00a0 study the behavior of Graph Neural Networks in various topological\\u00a0contexts. Four sets of experiments are provided follow the setting in [1].\", \"pros\": \"1. The research problem studied in the paper is important.\\n2. Authors conduct extensive experiments on multiple dataset.\", \"cons\": \"1. The paper lacks formal justifications on the raised claims. They look intuitive and post-justified by experiments and not by rigorous arguments.\\n2. All the experiments are conducted based on the convolutional graph neural network based methods. It is suggested to evaluate on the other types of graph neural network, e.g.\\u00a0Recurrent Graph Neural Networks, Graph Autoencoder.\\n3. The writing needs to be significantly improved.\\n\\n[1] Shchur et al. \\\"Pitfalls of Graph Neural Network Evaluation\\\",\\u00a0\\u00a0arXiv preprint arXiv:1811.05868\\u00a0(2018)\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents four experimental set-ups to get a better understanding how the performance of Graph Neural Networks (GNN) depend on topological and/or nodal information.\\n\\nAs the paper is not really in my research area, I would have liked the paper to be a bit more self-contained, but the writing of the paper is generally clear. \\n\\nThe contributions of the paper are mainly experimental and show different \\\"surprising\\\" aspects about GNNs. I found the experimental results to be interesting, especially those presented in Section 5 about decoupling attribute and topological information. However, I think the presentation may be improved and more information (e.g. about how the experiments were conducted or more plots) should be reported in the main paper or the appendix.\\n\\nRegarding the experiments, the authors made choices about which subsets of datasets (Table 2) or which subsets of measures (Table 1) to use. It would be nice to explain how those choices are made and if they are well-justified. Otherwise, it feels that the presented experimental results have been hand-picked.\\n\\nRegarding Section 3, I guess that Figure 1 presents averaged results over datasets or over models. I think it would be important to share the results per datasets or per models, as averaging may hide some key aspects.\\n\\nOn page 3, the third paragraph could be illustrated with some plots. Besides, the its first sentence seems to contradict the last sentence of the paragraph before Section 4. Moreover, the text in the fourth paragraph doesn't seem to fit well Fig. 2b for Amazon computers. Am I misreading this plot?\\n\\nRegarding Section 4, I think it would have been interesting to report the metrics about the topological features for the perturbed graphs. Are they very different from those for the orignal graphs?\\n\\nRegarding Section 6, the sentence \\\"In the previous sections, we have demonstrated that GNNs .. are robust to perturbations to it\\\" on page 7 seems to contradict the conclusion of Section 4.\\nI found the experiments in this section to be less convincing. The edge addition technique seems to be ad-hoc and it is not clear why it would improve the performance or not. For instance, how was the value 6\\u00b0*density-G chosen?\\n\\nOverall, although I found some of the experimental results very intriguing, I think the paper may not be ready for publication in its current state.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper analyzes the properties of graph neural networks (GNNs). It shows that several hypothesis that one might intuitively make about the behavior of GNNs, do not actually hold. In fact, some observations are even contradictory, indicating that GNNs' performance is not robust, and care needs to be taken when using them. In particular, the authors analyze what happens when topology is altered by computing correlations between topology metrics and accuracy, dropping connections, adding extraneous connections, etc. The authors examine performance is a function of graph connectedness, and find a weak correlation.\", \"some_concerns\": \"a) The part about attributes and topology is not very clear to me. What are the attributes? What's \\\"decoupling by shuffling\\\"? \\nb) Figures could benefit from a brief \\\"so what\\\" explanation in the caption. \\nc) While the work is important because GNNs are a rising trend, it is a bit disappointing that there is no discussion of \\\"how do we fix GNNs\\\" and \\\"what's next\\\". \\nd) Topology is one important feature of graphs, but could it be examined in terms of what kinds of edges are added based on learned similarity metrics, etc? Learning the adjacency matrix is one important step in GNN methods and it would be useful to examine robustness to different ways of learning that matrix.\"}"
]
} |
ByedzkrKvH | Double Neural Counterfactual Regret Minimization | [
"Hui Li",
"Kailiang Hu",
"Shaohua Zhang",
"Yuan Qi",
"Le Song"
] | Counterfactual regret minimization (CFR) is a fundamental and effective technique for solving Imperfect Information Games (IIG). However, the original CFR algorithm only works for discrete states and action spaces, and the resulting strategy is maintained as a tabular representation. Such a tabular representation limits the method from being directly applied to large games. In this paper, we propose a double neural representation for the IIGs, where one neural network represents the cumulative regret, and the other represents the average strategy. Such neural representations allow us to avoid manual game abstraction and carry out end-to-end optimization. To make the learning efficient, we also developed several novel techniques including a robust sampling method and a mini-batch Monte Carlo Counterfactual Regret Minimization (MCCFR) method, which may be of independent interest. Empirically, on games tractable to tabular approaches, neural strategies trained with our algorithm converge comparably to their tabular counterparts, and significantly outperform those based on deep reinforcement learning. On extremely large games with billions of decision nodes, our approach achieved strong performance while using hundreds of times less memory than the tabular CFR. On head-to-head matches of heads-up no-limit Texas hold'em, our neural agent beat the strong agent ABS-CFR by $9.8\pm4.1$ chips per game. It's a successful application of neural CFR in large games.
| [
"Counterfactual Regret Minimization",
"Imperfect Information game",
"Neural Strategy",
"Deep Learning",
"Robust Sampling"
] | Accept (Poster) | https://openreview.net/pdf?id=ByedzkrKvH | https://openreview.net/forum?id=ByedzkrKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7pz9M3rCw0",
"HkgqM83ojH",
"Hkld3H2jir",
"ryg_h5Lk9r",
"HJx2eN_6Kr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798727243,
1573795345841,
1573795247961,
1571936944373,
1571812340340
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1585/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1585/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1585/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1585/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Double co\\u00fanterfactual regret minimization is an extension of neural counterfactual regret minimization that uses separate policy and regret networks (reminiscent of similar extensions of the basic RL formula in reinforcement learning). Several new algorithmic modifications are added to improve the performance.\\n\\nThe reviewers agree that this paper is novel, sound, and interesting. One of the reviewers had a set of questions that the authors responded to, seemingly satisfactorily. Given that this seems to be a high-quality paper with no obvious issues, it should be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"To Reviewer 2\", \"comment\": \"Thanks for your recognition of our work.\\n\\nExactly, we have made a lot of efforts to evaluate our method on the large-scale game (heads-up no-limit Texas Hold\\u2019em) in the past year. In the future, we plan to reproduce the latest works, such as ED, Deep CFR, Single Deep CFR, and compare them on HUNL, although it's expected that such evaluation needs a lot of work and computation resources.\"}",
"{\"title\": \"To Reviewer 3\", \"comment\": \"Thanks for your positive and constructive comments. Hopefully, this reply can address all your concerns.\\n\\n1. HUNL\\n\\n1.1 variance reduction:\\nIt\\u2019s well known that head-to-head evaluation of HUNL is challenging because of its high variance. We use AIVAT[1] to reduce evaluation variance. AIVAT is also used in DeepStack and Pluribus. \\n\\n1.2 Setting and evaluation: \\nWe train blueprint strategies on a finer-grained abstracted HUNL(2), which contains about $8*10^10$ infosets. Its terminal values are estimated by the flop network rather than auxiliary network. After that, we employ continue resolving to compute strategy in real-time for the next rounds. \\n\\nTo make a fair comparison, in Fig.7(c), we show two head-to-head competitions: DNCFR against ABS-CFR and tabular agent. Note that, DeepStack\\u2019s continue resolving is intractable to solve HUNL(2) in \\u201creal-time\\u201d. Specifically, DeepStack needs GPU (16GB) to accelerate continue resolving, however, large games will result in out of memory. DeepStack on CPU cannot obtain strategy in dozens of seconds per action. Therefore, tabular agent is a reasonable benchmark via replacing continue resolving by tabular blueprint strategy (saving in a large table). We show that the neural agent obtains a similar win rate with the tabular agent while using hundreds of times less memory.\\n\\nWe believe this setting is convincing to demonstrate the advantages of DNCFR against tabular CFR.\\n\\n1.3 long-running performance:\\nIt\\u2019s very expensive to compute exploitability in large games. Till the submission deadline, we only collected the results within hundreds of iterations (similar to Noam\\u2019s Deep CFR). Now, we report the further convergence on 1k iterations as follows. Typically, larger iterations and SGD updates will return better strategies.\\n\\nFig.7(a): the settings under embedding size 8, 16, 32, 64, 128 approach $27.48\\\\pm0.07, 26.88\\\\pm0.03,17.71\\\\pm0.03, 8.92\\\\pm0.03, 6.52\\\\pm0.02$.\\n\\nFig.7(b): the settings under SGD updates approach $16.85\\\\pm0.51, 6.52\\\\pm0.02, 1.88\\\\pm0.07$ \\n\\n2. average strategy:\\nGood question. That\\u2019s a trade-off among unbiased estimate, convergence, and data efficiency. Stochastically-weighted averaging (SWA) typically leads to a large variance as discussed in Marc\\u2019s Ph.D. thesis (p49). The classical external sampling(ES) solves this problem by only updating average strategy for $-i$. Because ES samples $k=|A(I_i)|$ actions for $i$ and only samples one action for $-i$, it\\u2019s inefficient to collect samples for average strategy at $-i$ in neural CFR. In contrast, we collect samples at $i$. Typically, when collecting average strategy samples at $i$, we need using SWA to maintain unbiased estimate of average strategy as you said. However, because of the high variance of SWA, we find the one without SWA converges more efficient empirically. Specifically, we test these methods on Leduc. After 1k iterations, exploitability of the method with/without SWA are 0.43 and 0.14 (SWA is worse because of high variance). \\n\\n3. Bias in neural network:\\nYes. It has a bias when training neural networks. Analysis of this bias is interesting and challenging, and out of the scope for this paper. Furthermore, we compare the final average strategy of tabular MCCFR and DNCFR on HUNL(1), the MSE and KL-divergence are 0.003 and 0.024 respectively (very small difference). 
Also, we recognize it\\u2019s just an approximate solution to analyze the overall bias because IIGs could have many Nash equilibria.\\n\\n4. generalization to unseen infoset:\\nWe learn new parameters based on the old parameters and a subset of observed samples. All infosets share the same parameters, so that the neural network can estimate the values for unseen infosets. Note that, #parameters is orders of magnitude less than #infosets in many settings, which indicates the generalization of our method. Furthermore, Fig.4(d) shows that DNCFRs are slightly better than tabular MCCFR, we think it\\u2019s because of the generalization to unseen infosets.\\n\\n5. sampling method:\\nWe set exploration as 1/t. Outcome sampling (OS) with total exploration is uniform sampling rather than robust sampling(RS) with k=1. In Sec.4.1 and Appendix.D.2, we introduce that RS is a general version of both OS and ES. In OS, each player samples one action according to her policy with exploration. For question \\u201cIs robust sampling helpful\\u201d, RS with k=1 refers to the traverser uniformly samples one action while the opponent samples one action according to her policy. Furthermore, concurrent work[2] also demonstrates that RS(k=1) is more efficient than OS and uniform sampling.\", \"reference\": \"[1] Neil Burch et.al. AIVAT: A New Variance Reduction Technique for Agent Evaluation in Imperfect Information Games. 2016.\\n[2] Trevor Davis et.al. Low-Variance and Zero-Variance Baselines for Extensive-Form Games. 2019.\"}",
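For concreteness, a minimal sketch of the robust-sampling scheme described in item 5 above: the traverser samples min(k, |A|) actions uniformly without replacement, while the opponent samples a single action from her current policy. Function and variable names are ours:

```python
import random

def robust_sample(actions, policy, k, is_traverser):
    """Select which branches to explore at an infoset.

    k = len(actions) recovers external sampling for the traverser;
    k = 1 gives the uniform one-action variant discussed above.
    """
    if is_traverser:
        m = min(k, len(actions))
        return random.sample(actions, m)            # uniform, no replacement
    probs = [policy[a] for a in actions]
    return random.choices(actions, weights=probs, k=1)
```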
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduces two neural networks, one for average regrets and one for average strategy, to (approximately) run CFR algorithm in (large) IIG. New sampling techniques are used to train these networks to represent the resulting strategy.\\n\\nI remember seeing this paper in the last year's OpenReview for ICLR - I was not reviewing the paper.\\nI enjoyed the first version of the paper, but it was missing experiments on larger games and some questions on smaller games were unanswered.\\nIn this version, authors clearly spent a large amount of time (including re-implementing DeepStack!) so that they could compare on large games (namely HU no-limit Poker) and overall greatly improved the paper and evaluation.\\n\\nThe evaluation on small games includes comparison to NFSP/XFP, as well as investigating time/space trade-off.\\nFor the large game, I like that the authors evaluated against an ACPC agent.\\nPrevious work is well cited, and authors have a good overall map of related work (both older results and new papers).\", \"issues\": \"1) One downside of the paper is that it is very close to the \\\"Deep Counterfactual Regret Minimization\\\".\\nWhile authors devote a full paragraph in section 6 to contrast these, the difference is relatively small.\\nI do not think it is fair to dwell too much on this though, since the first version of the paper with this idea originally came *before* DeepCFR publication!\\n\\n2) Since the approach is so similar to DeepCFR, it would be nice to include it in comparison (not just NFSP/XFP).\", \"minor_details\": [\"Page 9: \\\"...which has no limited number of actions, ...\\\" rephrase please, this sounds like the game is infinite.\", \"Page 9: \\\", more abstracted action leads to better strategy...\\\" more abstracted sounds like it is smaller, rephrase please to something like \\\"finer grained abstraction\\\".\", \"Minor frequent grammatical issues, but does not derail from the flow and semantics of the paper.\"], \"conclusion\": \"Overall, the paper introduces method that is interesting to the community, scales to large games and the paper includes comprehensive evaluation section.\\nI believe it should be accepted.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors proposed a double neural counterfactual regret minimization algorithm (DNCFR) that uses a RegretSumNetwork to approximate cumulative regret and an AvgStrategyNetwork to approximate the average strategy. To help the training, the authors use robust sampling and a mini-batch training methods. The main contributions of the paper are: First, the DNCFR algorithm and the recurrent neural network architecture; Second, an efficient sampling and training method; Third, plenty of experiments that corroborate the effectiveness of DNCFR.\\n\\nIt is interesting and meaningful to develop neural-based CFR, in order to eliminate the manual abstraction and apply CFR to large-scale imperfect information games. The authors tested their algorithm on a medium scale game HUNL(1) (with 2 * 10 ^ 8 information sets) and trained a blueprint strategy on large scale game HUNL(2), which is combined with value networks from DeepStack and beats ABS-CFR a lot. It is great to see that DNCFR works on large scale games. However, both HUNL(1) and HUNL(2) are one-round games and it not clear how to combine the blueprint strategy trained by DNCFR with DeepStack. What\\u2019s more, as DNCFR is only effective on first round as the blueprint strategy trainer when played against ABS-CFR, it is more likely that DeepStack beats ABS-CFR, instead of DNCFR beats it. So the result in Figure 7(c) is not so convincing.\\n\\nUnlike tabular CFR that save regrets and strategies for all the information sets or other neural-based algorithms that need large reservoir buffers. It only needs to save data sampled from the most recent iterations, which saves much memory. In fact, this is a bootstrap method borrowed from Reinforcement learning. Though the method save memory and has lower variance than methods that use reservoir buffers, it is bias as it trains the new RSN and ASN based on the output of the old networks. It seems good when the game size is small and the CFR iterations is small. It may needs very large CFR batches and very many gradient descent updates when training on large scale games, in order to control the bias. The results in Figure 7(a) and 7(b) are limited in CFR iterations. Experiments using different gradient descent updates and different CFR batch while given more CFR iterations should be tested, in order to show the effect of the bias training.\\nIn \\u201cAlgorithm 4\\u201d. The calculation of average strategy seems wrong. Because you are using MCCFR, According to \\u201cMonte Carlo sampling and regret minimization for equilibrium computation and decision-making in large extensive form games\\u201d, you may need a method call \\u201cstochastically-weighted averaging\\u201d. It should be noted that the sampling probability of\\neach information set is not equal. You may need to discuss this.\\n\\nThe authors train the network for 2000 updates when the batch size is 256 for Leduc and 100000 for HUNL(1) and HUNL(2) in every CFR iteration (I am not sure how much gradient updates are used in HUNL(2), it is not given). There's quite a lot of updates in every CFR iteration. But it is acceptable when compared to Deep CFR proposed by Brown, which uses 4000 updates and the batch size is 10000.\", \"experiments\": \"1. 
In the ablation studies, the algorithms are tested on small scale game Leduc(5). It is quite a small game that event the size neural parameters is larger than the size of information sets. It is OK but larger games make more sense. Especially in the\\nexperiment of \\u201cIndividual network\\u201d, as this experiment is important to show that\\nDNCFR is comparable to tabular CFR and the bias is acceptable.\\n2. The paper didn\\u2019t show what the learned regret and average strategy looks. If they are\\nshowed, it would be helpful to understand the bias in the bootstrap learning.\\n3. In the part \\u201cIs robust sampling helpful\\u201d, the authors want to show that the robust sampling with k=1 is better than outcome sampling. But I didn\\u2019t find how they set the exploration parameter in outcome sampling and I am afraid that it doesn\\u2019t make sense. Because outcome sampling has a parameter to adjust the exploration. According to \\\"Monte Carlo sampling and regret minimization for equilibrium computation and decision-making in large extensive form games\\\", the best exploration parameter is different in different game, but it is almost sure that totally exploration is not the best setting (it is equivalent to the robust sampling with k = 1).\\n4. In the part \\u201cDo the neural networks generalize to unseen infosets\\u201d. The authors claims that it is true. But the experiment only shows that the neural network don\\u2019t forget\\ninformation sets that trained before.\\n5. In the part \\u201cHow well does DNCFR on larger games\\u201d, the DNCFR is limited to 100\\niterations while is allow to run for 1000 iterations in other experiments. 100 iterations\\nare too few to show the effectiveness of DNCFR on these games.\\n6. The algorithm is tested on HUNL(1) and HUNL(2), which are one round and action- abstracted version of HUNL. But the authors should give more detail description of\\nthese games.\\n7. It is not clear how to combine the blueprint strategy trained by DNCFR with\\nDeepStack, as DeepStack uses continual resolving and don\\u2019t need any blueprint strategy. And it would be interesting if the head-to-head performance of DNCFR agent on large scale games (for example, the FHP with two rounds and more than 1e^9 information sets) is reported, instead of the performance of the agent that combined with DeepStack.\\n8. In section 5.4, \\u201cWhen variance reduction techniques are applied, Figure 7(c)...\\u201d. The authors didn\\u2019t explain why the variance reduction techniques are needed here, but in order to compare the algorithm directly, some other advanced techniques should not be used here.\"}"
]
} |
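As background for the regret-matching and sampling discussion in the DNCFR record above, here is a minimal Python/numpy sketch of (i) the standard regret-matching rule that both tabular CFR and the RegretSumNetwork variant rely on, and (ii) robust sampling with k=1 as the authors describe it (the traverser samples uniformly, the opponent samples on-policy). The function names are illustrative and not taken from the paper.

import numpy as np

def regret_matching(cum_regrets):
    # Map cumulative counterfactual regrets at one infoset to a strategy:
    # play in proportion to positive regret, or uniformly if none is positive.
    pos = np.maximum(cum_regrets, 0.0)
    total = pos.sum()
    if total > 0.0:
        return pos / total
    return np.full(len(cum_regrets), 1.0 / len(cum_regrets))

def sample_action(rng, strategy, is_traverser):
    # Robust sampling with k=1 (per the authors' reply): the traverser
    # samples one action uniformly; the opponent follows her strategy.
    n = len(strategy)
    probs = np.full(n, 1.0 / n) if is_traverser else np.asarray(strategy)
    return rng.choice(n, p=probs)

# Example: cumulative regrets [2, -1, 0] yield the strategy [1, 0, 0].
rng = np.random.default_rng(0)
sigma = regret_matching(np.array([2.0, -1.0, 0.0]))
action = sample_action(rng, sigma, is_traverser=True)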
BJe_z1HFPr | Resizable Neural Networks | [
"Yichen Zhu",
"Xiangyu Zhang",
"Tong Yang",
"Jian Sun"
] |
In this paper, we present a deep convolutional neural network (CNN) which performs arbitrary resize operations on intermediate feature map resolutions at the stage level. Motivated by the weight-sharing mechanism in neural architecture search, where a super-network is trained and sub-networks inherit the weights from the super-network, we present a novel CNN approach. We construct a spatial super-network which consists of multiple sub-networks, where each sub-network is a single-scale network with a unique spatial configuration; the convolutional layers are shared across all sub-networks. Such networks, named Resizable Neural Networks, are equivalent to training infinitely many single-scale networks, but incur no extra computational cost. Moreover, we present a training algorithm such that all sub-networks achieve better performance than their individually trained counterparts. On large-scale ImageNet classification, we demonstrate its effectiveness on various modern network architectures such as MobileNet, ShuffleNet, and ResNet.
To go even further, we present three variants of resizable networks: 1) Resizable as Architecture Search (Resizable-NAS). On ImageNet, Resizable-NAS ResNet-50 attains 0.4% higher accuracy while being 44% smaller than the baseline model. 2) Resizable as Data Augmentation (Resizable-Aug). When we use resizable networks as a data augmentation technique, they obtain superior performance on ImageNet classification, outperforming AutoAugment by 1.2% with ResNet-50. 3) Adaptive Resizable Network (Resizable-Adapt). We introduce adaptive resizable networks as dynamic networks, which further improve performance with less computational cost via data-dependent inference. | [
"imagenet classification",
"resizable networks",
"resizable",
"cnn",
"arbitrary resize operation",
"weight",
"mechanism"
] | Reject | https://openreview.net/pdf?id=BJe_z1HFPr | https://openreview.net/forum?id=BJe_z1HFPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"KzPCNG-WpX",
"NMEreOFQEY",
"HylWQ0t2ir",
"ByxPE3YniH",
"SJeSE9YnjB",
"HJeQ86Ug9r",
"BygNawPTYS",
"Syl6fYR_OH"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576944644234,
1576798727210,
1573850649012,
1573850159230,
1573849645299,
1572003146558,
1571809212127,
1570461973254
],
"note_signatures": [
[
"~Jason_Kuen1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1584/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1584/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1584/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Concerns about the novelty of this work\", \"comment\": \"I have great concerns about the novelty of this work and do not quite agree with the opinions of PCs and reviewers that this work is novel.\\n\\nThe essential idea of this paper is very similar to that of our \\\"Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks\\\" paper published at CVPR 2018. The methods proposed by these two works essentially focus on training multiple sub-networks sharing the weights of a single full network based on different feature map resizing/downsampling configurations. The main objective of these two works is to enable test-time evaluation of different sub-networks (each has a different accuracy-efficiency operating point) obtained from a single model without re-training or fine-tuning. Both also share the identical idea of having separate Batch Normalization statistics for the different sub-networks -- known as Scale-Aware Batch Normalization in this work and Instance-Specific Batch Normalization in Stochastic Downsampling. Despite the strong similarities, this work does not mention or cite our Stochastic Downsampling paper at all.\\n\\nThe major difference between the two works is the \\\"search\\\" space for the feature map resizing/downsampling. Resizable Neural Networks resizes the feature map for each stage by randomly sampling a size (scaling ratio) from a predefined range of sizes. Whereas, Stochastic Downsampling samples a random layer index and a downsampling ratio for resizing the selected layer's feature map. I would say the fair sampling method introduced by this work is quite novel in the context of neural networks with multiple accuracy-efficiency operating points. But the accuracy gains shown in Table 10 (page 18) are apparently neither consistent nor significant.\\n\\nI hope the authors will take the Stochastic Downsampling paper into good consideration and include it as a related work in the future version of this work.\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper offers likely novel schemes for image resizing. The performance improvement is clear. Unfortunately two reviewers find substantial clarity issues in the manuscript after revision, and the AC concurs that this is still an issue. The paper is borderline but given the number of higher ranked papers in the pool is unable to be accepted unfortunately.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Reviewer 3:\\nThanks for your detailed and thoughtful reviews as well as your helpful suggestions.\", \"q1\": \"Regarding writing, grammatical error, typos.\", \"a1\": \"We improved the writing and fixed the typos. We enlarged Figure 2 for readability. As you suggested, we weaken the idea of sub-networks. The link of distillation paper you mentioned is out-dated, we will include this in our reference if you can kindly provide the name of the paper.\", \"q2\": \"Scale up resizable network?\", \"a2\": \"The resizable network can indeed scale up. We did preliminary experiments on scale up ResNet-50 by 4x times larger FLOPs and achieve better performance we reported in EfficientNet paper (Table 3). We will consider your advice and do more experiments to see the possibility of scaling up the resizable network.\", \"q3\": \"More details and ablation study on fair sampling.\", \"a3\": \"We improved writing and included more descriptions of fair sampling. We added an ablation study on fair sampling.\", \"q4\": \"Regarding the scaling factor combination.\", \"a4\": \"We included the choice of scaling factor combination we used in the experiments section. We added a new section in Appendix A to discuss the impact of the choice of scaling factor on our method.\", \"q5\": \"In Section 3.5 I'm not sure what is meant by \\\"It is inefficient to process all images to one-subnetwork, as the algorithm spends equal energy at each sample\\\". I assume energy use is proportional to the number of FLOPS, which in turn depends on spatial resolution.\", \"a5\": \"Yes. We understand the description of energy can be ambiguous; thus, we make a revision on Section 3.5 (Section 4.3 in new revision).\", \"q6\": \"As ImageNet is the only dataset considered in the paper, could you provide error bars?\", \"a6\": \"At this time, we can not provide error bars due to limited time constraints. We will consider your advice to provide error bars and do more experiments on other datasets (other tasks) in the future revision.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks for your detailed and thoughtful reviews. We have addressed all the questions below:\", \"q1\": \"Implementations details and code release.\", \"a1\": \"We understand that in our previous version, several details are missing, and some concepts are hard to understand. Besides improving writing and fixing the typos, we included more details (both method descriptions and experiment settings) in the new revision. We are working on cleaning the code, and we will soon release the code.\", \"q2\": \"Algorithm 1 has \\\"predefined spatial list L.\\\" How to choose it in practice?\", \"a2\": \"We added a subsection in method (Section 3) to define L. We included the choice of L we used in the experiments section. We added a new section in Appendix A to discuss the choice of pre-defined spatial list L.\", \"q3\": \"How to justify a longer training time for resizable network.\", \"a3\": \"Regarding your question on training time, we claim that the resizable network is a single trained network that can scale depending on a FLOPs budget. The ResizableNets indeed need more training time, because it is trained with multiple scale configurations in one network. On the one hand, it is more efficient than training hundreds of individual networks with different feature map scale configuration. On the other hand, if we have a target network and use resizable augmentation to improve generalization in Section 3.4 (Section 4.1 in the new revision), the training time is less or equal to other data augmentation techniques. Moreover, the performance gain of ResizableNet-NAS is not coming from longer training time. We did experiments to increase the training time of ShuffletNetV2, and the result is below:\\nShuffletNetV2 1.5x:\\nBaseline (250 epochs): 72.6% \\n2x longer training time (500 epochs): 72.6%\\n8x longer training time (2000 epochs: 72.6%\\nOur preliminary experiment indicates that simply extend the training time can not improve the performance of the network.\", \"q4\": \"Why Resizable-NAS is better than the all len(L) models separately to find better architectures?\", \"a4\": \"We do not claim that it is better to use Resizable-NAS for finding better architectures compared to training all len(L) models separately. Admittedly, training all models separately and then find the network in these already trained models would be the best option. However, the training cost is overwhelming. Resizable-NAS is more efficient because we train different scale configuration in a single network with shared weights, then search for the target sub-network that satisfy the efficiency constraints.\\nMore importantly, our contribution of Resizable-NAS is not the searching method since we propose to use a lookup table, which is not the most efficient way for searching (other searching methods such as evolution algorithm can be more efficient). Instead, our contribution is that we propose an application scenario for ResizableNet. As far as we are aware, our paper is the first do neural architecture search on **resolution** and showed surprisingly good results.\", \"q5\": \"How the \\\"target model\\\" is selected after the training of Resizable Networks?\", \"a5\": \"The target model is predefined. It depends on which scale configuration you wish to apply the resizable augmentation technique. 
The experiments on resizable augmentation in our paper use the same setting as the baseline methods (all scale factors in the scale configuration is 0.5; s = 0.5).\", \"q6\": \"Why is Resizable-Adapt omitted from Table 2?\", \"a6\": \"The Resizable-Adapt is a dynamic inference method. Since Table 2 compared the performance of different data augmentation techniques, we don't think it is fair to include Resizable-Aug in Table 2.\", \"q7\": \"References are out-dated. For example, there are no \\\"Figure 5.\\\"\", \"a7\": \"We fixed it in the new revision.\", \"q8\": \"Ablation studies for Fair Sampling?\", \"a8\": \"We included ablation studies on Fair Sampling in the Appendix.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thanks for your detailed and thoughtful reviews! We have addressed all the questions below:\\n\\nWe improved the writing and fixed the typos you have mentioned. We reorganized the position of tables and figures in our paper. We added subsection \\\"notation\\\" at the beginning of the experiment section to clarify the different words we used in our experiments. We enlarged the Figure 2 for readability.\", \"q1\": \"why Bilinear sampling was chosen? Could the authors provide a comparison with other sampling methods?\", \"a1\": \"Bilinear sampling is chosen because it can downsample the feature map to any resolution.\\nAt this time, we are unable to finish the comparison of bilinear sampling with other sampling methods due to limited time constraints, but we will finish the experiments and add the ablation study in the final revision.\", \"q2\": \"It is overclaimed that \\\"Most dynamic networks methods sacrifice accuracy in exchange of adaption in inference\\\" in the related work section for dynamic neural networks.\", \"a2\": \"You\\u2019re correct. We revised this section and included the paper [1] in the reference you have mentioned.\", \"q3\": \"How did you find the architecture shown in Figure 3 in the Appendix? What is Xception? Please specify the details.\", \"a3\": \"The Xception is proposed by the paper [2], and the search space we use is described in [3]. We used the search method in paper [4]. We included this reference in our new revision. We did not discuss the details about how we find architecture because it is not the focus of our paper.\", \"q4\": \"Regarding the choice of pre-defined spatial list of L.\", \"a4\": \"We added a subsection in method (Section 3) to define L. We included the choice of L we used in the experiments section. We added a new section in Appendix A to discuss the choice of pre-defined spatial list L.\", \"q5\": \"Unfair comparison with other data augmentation methods in terms of training budgets.\", \"a5\": \"We first note that, for all resizable augmentation methods, we use the pre-defined spatial list of [0.45, 0.64, 0.85, 1.0], thus the len(L) = 4. Moreover, our training epochs is the same as the baseline, e.g., 100 epochs for ResNet-50 with batch size 512.\\n\\nRegarding the training budget, our resizable augmentation spends less or the same training budget compare to other data augmentation methods. Notably, data augmentation methods normally spend longer training time to improve generalization. For comparison, the baseline of ResNet-50 is trained by 100 epochs with batch size 512, AutoAugment [5] spend 21.6x longer training time (270 epochs with batch size 4096) and MixUp [6] spend 4x longer training time (200 epochs with batch size 1024) to train ResNet-50 compared to baseline, according to their paper. Resizable augmentation spend approximately 4x longer training time (100 epochs with batch size 512, but len(L) = 4). Thus the comparison with the other data augmentation methods in terms of training budget is fair.\", \"q6\": \"what ResizeLearner learns and how ResizeLearner selects the optimal sub-network?\", \"a6\": \"We understand that the description in Section 3.5 is ambiguous. Thus, we made a revision on Section 3.5 (Section 4.3 in the new revision). To answer your question, in short, ResizeLearner learns the optimal scale configuration for each input image. 
The input of ResizeLearner is the feature map of an input image (process by the already trained ResizableNet), and the output is a scale configuration for this input. The ResizableNet can make predictions on this input use the scale configuration provided by ResizaLearner, thus make the inference data-dependent.\\n\\n\\n[1] Yu, Jiahui, et al. Universally Slimmable Networks and Improved Training Techniques, In ICCV 2019.\\n[2] Chollet, Fran\\u00e7ois Xception: Deep Learning with Depthwise Separable Convolutions, In CVPR 2017.\\n[3] Guo, Zichao, et al. Single Path One-Shot Neural Architecture Search with Uniform Sampling, https://arxiv.org/abs/1904.00420.pdf\\n[4] VAENAS: Sampling Matters in Neural Architecture Search, https://openreview.net/forum?id=S1xKYJSYwS\\n[5] Cubuk, Ekin D., et al. AutoAugment: Learning Augmentation Policies from Data. In CVPR 2019\\n[6] Zhang, Hongyi, et al. mixup: BEYOND EMPIRICAL RISK MINIMIZATION. In ICLR 2018\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new method that involves multi-scale inputs for each layer that could be used as network architecture search or data agumentation or\\n\\nPros)\\n(+) The idea looks interesting.\\n(+) The experimental results look promising.\\n\\nCons)\\n(-) Many typos when denoting figures and tables. See the minor comments below.\\n(-) I believe the authors could organize the paper better. Tables and figures that are referred in a page are hard to find quickly. I recommend the authors refine the paper again for better readability.\\n(-) Some notations (such as RS-, RS-NAS, and so on) are so vague that hard to follow. \\n(-) I recommend the authors redraw all the figures for clarity. For example, each legend in Figure 2 is hard to take a look at.\\n(-) + the comments below.\\n\\nComments)\\n- When doing feature map resize in terms of the resolution, why Bilinear sampling was chosen? Could the authors provide a comparison with other sampling methods?\\n- In the related work section for dynamic neural networks, the authors claimed that \\\"Most dynamic networks methods sacrifice accuracy in exchange of adaption in inference\\\", but it seems to be quite overclaimed. As shown in the paper [1], one can find that the author presented they could improve both accuracy and efficiency.\\n- How did you find the architecture shown In Figure 3 in the Appendix? What is Xception? Please specify the details.\\n- Designing the pre-defined spatial list of L looks critical, so the authors should describe L in the implementation details. \\n- One of the main problem I think is the training budget issue. According to algorithm 1, the inner loop of \\\"for j=0,..,len(L)\\\", the overall training time will clearly take L times longer than that of the training setting w/o resizable training. Thus, it does not seem to be fair comparison in terms of the training budget. Namely, it seems that the authors compared with the other data augmentation methods which spend much less training budgets.\\n- Hard to grasp Section 3.5 of Adaptive resizable neural network. ResizeLearner looks being attached at the last stage of the original network after the original network is trained, but there is no further information about what ResizeLearner learns and how ResizeLearner selects the optimal sub-network. \\n\\n[1] Universally Slimmable Networks and Improved Training Techniques, https://arxiv.org/pdf/1903.05134.pdf.\\n\\nMinor comments)\\n- Wrong section and figure references:\\n - 'It also mitigates the co-adaptation issue which we will discuss in Section 3.3'(indeed it is Section 3.4), 'The network architecture along with feature map resolution and channels number are shown in Figure 4' (it should be Figure 3).\\n - - Figure 3(d) referred to in section 4.4 would be Figure 2(d) indeed.\\n\\nAbout rating)\\nThe authors provided a novel technique about the resizable approach and the experimental results look promising. However, the paper needs to be revised and looks like it does not ready to be published now. If the authors could revise the paper and concern my comments well, I would increase my rating.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes Resizable Neural Networks, which trains networks with different resolution scalings at the same time with shared weights. It serves as data-augmentation and improves accuracy over base networks. Additionally, the same technique can perform an architecture search. Experimental results show significant accuracy gains.\\n\\nThe reported accuracy gains are substantial. The proposed method is potentially useful in many applications. However, several details are missing or hard to understand. Without additional descriptions, it is not straightforward to implement the method. Thus, I suggest for rejection. The score might be raised depending on updates and code release.\", \"major_comments\": \"1) I do not understand how the training times of random-sampling in Appendix C were estimated.\\n2) Concerning Table 11, how was the result when we randomly select each scale and train network with the same number of epochs with fair-sampling? If there were no much difference with the result in Table 11, it seems fair sampling does not have clear advantages over random sampling. If so, I suggest to alter fiar-sampling by random-sampling and reduce the complexity of the proposed method.\", \"minor_comments\": \"1) Table 10 has two captions.\\n2) I did not understand that L is a list of possible scaling factors (scalar) in the initial review. I guess it is partially because I did not understand why the number of sampling should be len(L) for each mini-batch (it is actually due to fair-sampling). I think adding some remarks on it will help to make the pseudo-code easier to understand. (nit: for j= 0, ..., len(L) should be for j=0, ..., len(L) -1 or for j=1, ..., len(L))\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"In a standard multi-stage neural network such as a ResNet, the resize operation between stages typically reduces the spatial resolution by 0.5. (input ---> stage 1---> 0.5 reduction ---> stage 2 --> 0.5 reduction ---> ....). In this paper the authors apply a variety of reduction factors in between these stages for different training examples (input ---> stage 1---> variable reduction ---> stage 2 --> variable reduction ---> ....). They demonstrate that simply training in this way is a powerful form of augmentation. It also produces a network which may be scaled depending on a FLOP budget.\\n\\nI think this is a really neat idea, and as far as I'm aware it is novel. It is similar in spirit to EfficientNet, although more flexible. The experimental results are good. However, the paper is let down by poor writing and a lack of detail.\\n\\nThe paper needs a rewrite as there are many grammatical errors, which cause a bad impression:\\n\\n- \\\"performs arbitrary resize operation\\\" -> \\\"performs arbitrary resize operations\\\"\\n- \\\"Scale is the fundamental component of the physical world\\\" ---> *a* fundamental component!\\n- \\\"i.e the acuracy of NISP\\\" --> e.g.\\n- \\\"compare with baseline\\\" -> \\\"compared to the baseline\\\"\\n- There are several instances of a space missed out between a letter and an open bracket \\n- \\\"The Figure 2(a)\\\" --> \\\"Figure 2(a)\\\"\\n\\nThe comparison to weight-sharing NAS methods is unnecessary. Those entail searching for architectures and sharing weights through the search process, whereas in this work it is having a network that can take different sized inputs at different stages. On that note, it doesn't feel right to me to refer to there being many \\\"subnetworks\\\". What there really is, is just a single network that is robust to multi-scaled inputs (which is a good thing!)\\n\\nFigure 1 is nice!\\n\\nEfficientNet-B0 is a base-network that can be scaled up with a compound scaling approach found through grid search. What happens if you scale up your Resizeable net to the same FLOPs as e.g. EfficientNet-B7?\\n\\nI like the old-school vision citations, although referring to object detection makes me wonder why there are no experiments on it. For distillation, I recommend you cite https://arxiv.org/abs/1312.6184, as the Hinton paper is really just an extension of this.\\n\\nThe fair sampling seems important to the method. Could a detailed explanation be included in the appendix? Do you given the performance as a result of naive sampling? What do you mean by \\\"Some convolutional layers can go through more training issues than others\\\"? \\n\\nA big problem in this paper is that (as far as I can tell) the scaling factors considered aren't given (but we are told that they lie between 0 and 1). It isn't possible to sample these arbitrarily as you indicate that a batch-norm layer is needed for each one, so these must be discrete (because of this, it isn't correct to say that you have infinite networks contained within). It therefore isn't clear e.g. in Table 1 which permutation of scaling factors were used in upsampling the networks. Are the results given representative of the best-case selection of these scaling? I hope this isn't the case. 
I am assuming it is uniform scaling in the case of the \\\"individually trained counterparts\\\". It would be interesting to know more generally which combinations worked well.\\n\\nIn Section 3.5 I'm not sure what is meant by \\\"It is inefficient to process all images to one-subnetwork, as the algorithm spends equal energy at each sample\\\". I assume energy use is proportional to the number of FLOPS, which in turn depends on spatial resolution.\\n\\nThe legend in Figure 2 is close-to unreadable and needs changing.\\n\\nThe results are impressive, but error bars would be appreciated if possible. As ImageNet is the only dataset considered, this would give some needed clout. \\n\\nPros\\n-------\\n\\n- Nice, novel method\\n- Good experimental results\\n\\nCons\\n--------\\n\\n- Paper is poorly written\\n- Very few details of the scaling factor variations\\n- Only one dataset considered\\n\\nAlthough the paper is written badly and the narrative is muddled, the underlying idea is a nice one, which is executed well experimentally. Because of this I would like to recommend a Weak Accept, subject to the authors (i) doing a rewrite and (ii) including more information regarding the scaling permutations.\"}"
]
} |
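To make the mechanics discussed in the Resizable Neural Networks reviews above more concrete, here is a minimal PyTorch sketch of a stage-level resize with per-scale BatchNorm, plus a fair-sampling-style inner loop over all len(L) scales per mini-batch (our reading of Algorithm 1 as the reviews describe it). This is not the authors' code: the scale list [0.45, 0.64, 0.85, 1.0] is taken from the author responses, and `head` / `loss_fn` are hypothetical placeholders, where the head is assumed to end in global average pooling so that it accepts variable spatial sizes.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResizableStage(nn.Module):
    # One stage whose input is bilinearly resized by a factor drawn from a
    # predefined list L; the convolution weights are shared across scales,
    # while each scale keeps its own BatchNorm statistics.
    def __init__(self, c_in, c_out, scales=(0.45, 0.64, 0.85, 1.0)):
        super().__init__()
        self.scales = list(scales)
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1, bias=False)
        self.bns = nn.ModuleList(nn.BatchNorm2d(c_out) for _ in self.scales)

    def forward(self, x, scale_idx):
        x = F.interpolate(x, scale_factor=self.scales[scale_idx],
                          mode="bilinear", align_corners=False)
        return F.relu(self.bns[scale_idx](self.conv(x)))

def train_step(stage, head, optimizer, x, y, loss_fn):
    # Fair-sampling flavour: pass every mini-batch through all len(L)
    # scale choices, accumulating gradients, before one optimizer step.
    optimizer.zero_grad()
    for i in range(len(stage.scales)):
        loss_fn(head(stage(x, i)), y).backward()
    optimizer.step()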
BkeDGJBKvB | Multitask Soft Option Learning | [
"Maximilian Igl",
"Andrew Gambardella",
"Jinke He",
"Nantas Nardelli",
"N. Siddharth",
"Wendelin Böhmer",
"Shimon Whiteson"
] | We present Multitask Soft Option Learning (MSOL), a hierarchical multi-task framework based on Planning-as-Inference. MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior. The learned soft-options are temporally extended, allowing a higher-level master policy to train faster on new tasks by making decisions with lower frequency. Additionally, MSOL allows fine-tuning of soft-options for new tasks without unlearning previously useful behavior, and avoids problems with local minima in multitask training. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines in challenging multi-task environments. | [
"Hierarchical Reinforcement Learning",
"Reinforcement Learning",
"Control as Inference",
"Options",
"Multitask Learning"
] | Reject | https://openreview.net/pdf?id=BkeDGJBKvB | https://openreview.net/forum?id=BkeDGJBKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"V9jLYhsgcK",
"Bkgbl8_niH",
"BJeiYdShiB",
"BJx8fVtqiH",
"HJxXf8mOjB",
"HkgLRS7dir",
"H1li36yCYS",
"S1xF8IXHtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798727179,
1573844457492,
1573832835503,
1573717005899,
1573561867393,
1573561806092,
1571843507156,
1571268176893
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1583/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1583/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1583/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1583/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1583/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1583/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1583/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": [\"Apologies for only receiving two reviews. R2 gave a WR and R3 gave an A. Given the lack of 3rd review and split nature of the scores, the AC has closely scrutinized the paper/reviews/comments/rebuttal. Thoughts:\", \"Paper is on interesting topic.\", \"AC agrees with R2's concern about the evaluation not using more complex environments like Mujoco. Without evaluation on a standard benchmark, it is difficult to know objectively if the approach works.\", \"AC agrees with authors that the DISTRAL approach forms a strong baseline.\", \"Nevertheless, the experiments aren't super compelling either.\", \"AC has some concerns about scaling issues w.r.t. model size & #tasks.\", \"The paper is very borderline, but the AC sides with R2's concerns and unfortunately feels the paper cannot be accepted without a stronger evaluation. With this, it would make a compelling paper.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"Question\", \"comment\": \"Thank you for your comments.\\n\\n>Nonetheless, while we agree in principle with your observation that additional high-dimensional experiments would further strengthen our claims, we observe that practically, the amount of compute required to be able to do is beyond our abilities.\\n\\nWould it be possible to comment on how much time does one run of the algorithm takes in an environment like Ant-Gather for example? I am curious to understand how hard is it to train with the proposed approach.\"}",
"{\"title\": \"Thank you for your prompt reply\", \"comment\": \"Thank you for your prompt reply and for valuing the contribution of our work.\\n\\nWe would first like to note that on Swimmer MSOL performs comparably with Distral, which is itself a strong multitask baseline.\\nFurthermore, we argue that hierarchical methods such as ours are best suited to domains that exhibit a strong compositional structure, like the Taxi domain does. And while Swimmer is indeed a higher dimensional domain, it does not exhibit such compositional structure. As a consequence, comparable performance against Distral is a positive mark of MSOL's general utility with higher-dimensional tasks, with the expectation that where compositional structure may be leveraged, it can do so (cf. Taxi experiments).\\n\\nNonetheless, while we agree in principle with your observation that additional high-dimensional experiments would further strengthen our claims, we observe that practically, the amount of compute required to be able to do is beyond our abilities. Note that it is not simply a case of being able to run a new environment, but to be able to control for all the confounds in the form of hyperparameters and general variance in RL experiments, to be able to properly assign credit/blame to our proposed changes. Particularly, compositional tasks when combined with continuous control, tend to become extremely complex, requiring a lot of resources to train---as in Ant-Gather, for example.\\n\\nAltogether, we believe that the experiments we perform provide good evidence for the suitability and value of MSOL for multitask learning, and hope that the lack of more complex experiments, which were not possible due to resource constraints, do not adversely affect the value of our work.\"}",
"{\"title\": \"Follow up comments\", \"comment\": \"Thank you for the clarifications.\\n\\n>Fairness of comparison:\\nThank you, that is useful. \\n\\n>Our goal with the Swimmer environment was to show that our algorithm is applicable to continuous tasks. Applying the algorithm in practice to much more complex multi-task domains in MuJoCo was infeasible given our limited computational resources.\\nConsidering swimmer is the only high dimensional experiment, and that the performance in swimmer is not very convincing; I find it hard to convince myself that the approach fairs well in high dimensional domains. It would add a lot of value to this work to validate the proposed approach in other Mujoco tasks, 3D navigation environments. As presented, it does but only in Taxi domain and marginally in Swimmer. \\n\\n>learning good options and termination functions, robustly, just from a set of tasks is already a significant enough contribution.\\nI agree that the paper presents a useful contribution. However the experiments are limited and therefore other high-dimensional environments could add value and strengthen the contributions further.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for the detailed reading of our work and your feedback. We hope we are able to address your comments below and welcome additional questions or comments.\", \"equation_6\": \"The incentive to deviate from the option-prior comes from the first term in equation 6 (which we didn\\u2019t assign a number to), the reward $r_i$. \\nSo overall, the posterior tries to optimize the reward (which might try to pull the posterior away from the prior) while deviating from the three priors as little as possible, which induces the various effects for terms 1,2,3.\\n\\nEqual value of $\\\\beta$ for terms 1,2,3:\\nWe chose to restrict ourselves to the same value of $\\\\beta$ for all three terms to show that our algorithm works well even in this restricted case.\\nHowever, you are right that this is restrictive and it is quite likely that it is possible to further improve the results of MSOL by fine-tuning all three values separately. However, this would make the comparison to Distral less fair as it only has one regularization hyperparameter. \\nConsequently, we believe that showing superior performance for equal values of $\\\\beta$ is a stronger argument in favor of MSOL.\", \"xi_in_term_3\": \"Yes, thank you!\\n\\nDelta(z_t-z_{t-1}):\\nThank you for pointing this out; we included a definition in the text.\\nSince we assume $z_t$ to be discrete, the delta here has the meaning of a Kronecker delta.\\nFor continuous $z_t$, it would correspond to the Dirac Delta distribution. \\nIn both cases we mean to express that $z_t=z_{t-1}$ with probability 1. In other words, if $b_t=0$, i.e. when the option doesn\\u2019t terminate, we don\\u2019t change the active option.\", \"typo\": \"Thank you for pointing this out.\\n\\nAppendix C.1:\\nWe believe you are referring to equation 9? \\nWe chose not to include the superscript dash here because we are using $A_{\\\\phi_i}$ twice, once in $\\\\mathcal{L}_A$ in which case it is indeed assumed constant. We are also using it in $\\\\mathcal{L}_V$, in which case it is not constant as it is optimized w.r.t $V_{\\\\phi_i}(s_t, z_{t-1})$, i.e. the last term in equation 9.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your detailed review and feedback. We welcome any further comments and questions.\", \"scalability\": \"We choose to use one network per task, but this is not strictly necessary if one wanted to scale to more tasks. For example, in the non-hierarchical case with prior/posterior networks, Galashov et al. use just one posterior network, providing task information as additional input. Their findings could be applied to our hierarchical approach. \\nFurthermore, we show that the derived loss function incentivises specialized options during training, i.e. options for which the prior and posterior are the same. Consequently, given sufficient options, eventually one can forget the posteriors as the option priors should be good enough to solve the tasks encountered so far.\", \"fairness_of_comparison\": \"Both MSOL and Distral use one network per task so this comparison is fair.\\nWe also included MSOL(frozen) which, once the prior options are trained, only uses this one network without further adaptation for transfer, allowing a fair comparison to MLSH, which it still outperforms.\", \"environmental_complexity_and_additional_baselines\": \"One key goal of MSOL is to find good (i.e. transferable) options and terminations without any additional human-designed prior knowledge regarding subgoals. This is challenging because during option-learning the algorithm not only needs to solve each task, but needs to find a way to segment tasks into reusable sub-tasks - all without additional human information. This is a main difference to many previous algorithms which receive additional information, for example in the form of landmarks, policy sketches or designated sub-tasks for each option. \\nOur goal with the Swimmer environment was to show that our algorithm is applicable to continuous tasks. Applying the algorithm in practice to much more complex multi-task domains in MuJoCo was infeasible given our limited computational resources. Hierarchical approaches applied to such settings often receive additional prior information (like sub-tasks or goals per option as in Tessler et. al) or are not constrained to learn transferable options (like in Option Critic).\", \"option_initiation_sets\": \"We agree that learning to restrict the initiation sets of options is an interesting avenue for future research, but believe that learning good options and termination functions, robustly, just from a set of tasks is already a significant enough contribution.\", \"intra_option_and_termination_learning\": \"We agree and have rephrased this sentence. We have also extended the related work section. The main difference to our work is that those approaches do not rely on a multitask setting to find good termination functions, but on other ideas, like landmarks, bottleneck states or predictability.\", \"term_1_being_nonzero\": \"When b_t=0, i.e. when we do not terminate, both the prior p^H and the posterior q^H are, by definition, delta(z_t-z_{t-1}), i.e. both assign a probability of 1 to the last active option. 
Consequently, in this case the fraction becomes one and we have for the term: beta * ln 1 = 0\\nOn the other hand, if b_t=1, then the prior is uniform and the posterior is the learned posterior, leading in general to a non-zero term.\", \"comparison_to_distral\": \"\", \"final_performance\": \"We would like to note that because Distal, similarly to MSOL, learns a separate posterior policy which is regularized against the prior, it will always ultimately achieve optimal performance for weak enough regularization. So it is to be expected that both MSOL and Distral achieve the same final performance.\", \"our_experiments_show_two_things\": \"For learning a hierarchy, we have a more robust optimization algorithm than MLSH\\nLearning a hierarchy (compared to a flat prior as in Distral or compared to a heuristic hierarchy as in Distral+Action) is useful because it can accelerate learning.\\n\\nWe intentionally included the non-directional Taxi domain to show in which cases a simpler architecture (here Distral+Action) is sufficient for optimal transfer, so its strong performance is expected.\\nThe main difference of this domain is that here, passing the last action is sufficiently informative to predict the likely future behavior. It is like walking in a corridor with only one door at either end: Knowing which direction we are walking in carries the same amount of information as knowing which door we want to walk towards. \\nHowever, in directional Taxi, knowing the last action is not as informative (because it might involve a rotation) and in Moving Bandits, one could infer the intended goal from the last action if one takes the goal positions into account. However, we found that Distral+Action was unable to learn this more complex relationship. In those cases a learned hierarchy in which options carry learned semantics outperforms Distral+Action in terms of transfer speed.\", \"related_literature\": \"Thank you for the pointers to additional literature. We have included them in the related work section.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThe authors propose a method for learning hierarchical policies in a multi-task setting built on options and casts the concepts into the Planning as Inference framework. They claim that the method brings several advantages such as:\\n-Stabilize end-to-end mtl,\\n-Leads to coordination b/w master policies,\\n-Allows fine-tuning of options\\n-Learn termination policies naturally\\n\\nThe proposed approach derives the reward Eq.6 by extending the graph to options for Levine\\u2019s tutorial. Eq.6 is simply the extension of the reward of maximum entropy RL to the options framework. The ideas presented in the paper are interesting, but I have concerns about the scalability of such an approach. Please see the detailed comments below. Additionally, please note that although I have marked a weak reject, I am open to adjusting my score if the rebuttal addresses enough issues.\", \"detailed_comments\": \"A primary weakness of this approach is that it seems like there is one network that learns the options and is shared across all task (that would be the prior) and then there is a task-specific network for all options (posterior), wouldn\\u2019t this be very difficult to scale if we want to learn reusable options over the lifetime of an agent? If there are n tasks, do you need to use n different networks? \\n\\nThe authors assume that all options are present everywhere i.e. I \\u2286 S. I think the work could benefit from removing this assumption.\\n\\nThe authors mention that unlike (Frans et al., 2018), they learn both intra-option and termination policies: there is definitely more work that aims to learn both the skill and its termination. It would be more complete to cite additional references here that learn both of these or rephrase this sentence. \\n\\nIt does not seem clear why \\u201cterm 1 of the regularization is only nonzero whenever we terminate an option and the master policy needs to select another to execute.\\u201d This doesn\\u2019t seem true as this is a ratio of the two probabilities and not just the instantiation of the random variable. \\n\\nThe results in moving bandits alone are very convincing. However, in Taxi (2b) distral+action seems to be as good as/even better MSOL. In directional Taxi (2c) Distral(+action) manages to reach the same final performance (if we care about that), can you please comment on this.\\n\\nSome parts of the experiments section does not seem clear to me, Does the proposed approach use a network per task? if yes, then it is obvious that their method could improve over learning on 12 tasks with one set of network. Please clarify. \\n\\nOne major concern is that the only high dimensional experiment is a swimmer and it is not immediately clear how much do we gain there. Distral is relatively closer in performance to both MSOL and MSOL frozen. I would recommend evaluation in a variety of high-dimensional domains such as other instances in Mujoco, and visual domains. In particular, the proposed ideas would make a stronger case if the baselines included other multitask hierarchical agents such as [4] for example. A discussion including some of the missing relevant related multi-task literature would also be helpful [1,2,4,5,6].\\n\\n[1] Mann, T. 
A., Mannor, S., & Precup, D. (2015). Approximate value iteration with temporally extended actions. JAIR, 53, 375-438.\\n[2] Konidaris, G., & Barto, A. G. (2007). Building Portable Options: Skill Transfer in Reinforcement Learning. In IJCAI,\\n[3] Andreas, J., Klein, D., & Levine, S. (2017). Modular multitask reinforcement learning with policy sketches. ICML\\n[4] Tessler, Chen, et al. \\\"A deep hierarchical approach to lifelong learning in minecraft.\\\" Thirty-First AAAI Conference on Artificial Intelligence. 2017.\\n[5] Ammar, Haitham Bou, et al. \\\"Online multi-task learning for policy gradient methods.\\\" International Conference on Machine Learning. 2014.\\n[6] Mankowitz, Daniel J., Timothy A. Mann, and Shie Mannor. \\\"Adaptive skills adaptive partitions (ASAP).\\\" Advances in Neural Information Processing Systems. 2016.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper is about learning hierarchical multitask policies over options. The hierarchical prior policy is shared amongst all tasks, while the hierarchical posterior policies are allowed to adapt to specific tasks. Once the prior is learned, it is fixed. The parameters of the posterior policies are adjusted via an adapted version of A2C.\\n\\nI liked the flow and the organization of this paper.\", \"comments_and_questions\": [\"I see that term 1 and term 3 of equation (6) working together to ensure some kind of exploration and exploitation. Term 2 controls how the option posterior deviates from the prior. However, when the ratio is 1 or less than 1, the value of (6) would increase, and both cases would have made the posterior more like the prior. There seems to be no other term that incentivizes the option posterior to deviate, and I do not see how the options are adapting to tasks.\", \"The term 1,2,3 in (6) are weighted equally by beta and cannot be fine-tuned to desired trade-offs.\", \"Should there be \\\\xi(i) multiplying the last term in (3)?\", \"What does delta(z_t - z_{t-1}) mean in section 3.1? It was not defined anywhere.\"], \"misspelling_and_typos\": [\"page 5: optimized is misspelled in \\\"Details on how we optimiszed...\\\"\", \"Appendix C.1: should there not be a superscript dash on A_{\\\\pi_i} since the superscript dash carries the meaning that the term is a constant.\"]}"
]
} |
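For readers following the term-1/term-2/term-3 exchange in the MSOL record above, here is a schematic LaTeX reconstruction of the regularized reward under discussion. The paper's exact Equation 6 may differ (the superscripts H/L/T and the conditioning sets below are our guesses), but the structure, a per-task reward minus beta-weighted log-ratios between the task-specific posteriors q_i and the shared priors p for the master, intra-option, and termination policies, follows the authors' replies:

\tilde{r}_i(s_t, a_t) \;=\; r_i(s_t, a_t) \;-\; \beta \Big[
    \underbrace{\ln \frac{q_i^{H}(z_t \mid s_t, z_{t-1})}{p^{H}(z_t \mid s_t, z_{t-1})}}_{\text{term 1}}
  + \underbrace{\ln \frac{q_i^{L}(a_t \mid s_t, z_t)}{p^{L}(a_t \mid s_t, z_t)}}_{\text{term 2}}
  + \underbrace{\ln \frac{q_i^{T}(b_t \mid s_t, z_{t-1})}{p^{T}(b_t \mid s_t, z_{t-1})}}_{\text{term 3}}
\Big]

In particular, when b_t = 0 both q_i^H and p^H equal \delta(z_t - z_{t-1}) by construction, so term 1 contributes \beta \ln 1 = 0; it is nonzero only at termination steps (b_t = 1), where the prior over options is uniform. This is exactly the argument in the authors' reply to Reviewer 2.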
HklvMJSYPB | Adaptive Adversarial Imitation Learning | [
"Yiren Lu",
"Jonathan Tompson",
"Sergey Levine"
] | We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive policies that can be transferred between environments of varying dynamics, by imitating a small number of demonstrations collected from a single source domain. This problem is important in robotic learning because in real world scenarios 1) reward functions are hard to obtain, 2) learned policies from one domain are difficult to deploy in another due to varying source to target domain statistics, 3) collecting expert demonstrations in multiple environments where the dynamics are known and controlled is often infeasible. We address these constraints by building upon recent advances in adversarial imitation learning; we condition our policy on a learned dynamics embedding and we employ a domain-adversarial loss to learn a dynamics-invariant discriminator. The effectiveness of our method is demonstrated on simulated control tasks with varying environment dynamics and the learned adaptive agent outperforms several recent baselines. | [
"Imitation Learning",
"Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=HklvMJSYPB | https://openreview.net/forum?id=HklvMJSYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fjSJ9keA-l",
"BkeFzAE2jH",
"HJgnzoNosB",
"SJg7htEioB",
"ByeGcKNsor",
"SkewZK4jiB",
"rkgA3d4oiB",
"SygreDEosB",
"BJlqXqoptB",
"SkxqREopYS",
"BkgSu9lhYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727149,
1573830160730,
1573763860290,
1573763499006,
1573763466512,
1573763327114,
1573763253798,
1573762796969,
1571826210514,
1571824849663,
1571715692818
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1582/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1582/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1582/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1582/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper extends adversarial imitation learning to an adaptive setting where environment dynamics change frequently. The authors propose a novel approach with pragmatic design choices to address the challenges that arise in this setting. Several questions and requests for clarification were addressed during the reviewing phase. The paper remains borderline after the rebuttal. Remaining concerns include the size of the algorithmic or conceptual contribution of the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the revision\", \"comment\": \"I think that the presentation has been improved noticeably. I need to take a closer look on the current submission to make up my mind on whether to argue for acceptance or not. Until then, I'll keep the initial rating.\"}",
"{\"title\": \"Summary of changes to the paper\", \"comment\": \"We would like to thank the reviewers for their time and valuable feedback. We have made the following changes to the manuscript based on the reviews:\\n\\n1. Added vanilla GAIL and state-only GAIL experiments on HalfCheetah in the Appendix (Figure 9) [As suggested by R1]\\n2. Added a reference to conditional VAE (Sohn et al., 2015) in Section 3.6 and a reference to contextual policy search (Deisenroth et al., 2013) in Introduction paragraph 3. [As suggested by R1]\\n3. Added references to previous work of KL-based contrastive loss (Davis et al. 2007 and Hsu & Kira 2015) in Section 3.6\\n4. Shortened the description of GRL in Section 3.4 [As suggested by R3]\\n5. Revised Figure 1 and Figure 2\\n6. Updated term \\u201cdynamics learning\\u201d to \\u201cdynamics latent variables learning\\u201d [As suggested by R2]\\n7. Updated all typos found [As suggested by R1 and R3]\"}",
"{\"title\": \"Response to Reviewer #3 (Cont'd)\", \"comment\": \"\\u201cFigure 7 seems to imply that the method doesn\\u2019t generalize well to unseen environments, if enough environments can\\u2019t be sampled during training time\\u201d\\n\\nWe would like to clarify that the demonstrations in Figure 7 are collected from within the blackout area, as discussed in Section 4.2. Furthermore, the policy is not executed in this region at training time. This experiment demonstrates that ADAIL can generalize without training on these environments with minimal drop in performance when provided with the ground truth dynamics parameters. Figure 7 also shows that the posterior prediction is crucial when evaluated in unseen environment (comparing Figure 7a and Figure 7b). This is the main reason why we witnessed some degraded performance.\\n\\n\\u201cFigure 8. Friction value should not go from -3 to 3. \\u201c \\n\\nThanks for pointing it out! It is a typo. We have updated the values from 0 to 2 in the updated version of the draft.\\n\\n\\u201cAlso, this single result inspires no confidence in the benefit or general applicability of the VAE-based context prediction.\\u201d\\n\\nThe effectiveness of ADAIL is demonstrated in the experiment section given that we are provided with informative dynamics parameters (either ground truth or predicted) at eval time. We included an experiment of VAE-ADAIL as a proof-of-concept to demonstrate the unsupervised method. We will make sure to add more experiments with the VAE-ADAIL in the next version of the paper.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We would like to thank you for the valuable feedback! Below we address the raised concerns and answer reviewer questions:\\n\\n1a) Third person imitation learning (TPIL) Stadie et al. 2017 and our work both employed a gradient reversal layer (GRL). We explained the differences between Stadie et al. 2017 and our work in the related work section: 1) Third person imitation learning (TPIL) paper only considers two domains: a source domain where the demonstrations are collected and a target domain where the policy is evaluated, whereas, we consider learning an adaptive policy that is transferable to a class of domains with different dynamics. Compared to TPIL, this is a substantially more challenging setting, with relaxed assumptions and consequently greater potential for real robotic applications. 2) in addition, we employed a dynamics posterior to actively predict the latent dynamics parameters at each time step during evaluation making the policy agnostic to online dynamics perturbations, which is impossible with TPIL. \\n\\nWe would also like to do a comparative analysis with TPIL in the next version of the paper.\\n\\n\\u201cIn particular, what is the new element in Figure 1. and complete Section 3.4?\\u201d\\n\\nWe use Figure 1 to illustrate our discriminator architecture. In retrospect, we agree with R3 that the description of GRL might be redundant. We have shortened our description of domain adversarial loss in Section 3.4 in the updated version of the manuscript.\\n\\n1b) Conditioning the policy on a contextual variable is indeed a common idea that is shared in the literature. Our novelty, however, lies in considering a new problem formulation (which was not explored before), and unifying a number of technical solutions for solving the problem as highlighted by R1.\\n\\nThank you for making the connections with DIAYN (Eysenbach et al. 2018), and InfoGAIL (Li et al. 2017). However, we believe that DIAYN and InfoGAIL are solving different problems. We compare the problem formulations of DIAYN, and InfoGAIL with our work to further illustrate the argument:\\n\\nDiversity is all you need (DIAYN) Eysenbach et al. 2018 seeks to pretrain diversified skills without a given reward function by training a discriminator to discriminate different skills under the same dynamics. DIAYN uses maximum entropy policy to diversify the learned skills. The paper briefly mentioned learning from a few demonstrations of different skills in experiment section. Our method, however, learns an adaptive policy that is transferable under different dynamics.\\n\\nInfoGAIL Li et al. 2017 considered a setting where the dynamics does not change, and a mixture of demonstrations are collected from multiple experts that exhibit behavioral variations and/or try to achieve different goals. The InfoGAIL algorithm is not suitable for learning transferable policies under domain shifts. Our method, while sharing some architectural similarities to infoGAIL, on the other hand learns an adaptive policy that tries to achieve the same goal under different dynamics with demonstrations collected from a single source domain. \\n\\n\\n2 \\u201cThis is a severe requirement for real-word scenarios, ...\\u201d\\n\\nWe respectfully disagree that sampling environments from a distribution is an unrealistic assumption. We believe the ability to configure environments with different dynamics is a necessity for learning transferable policies under a model-free RL setting. 
Previous methods that try to learn transferable policies (Tobin et al., 2017; Sadeghi & Levine, 2016; Mandlekar et al., 2017; Tan et al., 2018; Pinto et al., 2017; Peng et al., 2018; Chebotar et al., 2018; Rajeswaran et al., 2016) are also reliant on the assumption of sampling environments from a distribution or the ability to perturb the environment dynamics; these authors either instrument the environment or policy (e.g. torque applied, slope of the ground for locomotion tasks, etc.) in order to control environment parameters during policy training, or they model parameter variation explicitly.\\n\\nAs is standard in imitation learning, we do not assume that the parameterization of the optimal policy is known. Likewise, we assume that we cannot control the environment parameters of the expert demonstration, and that the expert rollouts come from a single source environment. To the best of our knowledge, this is the first work to make this relaxed assumption. We believe that this relaxed assumption makes our proposed method more applicable to real robotic learning scenarios than much of the relevant literature.\"}",
"{\"title\": \"Response to Reviewer #1 (Cont'd)\", \"comment\": \"\\u201cComparison: How about IRL, e.g. AIRL? What about state-only GAIL?\\u201d\\n\\nWe have appended an experiment using state-only GAIL on HalfCheetah in the updated version of the paper (as updated in Figure 9). State-only GAIL-rand achieves similar performance as GAIL-rand, whereas it exhibits less variance across domains. However, ADAIL still significantly outperforms State-only GAIL-rand. State-only GAIL is a nice baseline method to have, as we have seen a handful of papers in the imitation learning literature that are based on state-only observations (despite a lack of theoretical guarantees for such a regime). It is also reasonable to use AIRL as a baseline as it claims to recover the true reward function, which might be transferable across domains. \\n\\nWe will update the experimentation section with additional baselines (State-only GAIL in additional environments, and AIRL) on all environments after the rebuttal phase as it requires some additional time for us to obtain and consolidate new experimental results beyond the timeline of discussions.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We would like to thank R1 for the thorough review and insightful feedback!\\n\\nRegarding the contribution, we first thank R1 for recognizing our contributions and novelty. We also believe that this is an important problem formulation for imitation learning.\\n\\n\\u201c... not convinced that explicitly modeling the dynamic changes is necessary\\u201d\\n\\nAs shown experimentally, the dynamics posterior is particularly useful when there are large domain shifts. Figure 5 (a) illustrates this scenario, where the direction of the force is negated in some of the test domains. ADAIL is able to cover both the case of non-directional domain shift and directional domain shift. As a comparison, GAIL with dynamics randomization fails in this setting.\\n\\n\\u201cSome existing imitation learning methods focus on a setting where the dynamics of the expert may differ from the agent, but the dynamics of agents do not change. ...\\u201d\\n\\nOur problem formulation covers the suggested setting, where the dynamics in the target domain does not change at eval time. ADAIL is also effective under online dynamics perturbations. This makes it applicable to real-world robot learning.\\n\\n\\u201c.. the last term should in my opinion not depend on theta.\\u201d \\u201cAccording to line 10 of the algorithm box only the current trajectory is used for updating the dynamics posterior. Why?\\u201d\\n\\nSorry for the confusing notation. We will update this in the next paper version. By including theta we meant that the posterior training is on-policy, which makes it dependent on theta (since the environment samples it is trained on is dependent on theta). In practice the posterior could be trained off-policy with sampled rollouts, however, this might either require an additional reply buffer or requires additional rollouts after the policy training converges. Both amount to implementation complexity. We found on-policy posterior training effective in our test scenarios, hence the design decision. \\n\\n\\u201cThe contrastive regularization loss needs to be better motivated\\u201d\\n\\nThank you for bringing it to our attention and we will add additional clarifying text to the next paper revision. The motivation for the contrastive loss is to introduce additional supervision in order to improve the robustness of the latent posterior. We find that without it, the posterior value is prone to drifting during the course of a rollout (especially for frame transitions where the dynamics are difficult to infer). The addition of this contrastive term adds the simple prior that all transitions within a single rollout have the same posterior value. This matches the fact that we sample the environment dynamics parameters once during a single rollout, and then the parameters are constant throughout. Note that in a real-world robotics application, an extension of this simple loss term would be to ensure that adjacent posterior estimates are similar (rather than constant over an entire trajectory) as it is expected that real-world dynamics exhibit high temporal coherence.\\n\\n\\u201cIf a similar regularizer has been used in prior work, such work should be referenced\\u201d\\n\\nWe agree that metric loss is commonly used, and KL-like distance measure is also not unprecedented in metric learning ([1], [2]). We have added references to the papers in the updated draft. 
However, to the best of our knowledge, we are the first to use a KL-based metric loss to learn dynamics embeddings, especially in the context of imitation learning. \\n\\n[1] Davis, Jason V., et al. \\\"Information-theoretic metric learning.\\\" 2007.\\n[2] Hsu and Kira. \\\"Neural network-based clustering using pairwise constraints.\\\" 2015.\\n\\nRegarding the presentation/clarity, thank you for pointing out the presentation-related issues. We have fixed all of the typos found. \\n\\n\\u201cFigure 2 is confusing and adds little compared to the text description.\\u201d\\n\\nWe believe Figure 2 helps better illustrate the system graphically to readers with limited background in this field. We have revised Figure 2 to make it clearer and more illustrative in the updated draft. \\n\\n\\u201c.. the structure could be improved. .. \\u201d\\n\\nWe will make sure to further polish the structure and presentation of the paper in the next version.\\n\\n\\u201cSome techniques, such as conditional VAEs [1] and contextual policy search [2], are used but not described / referenced.\\u201d\\n\\nThank you for the relevant citations. We have added references to both these works in the updated draft.\\n\\n\\u201cWhat are the network architectures?\\u201d\\n\\nWe have added details of the network architectures to the Appendix in our updated draft.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We would like to thank the reviewer for the thoughtful review and valuable feedback. We found the summary in your comment accurate and reflective of the purpose and contributions of our work.\", \"we_address_the_questions_below\": \"\\u201cIn \\u2018dynamics learning\\u2019, it seems that the inference network learns is mostly the context variable c, wonder if it is better to use terms like \\\"latent variable inference\\\" to avoid confusion.\\u201d\\n\\nThank you for the suggestion, and have updated our text to reflect the new term.\\n\\n\\u201cWhat does the standard deviation mean in Figure 6? It seems a lot of them are even larger than the mean.\\u201d\\n\\nIt means the standard deviation (STD) across all domains evaluated, not the experimental variance of multiple seeds within a single domain. Some STDs are high because of the performance across some domains exhibit large expected variance, e.g., in Ant and HalfCheetah when friction is less than 0.5 (the robot is not expected to obtain high reward, even for an optimal policy, leading to high variance across all domains).\\n\\n\\u201cThere is little explanation to the VAE-ADAIL experiments -- is it safe to assume most of the experiments require certain knowledge of the latent variable in order to be successful? Maybe some of the arguments about VAE can go to the appendix.\\u201d\\n\\nThe effectiveness of ADAIL is demonstrated in the experiment section given that we are provided with informative dynamics parameters (either ground truth or predicted) at eval time. We presented an experiment of VAE-ADAIL as a proof-of-concept to demonstrate the unsupervised method. We have added references to related works in our updated paper (Section 3.6), and will further clarify the motivation (and/or add new experimental results) in the next paper revision.\\n\\n\\u201cWhy would in some cases UP-True performs much worse than ADAIL? In Ant it is even worse than PPO expert.\\u201d\\n\\nIt is a meaningful observation. Figure 6f reflects a tendency of optimizing average performance across domains using UP-True, for which the true reward signals are provided. An interpretation of this result could be that PPO is trained only in one source domain so it is more \\u201cfocused\\u201d to obtain good performance in that domain, so it might be able to obtain better policy in that domain but (as shown empirically) fails to generalize to other domains. Another angle could be that in UP-True, the policy network is shared across domains so policy updates are done using rollouts from multiple domains. Therefore it is not able to \\\"overfit\\\" to a specific domain as the PPO expert does on the source domain. Hence the performance difference. \\n\\nSince ADAIL imitates demonstrations from the PPO expert (as opposed to learning from scratch), it is able to emulate the performance in the source domain. GRL in this case helps because it mitigates the problem of inconsistent reward signals from the discriminator across shifted domains.\\n\\n\\u201cHow would you adapt to unseen environments in ADAIL-pred? I don't see explanations in the text about how this is done. Essentially, how are samples obtained from the environment in order to perform posterior inference?\\u201d\\n\\nAs shown in Section 4.2, we demonstrate ADAIL performance to unseen environments (not seen during training). 
The experimental procedure is as follows:\\n\\nIn a new environment, the posterior performs online inference on each time step given the current (s, a, s\\u2019) tuple. The predicted code is subsequently fed to the policy as the conditioned dynamics code. We have also tried moving average style filtering on the posterior predictions with minor performance gain. Thus we omitted multi-step posterior inference.\", \"comparison_with_related_papers\": \"Thanks for pointing us to the related recent work! We will make sure to update the related work section.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an algorithm for imitation of expert demonstrations, in situations where the imitator is acting under a different environment (different dynamics, for instance) than the one used to collect expert demonstrations. The algorithm builds on GAIL with the following modifications \\u2013 the discriminator is made dynamics-invariant by adding a domain-adversarial loss, and the policy is made to condition on a dynamics context. A separate dynamics posterior network is trained (either supervised or unsupervised) to predict this context at test-time.\", \"i_have_the_following_concerns_about_the_paper\": \"1.\\tLack of novelty: \\n a.\\tLearning a dynamics-invariant discriminator with the gradient-reversal-layer was proposed in Stadie et. al (2017). How is this approach different? In particular, what is the new element in Figure 1. and complete Section 3.4? \\n b.\\tLearning a posterior network to predict context codes, and conditioning the policy on those was explored in papers such DIAYN and InfoGAIL. \\n\\n2.\\tThe proposed algorithm is reliant on the possibility of being able to sample from a distribution of environments (with varying dynamics), and then collect many trajectories in that environment (Line 6-7 in Algorithm 1). This is a severe requirement for real-word scenarios, and somewhat antithetic to the robotics learning motivation given by the authors in the introduction. Moreover, Figure 7 seems to imply that the method doesn\\u2019t generalize well to unseen environments, if enough environments can\\u2019t be sampled during training time.\", \"minor_comment\": \"Figure 8. Friction value should not go from -3 to 3. Also, this single result inspires no confidence in the benefit or general applicability of the VAE-based context prediction.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThe submission considers the problem of imitation learning when the dynamics of the expert are not known to the agent and the dynamics of the agent may change frequently. It is however assumed that the agent has access to a parameterized simulator that can simulate the expert dynamics. The parameters for the simulator are not known but are assumed to be drawn from a known distribution.\", \"the_proposed_method_is_based_on_gail_but_uses_several_modifications\": [\"A contextual policy is trained that also takes the dynamics-parameters as additional input. At each iteration, a new environment is sampled for performing the policy-rollout.\", \"A \\\"posterior\\\" prediction network is trained to maximize the likelihood of the parameters that were used for the different roll-outs. This network is used for the test case, where the true dynamics of the agent are not known.\", \"The discriminator might use features of the state-action input that correlate with the corresponding dynamics. Classifying based on such features may be undesirable because the discriminator might no longer produce useful rewards. In order to address this problem, an additional head is added to the discriminator that outputs a prediction of the dynamic parameters. The prediction error is trained by backpropagation, however by flipping the sign of the gradient at the last shared layer, the features of the discriminator are optimized to be unsuited for predicting the dynamic parameters (the technique is known as Gradient Reversal Layer).\", \"A VAE-based method for learning latent dynamic parameters is proposed, by training a conditional VAE to reconstruct the next state, where the current state and action are provided as context to the encoder and decoder.\"], \"contribution\": \"One of the strong-points of the submission is the fact that it features several different, orthogonal contributions. I also think that the considered problem setting is relatively interesting. However, also when considering real applications such as robotics, I am not convinced that explicitly modeling the dynamic changes is necessary. Some existing imitation learning methods focus on a setting where the dynamics of the expert may differ from the agent, but the dynamics of agents do not change. This setting does not require dynamic-contextual policies and seems to be applicable to typical robot applications.\", \"soundness\": \"The different components of the proposed methods seem reasonable to me. They do not come with (and arguably do not require) new derivations but seem rather like pragmatic solutions for the encountered problem.\\nThe optimization problem (Eq.2) seems to be formulated slightly inaccurate, because the last term should in my opinion not depend on theta. If I understand correctly, the policy should not maximize the likelihood of the dynamics posterior.\\nThe contrastive regularization loss needs to be better motivated. A high KL in the first term may not necessary be bad, for example, if the confidence in the prediction of (s_0, a_0, s'_0) is lower compared to (s_1, a_1, s'_1). If a similar regularizer has been used in prior work, such work should be referenced. 
Otherwise, it needs to be motivated.\\n\\nPresentation/Clarity:\\nThe presentation of the work is arguably the main weakness of the paper.\\nThe submission does not seem polished. It contains a large number of typos. Figure 2 is confusing and adds little compared to the text description. Also the structure could be improved. For example, the submission introduces the posterior loss and outlines the algorithm before describing the individual components.\\nSome techniques, such as conditional VAEs [1] and contextual policy search [2], are used but not described / referenced.\", \"evaluation\": \"I like that the different aspects of the proposed method are also evaluated individually. The ablations with respect to the adaptability and GRL are crucial. The evaluation of the performance could be improved. PPO and UP-True use the true reward function, so the only real competitor is a naive GAIL baseline that uses randomized dynamics during training. I'm not aware of prior work that considers the exact same setting as the manuscript. However, one of the main arguments for inverse reinforcement learning is the claimed generalizability of a reward function as opposed to a policy. I see that learning a new policy after each change in the dynamics may be too costly in some settings. However, comparisons to methods such as AIRL that aim to learn reward functions that are robust to changes in the dynamics would be highly interesting.\\n\\n[1] Sohn, Kihyuk, Honglak Lee, and Xinchen Yan. \\u201cLearning Structured Output Representation using Deep Conditional Generative Models.\\u201d Advances in Neural Information Processing Systems. 2015.\\n[2] Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1\\u20132):328\\u2013373, 2013.\", \"comparison\": \"How about IRL, e.g. AIRL?\\nWhat about state-only GAIL?\", \"typos\": \"Equations are not properly integrated into the sentences (missing punctuation)\\n\\\"domains, as oppose to one domain.\\\"\\n\\\"Inspired by to GANs\\\"\\n\\\"and generates a environment\\\"\\nFigure 5a (legend): \\\"dyanmics\\\"\\n\\\"that can generalized across\\\"\", \"algorithmbox\": \"\\\"A environment\\\", \\\"and Generate environment\\\"\\n\\\"is achieved through 1) allowing\\\"\\n\\\"the policy is mainly concerned with the end-effector of the latent parameters\\\"\", \"question\": \"What are the network architectures?\\n\\nAccording to line 10 of the algorithm box, only the current trajectory is used for updating the dynamics posterior. Why?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper describes an approach that combines domain adversarial neural network with generative adversarial imitation learning. In the setup, each environment is dependent on some latent context variable (e.g. physics parameters) through which latent variable dependent policy and latent variable independent discriminators are learned.\\n\\nI don't think the exact same idea has appeared in existing literature. The authors justifies its connections and differences between third person imitation learning, but it seems that the proposed method bear some similarities to the NeurIPS19 papers below.\", \"https\": \"//drive.google.com/file/d/1urPE7J5tT8dzoBHSFvZKwNLsQieU706o/view\\n\\nThe following papers also assumed that GAIL-like methods in a meta learning setup, where environments depend on context. Perhaps the difference here is that the discriminator is also trained with a gradient reversal layer, so it encourages the discriminator to not use redundant state information. Also in this paper the source domain only contains demos from one env, which might highlight the importance of the gradient reversal layer.\", \"questions\": \"In \\\"dynamics learning\\\", it seems that the inference network learns is mostly the context variable c, wonder if it is better to use terms like \\\"latent variable inference\\\" to avoid confusion.\\n\\nWhat does the standard deviation mean in Figure 6? It seems a lot of them are even larger than the mean.\\n\\nThere is little explanation to the VAE-ADAIL experiments -- is it safe to assume most of the experiments require certain knowledge of the latent variable in order to be successful? Maybe some of the arguments about VAE can go to the appendix.\\n\\nWhy would in some cases UP-True performs much worse than ADAIL? In Ant it is even worse than PPO expert.\\n\\nHow would you adapt to unseen environments in ADAIL-pred? I don't see explanations in the text about how this is done. Essentially, how are samples obtained from the environment in order to perform posterior inference?\"}"
]
} |
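A note on the gradient reversal layer (GRL) discussed throughout the record above: the construction is simple enough to sketch directly. It acts as the identity on the forward pass and negates (and optionally scales) the gradient on the backward pass, so that a dynamics-prediction head attached through it drives the shared discriminator features to become uninformative about the dynamics parameters. The following is a minimal PyTorch sketch of that general construction, not the authors' code; the `lambd` coefficient, feature width, and prediction head are illustrative assumptions.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient flowing into the shared features, scaled by lambd.
        # The second return value is the (nonexistent) gradient w.r.t. lambd.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch (hypothetical shapes): a dynamics-prediction head attached through
# the GRL, so minimizing its loss makes the shared features *worse* at predicting
# the dynamics parameters.
features = torch.randn(8, 64, requires_grad=True)  # shared discriminator features
dynamics_head = torch.nn.Linear(64, 3)             # predicts latent dynamics parameters
prediction = dynamics_head(grad_reverse(features, lambd=1.0))
```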
H1eUz1rKPr | Representation Learning with Multisets | [
"Vasco Portilheiro"
] | We study the problem of learning permutation invariant representations that can capture containment relations. We propose training a model on a novel task: predicting the size of the symmetric difference between pairs of multisets, sets which may contain multiple copies of the same object. With motivation from fuzzy set theory, we formulate both multiset representations and how to predict symmetric difference sizes given these representations. We model multiset elements as vectors on the standard simplex and multisets as the summations of such vectors, and we predict symmetric difference as the l1-distance between multiset representations. We demonstrate that our representations more effectively predict the sizes of symmetric differences than DeepSets-based approaches with unconstrained object representations. Furthermore, we demonstrate that the model learns meaningful representations, mapping objects of different classes to different standard basis vectors. | [
"multisets",
"fuzzy sets",
"permutation invariant",
"representation learning",
"containment",
"partial order",
"clustering"
] | Reject | https://openreview.net/pdf?id=H1eUz1rKPr | https://openreview.net/forum?id=H1eUz1rKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"3s4kQ3fChB",
"HyldCNs2sH",
"BJlr1ejnor",
"Byei_1i2oB",
"SklYt2LaYH",
"r1gycv53Kr",
"B1x15XL7Yr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727113,
1573856464203,
1573855196781,
1573855091223,
1571806337377,
1571755911138,
1571148679106
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1581/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1581/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1581/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1581/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1581/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1581/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"While the reviewers appreciated the problem to learn a multiset representation, two reviewers found the technical contribution to be minor, as well as limited experiments. The rebuttal and revision addressed concerns about the motivation of the approach, but the experimental issues remain. The paper would likely substantially improve with additional experiments.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Hi! Thank you for your thoughtful feedback.\\n\\nWe have taken it into as much consideration as possible in our revision.\\n\\nOur main revision is a formalization of the problem we are trying to solve. We hoped to make clear here that the symmetric difference is not really a learning criterion for semi-supervised clustering. Rather, the error on predicting symmetric difference relates directly to how well the model captures the desired notion of \\\"containment\\\" (which we state more formally in paper).\\n\\nWe are not 100% sure whether this answers your question about the normalization of \\\\phi, but we provide a formal justification for this, by relating \\\\phi to a probability distribution, i.e. a point in the probability simplex.\\n\\nWe agree that a more nuanced look at the classification accuracy (up to permutation) obtained by the semi-supervised clustering would be helpful. As this wasn't the main focus of our paper, we hope that instead our new experiments argue for the usefulness of the learned representations.\\n\\nFinally, we do mention a possible real-world application we think our model could apply to during our exposition of the problem. While, this is mentioned more as a motivation in this paper for how we define the problem, we are definitely excited about possible future applications of our methods.\"}",
"{\"title\": \"Thank you for your comments (revision notes)\", \"comment\": \"We thank you for your feedback on our paper!\\n\\nAfter reading your comments, we agreed that our motivation could be clearer. We have included a new perspective on the problem by defining it formally, which we hope addresses your concerns. \\n\\nAs mentioned in our response above, we also agree that our experiments could better isolate our contributions \\u2013 we have included new experiments to attempt to do so. We also tried to make our experimental methods a little clearer in \\\"training and evaluation procedures.\\\"\", \"on_the_fact_that_this_task_is_easy_if_you_know_the_labels_for_individual_objects\": \"we agree 100%. The reason we think our problem is interesting is because we do not have access to such labels, but still want to about the structure they exhibit. Again, we think and hope our formalization of the problem in our revision helps make this clear.\"}",
"{\"title\": \"Thank you for your comments (revision notes)\", \"comment\": \"Thank you for your thoughtful comments!\\n\\nWe have revised our work to address your concerns as much as possible.\\n\\nTo your first point, we agree that our definitions of multiset operations could be better motivated. We have added a formalization of the problem we are trying to solve and of multisets themselves, which we hope address your concerns here. We hope that our additions also make it clear that we see our contributions not just as modifications to DeepSets functions, but as fundamentally interesting/new ways of looking at containment relations. (This formalization also explains the restriction of \\\\phi to the probability simplex.) \\n\\nAs for your suggestion of comparisons with other objectives, we have included prediction the size of the intersection as a possible task, and compare it experimentally to the symmetric difference. It is not totally clear from the theory what other alternatives there are (although we're sure they exist!) We see our contribution here as an interesting first step in exploring the problem which we've formalized. From the theoretical perspective we lay out in our revision, however, we think the symmetric difference (and intersection) is well motivated.\\n\\nFinally, we agree 100% that our baselines would be more interesting if we tried harder to isolate different aspects of our model. Our revised paper includes these experiments.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors of this paper propose train a model by predicting the size of the symmetric difference between pairs of multisets. With the motivation from fuzzy set theory, both the multiset representation and predicting symmetric difference sizes given these representations are formulated.\\n\\nIn Section 3.3, authors stated that theorem 3.3.1 provides the compelling reason to use symmetric difference over intersection or non-symmetric difference. The statement seems not so straightforward, and how it works as the learning criterion for semi-supervised clustering in the experiments. \\n\\nFor the relaxation used for the normalization of \\\\phi(a), does this restrict the feasible space of the standard basis vectors? In Section 4.3, authors claimed that in the case of n=3, 98.9% classification accuracy can be obtained by simply picking the maximum valued coordinate of the representation of each object. A systematic comparison in terms of the classification accuracy is important for evaluating the semi-supervised clustering problem. \\n\\nIn Section 4.2, authors directly model the mean absolute errors in symmetric difference size prediction. It might be more interesting to see what real problems the proposed model can naturally be applied.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a framework for learning representations of multisets. The approach is built on top of the DeepSets (Zaheer et al., 2017) model, which already is capable of learning representations of multisets, with a new objective of predicting the size of symmetric differences between two multisets, plus a small change in the implementation of the DeepSets model by restricting the per-element module to output a point in the probability simplex.\\n\\nI liked the background on fuzzy sets and the development of fuzzy multisets and different operations on these multisets. The definitions and arguments are quite clear and helpful. One small suggestion for page 4 is that I can understand why the formulation is the only sensible choice for multisets with desired properties, but a claim like this deserves a proper proof.\", \"model_wise_the_paper_made_two_contributions_for_learning_representations_for_multisets_as_mentioned_above\": \"(1) proposed the symmetric difference prediction as a task for learning representations, the argument for this task is that predicting symmetric difference implicitly encourages the model to learn about containment; (2) a slight change in the DeepSets model architecture where the outer rho function is identity and the inner phi function has to output a point in the simplex.\\n\\nI found these technical contributions to be a bit small. In addition to this, the paper only presents results on MNIST in a toyish setting, this makes me feel the paper may be more suited for publication in a workshop (idea is interesting, small scale experiments to illustrate the insights, but not complete enough to be published at a conference).\\n\\nRegarding contribution (1), I can see why predicting symmetric difference makes sense as argued in the paper, but I\\u2019m not convinced that this is better than other alternatives. In order to show that this is a reasonable approach for learning representations, some results that compare this with other possible learning objectives would be necessary. But I don\\u2019t see any such results in this paper.\\n\\nRegarding contribution (2), I feel the restriction of the phi function to output points in simplex is not very well motivated and confusing in the first read. Again I can understand why we may want to do this but don\\u2019t see why we need to do this. I\\u2019m also concerned that such an architecture may only be good for the task of predicting symmetric difference as it is customized for this task. Figure 3 shows that an unrestricted model seems to learn better representations despite a worse symmetric difference prediction error, which again confirms the concern.\", \"another_thing_about_the_experiment_setup\": \"the second baseline, labeled \\u201cDeepSets\\u201d in Table 1 actually changed two things compared to the proposed approach: (1) changing the psi function and (2) also changed the symmetric difference function. It would be good to isolate the contribution of the two.\\n\\nOverall I feel this paper is not yet ready to be published at ICLR.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new task of learning from sets, predicting the size of the symmetric difference between multisets, and gives a new method to solve the task based on the fuzzy set theory.\\nAlthough the topic of learning from sets is relevant and using the fuzzy set theory for the task is interesting, I have the following concerns regarding with the clarity, significance, and evaluation.\\n\\n- Motivation is not clearly presented. The new task of predicting the size of the symmetric difference between multisets is proposed, while its application is not well discussed.\\n Although Theorem 1 characterizes the task using the subset inclusion relationship, its relevance to applications is still not clear.\\n- The problem to be solved is not mathematically formulated. In particular, what are input and output?\\n- More detailed explanation of the data preparation would be required.\\n How to transform images to pairs of multisets?\\n Is the label (number) of each is used as an element of a multiset?\\n- For the second comparison partner, why is \\\\Delta(A, B) defined as \\\\rho_2(\\\\Psi(A) + \\\\Psi(B))?\\n For fair comparison, this function should be the same with the proposed method, that is, ||\\\\Psi(A) - \\\\Psi(B)||_1 for the learned \\\\Psi by DeepSets.\\n- In experiments, one of the most straightforward ways is to first predict labels for each image, followed by computing the symmetric difference from the predicted labels. Comparison with such baseline should be performed.\", \"minor_comments\": [\"P.1, L.1 of the second paragraph: \\\"The the\\\" -> \\\"The\\\"\"]}"
]
} |
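The model described in this record's abstract is concrete enough to sketch: each element embedding phi(a) is constrained to the standard simplex (one common way is a softmax), a multiset is represented as the sum of its elements' embeddings, and the size of the symmetric difference between two multisets is predicted as the l1-distance between their representations. The PyTorch sketch below follows only that description; the encoder architecture, hidden width, and training target are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class SimplexMultisetEncoder(nn.Module):
    """phi maps each object to a point on the standard simplex (via a softmax);
    the multiset representation Psi(A) is the sum of its elements' phi-vectors."""

    def __init__(self, in_dim: int, n_vertices: int, hidden: int = 128):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_vertices)
        )

    def forward(self, elements: torch.Tensor) -> torch.Tensor:
        # elements: (multiset_size, in_dim) -> (n_vertices,) multiset representation
        return torch.softmax(self.phi(elements), dim=-1).sum(dim=0)

def predicted_symmetric_difference(psi_a: torch.Tensor, psi_b: torch.Tensor) -> torch.Tensor:
    # The size |A Δ B| is predicted as the l1-distance between representations.
    return torch.sum(torch.abs(psi_a - psi_b))

# Training sketch: regress the prediction onto the true symmetric-difference size.
enc = SimplexMultisetEncoder(in_dim=784, n_vertices=10)  # e.g., flattened MNIST digits
a, b = torch.randn(5, 784), torch.randn(7, 784)          # two multisets of images
true_size = 4.0                                          # hypothetical ground-truth |A Δ B|
loss = (predicted_symmetric_difference(enc(a), enc(b)) - true_size) ** 2
```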