forum_id (string, 9–20 chars) | forum_title (string, 3–179 chars) | forum_authors (sequence, 0–82 items) | forum_abstract (string, 1–3.52k chars) | forum_keywords (sequence, 1–29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39–50 chars) | forum_url (string, 41–52 chars) | venue (string, 46 classes) | year (date string, 2013-01-01 to 2025-01-01) | reviews (sequence)
---|---|---|---|---|---|---|---|---|---|---|
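The data rows follow. For orientation, here is a minimal sketch of how a dump with this schema could be loaded and inspected with the Hugging Face `datasets` library; the repository id below is a hypothetical placeholder, since the dataset's actual name is not given in this dump.

```python
# Minimal sketch, assuming the table is published as a Hugging Face dataset.
from datasets import load_dataset

# Hypothetical repository id; substitute the real one.
ds = load_dataset("user/openreview-forums", split="train")

row = ds[0]
print(row["forum_id"])            # e.g. "BJg8_xHtPr"
print(row["forum_title"])         # paper title
print(row["forum_decision"])      # one of 22 decision classes, e.g. "Reject"
print(len(row["forum_authors"]))  # author list length (0-82 per the schema)
```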
BJg8_xHtPr | OBJECT-ORIENTED REPRESENTATION OF 3D SCENES | [
"Chang Chen",
"Sungjin Ahn"
] | In this paper, we propose a generative model, called ROOTS (Representation of Object-Oriented Three-dimension Scenes), for unsupervised object-wise 3D-scene decomposition and rendering. For 3D scene modeling, ROOTS builds on the Generative Query Network (GQN) framework, but unlike GQN, provides object-oriented representation decomposition. The inferred object representation of ROOTS is 3D in the sense that it is viewpoint invariant, just as the full scene representation of GQN is. ROOTS also provides hierarchical object-oriented representation: at the 3D global-scene level and at the 2D local-image level. We achieve this without performance degradation. In experiments on datasets of 3D rooms with multiple objects, we demonstrate the above properties by focusing on its abilities for disentanglement, compositionality, and generalization in comparison to GQN. | [
"unsupervised learning",
"representation learning",
"3D scene decomposition",
"3D detection"
] | Reject | https://openreview.net/pdf?id=BJg8_xHtPr | https://openreview.net/forum?id=BJg8_xHtPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7zzTp7G9Lr",
"BkeDTCPhjH",
"ByxlkaX3jB",
"r1lwoLTjoB",
"HyeF6pssjr",
"B1ev9InFoB",
"HJgunr0Nir",
"SyxszVCVir",
"SyxsjzRVoS",
"BJeh8bRNor",
"H1xdNJCEiS",
"HklkZ4gVqB",
"Byl0r3K0Yr",
"ByxbeCd0tB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748211,
1573842622881,
1573825751961,
1573799582745,
1573793217264,
1573664398712,
1573344687960,
1573344275233,
1573343907383,
1573343571629,
1573343023827,
1572238327381,
1571884101708,
1571880425193
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2400/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2400/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2400/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The author proposes a object-oriented probabilistic generative model of 3D scenes. The model is based on the GQN with the key innovation being that there is a separate 3D representation per object (vs a single one for the entire scene). A scene-volume map is used to prevent two objects from occupying the same space. The authors show that using this model, it's possible to learn the scene representation in an unsupervised manner (without the 3D ground truth).\\n\\nThe submission has received relatively low scores with one weak accept and 3 weak rejects. All reviewers found the initial submission to be unclear and poorly written (with 1 reject and 3 weak rejects initially). The initial submission also failed to acknowledge prior work on object based representations in the 3D vision community. Based on the reviewer feedback, the authors greatly improved the paper by reworking the notation and the description of the model, and included a discussion of related work from 3D vision. Overall, the exposition of the paper was substantially improved. Some of the reviewers recognize the improvement, and lifted their scores. \\n\\nHowever, the work still have some issues:\\n1. The experimental section is still weak\\nThe reviewers (especially those from an computer vision background) questioned the lack of baseline comparisons and ablation studies, which the authors (in their rebuttal) felt to be unnecessary. It is this AC's opinion that comparisons against alternatives and ablations is critical for scientific rigor, and high quality work aims not to just propose new models, but also to demonstrate via experimental analysis how the model compares to previous models, and what parts of the model is necessary, coming up with new metrics, baselines, and evaluation when needed.\\n\\nIt is the AC's opinion that the authors should attempt to compare against other methods/baselines when appropriate. For instance, perhaps it would make sense to compare the proposed model against IODINE and MONet. Upon closer examination of the experimental results, the AC also finds that the description of the object detection quality to be not very precise. Is the evaluation in 2D or 3D? The filtering of predictions that are too far away from any ground truth also seems unscientific. \\n\\n2. The objects and arrangements considered in this paper is very simplistic. \\n\\n3. The writing is still poor and need improvement.\\nThe paper needs an editing pass as the paper was substantially rewritten. There are still grammar/typos, and unresolved references to Table ?? (page 8,9).\\n\\n\\nAfter considering the author responses and the reviewer feedback, the AC believe this work shows great promise but still need improvement. The authors have tackled a challenging and exciting problem, and have provided a very interesting model. The work can be strengthened by improving the experiments, analysis, and the writing. The AC recommend the authors further iterate on the paper and resubmit. As the revised paper was significantly different from the initial submission, an additional review cycle will also help ensure that the revised paper is properly fully evaluated. The current reviewers are to be commended for taking the time and effort to look over the revision.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the re-evaluation!\", \"comment\": \"We are very grateful for re-evaluating the paper and adjusting the score. In the revision, we believe that we indeed substantially rewrote many parts of the paper, particularly for the clearer exposition of the technical description. We hope you to enjoy the revised version. Thanks.\"}",
"{\"title\": \"About Revision [to All Reviewers]\", \"comment\": \"We are deeply grateful to all reviewers for taking the time to read our paper and providing constructive reviews. Following the reviewers' comments, we have made a significant update in the uploaded revision. While solving minor errors and clarifying points asked by the reviewers, we have focused on the following major updates in the revision.\\n\\nFirst of all, we significantly improved the readability of the paper. For this, we have not only fixed the notation error and inconsistency but almost fully rewrote and reorganized a substantial amount of parts in the paper. We adopted the standard GQN notation (as suggested by reviewer 4) and made all notations consistent throughout the paper including the Appendix. We fixed the missing descriptions on notations. We made Section 2 more complete so that a reader who is not familiar with GQN can understand better. We believe, with all the above updates, the revised paper is much more readable and shows the contribution more clearly.\\n\\nThe second major focus of the update is to add the description positioning our proposed work in the existing literature of 3D computer vision. We totally agree that we missed this important discussion in our initial submission and appreciate the reviewers for pointing this. We introduced a paragraph in Section 1 to clarify the different settings between our proposed work and existing works. Also, we added a considerable amount of discussion in the Related Works section along with relevant citations. While we have tried our best for this positioning and reference citation, we are willing to improve it even further if reviewers suggest additional reference. \\n\\nFinally, we added two more experiment results in the Appendix, following reviewer1's suggestion. In the first experiment A.1, we evaluate the generalization performance in the settings where the number of objects in training scenes is different from that in test scenes. In the second experiment A.2, we evaluate the composition generalization. In this setting, we compose a scene with 9 objects by using object components trained from scenes with only up to 3 objects. We show that ROOTS still generates high-quality images with proper occlusion handling.\\n\\nWith the revision, the paper is 9 pages long. We hope the reviewers kindly consider the fact that our paper should contain much larger images for figures in the experiments than other papers to report our results properly.\\n\\nAlong with the above major improvements, we believe that we have addressed all concerns of each reviewer and look forward to a positive reconsideration on the paper and hearing feedback about the updated version. We hope the reviewers can take our responses and revisions into consideration when evaluating our final score. Thanks again for your reviews!\"}",
"{\"title\": \"Thanks for the constructive review!\", \"comment\": \"We are deeply grateful for the constructive review. We found all of the comments are reasonable. We also would like to thank the reviewer for acknowledging many of our contributions and the importance of the problem/model/demonstration. Above all, we totally agree with the readability problem of the submitted version of the paper. Thanks to the reviewers, we are significantly rewriting the paper focusing on the purpose of improving readability. We will provide a much more readable and consistent version of the paper in our revision.\", \"we_also_have_made_some_visualization_available_here\": [\"https://sites.google.com/view/roots3d\", \"In the following, we respond to other questions\", \"\\u201cThis is the first unsupervised model that can identify objects in a 3D scene\\u201d is not true.\", \"> Yes, we totally agree. We will remove or more clarify that sentence in our revision\", \"What is r_C in p(z|c, r_C)?\", \"> r_C is the embedding of the scene C. It is the output of scene invariant encoder. In the revision, the meaning will be clear.\", \"Inconsistent or non-standard GQN notation\", \"> In the revision, we will do both. We will use GQN notation and make it consistent.\", \"Section 2 would not make sense to people that are not already familiar with GQN.\", \"> We will also clarify Section 2 so that such people can also understand GQN easily.\", \"Uncertainty in the model could lead to two objects being represented in the same cell.\", \"> Thanks for pointing this out. This is an interesting question. In our model, this situation is fundamentally prevented by design because a cell can only represent one object. We agree that when observing a limited context, there will be high uncertainty leading to many possible explanations. In this case, our design of the scene-volume map is to encourage the model to find explanations where two objects are not existing in the same position.\", \"The terms in Eqn 3 should be explained more explicitly. ... you talk about s_n being view-dependent with some explanation would help.\", \"> Thanks for this suggestion. We totally agree and will apply this in our revision\", \"Gaussian distribution for the position? Why not uniform?\", \"> The mean and variance of the Gaussian distribution are learned and thus it is not to represent a preference to a center area of the scene. We want to model the uncertainty of the position, ideally giving a high probability around the actual position while having a low probability for areas far from the actual position. We think the bell-shape of the Gaussian distribution seems proper to model this.\", \"What happens if object n is not present in the context image, y_c? In this case what is s_{c,n}^pos. Also, this notation: r_n = f({y_c,n}c) is a little ambiguous. I assume it means that you are applying the function f to the set of all patches in y_c? Also, what is the object invariant object encoder? A reference to details in the appendix or a footnote would suffice.\", \"> With the camara-coordination projection function f_{3D->2D}, we can know that the position after the mapping is not inside the context image y_c. Thus, in this case, we do not crop any patch from that context image. We will update the notation clearly. Yes, your understanding on the function is correct. 
We will also clarify what object invariant encoder either in the main text or in the Appendix.\", \"It\\u2019s great that you are able to exploit the coordinate transform and use it for rendering. The results are very impressive: being able to swap in and out objects in a scene, showing the 2D renderings of single objects from different view-points and the scene decompositions and predicting where the \\u201cmissing\\u201d object is (Figure 6).\", \"> Thanks again for acknowledging our contribution\", \"Minor\", \"More reference in the Introduction: We agree. We will add relevant references.\", \"Typos: Thanks for pointing these. We will fix all of them in our revision.\"]}",
"{\"title\": \"For All Reviewers - Part 2/2\", \"comment\": \"We agree that the paper should be improved with the notation error, inconsistency, and readability in general. We are significantly changing the writing of the paper focusing on this purpose of readability.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Overview:\\nThis paper is certainly very interesting, unlike other papers and makes some very solid contributions. The qualitative results are very impressive. Unfortunately, the paper is very poorly written. If the authors can address the issues below and improve the quality of the writing, I would recommend that this paper be accepted. However, in its current state, I recommend that this paper be rejected.\", \"major\": \"The claim that \\u201cThis is the first unsupervised model that can identify objects in a 3D scene\\u201d is not true. There is MONet and Iodine that can identify objects without supervision in 2D projections of 3D scenes. This claim should be revised.\\n\\nWhat is r_C in p(z|c, r_C)?\\n\\nThe authors are using non-standard GQN notation and their notion is not consistent. The authors should make their notation consistent or use the standard GQN notation. Section two would not make sense to people that are not already familiar with GQN.\\n\\nThe scene volume map is interesting, the inductive bias preventing two objects being present in the same location is interesting, however one could imagine a case where uncertainty in the model could lead to two objects being represented in the same cell.\\n\\nThe terms in Equation 3 should be explained more explicitly. The way I understand it p(s_n|z, x) is the view-point dependent representations of the objects and this is why you condition on z, x. Placing p(s_n|z, x) in the text where you talk about s_n being view dependant with some explanation would help.\\n\\nWhy do you use a Gaussian distribution for the position? Would it not make more sense to use a uniform distribution? Could this be related to bias in your data? I.e. more objects in the centre of the scene?\\n\\nWhat happens if object n is not present in the context image, y_c? In this case what is s_{c,n}^pos. Also, this notation: r_n = f({y_c,n}c) is a little ambiguous. I assume it means that you are applying the function f to the set of all patches in y_c? Also, what is the object invariant object encoder? A reference to details in the appendix or a footnote would suffice.\\n\\nIt\\u2019s great that you are able to exploit the coordinate transform and use it for rendering.\", \"the_results_are_very_impressive\": \"being able to swap in and out objects in a scene, showing the 2D renderings of single objects from different view-points and the scene decompositions and predicting where the \\u201cmissing\\u201d object is (Figure 6).\", \"minor\": [\"The introduction could be strengthened with additional references. There are claims that object-wise factorisation will help with transfer, it would be good to have references to other work that supports this view. 
Also the claim that humans have 3D representations for objects requires a reference.\", \"Typos (there are too many to list here, these are just a few):\", \"Abstract: and and rendering\", \"\\u201cand\\u201ds should be replaced with commas in the second line of the intro.\", \"Generally the paper is not written well.\", \"The GQN, as a conditional \\u2192 is a conditional\", \"Target observations (in section 2) does not need a capital.\", \"This sentence does not make sense: \\u201cinstead of encoding compressing the whole scene\\u201d\", \"Because of intractable posterior\", \"There are many additional grammatical errors.\", \"-----------\"], \"edit\": \"Following changes made to the paper, I am now more satisfied. The writing should still be improved further and suggest that the authors fully revise the paper before the camera ready version, if the paper is accepted. I have increased my score to 6.\"}",
"{\"title\": \"Rebuttal to AnonReviewer1 - Part 2\", \"comment\": \"[Continued from Rebuttal-Part-1]\\n2. Experiments. \\n\\n(a) \\u201cWhy not show that GQN breaks in terms of MSE with a larger number of objects? The difference right now are fairly small.\\u201d\\n\\n> We would like to refer the reviewer first to our common answer A1 and A5. We think the question may stem from a misunderstanding about our contribution and the purpose of Figure 2 and Table 1. The main contribution of this work is not to achieve better generation quality or MSE, but to obtain OO3D representation. Thus, the suggested experiment seems to be not very relevant to our claimed contribution because, unlike ROOTS, regardless of how many number of objects are given, GQN cannot provide object-level 3D representation.\\n\\n(b) Ablation study\\n\\n> Please find the corresponding answer in the answer to AnonReviewer2.\\n\\n(c) Fair training in terms of model capacity and training epoch.\\n\\n> Regarding the same capacity, we do not need to use the same capacity because we are not solving the same problem. The same argument we used before applies here: a video classification network does not need to use the same network capacity as an MNIST classification network. Similarly, AIR and SQAIR both use a significantly more complex network than their baselines. Regarding the training epoch, we trained both networks until they fully converge. We will clarify this in the revision. \\n\\n(d) NLL results are very weak. \\n\\n> As answered in A5 and A1 and (a) in the above, the purpose of this experiment is to show that our goal (OO3D representation learning) is achieved while not hurting other criteria (generation quality). So, we believe that the NLL results are still something worth to show that this purpose is satisfied. We will clarify this point more in the revision.\\n\\n(e) Why not try object-detection using GQN and compare to ROOTS\\n\\n> GQN cannot provide object detection because its encoding is scene-level, not object-level. So, without comparison to GQN, we think the number in Table 1 still provides some important baseline for follow-up research, and the precision-recall is still an interpretable metric even without a relative comparison.\\n\\n(f) Showing that GQN doesn\\u2019t do any of the qualitative tasks that ROOTS can do.\\n\\n> From the architecture of GQN, it is clear that GQN cannot do this because GQN only provides scene-level representation where objects are entangled. It's not about whether ROOTS can do it *better*, but GQN clearly cannot do it at all without a significant modification on the architecture or training objective. Please also read the common answer A2. \\n\\n(g) \\u201cThe ability to handle an arbitrary number of objects\\u201d & \\u201ctraining on 2-3 objects and test on 3-5 objects by simply changing the prior on K\\u201c\\n\\n> \\u201cThe ability to handle an arbitrary number of objects\\u201d: We think the question is a little vague to us, because \\u201chandle\\u201d could mean \\u201cgenerate\\u201d, \\u201cdetect\\u201d or \\u201ccomposite\\u201d for ROOTS. We assume the reviewer is referring to the composition task. We give answer based on this assumption. This is an interesting suggestion. We think that ROOTS can composite a scene with a larger number of objects than the number of objects used in training scenes. This can be done by compositing learned object representations from other scenes. 
As stated previously, GQN cannot do this because it does not provide object-level representations. We will be running experiments for this and hope to add it in the revision if time allows.\\n\\n> \\u201ctraining on 2-3 objects and test on 3-5 objects by simply changing the prior on K\\u201c: We could not understand the part \\u201cchanging the prior on K\\u201d and thus we answer it based on our best understanding of your question. We consider this as a generalization test on generation task w.r.t. the number of scene objects in training and test. We think both GQN and ROOTS will show some ability for this generalization but we are not sure if ROOTS should be better than GQN. The first scene encoding network of ROOTS seems to have the same capacity as GQN for this problem. But, we would like to note that this is not our claimed contribution (please refer to A1). This is not a weakness of the model, we simply have not considered this scalability factor in our design because that is not the goal of the model. Nevertheless, we think this is an interesting idea. We started running this experiment for our curiosity and hope to report the results within the rebuttal discussion time. Otherwise, we will consider adding it to the camera-ready. \\n\\n3. Related Works and Others\\n\\n> Title - We agree. We will update the title in our revision.\\n> \\u201cFirst unsupervised \\u2026\\u201d - We agree that the sentence is overly broad. We will update it in our revision.\\n> Related Works. - We agree and thanks for pointing the related works. We agree that we need to cite more related works and clarify the position of our work in comparison to existing works. We will do it in our revision. Please also read the common answer A3 and A4. \\n\\n4. Small Issues. \\nThanks for pointing these. We will fix the errors.\"}",
"{\"title\": \"Rebuttal to AnonReviewer1 - Part 1\", \"comment\": \"\", \"we_kindly_suggest_the_reviewer_first_to_check_our_general_response\": \"\\\"For All Reviewers\\\"\", \"we_also_have_made_some_visualization_here\": \"https://sites.google.com/view/roots3d\\n\\n*Rebuttal Summary*\\n\\n> Thanks for the constructive review. We summarize our rebuttal here: (i) We agree that the description and Appendix need to improve by fixing typos, missing words, and minor errors. All will be thoroughly fixed in the revision. (ii) On experiments, we provide more clarification and reasoning in response to the questions. (iii) We agree on all points on related works and minor comments. We will add a significant amount of related works. We hope our argument sounds reasonable to the reviewer, and if so, we hope the reviewer to be open and flexible in adjusting the score. More detail discussions follow below:\\n\\n1. Description clarity and model complexity. \\n\\n> We totally agree that those minor errors in our first submitted version could make readers feel that the method is complex. Thanks for pointing those in the description and Appendix. We will fix all pointed errors in the revision and will keep improving until the end of the rebuttal period, and further if accepted. We also would like to note that this problem of clarity should be separately considered from the inherent model complexity because the former can be fixed easily in revisions.\\n\\n> As the reviewer agrees, we believe the necessity of a more complex model should be considered in relation to the difficulty of the given problem. For example, it is clearly not a problem to see a model, designed to deal with complex natural videos, having a more complex architecture than a model designed for simple MNIST classification. Similarly, AIR (Eslami et. al. 2016), which is adding OO representation to VAE, has a significantly more complex model than VAE. SQAIR (Kosiorek, 2018), which is adding OO representation to Variational RNN, has also significantly more complex architecture than VRNN. We do not say that what they achieve is not interesting simply because they use a more complex model. These are reasonable architectures because they are solving a more complex new problem. (Of course, in the future we may see a simpler model for the same problem.) Our problem is also significantly more challenging and solves a very different aspect (obtaining OO representation) than the problem of GQN. [More is described in common answer A2].\\n\\n> We also believe that the reviewer\\u2019s argument makes sense if our claimed contribution is to improve a performance metric, e.g., generation quality or prediction accuracy, in comparison to other models (GQN in this case). In that case, we agree that getting better performance by merely using a more complex model may not be much surprising. But, this is not the case of our paper. \\n\\nMinor comments\\n> We will fix all the pointed minor errors in the revision.\\n- Minor 1 - missing explanation on order-invariant object encoder: We thought that the meaning would be clear for readers who are familiar with GQN. 
But, we will clarify it more in the revision.\\n- Minor 2 - missing explanation on f_3D-to-2D: In Section 3.2, we mentioned that \\u201cusing the 3D-to-2D coordinates projection operation f_{3D\\u21922D}, we can compute the center location of an object existing in a context image.\\u201d In the revision, we will clarify it more and fix the missing section in the Appendix.\\n- Minor 3 - missing explanation on STN-INV. Yes, inverse spatial transformer. We will clarify in the revision.\\n- Minor 4 - s^{what}: We will clarify this in the revision.\\n- Minor 5 - We will fix all in the revision.\\n\\nReference.\\n[1] SM Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, and Geoffrey E Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In Advances in Neural Information Processing Systems, pp. 3225\\u20133233, 2016.\\n\\n[2] Kosiorek, Adam and Kim, Hyunjik and Teh, Yee Whye and Posner, Ingmar, Sequential attend, infer, repeat: Generative modelling of moving objects, Advances in Neural Information Processing Systems 2018.\"}",
"{\"title\": \"Rebuttal to AnonReviewer2\", \"comment\": \"We kindly suggest the reviewer to first check the general reply \\\"For All Reviewers\\\"\", \"we_also_have_made_some_visualization_here\": \"https://sites.google.com/view/roots3d\\n\\n*Rebuttal Summary*\\n\\n> Thanks for the constructive review. We summarize our rebuttal here first: (1) We do not agree that our work is a minor contribution given GQN and literature in 3D computer vision. (2) We believe that an ablation study is not necessary in our case for the following reasons. First, the paper is not about improving a performance metric with a new architecture. Second, the result of some ablation study (e.g., removing hierarchical object representation) is obvious. (3) We agree that we need to add a significant amount of related works on 3D CV and we will do it in our revision. We hope our argument sounds reasonable to the reviewer, and if so we hope the reviewer to be open and flexible in adjusting the score. More detail discussions follow below:\\n\\n1. Is our work a minor contribution on top of GQN and 3D representation studied in the vision community?\\n\\n> Please first refer to the related common answer A1 and A2 on why this is an important and challenging problem and the answer A3 on how our problem setting is different from existing literature in 3D computer vision. Given our arguments, if you still think it is a minor contribution, we would kindly like to ask answers from the reviewer on (1) how and why the existing works in 3D computer vision (specifically, which paper and what aspect of the model in the paper) make our contribution --- learning to disentangle each object in a scene containing many objects with occlusion and partial observability and learning its 3D representation --- a minor contribution, and (2) how our contribution can be considered a minor one, given GQN whose design is not intended for the object-level representation. We think these points were not justified in the first review. \\n\\n2. Ablation study and how much the hierarchical representation help.\\n\\n> We would like to first refer to the related common answer A1, A2, and A5. We agree that an ablation study is needed when the goal of a paper is to improve some quantity like generation quality (in NLL or MSE) because then we need to identify which part contributes to the performance to what extent. But, this is not the case in our paper. The purpose of our new architecture is not to improve generation quality, but to obtain object-oriented representation from scenes with occluded multi-objects. The hierarchical modeling is required to *enable* (not to improve) the OO representation both in 3D and 2D-level. Thus, even without an ablation study, we believe its role is already clear from our existing experiment results, for example, from the fact that we obtain object-wise disentanglement in 2D images in Figure 3. When there is no hierarchical representation, the result is also clear without ablation study: we just cannot obtain such object-oriented representation at the 2D image level. And, in this case, we do not need to care about whether the performance decreases or not after removing the hierarchy. Furthermore, the generation quality after removing the hierarchy is not relevant to any of our claimed contributions. 
Also, similar papers like AIR and SQAIR that provide an OO representation version of VAE and VRNN, respectively, do not provide an ablation study but rather focus on what can be done with the OO-representation because that is what those papers are about.\\n\\n3. Adding more related works from 3D computer vision. \\n\\n> We totally agree. Please refer to Answer A3 and A4.\\n\\n4. Minor comments\\n\\n(a) Eqn number for Eq 2 in Appendix -> Will be updated in the revision.\\n(b) Definition of y. -> yes, they are the same. It is introduced in page 2, and re-appears in the next page. We will remind the reader in the revision.\\n(c) Why learn the camera projection? Isn't this deterministic using a camera matrix? -> The camera projection is a deterministic function and not learned. We will clarify this in the revision.\\n(d) Is the lighting fixed throughout the training and testing? -> yes, the lighting is fixed throughout the training and testing.\"}",
"{\"title\": \"Rebuttal to AnonReviewer3\", \"comment\": \"We kindly suggest the reviewer please first check another reply \\\"For All Reviewers\\\"\", \"we_also_have_made_some_visualization_here\": \"https://sites.google.com/view/roots3d\\n\\n*Rebuttal Summary*\\n\\n> Thanks for the review. In this review, we could not find an argument pointing to the major limitation of our paper. We found what is pointed are either (i) a misunderstanding of the reviewer and hasty generalization, or (ii) minor things like typos or missing reference that can be fixed easily. It is quite surprising to see that: first, the reviewer uses these minor errors that can be very easily fixed, as the major factors of the decision, while another reviewer (R1) points these as constructive minor comments for further improvements not affecting the score. Second, there is no question or discussion about the main contribution (see A1) of the paper. We hope our argument sounds reasonable to the reviewer, and if so, we hope the reviewer to be open and flexible in adjusting the score. More detail discussions follow below:\\n\\n1. The reviewer claims that the paper contains claims that are not supported by the experiments, and mention that we have a sentence \\u201cROOTS has clearer generations than GQN\\u201d while the reviewer cannot see a difference *at all*.\\n\\n> Please first refer to the answer A1 and A5 in the above common answers. To summarize, first, the purpose of the pointed experiment is to show that our goal (OO3D representation learning) is achieved while not hurting other criteria (generation quality). So, the generation quality of ROOTS doesn\\u2019t need to be better than that of GQN. Second, we definitely see a difference where ROOTS generates sharper edges that are closer to those in the ground-truth images while GQN generates a bit more blurry images in general. This point is also confirmed by another reviewer AnonReviewer 1 describing in his/her review that \\u201cGQN is a little more blurry (than ROOTS) in Figure 2\\u201d. So, although it can be seen not a significant difference, we believe it is not correct to generalize and say that we are claiming what is not supported **at all** by the experiments. Please, clarify if there are other such arguments not supported by the experiments. In the revision, we will update the sentence more to clarify how and why they are different. \\n\\n> For this kind of minor comments, reviewers usually suggest moderating the tone of the sentence. \\n\\n2. The reviewer says the text has not discussed what GoodNet is in the caption of Figure 3. It\\u2019s also unclear what is depicted in each of the columns in Figure 3\\u201d\\n\\n> Thanks for pointing this. GoodNet was the name of the model before we changed it to ROOTS. In our revision, we will fix the typo in the caption. We will also clarify the meaning of the columns in Fig. 3. We believe that these are minor factors that can be easily fixed and thus usually suggested as a minor comment not affecting the score.\\n\\n3. Figure 1 is not referred to in the text. \\n\\n> Thanks for pointing this. We will update it in the revision. We believe that this is also a minor factor that can be easily fixed and thus usually suggested as a minor comment not affecting the score.\\n\\n4. Minor comment\\n\\n4.a. Making precision recall-curve table to a curve & comparison to CGQN\\n\\n> We already provide the precision-recall table. We think it is also a good idea to make it a precision-recall curve. \\n\\n4.b. Why not compare to CGQN? 
\\n\\n> Our baseline is indeed CGQN. So, it is compared to CGQN. In the submission version, we already mentioned that, for brevity, we use \\u2018GQN\\u2019 in our paper to actually mean CGQN. In the revision, we will clarify again at the beginning of the Experiment section, that the \\u2018GQN\\u2019 label actually means CGQN.\"}",
"{\"title\": \"For ALL Reviewers - Part 1/2\", \"comment\": \"We thank all the reviewers for taking time read our paper and provide insightful feedback. We would like to first provide answers and further clarification commonly applying to all reviewers. So, we kindly suggest all reviewers read this part first. We use the term GQN as a general framework including both the original GQN and CGQN. We first upload this rebuttal while working on the revision. What we promise here will all be updated in the coming revision. We use OO to stand for Object-Oriented and OO3D for OO and 3D.\\n\\n[A1] Our main contribution is not to improve 3D generation quality.\\n\\n> but is to learn, in the GQN setting, to obtain disentangled OO3D representation, as written in Introduction. This representation is independent, modular and 3D-viewpoint-invariant. GQN cannot provide such representation due to its scene-level representation where objects are entangled. We believe that obtaining such representation is a very important problem in deep representation learning, which is the main theme of this conference. The main purpose of obtaining such representation is not to improve generation quality but to enable new important abilities such as compositionality, transferability, better generalization, and variable binding (for causal inference and reasoning), which are main unsolved challenges in contemporary deep learning. Thus, our experiments focus on demonstrating such advantages of OO3D representation (compositionality and transferability). Therefore, based on our understanding, the main decision criteria should be based on the claimed contribution: (i) the importance of making it possible to obtain such representation in a challenging setting and (ii) how well the benefits of the representation (e.g., compositionality and transferability, etc.) is demonstrated. \\n\\n[A2] We achieve this in a significantly challenging setting. \\n\\n> (i) Our model is unsupervised, not using any annotations on voxels, cloud points, meshes, segmentation or object box. (ii) It is generative and learns both representation and rendering. (iii) It learns a single 3D-representation from which multiple views can be generated. (iv) It is end-to-end. (v) It deals with scenes with multiple objects with occlusion and partial observability. Solving these problems altogether is a significantly challenging problem and has not been tried before. As explained below [A3], some existing works from 3D computer vision are somewhat relevant to a part of the above challenges, but to our knowledge, we are not aware of any work that deals with the same level of challenge (particularly, (iii) and (v) are rare in 3D CV literature). Also, the GQN design does not consider object-level representation, and thus, although we start from GQN, still a significant amount of investigation, observation, ideation, design and optimization is required to develop a new model that can achieve the new abilities.\\n\\n[A3] Relation to 3D computer vision (CV). \\n\\n> The problem setting of those works in 3D CV is quite different from ours. 
To our knowledge, they focus on either (i) supervised approaches using voxel, cloud points, or mesh annotations, or (ii) generating images of different 2D perspectives of a 3D scene (e.g., using GANs) but not obtaining the 3D-representation, i.e., there is no single representation from which multiple views can be generated, or (iii) solving single object problems rather than disentangling objects from a scene containing multiple objects. \\n\\n[A4] Related Works. \\n\\n> We totally agree that we should discuss the relevant works from 3D computer vision in our initial submission. We thank reviewers for pointing this. We will discuss the pointed papers and also others we found relevant. We did not intend to ignore those important works which we also have been inspired by.\\n\\n[A5] The goal of the experiments in Fig 2 and Table 1 (on qualitative generation quality, NLL and MSE, in comparison to GQN) is to show that our goal (learning OO3D representation) is achieved while not hurting other criteria (generation quality).\\n\\n> As discussed in [A1], our goal is not to show that the proposed model can significantly improve generation quality. We would like to note that it is not easy to retain this quality under the constraint of the discrete representation structure in ROOTS. Specifically, using discrete structure in neural networks provides advantages such as interpretability and compositionality but usually comes with some performance degradation. This is because it limits the model space and optimization compared to continuous representation. In our case, we however actually achieve a comparable generation quality that is less blurry than that of GQN (as agreed by AnonReviewer #1). Thus, we think the argument ---our model achieves minor contributions because of little improvement in generation quality--- is not correct. Instead, the representation aspect of the proposed model and its importance should be considered as the main criteria for the score.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2400\", \"review\": \"The paper presents a framework for 3D representation learning from images of 2D scenes. The proposed architecture, which the authors call ROOTS (Representation of Object-Oriented Three-dimension Scenes), is based on the CGQN (Consistent Generative Query Networks) network. The paper provides 2 modifications. The representation is 1. factorized to differentiate objects and background and 2. hierarchical to first have a view point invariant 3D representation and then a view-point dependent 2D representation. Qualitative and qualitative experiments are performed using the MuJoCo physics simulator [1] (please add citation in the paper).\\n\\n[1]Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based\\ncontrol. In ICIRS, 2012.\\n\\n+Learning 3D representations from 2D images is an important problem. \\n+The proposed methodology learns representations that are more interpretable, with higher compositionally. \\n\\nWhile the paper takes a step towards a potentially impactful work, I cannot recommend it for publication in its current form. \\n\\n1. There are claims in the paper that are not supported by the experiments. For example, \\u201cAs seen in Figure 2, ROOTS has clearer generations than GQN. \\u201d However, Figure 2 does not show this at all. It shows no difference between ROOTS and GQN. \\n\\n2. The paper can benefit from further clarity throughout\\u2014in general it seems a bit rushed. For example, the caption on Figure 3 reads \\u201cFor example, GoodNet segments a scene into foreground and background first and decompose foreground into each individual object further\\u2026\\u201d The text has not discussed what GoodNet is. Its also unclear what is depicted in each of the columns in Figure 3. This should be clearly explained. \\n\\n3. I suggest clarifying Figure 1 further and referring to it in section 3. Its currently not referred in the text although it is an overview of the proposed architecture. \\n\\n\\nOther comments\\n-Table 2: Why not have a precision-recall curve (as is standard) and report average precision numbers?\\n-Table 2: Why not compare to CGQN?\\n\\nMinor\\n-There are typos throughout the text.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"TLDR: Interesting idea that seems promising, but lacks the maturity required to pass the ICLR bar: Lacks proper citations, comparisons with the latest works, no ablation study of their contributions.\\n\\nThe paper presents an extension of the Generative Query Network to incorporates 1) 3D grid for the state representations 2) hierarchical representation and 3) unsupervised model for explicit object representation (which is tied to 1.\\n\\nThe unsupervised representation is interesting, but this is a minor contribution on top of GCN and 3D representations have been widely studied in the vision community.\\n \\nAlso, except for the 3D representation, I am not sure how much the hierarchical representation helps. This leads to the question of why the authors did not perform ablation studies on each component.\\n\\nFinally, it seems that the authors did not add proper citations. First, the 3D representation has been studied widely in the vision community and 3D-R2N2, ECCV'16 proposed using an RNN to encode a series of 2D images to learn 3D grid representation which seems quite similar to what the authors are proposing as an encoder and representation. Secondly, there are numerous methods on 3D neural rendering such as DeepVoxels, CVPR19 and all of the baselines in their experiments. The paper seems to completely ignore the works in this field.\", \"questions\": \"Please point out the equation number of implementation of Eq.2 in the appendix.\\n\\nIs the y in Eq.3 the same y defined in the preliminary? Also, consider defining y again. There are almost two pages between the definition and Eq.3.\", \"minor\": \"Why learn the camera projection? $f_{3D\\\\rightarrow 2D}$? Isn't this deterministic using a camera matrix?\\n\\nIs the lighting fixed throughout the training and testing?\\n\\nSec.2 first sentence: an query --> a query\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a model building off of the generative query network model that takes in as input multiple images, builds a model of the 3D scene, and renders it. This can be trained end to end. The insight of the method is that one can factor the underlying representation into different objects. The system is trained on scenes rendered in mujoco.\", \"summary_of_positives\": \"+The factoring makes sense, and the use of voxels\\u00a0+ the physical property to enforce that two objects can't be superimposed in z_pres is a good strategy.\\n+There are a number of good qualitative results for understanding the learned object-oriented representation.\\n+The approach of learning 3D representations by comparing projections to observations is a good direction.\", \"summary_of_negatives\": [\"The method is quite complex and explained, in my view poorly (although I'm open to the other reviewers' opinion on the matter).\", \"The experiments are weak\", \"The manuscript makes fairly broad claims that aren't substantiated and ignores a great deal of work in the vision community on this topic.\", \"Given the three largely orthogonal and fairly strong negatives, I lean heavily towards rejecting this paper. Independently each of these is an issue that would be push me to at least lean towards rejection. However, I would encourage a revision and resubmission with improved method explanation, stronger experiments, and a clearer picture with respect to existing work.\"], \"in_more_detail\": \"\", \"method\": \"-I found the method section quite difficult to read, in part because the method is quite complex. This isn't intrinsically a bad thing, but complex methods with lots of steps should come with few surprises and descriptions that make the method accessible. In particular, the method section would benefit from a stronger figure that in part introduces the notation, as well as a little more thought in terms of the introduction of the method. A few instances:\\u00a0\\n1) \\\"This is done by an order-invariant object encoder r_n = f_{order-invar-obj}(...)\\\". One turns to the appendix, and tries to find this function. It's not explicitly there -- instead you need to match r_{n,C} = \\\\sum_{i} .... , then look up above at the note that \\\"ObjectRepNet is the module we use for object level order invariant encoder\\\", then remember that sum is order invariant.\\u00a0\\n2) I searched throughout the paper and couldn't find precisely what model f3d->2d was. The figure suggests a projective camera and the text says \\\"For more details on the coordinate projection, refer to Appendix.\\\", but there's none in the appendix as far as I can see.\\u00a0\\n3) STN-INV is nowhere defined -- inverse spatial transformer?\\u00a0\\n4) s^{what} doesn't appear to be anywhere in the appendix -- is s^{what} factored into y^att and alpha^{att}? 
By matching the RHS, this seems to be the only possibility, but in the main body it's called ConvDRAW aka the GQN decoder, but in the appendix it's called Renderer.\\u00a0 \\u00a0\\n5) There are lots of other little things -- e.g., a figure that refers to a variable that doesn't exist (see small details section )\\n\\nI don't see why a paper this complex can't be accepted at ICLR, but I think at a minimum, the appendix should be more complete so that things are well-defined. I'm open to the possibility that I may just be slow so long as the other reviewers think the paper is crystal clear down to the details. However, I think even if I'm just the slow one, the authors should think about writing this more clearly and using consistent notation and function names.\", \"experiments\": \"-As far as I can see, the difference between\\u00a0ROOTS and GQN is that GQN is a little more blurry in its output (Figure 2) and ROOTS has a slightly better MSE for lots of objects (Table 1) but produces NLLs similar to GQN. There are a few issues with this:\\n(a) It's surprising that the correlation between larger numbers of objects and better MSE isn't really investigated -- why not show that GQN breaks at some point?\\u00a0The differences right now are fairly small, and I think the paper ought to delve into details to demonstrate that the differences are important.\\n(b) There are so many changes between ROOTS and GQN that I don't know if this has to do with the object-bits of it, or something else. This is part of a larger problem where there are no ablations. A large complex system is trained, and lots of changes are made to GQN. But when there are no ablations, it's unclear what parts of the changes are important and which parts aren't.\\n(c) It's not clear whether the GQN and ROOTS are being trained fairly -- do they have the same capacity? Why are they both trained for the same number of epochs? It seems entirely likely that ROOTS may train faster than GQN (or the reverse!). If there's only one experiment like this, why not train for a long enough time to ensure convergence and then take the checkpoint with best validation performance?\\u00a0\\n-The NLL results are very weak and probably not worth putting in, at least without some sort of explanation for why this gap is significant.\\n-The object detection quality experiment is incomplete -- I just do not know how to parse the numbers that are presented without some sort of simple baseline in order to make sense of things. Why not also try something like this on GQN?\\u00a0\\n-The qualitative experiments are nice but would be substantially improved by showing that:(a) that GQN doesn't do any of these\\u00a0(b) that ROOTs can train on 2-3 objects and test on 3-5 objects by simply changing the prior on K -- this is one of the primary advantages of object-centric representations of scenes (the ability to handle arbitrary numbers of objects).\", \"related_work\": \"The paper really needs to make its claims more specific and position itself better with respect to related work.\", \"two_gratuitousexamples\": \"1) The title is \\\"Object-oriented representation of 3D scenes\\\", which covers decades of work in robotics and vision. This title should be changed. \\n\\n2) \\\"First unsupervised\\u00a0model that can identify objects in a 3D scene\\\" is exceptionally broad: voxel segmentation is already a standard feature in point cloud libraries (e.g., \\\"Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds\\\" Papon et al CVPR 2013). 
Is the manuscript and the Papon paper the same at all? No. But are they both unsupervised models that can identify objects in scenes. I'm not demanding that the authors write out a claim of novelty that's like a patent, but claiming \\\"first unsupervised model that can identify objects in a 3D scene\\\" is, in my opinion, clearly incorrect and needs to be qualified.\\n\\n\\nThe paper should also better position itself compared to the wide variety of work that's been done on unsupervised 3D shape estimation/feature learning using reprojection. For instance (among many): \\n(1) Geometry-Aware Recurrent Neural Networks for Active Visual Recognition. Cheng et al. NeurIPS 2018\\n(2)\\u00a0Learning a multi-view stereo machine. Kar et al. NeurIPS 2017.\\n(3)\\u00a0Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency. Tulsiani et al. 2017.(4)\\u00a0Unsupervised Learning of Depth and Ego-Motion from Video. Zhou et al. CVPR 2017 (not voxels, but 2.5D or a form of 3D)\\n\\nas well as the vast array of work on 3D reconstruction, including work that is object-oriented\\n(1)\\u00a0Learning to Exploit Stability for 3D Scene Parsing. Du et al. NeurIPS 2018.\\n(2)\\u00a0Cooperative Holistic Scene Understanding: Unifying3D Object, Layout, and Camera Pose Estimation. Huang et al. NeurIPS 2018\\n(3)\\u00a0Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene, Tulsiani et al. CVPR 2018\\n(4) Potentially not out at ICLR submission deadline, but\\u00a03D Scene Reconstruction with Multi-layer Depth and Epipolar Transformers. Shin et al. ICCV 2019.\\n\\nI agree that there are differences between these works and the manuscript, but it's really peculiar to work on inferring a 3D volume of scenes from a 2D image or set of images, and only cite YOLO, faster RCNN, and FCNs from the world of CVPR/ICCV/ECCV etc where this work is done very frequently. These works do indeed sometimes rely on a bit more supervision (but not always). But they're tested on data that's far more complex than a set of spheres and cubes.\\n\\n\\n\\nSmall issues that do not affect my score.\\n- The claim that the method is unsupervised when it has access to precise camera poses seems a bit like a stretch to me. It's common enough that I've given up quibbling about it. Peoples' sense of distance is not exact. This deserves some further thought.\\n-The authors should go through and double check their use of GQN and CGQN -- it's said at the beginning that GQN just means CGQN, but then it's occasionally dropped (e.g., right before Table 1)\\n- Fig 1 shows z^{where}, which I guess got renamed?\\n- \\\"The Scene Representation Network is modified bases on Representation Network in GQN.\\\" -> this sentence is presumably a typo/cut off halfway through.\\n- \\\" This helps remove the sequential object processing as dealing with an object does not need to consider other objects if the features are already from spatially distant areas.\\\" -> this is unclear\\n- Eqn 11 appendix \\\"sacle\\\" -> \\\"scale\\\"\\n- \\\"Object Representation Network\\\" in A.2 \\\"objcet\\\" -> \\\"object\\\"\\n-Equation 15 -- the parentheses put i \\\\in \\\\mathcal{I}(D) inside the Renderer, which is presumably not true.\\n-Table 1 -- table captions go on top\\n\\n\\n\\n\\n\\n\\n\\n\\n----------------------------------------------------\", \"post_rebuttal_update\": \"\", \"ac\": \"I realize that I'm just the cranky computer vision person shouting about numbers and I may be out of my element here, so take this as you want. 
But in my view, things really need to be evaluated (since vision struggled for many years because people showed a few nice qualitative results and didn't put their ideas to the test).\\n\\n2) Clarity: I completely disagree with the authors that clarity issues were minor -- they really weren't and all reviewers agreed on this. Typos are thing lke tihs that are easy to read through without thinking. These were things that required thinking, flipping back-and-forth-etc. \\n\\nThe revision appears to be close to a full-rewrite, to the point where the diff system is useless -- it's all red/green. It appears to be clearer, but I haven't checked thoroughly. The ICLR 2020 guide is unclear how you should treat this (the AC guide says \\\"you can ignore this revision if it is substantially different from the original version.\\\"). Personally, I don't think it's fair to authors who spent their time making their paper clear in the first place rather than on new results.\", \"indeed\": \"the experiments in the revision show that ROOTS often does *worse* in generalization performance to previously unseen objects (improving in only 6/9). This is surprising -- if GQN is supposed to break, why doesn't it break here? I appreciate the author's response that there's a latent variable of # of objects that needs to be adjusted in the case of ROOTs, but this should be investigated.\\n\\nThe same thing goes with the claim that an ablation study is only necessary for improving results. This is just baffling -- is it possible that only certain parts of the method are necessary? Surely this is a problem that is worth studying. What if it's just that some aspect of the system has higher capacity than the equivalent in GQN and just works better?\", \"smaller_stuff\": \"-The authors have misunderstood my statement on paper complexity (although I now realize the comment has been edited)-- my point is that people who present a complex system have a strong obligation to present a clear explanation (since there's little opportunity for redundancy in explanation unlike a simple approach).\\u00a0\\n\\n-\\\"AP interpretable metric without a relative comparison\\\": this is just not true although openreview is probably not the place to litigate this and I recognize that this is my outlook. Accuracy is highly interpretable: 90% top-1 accuracy on mnist would have been boring in 2005, and 90% top-1 accuracy on imagenet would be very exciting today.\\u00a0\\n\\n-f_{3D-2D} There are multiple camera models. Skimming the revision suggests it's perspective projection, but the authors should realize that there are others (orthographic, weak perspective, etc) and they're often used because they're easier to learn with.\"}"
]
} |
HJl8_eHYvS | Discriminative Particle Filter Reinforcement Learning for Complex Partial Observations | [
"Xiao Ma",
"Peter Karkus",
"David Hsu",
"Wee Sun Lee",
"Nan Ye"
] | Deep reinforcement learning is successful in decision making for sophisticated games, such as Atari, Go, etc.
However, real-world decision making often requires reasoning with partial information extracted from complex visual observations. This paper presents Discriminative Particle Filter Reinforcement Learning (DPFRL), a new reinforcement learning framework for complex partial observations. DPFRL encodes a differentiable particle filter in the neural network policy for explicit reasoning with partial observations over time. The particle filter maintains a belief using a learned discriminative update, which is trained end-to-end for decision making. We show that using the discriminative update instead of standard generative models results in significantly improved performance, especially for tasks with complex visual observations, because it circumvents the difficulty of modeling complex observations that are irrelevant to decision making.
In addition, to extract features from the particle belief, we propose a new type of belief feature based on the moment generating function. DPFRL outperforms state-of-the-art POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark introduced in this paper. Further, DPFRL performs well for visual navigation with real-world data in the Habitat environment. | [
"Reinforcement Learning",
"Partial Observability",
"Differentiable Particle Filtering"
] | Accept (Poster) | https://openreview.net/pdf?id=HJl8_eHYvS | https://openreview.net/forum?id=HJl8_eHYvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Up_neGBg1Y",
"B1eqKjQ3sS",
"SJgVcy72sH",
"HJx3PRVusB",
"HJeL4A4dsB",
"B1gmGR4OiS",
"S1l7IWx99B",
"HJeKIBfg9H",
"SkepmDSCYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748181,
1573825410269,
1573822348276,
1573568099911,
1573568045617,
1573568010974,
1572630858959,
1571984720863,
1571866405090
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2399/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2399/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2399/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2399/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2399/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2399/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2399/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2399/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors introduce an RL algorithm / architecture for partially observable\\nenvironments. \\nAt the heart of it is a filtering algorithm based on a differentiable version of \\nsequential Monte Carlo inference. \\nThe inferred particles are fed into a policy head and the whole architecture is \\ntrained by RL. \\nThe proposed methods was evaluated on multiple environments and ablations \\nestablish that all moving parts are necessary for the observed performance. \\n \\nAll reviewers agree that this is an interesting contribution for addressing the \\nimportant problem of acting in POMDPs. \\n \\nI think this paper is well above acceptance threshold. However, I have a few points that I\", \"would_quibble_with\": \"1) I don't see how the proposed trampling is fully differentiable; as far as I \\nunderstand it, no credit is assigned to the discrete decision which particle to \\nreuse. Adding a uniform component to the resampling distribution does not \\nmake it fully differentiable, see eg [Filtering Variational Objectives. Maddison \\net al]. I think the authors might use a form of straight-through gradient approximation. \\n2) Just stating that unsupervised losses might incentivise the filter to learn \\nthe wrong things, and just going back to plain RL loss is not in itself a novel \\ncontribution; in extremely sparse reward settings, this will not be \\nsatisfactory.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"We thank the reviewer for the encouragement\", \"comment\": \"We thank the reviewer for the encouragement!\\n\\nWe will further study the learned representation to provide intuitions for designing better representation learning algorithms.\"}",
"{\"title\": \"Official Blind Review #1 update\", \"comment\": \"Thank you for the clarifications, additional analyses and references. The paper is improved and I have updated the score accordingly.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for the positive feedback. We are grateful for the many suggestions for improvement, which we have mostly incorporated in the revised manuscript. We would like to further clarify some of the questions below.\", \"q\": \"Additional references and pseudocode\", \"a\": \"Thank you for the useful suggestions. We have added the references to Section 4.2 and the pseudocode of our algorithm to Appendix D.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for the positive feedback. We have further revised the manuscript according to others suggestions.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for your valuable feedback, and suggestions for improvement. We have revised the paper accordingly. We would also like to answer some of the questions below.\", \"q\": \"Natural flickering Atari games are artificial & types of POMDPs considered in the paper.\", \"a\": \"Indeed, we see the benefit of the natural flickering Atari in that they provide a controlled benchmark to understand the influence of complex observations in POMDPs. We agree that flickering observations do not capture the most interesting class of POMDPs and that Natural flickering Atari games are still far from real-world applications. We have chosen Mountain Hike and flickering Ataris games mainly for demonstration purposes in well-understood settings, and the Habitat domain for a more realistic application with challenging partial observability. We have updated the section 4.3 to better reflect these motivations.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"What is the specific question/problem tackled by the paper?\\n\\nRepresentation learning in POMDPs in order to ignore spurious information in observations.\\n\\n\\nIs the approach well motivated, including being well-placed in the literature?\\n\\nSome comparisons to related work are missing; while the comparisons would enrich the paper, their absence is not fundamentally limiting to the conclusions.\\n\\nThere's an additional PSR-related work that can be seen as learning representations for POMDPs (Guo et al., Neural predictive belief representations, arXiv:1811.06407). This work is in line with the work of Gregor et al., 2019, and both provide suitable representation learning techniques for POMDPs. \\n\\nThese representation learning in the paper is based on action-conditional predictions of future quantities, which is complementary to the approach proposed in the paper. That is, one could conceive adding action-conditional predictions of the future with the particles as the RNN states.\\n\\n\\nDoes the paper support the claims? This includes determining if results, whether theoretical or empirical, are correct and if they are scientifically rigorous.\\n\\nI think the support is somewhat adequate.\\n\\nThe claim that the proposed method handles spurious information is well supported by the experiment in mountain hike, but not quite so by the Atari experiments. The performance (upon introduction of the \\\"natural\\\" on top of flickering) takes a big hit for both DPFRL and DVRL. Still, the performance improvement of DPFRL over DVRL is still an encouraging result.\\n\\n\\nSummarize what the paper claims to do/contribute. Be positive and generous.\\n\\nThe paper proposes a neural implementation of particle filters, by treating samples of RNN states as particles. The particles are used to estimate moment-generating functions evaluated at trained vectors, which in turn are supposed to provide more information for the policy's decision making. The paper uses a discriminator to shape the representation.\\n\\nThe ablation study suggests that all three components (particles, MGFs & discrimination) are necessary. However, the third component has been shown not to be exclusively helpful for representation learning (Gregor et al., Guo et al.) I would suggest a study in comparison to Gregor et al.'s method (DRAW) instead.\\n\\n\\nClearly state your decision (accept or reject) with one or two key reasons for this choice.\\n\\nI vote for acceptance.\\n\\n\\nProvide supporting arguments for the reasons for the decision.\\n\\nI think the algorithmic idea in this paper is a step in the right direction and can be of interest for the community. I would hope for the benchmarks to be more like the Habitat, and less like Atari with background videos. The conclusions in the latter benchmark seem less likely to apply to tasks in physically structured environments.\\n\\n\\nProvide additional feedback with the aim to improve the paper. Make it clear that these points are here to help, and not necessarily part of your decision assessment.\\n\\nI think it is important for the paper to qualify the kind of POMDPs being considered. 
The defining feature of most of the environments being used is that the state is observed through a noisy channel. Many POMDPs are of interest because the observations are really providing partial information about the state, even if it is noiseless. This is the case for the Habitat setting.\\n\\nBecause the paper's claims about the adequacy of the method for POMDPs rest on the choice of environments, I think it's important to qualify what kind of POMDPs are being considered here. I would also caution against stating that the environment is closer to the real world. It would perhaps be better to say that the natural flickering is more interesting than the natural and the flickering because it benchmarks robustness to irrelevant information in observations, provided almost in tandem with state information, with intermittently missing observations.\\n\\nPlease add some explanation about how the negative examples are sampled for the contrastive estimation.\"}",
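The moment-generating-function features mentioned in the summary above admit a compact statement. A hedged sketch in PyTorch (variable names are mine, not the paper's): each of the K belief features evaluates the MGF of the particle belief, M_b(v_k) = sum_i w_i * exp(v_k . h_i), at a learned vector v_k; the weighted mean of the particles is typically used as an additional feature.

```python
import torch

def mgf_belief_features(h, w, V):
    """h: (N, D) particle latent states; w: (N,) normalized particle weights;
    V: (K, D) learned evaluation vectors. Returns the (K,) belief features
    M_b(v_k) = sum_i w_i * exp(v_k . h_i)."""
    logits = h @ V.t()                                   # (N, K) dot products
    return (w.unsqueeze(1) * torch.exp(logits)).sum(0)   # (K,) MGF values
```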
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This is a well written paper. It introduces a principled method for POMDP RL: Discriminative Particle Filter Reinforcement Learning (DPFRL).\\nIt combines the strength of Bayesian filtering and policy-oriented discriminative modeling. DPFRL encodes a differentiable particle filter with learned transition & observation models in a neural network, allowing for reasoning with partial observations over multiple time steps. It performs explicit belief tracking with discriminative learnable particle filters optimized directly for the RL policy. \\n\\nExperimental results show that DPFRL achieves state-of-the-art on POMDP RL benchmarks. I especially like the paper covers a diverse set of applications, including Mountain Hike, the classic Atari games, and visual navigation (Habitat). Improved performance is reported. Results show that the particle filter structure is effective for handling partial observations, and the discriminative parameterization allows for complex observations.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Update: my concerns have been addressed and I have updated the score to 8\\n****\\n\\nThis paper introduces 3 neat ideas for training deep reinforcement learning (DRL) agents with state variables so that they can handle partially observed environments:\\n1) model the latent state variable as a belief distribution, using a collection of weighted hidden states, like in the particle filter (PF), with an explicit belief update of each particle, calculation of the weight using an observation function, and a differentiable re-weighting function to get the new belief distribution,\\n2) base the policy on the whole set of particles, by quantifying that set using its mean as well as a collection of K learnable moments (specifically, K Moment Generating Functions, each one corresponding to a dot product between the moment variable and the hidden state of the particle),\\n3) instead of generating the observations, take again the idea from PF which is to measure the agreement between the current observation o_t and the i-th particle state variable h_t^i, via a learnable discriminative function.\\nFrom what I understand, the only gradients in the model come from the usual 3 RL losses, and the observation functions in the discriminative PF are trained because they weigh the particles.\\n\\nThe model, trained using Advantage Actor Critic (A2C) works well on the (contrived, more on that later) \\\"flickering Natural\\\" Atari RL environment as well as on the Habitat navigation challenge, outperforming both the GRU-based deep RL agent and the Deep Variational RL based agent that uses variational sequence auto-encoders (and extra gradients from the observation function...). The ablation analysis confirms the advantages of the 3 ideas introduced in the paper.\\n\\nThe paper is a very well written and the experiments are very well executed. I believe that the idea is novel. I gave this paper only a weak accept because of unclear explanation and of several missed opportunities:\\n\\n* The observation function f_{obs}(h_t^i, o_t) is insufficiently explained. I understood it was trained using discriminative training. Does it mean that different observations o_t are used, and if so, how many? Or is the observation o_t the current observation of the agent, but only the h_t^i change? In which case, what makes it discriminative? Isn't there a posterior collapse, with all particles ending up bearing the same state? Does the function f_{obs} input o_t or u(o_t), where u is the convolutional network?\\n\\n* These questions could be easily answered with pseudocode in the appendix.\\n\\n* In section 3.1, what is the relationship between p_t(i) and f_{obs}(h_t^i, o_t)?\\n\\n* Particle filters in navigation enable to store the history of the observations of the mobile robot, accumulating the laser range scans and matching them to the observations. At the end, one can visualise the map stored in a given particle, as well as visualise the point cloud of the particle coordinates and show the trajectories of these particles. Here the particles contain the hidden states of the agent. 
Could you, similarly to traditional PF, visualise the position of the agent by matching the point cloud {{h_t^i}_i}_t to a set of observations o_k taken from the whole environment, and plotting a 2D map of weights coming from function f_{obs}(h_t, o_k) evaluated over all k?\\n\\n* In the discussion, can you comment on the relationship between Monte-Carlo Tree Search in RL agents (sampling different trajectories) vs. here (sampling different states)?\\n\\n* While I understand the need to use that environment for the sake of comparison to DVRL, the Atari + flickering + natural images dataset is very artificial and contrived. I would be interested in seeing more analysis of the discriminative PF RL algorithm on navigation tasks, given that that's what PF were designed for.\", \"some_missing_references\": \"* Early references on DRL for navigation:\\nZhu et al (2016) \\\"Target-driven visual navigation in indoor scenes using deep reinforcement learning\\\"\\nLample & Chaplot (2016) \\\"Playing FPS games with deep reinforcement learning\\\"\\nMirowski et al (2016) \\\"Learning to navigate in complex environments\\\"\"}"
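One plausible reading of the update the reviewer is asking about, written as a hedged code sketch (PyTorch; the module names, shapes, and wiring are assumptions for illustration, not the authors' exact architecture): the current observation is encoded once by the convolutional network u, every particle state is advanced by a shared RNN cell, and f_obs scores the agreement between each new particle state and the encoded observation, which reweights the belief.

```python
import torch
import torch.nn.functional as F

def belief_update(h, w, a_t, obs_feat, rnn_cell, f_obs):
    """One illustrative belief-update step. h: (N, D) particle states,
    w: (N,) weights, a_t: (1, A) action, obs_feat: (1, E) encoding u(o_t)
    shared across particles. rnn_cell is e.g. nn.GRUCell(A + E, D); f_obs is
    an MLP mapping (N, D + E) to an (N, 1) compatibility logit."""
    n = h.shape[0]
    inp = torch.cat([a_t.expand(n, -1), obs_feat.expand(n, -1)], dim=1)
    h_new = rnn_cell(inp, h)                      # transition each particle
    logit = f_obs(torch.cat([h_new, obs_feat.expand(n, -1)], dim=1)).squeeze(-1)
    log_w = torch.log(w + 1e-8) + logit           # discriminative reweighting
    return h_new, F.softmax(log_w, dim=0)         # renormalized belief weights
```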
]
} |
rkl8dlHYvB | Learning to Group: A Bottom-Up Framework for 3D Part Discovery in Unseen Categories | [
"Tiange Luo",
"Kaichun Mo",
"Zhiao Huang",
"Jiarui Xu",
"Siyu Hu",
"Liwei Wang",
"Hao Su"
] | We address the problem of learning to discover 3D parts for objects in unseen categories. Being able to learn the geometry prior of parts and transfer this prior to unseen categories poses fundamental challenges for data-driven shape segmentation approaches. Formulated as a contextual bandit problem, we propose a learning-based iterative grouping framework which learns a grouping policy to progressively merge small part proposals into bigger ones in a bottom-up fashion. At the core of our approach is to restrict the local context for extracting part-level features, which encourages the generalizability to novel categories. On a recently proposed large-scale fine-grained 3D part dataset, PartNet, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen testing categories without seeing any annotated samples. Quantitative comparisons against four strong shape segmentation baselines show that we achieve state-of-the-art performance. | [
"Shape Segmentation",
"Zero-Shot Learning",
"Learning Representations"
] | Accept (Poster) | https://openreview.net/pdf?id=rkl8dlHYvB | https://openreview.net/forum?id=rkl8dlHYvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-K7FkhWk5",
"BkxB6Uchir",
"Byl4piU2jS",
"HyeB3Y2IsS",
"H1eFDFn8sH",
"S1xe6dhLiS",
"B1gRgSUX5B",
"Hyx-1l2lqr",
"ByeojHyTYB",
"HkxK3bljFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1576798748152,
1573852861333,
1573837755532,
1573468588583,
1573468512945,
1573468344212,
1572197621864,
1572024281210,
1571775906597,
1571647921479
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2398/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2398/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2398/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2398/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2398/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2398/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2398/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2398/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2398/Authors"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents and evaluates a technique for unsupervised object part discovery in 3d -- i.e. grouping points of a point cloud into coherent parts for an object that has not been seen before. The paper received 3 reviews from experts working in this area. R1 recommended Weak Accept, and identified some specific technical questions for the authors to address in the response (which the authors provided and R1 seemed satisfied). R2 recommends Weak Reject, and indicates an overall positive view of the paper but felt the experimental results were somewhat weak and posed several specific questions to the reviewers. The authors' response convincingly addressed these questions. R3 recommends Accept, but suggests some additional qualitative examples and ablation studies. The author response again addresses these. Overall, the reviews indicate that this is a good paper with some specific questions and concerns that can be addressed; the AC thus recommends a (Weak) Accept based on the reviews and author responses.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for the updates!\\n\\nWe also believe that the two points will strengthen the contribution of the proposed method, and include them in Appendix C.1. Thank you for the valuable suggestion!\\n\\nSorry for the confusion. What we want to point out is that the capacity of the network also matters the performance of our method. If we cut the width of the model by half, the performance of both seen and unseen categories will drop slightly. Our intuition about this point is that the intermediate sub-parts generated during the grouping process may have various patterns and are irregular. Larger capacity will help the model recognize the various patterns. This intuition is also one of our motivations to introduce RL for learning to select pairs of sub-parts.\"}",
"{\"title\": \"Thank you\", \"comment\": \"This is helpful.\\n\\nHaving fewer parameters and using less GPU memory are nice attributes of your model, which I encourage you to mention in the paper. (I have not noticed this appear in the draft so far.)\", \"i_did_not_understand_this_sentence_from_your_response\": \"\\\"it will slightly degrade the performance on both seen and unseen categories if half the width of the model\\\". Could you please say that another way? Perhaps a word got deleted accidentally.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank Reviewer #2 for the feedback and suggestions. The suggestions are helpful, and we are open to further discussions.\\n\\nFrom the comments, we infer that Reviewer #2 assumes our claim to be that top-down approaches perform worse than bottom-up approaches in terms of generalization abilities. This is not exactly our view. Here, we precisely lay out our argument: Using features with the global context may hurt part segmentation performance in unseen categories. In most top-down pipelines and some bottom-up pipelines, the features extracted for each point to be fed to the classifier would include the global context. This point will be further discussed when addressing specific concerns. \\n\\n\\n[Regarding \\u201cPartNet-InsSeg\\u201d outperforms \\u201cSGPN\\u201d in novel categories]\\nBoth \\u201cPartNet-InsSeg\\u201d (top-down) and the \\u201cSGPN\\u201d (bottom-up) involve global context to learn point features and make decisions, thus give inferior segmentation results on unseen categories. This is consistent with our conclusions. We are happy to make this point crystally clear in the revised version. \\n\\n\\n\\n[Regarding the performance of the tradition segmentation methods and the proposed method]\\nWCseg is one of the most feasible traditional segmentation methods, whose results are provided in Table 1. Compared to the learning-based methods, It champions 6 out of 21 unseen categories. Also, we have added more qualitative results to Appendix C.3, which demonstrates the performance of both the traditional segmentation method and the proposed method.\\n\\n\\n\\n[Regarding the ablation studies]\\nThanks for pointing this out, and we made the ablation studies more thorough in revision, including the effects of involving more context on both seen and unseen categories, more components analysis, and qualitative results of the rectification module. Please refer to Appendix B for details.\\n\\nSince the policy scores sum to one overall pairs of sub-parts, there is no explicit signal from the policy network whether the pair should be grouped. We therefore introduce the termination module to verify whether we should group the pair of sub-parts, selected based on the score from the policy module. We noticed that the name of \\u201ctermination module\\u201d may have confused reviewers, so we would rename it as \\u201cverification module\\u201d. Also, there is indeed a cascaded structure where the termination module will focus on the samples selected by the policy module. This serves as a kind of hard example mining and complements the policy module, which needs to recognize so many samples. We will make the related descriptions clearer in revision. \\n\\n\\n\\n[Regarding the proposed method performs worse than Mo et al. (2019) in seen categories]\\nWith involved limited context only for seen categories, our proposed method further improves the performance in seen categories. Please refer to Table 1,6 for new results and Appendix B.1 for details.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We greatly appreciate Reviewer #1 for the analysis, which precisely states the contributions of this paper. We have added more details and made descriptions clearer in revision.\", \"here_are_the_answers_to_the_questions_and_concerns\": \"[Regarding the naming of modules, especially the \\u201ctermination module\\u201d]\\nThanks for pointing this out. After reading the comments from all reviewers, we also feel that renaming some modules may help to clarify confusion. \\n\\nSince the policy scores sum to one overall pairs of sub-parts, there is no explicit signal from the policy network whether the pair should be grouped. We therefore introduce the termination module to verify whether we should group the pair of sub-parts, selected based on the score from the policy module. In fact, this module will verify pairs of merge candidates ranked by their scores from the policy module. The first pair that passes the verification will be grouped. Only if no pairs pass the verification the whole algorithm will terminate. Based on the functionality, we would rename this module as \\u201cverification module\\u201d. The verification module complements the policy module, which needs to recognize so many samples. The pipeline thus can be viewed as a kind of hard example mining. We have made the related descriptions clearer in both Section 4.1 and Appendix C.1.\\n\\nCurrently, we named the modules according to their functionalities. We thanks for the suggestion and will seriously consider using unary/pairwise(binary) naming the purity/rectification modules which indicate the type of information input into the module.\\n\\n\\n\\n[Regarding the overstatement]\\nThanks for the suggestion. We have articulated this statement more precisely in revision. The phrase \\u201cguarantees the generalizability\\u201d is replaced by a milder one \\u201cencourages the generalizability\\u201d. \\n\\n\\n\\n[Regarding the proposed model has fewer parameters than baselines]\\nThanks for pointing this out. The number of parameters for different methods is listed below:\\n\\n--------------------------\", \"partnet\": \"1.93e+06\", \"sgpn\": \"1.55e+06\", \"gspn\": \"14.80e+06 (Shape Proposal Net: 13.86e+06)\", \"our\": \"0.64e+06\\n--------------------------\\n\\nCurrently, our model does have fewer parameters than compared learning methods. But, we would like to point out that it will slightly degrade the performance on both seen and unseen categories if half the width of the model. Our intuition is that the intermediate sub-parts generated during the grouping process may have various patterns and are irregular. This increases the burden of models to recognize, and we widen the network can alleviate this situation. This intuition is also one of the motivations why we introduce the RL to learn to select the pairs. We want to use the policy network to help form more regular intermediate sub-parts during the grouping process. The size-equal rule learned by our policy network is a positive signal on this point. Please also refer to Appendix B.3 and see some related qualitative results.\\n\\nBesides, the input for our modules is not the whole shape point cloud, but sampling points of sub-parts from the shape. In our experiments, the size of input point clouds for our method is 1024, while for compared baselines are 10000. So the proposed method has advantages in GPU memory cost. 
\\n\\n\\n\\n[Regarding more statistics on the earliest stage of the method]\\nThanks for pointing this out; we have added more details and statistics in Appendix A and C.1.\\n\\nWhen we train on Chair and test on Chair at the level-3 annotation, the average number of initial proposals is 124 and the average number of pairs for the initial pool is 658. The number of valid pairs decreases quickly as the grouping process goes on, and we usually have a total of 137 iterations. When running the model on a 1080Ti, it takes about 3s to process one shape.\\n\\n\\n\\n[Regarding the part selection]\\nYes, we adopted conditions similar to those mentioned: two sub-parts of a pair are constrained to be close to each other at the early grouping stage. Please refer to Appendix C.1 to see more implementation details.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank Reviewer #3 for the comments and suggestions. The suggestions are helpful in further improving our work.\", \"here_are_the_answers_to_the_questions_and_concerns\": \"[Regarding the purity score and the purity module]\\nSimilar to the objectness score used in object detection, the purity score serves as the partness score to measure the quality of the sub-part. The purity score is higher, the sub-part is more likely to only cover one part in ground-truth. We will use this score to measure the quality of our initial sub-part proposals and remove the low-quality subparts to form our initial sub-parts pool. We add more related descriptions in Appendix A and C.1.\\n\\nFor the policy network, we use the purity module to process unary information and the rectification module to process binary information. While the purity score as a unary term could be learned through the policy gradient, we observe that we can give direct supervision, which is effective. In revision, we add related ablation results in Appendix B.2 and justify that the purity module helps to learn the policy.\\n\\n\\n\\n[Regarding making the rectification module larger]\\nLarger networks have larger capacities but higher risks to overfit to training data. Since our approach purely exploits local context, larger networks may have less impact on the overfitting issue for our method compared to those baselines with inputting the global context. We enlarge the rectification module and gain improvements in seen categories. For the unseen categories which have similar part patterns with the training categories, we obtain some improvements. But, for the unseen categories (e.g. scissors) which have relatively large different part patterns with the training categories (chair, storage furniture, lamp), we observe inconsistent improvements or declinations. Thanks for the suggestion. We will study this point thoroughly, and include it in revision.\\n\\nPlease also refer to Appendix B.1 to see the related ablation studies about the effects of involving more context on both seen and unseen categories.\\n\\n\\n\\n[Regarding the levels being inconsistent between categories]\\nThanks for pointing this out. The mentioned statements are not clear, and we have fixed it in revision. The segmentation levels for different categories may not share consistent part granularity, which is the reason that we gather together the part proposals predicted by networks at all three levels as a joint pool of proposals for evaluation on unseen categories. The levels of different categories may not correspond exactly; however, the joint part proposals can cover multiple levels of parts for unseen categories. Our three training categories have several thousands of models per category, thus providing a large variety of parts at different granularities for learning.\\n\\n\\n\\n[Regarding integrating the termination module into the policy module]\\nIn our pipeline, we will use the policy network to pick the pair of sub-parts and use the termination network to determine whether we should group the pair. The termination module is the basic building block of our pipeline. We noticed that the name of \\u201ctermination module\\u201d may have confused readers, so we would rename it as \\u201cverification module\\u201d and made related descriptions clearer in revision. Also, we would like to point out that the termination module will focus on the samples selected by the policy module. 
This cascade structure serves as a kind of hard example mining and will improve the performance.\"}",
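The purity target discussed above has a simple closed form under one natural reading (my formulation, consistent with the stated intent that purity is 1 exactly when a sub-part lies entirely inside a single ground-truth part): the fraction of the sub-part's points that belong to its dominant ground-truth part.

```python
from collections import Counter

def purity(gt_part_ids):
    """gt_part_ids: ground-truth part label for each point of a sub-part.
    Returns the fraction of points covered by the dominant label; 1.0 means
    the sub-part covers only one ground-truth part."""
    counts = Counter(gt_part_ids)
    return max(counts.values()) / len(gt_part_ids)
```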
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper studies the problem of part segmentation in objects represented as a point cloud. The main novelty is in the fact that the proposed method uses a bottom-up iterative merging framework inspired by perceptual grouping and finds that it transfers better to unseen categories. In zero-shot transfer experiments, the proposed method performs better than all four other baselines compared; but is worse than Mo et al. (2019) in known categories.\", \"The paper hypothesizes that top-down approaches do not generalizes well to new categories because they end up overfitting to the global context. While this is reasonable, I find that the experiments are not sufficient to validate this claim (please see questions below). Evaluation on unseen object categories is an underexplored topic, and the paper is generally well written. I think the submission can be an above-threshold paper if the questions are addressed.\", \"I\\u2019d like to see some evidence for the claim that classic segmentation methods \\\"can perform much better for unseen object classes\\\" (last paragraph of page 1), and see how the proposed method compares to those baselines.\", \"If my understanding of Table 3 is correct, \\\"PartNet-InsSeg\\\" (Mo et al. 2019) is a top-down approach yet it performs better than SGPN which is a bottom-up grouping method (as summarized on page 7) in novel categories. If so, can it be explained in a way that is consistent with the paper's findings?\", \"Table 4 shows some ablation study in an attempt to justify the proposed design, but I think it should be more thorough. e.g. it is not immediately obvious why the authors did not included a baseline that consists only of the rectification module with a termination threshold (seems like the most basic design that doesn't have the large-part bias or explicitly require a termination module).\"], \"typos\": \"psilon-greedy (page 6 paragraph 2)\\nbackpropogation (page 6 under training losses)\\nIn consequences (page 5 under termination network)\\nepilson (page 5, under network training)\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a method for part segmentation in object pointclouds. The method is to (1) break the object into superpixel-like subparts (without semantic meaning yet), then (2) score pairs of parts on their mergeability, (3) greedily merge the best pair, and repeat. The scoring has a unary component (called a \\\"purity\\\" module), and a pairwise component (called a \\\"rectification\\\" module); the unary component determines if the joined pointcloud of two sub-parts appears part-like, and the pairwise component determines if the features of the two sub-parts appear compatible. These components are implemented as pointnets/MLPs. Finally there is a termination module, which sigmoid-scores part pairs on whether they should actually merge (and the algorithm continue), or not (and we stop). The purity and termination modules are trained supervised, to mimic intersection-like and mergeability scores, and the rectification module with a \\\"reward\\\" which is another mergeability score (coming from GT and the purity module).\\n\\nThe method is interesting for being (1) iterative, and (2) driven by purely local cues. The iterative approach, with small networks doing the work, is a nice relief from the giant-network baselines (such as PartNet-InsSeg) that take the entire pointcloud as input and produce all instance segmentations directly. Also, whereas most works try to maximize the amount of contextual input to the learning modules, this work makes the (almost certainly correct) observation that the smaller the contextual input, the smaller the risk for overfitting. This is a bit like making the approach \\\"convolutional\\\", in the sense that the same few parameters are used repeatedly over space (and in this case, also repeated over scale). The design of the local modules makes sense, although I would prefer they be called unary/pairwise instead of purity/rectification, and the RL training procedure looks reasonable also.\\n\\nI am not totally clear on how the termination module actually comes into play. From the name, it sounds like this network would output 1 when the algorithm should terminate, but in its usage, it seems to output 1 when the best-scored pair should be merged. So then, does the algorithm terminate when this module decides to NOT merge the best-scored pair? This sounds like it bears great risk of early stopping. I would appreciate some clarification on this.\\n\\nThe abstract says that locality \\\"guarantees the generalizability to novel categories\\\". This is an overstatement, since \\\"guarantees\\\" implies some theoretical proof, and also since the paper's own results (in Table 1 and 3) indicate that cross-category generalization is far from addressed, and depends partly on the categories used in training (shown in Table 2). \\n\\nI assume that this method has (or at least can have) far fewer parameters than the baselines, since the components never need to learn broad contextual priors. Can the authors clarify and elaborate on this please? 
If you can show that your method has far fewer parameters than the baselines, it would improve the paper I think.\\n\\nCan the authors please provide some statistics on the earliest stage of the method, where superpixel-like parts are proposed? How many proposals, and how many pairs does this make, and how slowly do the main modules proceed through these pairs? \\n\\nIs there a missing step that makes the part selection non-random? It seems like many of the pairs can be rejected outright early on, such as ones whose centroids exceed some distance threshold in 3D.\"}",
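For concreteness, the iterative procedure described in this review can be sketched as follows (a hedged reading, assuming numpy; `policy_score`, `verify`, and the centroid-distance pruning stand in for the paper's policy, verification, and pair-filtering components): candidate pairs are pruned by distance, ranked by the policy score, and the highest-ranked pair that passes verification is merged; the loop terminates when no candidate passes.

```python
import numpy as np

def centroid_dist(p, q):
    return np.linalg.norm(p.mean(axis=0) - q.mean(axis=0))

def group_parts(subparts, policy_score, verify, dist_thresh=0.2):
    """subparts: list of (M_i, 3) point arrays. policy_score(p, q) -> scalar
    mergeability; verify(p, q) -> probability in [0, 1] that merging is correct."""
    while True:
        n = len(subparts)
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if centroid_dist(subparts[i], subparts[j]) < dist_thresh]
        ranked = sorted(pairs, key=lambda ij: -policy_score(subparts[ij[0]],
                                                            subparts[ij[1]]))
        for i, j in ranked:
            if verify(subparts[i], subparts[j]) >= 0.5:   # merge accepted
                merged = np.concatenate([subparts[i], subparts[j]], axis=0)
                subparts = ([s for k, s in enumerate(subparts)
                             if k not in (i, j)] + [merged])
                break
        else:
            return subparts   # no candidate pair verified -> terminate
```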
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper describes a method for segmenting 3D point clouds of objects into component parts, with a focus on generalizing part groupings to novel object categories unseen during training. In order to improve generalization, the paper argues for limiting the influence of global context, and therefore seeks to build compact parts in a bottom-up fashion by iterative merging of superpixel-like point subsets. This is achieved by defining a RL merge policy, using merge and termination scores formed by a combination of explicitly trained part purity (each part should comprise one true part), and policy-trained pair comparison network. The system is evaluated using PartNet, using three categories for training and the rest for testing, showing strong performance relative to baselines.\\n\\nThe system is described well, and shows good performance on a nicely motivated task. A few more ablations would have been nice to see (in questions below), as might more qualitative results. Overall, the method is presented and evaluated convincingly.\", \"questions\": [\"What is the effect of the purity score regression? Since the policy network is trained using a pair-comparison module anyway, what happens if the explicit purity score supervision is removed?\", \"What if the \\\"rectifier\\\" module is made larger (with or without purity module), e.g. the same size as the termination network? Does this improve or overfit to the training categories?\", \"Sec 5.3 mentions \\\"segmentation levels for different categories may not share consistent part granularity .... Thus, ... we train three networks corresponding to three levels of segmentation for training categories\\\". While it makes sense to have three networks for the three levels (each have different termination points, and perhaps even merge paths), I don't see how this follows from the levels being inconsistent between categories. In fact, it seems just the opposite, that if the levels are inconsistent, this could pose a problem when a part at one level for one category is \\\"missing\\\" from the other category, due to level numbers not coinciding. Or, is this actually not a problem because on the three training categories selected, the levels are in fact consistent?\", \"Can termination be integrated into the policy network or policy itself?\"], \"a_couple_typos_i_noticed\": \"p.5 \\\"In consequences,\\\" --> \\\"As a consequence,\\\"\\np.11 \\\"in-balanced\\\" --> \\\"unbalanced\\\"\"}",
"{\"title\": \"More Qualitative Results\", \"comment\": \"Thanks for your interest. We've included more qualitative results in the revision. Please refer to Appendix C.3.\\n\\nAlso, our approach will learn local-context part knowledge from training categories that is able to transfer to unseen categories. You can find some relating experiments and discussions in Section 5.4.\"}"
]
} |
rylrdxHFDr | State Alignment-based Imitation Learning | [
"Fangchen Liu",
"Zhan Ling",
"Tongzhou Mu",
"Hao Su"
] | Consider an imitation learning problem in which the imitator and the expert have different dynamics models. Most existing imitation learning methods fail because they focus on the imitation of actions. We propose a novel state alignment-based imitation learning method to train the imitator by following the state sequences in the expert demonstrations as much as possible. The alignment of states comes from both local and global perspectives. We combine them into a reinforcement learning framework by a regularized policy update objective. We show the superiority of our method in standard imitation learning settings as well as the challenging settings in which the expert and the imitator have different dynamics models. | [
"Imitation learning",
"Reinforcement Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rylrdxHFDr | https://openreview.net/forum?id=rylrdxHFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"EnY6q3beyN",
"SJer6JonjB",
"SylTOhF3or",
"ryxKWFghsH",
"Skg4aNy2jB",
"HJgcSiqjsB",
"B1x0yM-ssB",
"Bkl7NsAqsH",
"B1x5zqAcjH",
"SJl5qjdYiB",
"BJgKvjdFsB",
"SJlgjNxPiS",
"SJe4LmxwoB",
"HJeDsmmIqS",
"rkgo-346tS",
"r1gjKT42Kr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748121,
1573855165002,
1573850229191,
1573812480919,
1573807292179,
1573788482082,
1573749221743,
1573739307375,
1573739025708,
1573649297556,
1573649249325,
1573483671784,
1573483340275,
1572381598590,
1571798019178,
1571732866769
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2397/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2397/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2397/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2397/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper seeks to adapt behavioural cloning to the case where demonstrator and learner have different dynamics (e.g. human demonstrator), by designing a state-based objective. The reviewers agreed the paper makes an important and interesting contribution, but were somewhat divided about whether the experiments were sufficiently impactful. They furthermore had additional concerns regarding the clarity of the paper and presentation of the method. Through discussion, it seems that these were sufficiently addressed that the consensus has moved towards agreeing that the paper sufficiently proves the concept to warrant publication (with one reviewer dissenting).\\n\\nI recommend acceptance, with the view that the authors should put a substantial amount of work into improving the presentation of the paper based on the feedback that has emerged from the discussion before the camera ready is submitted (if accepted).\", \"title\": \"Paper Decision\"}",
"{\"title\": \"revision uploaded\", \"comment\": \"We have uploaded a revision with modified part marked in red. When the whole section is newly created (Appendix D), we mark the title as red.\"}",
"{\"title\": \"An Update for Experimental Comparison with State Aware Imitation Learning\", \"comment\": \"In practice, this algorithm is computationally expensive due to the need to compute a big Jacobian matrix for each expert transitions (Eq 5 in their paper). While it has been verified on some small environments in the original paper, we find that it is not a straightforward effort to make it computationally affordable for environments in our submission.\\n\\nIn the experiment, we compare this algorithm with ours in the Disabled-Swimmer environment whose imitator has short legs. With our implementation that uses a small neural network (due to the complexity concern mentioned above), this algorithm can obtain 39 points in 100 test episodes, while our method can obtain more than 300 points.\"}",
"{\"title\": \"Response to Authors\", \"comment\": \"Thank you for your detailed response to my comments, (and apologies for not being able to respond to them earlier).\\n\\nI appreciate that you have provided a thorough motivation of your choices of approach here, and hope that these choices are included in the paper. Likewise, the analysis you provide in the above comments is insightful, and I hope that this analysis can be provided in the paper (with details or supporting experiments in the supplementary). \\n\\nConsidering these details of motivation and analysis are added to the paper, I change my decision to weak accept.\\nI am still unconvinced regarding the exact differences between state matching and state alignment - please consider making this distinction clear in the final version of the paper.\"}",
"{\"title\": \"A short update on Atari environments\", \"comment\": \"We have tried our method on Atari and now update the result here. Due to the time limit, we only tested our method on Pong. And in Atari experiments, the expert and the imitator have the same dynamics model because it is very hard to modify the Atari game at the engine level. Since computing Wasserstein distance on image domain is pretty difficult, we remove the global alignment component in our method. Given 20 expert demonstration trajectories (all of them have the perfect score 21), our imitator can also achieve the average score of 21 in 30 testing episodes. Given more time, we may try to figure out a way to test the imitation with different dynamics in Atari.\"}",
"{\"title\": \"Response to Authors and Final Decision\", \"comment\": \"Thanks for your descriptive reply!\\n\\nAbout Q1, while I agree that page limit restricts the space you could have spent describing background approaches, I strongly feel that notation needs to be standardized in each paper. VAE/PPO/WGAN are not papers that every IL/RL researcher needs to read and refer to while reading your paper to understand the notation. Expecting the reader to put in a lot of effort understanding your work highlights that the writing is lacking. \\n\\nQ2, I would like to see the notation introduced in a future manuscript.\\n\\nQ3, I just did a word search on the manuscript and there is no phrase \\\"agent dynamics\\\" anywhere in the paper. What the abstract and introduction do say is that the expert and imitator have different dynamics models, which as I said could mean both environment/agent dynamics. Please be specific in a technical paper.\\n\\nQ4, please include the explanations given in the rebuttal in the future manuscript. \\n\\nQ5, I don't think unless you quantify how different the dynamics are for a lighter and heavier ant by providing trajectory statistics, you can make a statement that your approach can deal with differing agent dynamics. I personally liked your toy example much more than the experiments done in the paper. Please choose an experimental domain that clearly highlights a need for an approach that is robust to changes in agent dynamics and show how your approach solves the challenge.\\n\\nQ6, defer to Q1\\n\\nQ7, update the manuscript as you described\\n\\nQ8, please provide the details in the rebuttal in the paper as well\\n\\nGiven the current state of the paper and the details provided in the rebuttal, I am going to change my decision to a weak reject but can't improve it any further. The authors should seriously consider a more challenging realistic domain that showcases the strengths of their approach. In addition to this, the authors should spend more time on writing the paper to make sure its as self-contained as possible (to the extent where readers don't have to read other papers to understand notation.)\\n\\nThanks for spending the time writing the rebuttal and I hope you keep up the good work, irrespective of the final decision. :)\"}",
"{\"title\": \"Response to authors\", \"comment\": \"I thank the authors for their detailed response. I encourage the authors to revise their draft for clarity and include some of the discussion in the responses to the reviews in the appendix. I think the example of where AIRL fails is also nice and would be a good addition to the appendix.\\n\\nThe authors have addressed all of my main concerns. I think this paper will be of interest to many in the imitation learning and reinforcement learning communities. Based on the novelty of the approach, the importance of imitation learning with different agent dynamics, and the significant improvement over state-of-the-art, I think the paper should be accepted into ICLR.\"}",
"{\"title\": \"[cont'd]\", \"comment\": [\"In Fig 3, the other methods are not pre-trained with BC because we already listed BC as a baseline. In Sec 5.2 (actors of the same dynamics), we showed the performance of GAIL pre-trained with BC, which did not outperform BC in general (Table 2, Table 7-12). In practice, while the pre-trained policy network performs reasonably well at initialization, the policy update in GAIL is likely to worsen the initial policy, especially when the demonstration is abundant. The observation leads us to believe that the performance of pure inverse RL methods (e.g., GAIL and AIRL) at equilibrium cannot be simply improved by BC pretraining. Therefore, we didn\\u2019t compare GAIL+BC or AIRL+BC in other settings.\"], \"conceptual_questions\": \"Q1. How do you account for cases where due to differing dynamics, states reached by expert in demonstrations are unreachable by the imitator?\", \"a\": \"Compared to the state-based GAIL, we have a local alignment component to make sure the relationships between consecutive states are preserved. This is necessary because matching global distribution alone would be misleading, as you said.\"}",
"{\"title\": \"Response to Reviewer#1\", \"comment\": \"Q1: The paper is very poorly written\", \"a\": \"Thanks for your proofreading. \\\\tau is the trajectory of the agent. The 6 and 7 lines should be reverted. And the reward is defined on states, so it should be r(s, s\\u2019). We will update these points during the rebuttal period.\", \"q2\": \"some notations (\\\\phi, \\\\theta_old, \\\\sigma)\", \"q3\": \"Different dynamics can mean a lot more than what was accounted for in the paper. Different agent dynamics versus different environment dynamics. Problem setup vague\", \"q4\": \"Blanket statements\", \"q5\": \"Motivation. The toy example in the introduction was good but the experiments did not reflect the complexity of that example, ...\", \"q6\": \"Figures\", \"q7\": \"Algorithm\", \"q8\": [\"Experiments\", \"AIRL is the paper \\u2018LEARNING ROBUST REWARDS WITH ADVERSARIAL INVERSE REINFORCEMENT LEARNING\\u2019. We will explicitly define this acronym.\", \"For the policy prior, the variance is a hyper-parameter. We use 0.1 in our experiments. The policy network is initialized to have the same variance as the policy prior and then adjusted online, so it\\u2019s not a constant during the interactions.\", \"We train VAE using the demonstrations. The input to VAE is s_t and the supervision is s_{t+1}. We train the inverse dynamics using the rules described in the first 5 lines of the algorithm pseudo-code.\", \"All MuJoCo environments have artificial models. But it\\u2019s still worthwhile to test an algorithm on that given the massive publications in ICLR/ICML/NeurIPS that only test in simulators.\", \"For Sec 5.1.2, as the two agents have different action dimensions, the baseline methods cannot complete this task.\"]}",
"{\"title\": \"Response to Reviewer#2 [cont'd]\", \"comment\": \"Q7: The authors do not mention the work by Brown et al. \\\"Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations\\\", ICML 2019. This work also learns only from observations and does not require any pre-training. How is the current work different?\", \"a\": \"The work by Ayatar et al. reduces the domain gap between YouTube video and Atari by using metric learning. They also define rewards based on the local state matching in the embedding space, so that an RL can be used to solve the task. This work shares similar spirit as ours that both works are state-based. However, their core challenge lies in how to bridge the domain gap between YouTube and Atari visual appearances, yet its reward design is much easier since the action space in Atari games is quite limited, in comparison to our continuous and high-dimensional action space. By our experiments, the reward function in Sec 4 by Ayatar et al. is not feasible in solving our MuJoCo imitation learning tasks.\\n\\nWe feel that our approach should be able to play Atari games from raw visual trajectories. If time permits, we would give a try.\", \"q8\": \"The authors cite the work by Ayatar on imitation learning from observing YouTube videos. This method is also state-based and uses a similar state occupancy matching reward that would also work with different dynamics. How is the current method different. Would the current method work on domains such as learning to play Atari from raw visual trajectories?\"}",
"{\"title\": \"Response to Reviewer#2\", \"comment\": \"Thank you for your encouraging and constructive comments!\", \"q1\": \"The success of BC is interesting. Why does it do so well? This seems to violate the motivation of the paper\", \"a\": \"We sincerely thank you for the insightful perspective. There indeed exist theoretical reasons that our approach may work better than AIRL in certain cases. We have created a PDF under an anonymous link to illustrate this point: https://drive.google.com/file/d/1yqwYfLuVg0gw61gVu7g91sxD_cNPI4u0/view?usp=sharing. About AIRL\\u2019s performance difference of disabled ant in our paper versus in the original paper, it is because our environment is based on the ant model provided in OpenAI gym, but AIRL uses rllab. Additionally, we kept the original control range unchanged [-150, 150], while AIRL mitigated the control range to [-30, 30] of the ant to make it more stable.\", \"q2\": \"I thought the experiments for different action dynamics was very nice. Do you assume that is prior knowledge? Is this something that could be learned or inferred? How?\", \"q3\": \"How is the potential updated. Eq(3) doesn't give an update rule. From later discussion it appears you mean take the \\\\phi that results in the supremum. Is this value learned? How is the optimization performed to solve Eq(3)?\", \"q4\": \"How would you make the policy prior in Eq(7) work if actions are discrete? What if actions are multidimensional, with different ranges where some actions are not important? What is sigma?\", \"q5\": \"In the RL community there is interest in transfer learning when the dynamics change. If the reward were observable, would the current approach be potentially useful for boosting the performance of transfer learning in RL?\", \"q6\": \"The authors cite AIRL, but could do a better job distinguishing between AIRL and the current work. AIRL also tries to learn a state-based reward that is disentangled from the dynamics. Are there theoretical reasons why this work is better? Why does the proposed method work so much better in practice. On a related note, the results for the disabled ant seem much lower than those presented in the original AIRL paper. Why is this?\"}",
"{\"title\": \"Response to Reviewer#3 [cont'd]\", \"comment\": \"Q3 (from Cons 3): The analysis of the relative performance of the proposed approach against the baselines is lacking.\", \"a\": \"First of all, since NNs are involved, even predicting actions would not guarantee the feasibility of actions, especially in unseen states. Note that the actions from our method can be viewed as predicted by a big network that composes a state prediction network and an inverse dynamics network, both of which are trained in a supervised manner (supervision are obtained from the demonstration or random trials). In the cross-morphology imitation setting, predicting actions is actually even more infeasible than our method. In practice, we always clip the actions to make them feasible.\", \"q1\": \"How is the choice of the form of prior made? It is unclear why it is better to have a prior learned over states converted to actions via eq. 7, versus a similarly designed prior over actions.\", \"q2\": \"Is the function of the per-timestep reward simply to provide a denser signal to policy optimization?\", \"q3\": \"Whether the unified objective can be regarded as a novelty.\", \"q4\": \"What purpose does Sec 5.3.2 serve beyond reiterating the deviation problem?\", \"q5\": \"State-matching (and the implied use of inverse models) means the feasibility of retrieved actions is not guaranteed, as compared to models that predict actions directly.\"}",
"{\"title\": \"Response to Reviewer#3\", \"comment\": \"Per your request for detailed motivation and analysis, we compose this lengthy response. Thank you for your patience!\", \"response_to_major_concerns\": \"Q1 (Cons 1): The idea of matching state distributions is not new.\", \"a\": \"We introduced VAE and Wasserstein distance in the background section without much explanation and motivation when using them, due to their great popularity in the ML community and space limitation. Here, we motivate their usage with details.\\n\\nVAE (beta-VAE): VAE is a popular tool in image modeling. This approach has an inherent data augmentation step by sampling in the latent space. VAE can predict robustly even when the test data is slightly off the training data manifold, thus benefiting many reconstruction tasks such as image denoising. We are hence motivated to use VAE in our work: We need a tool to robustly predict the next state based on the previous state, a state which may be off demonstration due to the cross-morphology setting. For the theoretical analysis of VAE, please refer to the paper by Dai et al. cited in our submission. Beta-VAE is a modified version of the original VAE. It adds a hyperparameter, beta, to control how much the variance would be when sampling augmented data.\", \"wasserstein_distance\": [\"This tool is widely used in fields such as generative adversarial networks (e.g., WGAN by Arjovsky et al.) and image retrieval (e.g., EMD distance by Rubner et al.). Compared with KL divergence (and its variants f-divergence), Wasserstein distance has many nice theoretical and numerical properties:\", \"First, Wasserstein distance allows us to compare the discrepancy between distributions of a broader family: when two distributions have negligible intersections, computing KL family of divergences will have trouble (not defined or simply infinite). This situation is quite common for matching high dimensional state space. More explanations and examples can be found in the WGAN paper.\", \"Second, optimizing Wasserstein distance is numerically more stable than KL/JS divergence (WGAN paper gives examples that KL/JS cannot optimize).\", \"Third, Wasserstein distance is a proper metric and is suitable for continuous interpolation of distributions, as shown in paper [Convolutional Wasserstein Distances, SIGGRAPH2015 by Solomon et al]. In our setting, we also want the distribution of visited states by the imitator to gradually move towards the demonstration distribution.\", \"The choices we make are based on the above theoretical knowledge and our empirical evaluation.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Review for \\\"State alignment-based imitation learning.\\\"\", \"summary\": \"This paper addresses the problem of learning from demonstrations where the demonstrator may potentially have different dynamics from the learner. To address this, the paper proposes to align state-distributions (rather than state-action distributions) between the demonstrations and the learner. To alleviate issues arises from doing this alone, they also propose to use a local learned prior to guide their policy to select actions that take it back to the original demonstrations. The paper shows a number of experiments on control tasks in support of their claims.\", \"pros\": \"+ The problem that the paper solves is fairly relevant, and some experiments (such as cross morphology imitation learning) are promising in concept. \\n+ The paper is mostly well written (save some small improvements that could be made in clarity) and can be followed. \\n+ The paper presents a series of experiments across several agents and some other baselines.\\n\\nCons (and primary concerns): \\n\\n1) The idea of matching state distributions in the context of learning behaviors is not new. In particular, [1] also uses similar ideas of matching state distributions in the imitation learning context, if via different machinery. [4] points towards such ideas as well (noted on page 43). Works such as [2, 3] also use this idea in the context of Reinforcement Learning. Further, ideas of deviation correction in the imitation learning domain have been addressed before in [5]. The paper would benefit from a more thorough treatment of these related works, and how the proposed work differs from these. \\n\\n2) The choice of approach (in particular, the use of the Wasserstein distance to match state distributions, and the manner of learning a local prior by training an autoregressive Beta VAE) are lacking motivation, and it is unclear if or why these choices are the best way to approach the problem. \\n\\n3) While the paper presents a large number of comparisons, the analysis of the relative performance of the proposed approach against the baselines is lacking. For example, in section 5.1.1., vanilla BC seems to do very well - why is it the proposed approach only marginally outperforms BC on several of these tasks? In section 5.2, why is SAIL able to outperform other IL techniques on same-dynamics tasks? What about SAIL provides this performance benefit? Similarly, in section 5.3.3, what is it about the Wasserstein objective and the KL that together enables good learning? This ablation seems crucial to assessing the paper, and is lacking a deeper analysis. Further, the relevance of section 5.3.1 is questionable - as no new insight is provided over the original Beta-VAE paper.\", \"other_concerns\": \"1) The paper ultimately uses a form of prior that is defined over actions, and not states (so that it may be used in the KL divergence term). How is the choice of form of prior made? It is unclear why it is better to have a prior learned over states converted to actions via eq. 7, versus a similarly designed prior over actions. \\n\\n2) It is unclear why the expression of the reward function (Eq. 
4) is necessary - if it is possible to compute the Wasserstein distance (and hence the cumulative reward), it is possible to update the policy purely from this cumulative reward. \nIs the function of the per-timestep reward simply to provide a denser signal to the policy optimization? \n\n3) The authors claim to introduce a \"unified RL framework\" in their regularized policy objective. It appears that this is simply the addition of the KL between the policy and the prior $p_a$ to the global alignment objective (subsumed into $L_{CLIP}$), hence the reviewer questions whether this can indeed be treated as a novel contribution of the paper. \n\n4) The problem this paper addresses (and the fundamental thesis for its approach) is that action-predictive methods are likely to suffer from deviation from the original demonstrations, as compared to state-predictive methods. \nWhat purpose does section 5.3.2 serve beyond reiterating this point? \n\n5) State-matching (and the implied use of inverse models) means the feasibility of retrieved actions is not guaranteed, as compared to models that predict actions directly.\", \"minor_points\": \"1) Explaining the various phases of training (as observed in the algorithm) would be useful. \n\n2) Discussing how states are compared in the cross morphology experiments (Section 5.1.2.) would also be useful.\", \"cited_literature\": \"[1] State Aware Imitation Learning, Yannick Schroecker and Charles Isbell, https://papers.nips.cc/paper/6884-state-aware-imitation-learning.pdf\n[2] Efficient Exploration via State Marginal Matching, Lisa Lee et al., https://arxiv.org/abs/1906.05274\n[3] State Marginal Matching With Mixtures Of Policies, Lisa Lee et al., https://spirl.info/2019/camera-ready/spirl_camera-ready_25.pdf\n[4] An Algorithmic Perspective on Imitation Learning, Takayuki Osa et al., https://arxiv.org/pdf/1811.06711.pdf\n[5] Improving Multi-step Prediction of Learned Time Series Models, Arun Venkatraman et al., https://www.ri.cmu.edu/pub_files/2015/1/Venkatraman.pdf\", \"initial_decision\": \"Weak reject\n\n#######\", \"post_rebuttal_comments\": \"Considering the authors' motivations of the approach and the additional analysis provided in the comments below, I change my decision to weak accept. I would like to encourage the authors to include the details listed below in their paper.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper seeks a solution to the problem of performing imitation learning when the dynamics of the demonstrator are different from the dynamics of the imitator. The authors present a novel approach that combines global alignment by minimizing the Wasserstein distance between state occupancies with local alignment via a state-predictive VAE and inverse dynamics model. The experimental results support the claims that the method works for different dynamics and the proposed approach usually outperforms existing imitation learning methods.\\n\\nThe problem of dealing with different dynamics between a demonstrator and imitator is an important, but often overlooked problem in imitation learning. The combination of the global and local alignment is novel, nicely motivated, and ablation studies demonstrate that both are needed for good performance. Given the extensive experimental results showing the efficacy of this method I recommend that the paper be accepted. \\n\\nHowever, I feel that the paper can still be improved. Below are some of my questions and suggestions.\\n\\nThe success of BC is interesting. Why does it do so well? This seems to violate the motivation of the paper that using (s,a) for imitation learning won't work if the dynamics change. \\n\\nI thought the experiments for different action dynamics was very nice. The paper mentions that even state-spaces cannot be matched between the point mass and Ant. How do you know what part of the state space to imitate? Do you assume that is prior knowledge? Is this something that could be learned or inferred? How?\\n\\nHow is the potential updated. Eq(3) doesn't give an update rule. From later discussion it appears you mean take the \\\\phi that results in the supremum. Is this value learned? How is the optimization performed to solve Eq(3)?\\n\\nHow would you make the policy prior in Eq(7) work if actions are discrete? What if actions are multidimensional, with different ranges where some actions are not important? What is sigma?\\n\\nIn the RL community there is interest in transfer learning when the dynamics change. If the reward were observable, would the current approach be potentially useful for boosting the performance of transfer learning in RL?\", \"related_work\": \"The authors cite AIRL, but could do a better job distinguishing between AIRL and the current work. AIRL also tries to learn a state-based reward that is disentangled from the dynamics. Are there theoretical reasons why this work is better? Why does the proposed method work so much better in practice. On a related note, the results for the disabled ant seem much lower than those presented in the original AIRL paper. Why is this?\\n\\nThe authors do not mention the work by Brown et al. \\\"Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations\\\", ICML 2019. This work also learns only from observations and does not require any pretraining. How is the current work different?\\n\\nThe authors cite the work by Ayatar on imitation learning from observing YouTube videos. This method is also state-based and uses a similar state occupancy matching reward that would also work with different dynamics. How is the current method different. 
Would the current method work on domains such as learning to play Atari from raw visual trajectories?\", \"typos\": \"\\\"... able to resume to the demonstration trajectory by itself.\\\"\\n--maybe say \\\" able to return to the demonstration trajectory by itself.\\\"\\n\\n\\\"... pairs as in an observation-based GAIL (Ho and Ermon 2016). I think this should be Torabi et al. instead.\\n\\nPage 5 \\\"state predictive VAE and an inverse *dynamics*\\\"\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary of claims:\\n\\nThe paper proposes an imitation learning method that aims to align state distributions rather than state-action distributions to account for cases where the imitator dynamics differ from expert dynamics. They achieve this by two objectives: one local, the other global. The local objective aligns the next state to be close to the expert's next state in each transition by first training a VAE on the expert demonstrations, and using the trained VAE in conjunction with a pretrained inverse dynamics model to compute the action that the imitator needs to imitate. The global objective tries to do a global alignment of states encountered in the imitator and expert trajectories, by minimizing the Wasserstein distance between the two trajectory distributions. The paper claims that using these two objectives results in a method that outperforms existing inverse reinforcement learning and behavior cloning approaches in settings where the imitator and expert dynamics differ.\", \"decision\": \"I recommend the paper to be rejected. I have three main reasons for my decision (with more details in the next section):\\n1. The paper is very poorly written : A lot of details are missing in the paper, notation is not standardized, related work is just a list of previous papers without any context on how the proposed method is related, previous methods are referred to without any citations, and quite a few blanket statements which are not substantiated.\\n2. Incomplete approach description: Quite a few components of the approach are not explained (or even discussed), no intuition provided for the choices made in the approach, the concept of different dynamics is not formalized, some technical inconsistencies in the algorithm, no formal problem statement (which would really help in standardizing notation), and most claims made about the approach are not justified or substantiated\\n3. Poor experiments: Experiments are not well chosen to reflect the premise and claims of the paper, little to no details given for how the baseline approaches were trained, no details on policy parameterization, and missing comparison with baseline approaches in some experiments\", \"comments\": \"(1) Problem setup: \\n(a)Problem setup is very vague and not formalized. \\n(b) Differing dynamics could mean several things: different agent dynamics (like different action spaces; different actuators; etc.), different environment dynamics (different moving obstacles in the world;) etc. \\n(c) Basically, different dynamics can mean a lot more than what was accounted for in the paper\\n\\n(2) Blanket Statements: \\n(a) The authors keep saying that their framework is more flexible without any justification as to why, \\n(b) \\\"simply train an inverse dynamics model\\\"- training an inverse dynamics model can be very hard especially when environment dynamics are stochastic/when the inverse model is multimodal, \\n(c) \\\"constraint becomes loosened\\\" - this statement doesn't make any sense without more explanation,\\n(d) Several other blanket statements about existing approaches\\n\\n(3) Notation : \\n(a) Notation was never standardized in the paper, \\n(b) what is \\\\phi? what is the input-output of \\\\phi? 
\\n(c) What is \\\\theta_old? What is \\\\sigma? \\n(d) There are a lot of things that needed explanation, especially in the algorithm\\n\\n(4) Related Work : \\n(a) The related work section is just a dump of citations without giving any context for where the proposed work lies in the spectrum of these works. How does it compare? Why is it better/worse? \\n(b) Missing related work that was publshed in ICML 2019 that has a very similar approach in matching state distributions (\\\"Provably efficient Imitation Learning from Observations Alone\\\" or FAIL) and works very well.\\n\\n(5) Motivation : \\n(a) The approach, in general, needs better motivation. The toy example in the introduction was good but the experiments did not reflect the complexity of that example. \\n(b) Try to have a running example in the paper that will help you motivate the approach better.\\n\\n(6) Background : \\n(a) The background section is very minimal and lacks any details necessary. \\n(b) I had to read the beta-VAE paper to understand what it does. \\n(c) The section also lacks any minimal background in IL/RL and the notation could also have been standardized in this section\\n\\n(7) Figures: \\n(a) All the figures in this paper could use a lot of improvement in terms of descriptive captions, more informative legends, bigger fonts, descriptive text, and more figures as well\\n\\n(8) Algorithm : \\n(a) The algorithm was not referenced anywhere in the text, \\n(b) No definition for \\\\tau, details on pretraining inverse dynamics model lacking in both text and algorithm, \\n(c) *Policy prior is used to pretrain policy before the VAE was trained!* which doesn't make any sense since the policy prior is obtained using the VAE, \\n(d) the equation at the end of Sec 4.3 has r(s, a) whereas reward is defined as r(s_t, s_{t+1}) but they are not equivalent when dynamics are stochastic\\n\\n(9) Experiments: \\n(a) What is AIRL? No reference was given. \\n(b) Why is keeping the variance of the policy constant reasonable? How do you come up with the value? \\n(c) How do you pretrain the VAE, invserse dynamics model? \\n(d) The setup of making the ant's legs smaller or body heavier seems very artificial. I am sure its easy to come up with more realistic setups in navigation domains, for example. Try to use more realistic experiments in the future. \\n(e) For results in Fig 3, is AIRL, GAIL also pretrained with VAE or by BC? Seems like SAIL was pretrained but the others weren't since SAIL starts off with a high score at the start. \\n(f) For Sec 5.1.2, comparison with baseline approaches are missing. Legends for the plots are terribly small.\", \"conceptual_questions\": \"1. How do you account for cases where due to differing dynamics, states reached by expert in demonstrations are unreachable by the imitator? \\n2. How do you account for cases where the environment dynamics changes between expert and imitator?\\n3. Why does using Wasserstein distance make sense? Why not other f-divergences? Also, matching global distributions can be very misleading if you have states that are visited multiple times in the same trajectory. FAIL recommends matching state distribution at each time-step instead and is much more stable\\n4. How is this different from GAIL where we match state visitation distribution (instead of state-action visitation distribution?)\", \"things_to_improve\": \"1. Writing needs to be improved a lot\\n2. Better experiments - more realistic domains\\n3. Approach needs to be explained more formally\"}"
]
} |
r1gBOxSFwr | Reweighted Proximal Pruning for Large-Scale Language Representation | [
"Fu-Ming Guo",
"Sijia Liu",
"Finlay S. Mungall",
"Xue Lin",
"Yanzhi Wang"
] | Recently, pre-trained language representations, e.g., BERT, have flourished as the mainstay of the natural language understanding community. These pre-trained language representations achieve state-of-the-art results on a wide range of downstream tasks. Along with continuous, significant performance improvements, the size and complexity of these pre-trained neural models continue to increase rapidly. Is it possible to compress these large-scale language representation models? How will the pruned language representation affect the downstream multi-task transfer learning objectives? In this paper, we propose Reweighted Proximal Pruning (RPP), a new pruning method specifically designed for large-scale language representation models. Through experiments on SQuAD and the GLUE benchmark suite, we show that proximally pruned BERT keeps high accuracy for both the pre-training task and the multiple downstream fine-tuning tasks at high prune ratios. RPP provides a new perspective to help us analyze what a large-scale language representation might learn. Additionally, RPP makes it possible to deploy a large state-of-the-art language representation model such as BERT on a series of distinct devices (e.g., online servers, mobile phones, and edge devices). | [
"Language Representation",
"Machine Learning",
"Deep Learning",
"Optimizer",
"Statistical Learning",
"Model Compression"
] | Reject | https://openreview.net/pdf?id=r1gBOxSFwr | https://openreview.net/forum?id=r1gBOxSFwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"VYkv7shJdN",
"qurX__LD4",
"H1lUDA-hor",
"r1e2ID-noB",
"rkeoILZ2iS",
"H1xKq1WhoS",
"S1xFzHl2ir",
"HJlbxZg2iH",
"SkxYlayniS",
"rkgN_clcqr",
"HJlB_8x9cS",
"SJlU2stRYH"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1580013041236,
1576798748091,
1573817950141,
1573816148506,
1573815890865,
1573814160885,
1573811472911,
1573810408793,
1573809393153,
1572633195646,
1572632172957,
1571883950496
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2396/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2396/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2396/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2396/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2396/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2396/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2396/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2396/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2396/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2396/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2396/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Reply to the decision\", \"comment\": \"Although this decision is not a happy ending, we appreciate OpenReview and the transparency of ICLR, so that the community could see our work.\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a novel pruning method for use with transformer text encoding models like BERT, and show that it can dramatically reduce the number of non-zero weights in a trained model while only slightly harming performance.\\n\\nThis is one of the hardest cases in my pile. The topic is obviously timely and worthwhile. None of the reviewers was able to give a high-confidence assessment, but the reviews were all ultimately leaning positive. However, the reviewers didn't reach a clear consensus on the main strengths of the paper, even after some private discussion, and they raised many concerns. These concerns, taken together, make me doubt that the current paper represents a substantial, sound contribution to the model compression literature in NLP.\\n\\nI'm voting to reject, on the basis of:\\n\\n- Recurring concerns about missing strong baselines, which make it less clear that the new method is an ideal choice.\\n- Relatively weak motivations for the proposed method (pruning a pre-trained model before fine-tuning) in the proposed application domain (mobile devices).\\n- Recurring concerns about thin analysis.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Dear Review #3,\\n\\nThanks so much for going through the paper carefully and providing such valuable and positive feedback. Following the reviewer's suggestions, we have updated our manuscript. The answer to each of the specific questions is below.\\n\\n#Question: The authors should provide a detailed and rigorous explanation for the drawback of existing pruning methods.#\", \"response\": \"Thanks very much for the valuable comments. We update our original Figure 2 as the new Figure 3 for better visualization. We provided more analysis and insights from the visualization in Section 4.3 and General Response 1 (https://openreview.net/forum?id=r1gBOxSFwr¬eId=SkxYlayniS).\"}",
"{\"title\": \"Response to Review #5 (Part B)\", \"comment\": \"Dear Reviewer #5,\\n\\nThanks so much for going through the paper carefully and providing such valuable and positive feedback. We make the response as below:\\n\\n#Question: It is essential to compare the method with other related works for Bert and Transformer compression, including quantisation-based, factorisation-based, pruning, knowledge distillation papers such as:\\n--Prato, Gabriele, Ella Charlaix, and Mehdi Rezagholizadeh. \\\"Fully Quantized Transformer for Improved Translation.\\\" arXiv preprint arXiv:1910.10485 (2019).\\n--Tang, Raphael, et al. \\\"Distilling Task-Specific Knowledge from BERT into Simple Neural Networks.\\\" arXiv preprint arXiv:1903.12136 (2019).\\n--Sanh, Victor, et al. \\\"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.\\\" arXiv preprint arXiv:1910.01108 (2019).\\n--Ziheng Wang, et al. \\\"Structured Pruning of Large Language Models.\\\" #\", \"response\": \"Thanks so much for the valuable comments and for pointing out these references.\\nThese are excellent works, especially for the factorization based and knowledge distillation based methods. \\nAmong the mentioned papers, the most relevant one is (Wang et al.), which although pruned the Transformer (basic structure block of BERT), and extend their results to Transformer-XL, by training a small dense network first. However, the pruning over BERT was not considered. \\n\\n(Prato et al. 2019) is about applying quantization on the Transformer model, and their method is evaluated on machine translation task. The situation about BERT is not considered in their paper. Besides, our proposed RPP is orthogonal to the quantization method. The quantized BERT model could be further pruned using RPP.\\n(Tang et al. 2019) is about applying knowledge distilling on BERT, and then evaluated on the machine translation task. The situation about fine-tuning on SQuAD and GLUE benchmark is not considered.\\n\\nAbout DistillBERT (Sanh et al. 2019), we did not get a chance to compare our approach with it since our submission was actually before the arxiv date of DistillBERT (Oct. 2). However, we found the paper interesting, and will add it to the related work in the final version. In what follows, we highlight some differences between our work and DistillBERT. First, The pruning ratio of our work (at least 80%) is much higher than the model size reduction of DistillBERT (40%). Second, RPP directly works on the original BERT model architecture, so we do NOT need to design a new student network structure like DistillBERT. Finally, the RPP weight pruning method is orthogonal to Knowledge distillation, DistillBERT has a similar structure with BERT, with six layers of transformer blocks. Besides, Besides, our proposed RPP is orthogonal to knowledge distillation. In the future, we would like to examine whether or not DistillBERT could be further compressed through our proposed algorithm.\\n\\nAbout the factorization based method on BERT, ALBERT is submitted to the ICLR2020 at the same time with us. However, we found ALBERT interesting, and will add it to the related work in the final version. In what follows, we hight some differences between our work and ALBERT. ALBERT small reduces the parameters compared with BERT through weight sharing, while the total amount of computation could not be reduced through weight sharing. 
Another contribution of ALBERT is the factorization of the hidden matrix in the Transformer, and our proposed RPP is orthogonal to this factorization. In the future, we would like to examine whether or not ALBERT could be further pruned with our proposed RPP algorithm.\"}",
"{\"title\": \"Response to Review #5 (Part A)\", \"comment\": \"Dear Reviewer #5,\\n\\nThanks so much for going through the paper carefully and providing such valuable and positive feedback. We make the response as below:\\n\\n#Question: They have claimed that \\\"To the best of our knowledge, we are the first to apply reweighted l1 and proximal algorithm in the DNN weight pruning domain, and achieve effective weight pruning on BERT.\\\", however proximal optimization has been used for DNN in works like \\\"Combined Group and Exclusive Sparsity for Deep Neural Networks, 2017\\\". #\", \"response\": \"Thanks very much for the valuable comments. We make the following analysis and update our paper.\\nIn the revision, we make a more detailed analysis of the results that we obtained. Through fine-tuning the pruning BERT over different downstream tasks, we found that SQuAD the most sensitive to the pruning ratio, showing an evident performance drop after 80% pruning ratio. By contrast, the pruning can be made more aggressively when it is evaluated under other fine-tuning tasks. This is not surprising, since SQuAD is a much harder Question Answering (QA) tasks, than other simple classification tasks with limited solution space. \\nOn the other hand, as the prune ratio of the pre-trained BERT increases, the performances on different transfer learning tasks descend generally. The descending ranges differ in different transfer learning tasks. The descending speed on SQuAD is the fastest. Our proposed RPP mitigates the descending trend on all downstream transfer learning tasks to a great extent, compared with NIP. \\nWe have an overview diagram about the pre-training and fine-tuning of BERT in Appendix A of the first submission. Thanks to your advice, we have updated the previous diagram to Section 1 of the paper, to increase the readability of our paper. We have extended more analysis about our results and extended more description about fine-tuning on each downstream task in the updated Appendix C.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Dear Reviewer #4,\\n\\nThanks so much for going through the paper carefully and providing such valuable and positive feedback. We make the response as below:\\n\\n#Question: Modest technical contribution. The approach description also requires elaboration. Unclear what weights participate in the pruning objective. #\", \"response\": \"Thanks very much for pointing out DistillBERT. We did not get a chance to compare our approach with it since our submission was actually before the arxiv date of DistillBERT (Oct. 2). However, we found the paper interesting, and will add it to the related work in the final version. In what follows, we highlight some differences between our work and DistillBERT.\\nFirst, The pruning ratio of our work (at least 80%) is much higher than the model size reduction of DistillBERT (40%). \\nSecond, RPP directly works on the original BERT model architecture, so we do NOT need to design a new student network structure like DistillBERT. \\nFinally, the RPP weight pruning method is orthogonal to Knowledge distillation; DistillBERT has a similar structure with BERT, with six layers of transformer blocks. In the future, we would like to examine whether or not DistillBERT could be further compressed through our proposed algorithm.\"}",
"{\"title\": \"General Response 3 to all reviewers, regarding questions about the original Figure 3\", \"comment\": \"Dear Reviewers,\\n\\nThanks so much for the valuable comments. We make the responses to the questions about the original Figure 3 as below:\\n\\nQuestions about the original Figure 3 (in the revised paper, the original Figure 3 corresponds to the updated Figure 4)\", \"response\": \"The original Figure 3 was used to provide a visualization of how different the language representation obtained from a pruned BERT is from the representation obtained from the original BERT. Since BERT is different from commonly-studied image classifiers in network pruning, we would like to examine if pruning on BERT will lead to a significant change in the low-dimensional manifold of the language representation.\\n\\nOriginally the reduced dimension using t-SNE is $3$ ($x-y-z$ space), and thus we presented our results projected to $x-y$ space and $y-z$ space only. Based on the reviewer\\u2019s comments, we realize that the presentation could be misleading. In the revised paper, we consider a 2-D t-SNE to make our visualization more easily. \\n\\nFrom the updated Figure 4, we make the following observation and analyses:\", \"low_dimensional_manifold\": \"for both original BERT and BERT pruned with RPP, the low-dimensional manifolds of the language representation are similar, showing a similar projection.\\nTaking the specific word ``intelligent\\\" in Figure 4 as an example, the distribution of specific words and corresponding nearest words at the low-dimensional manifold (calculated using cosine/Euclidean distance) remains a high degree of similarity. This observation implies that the BERT applied with RPP keeps most of the language representation information similar to that from the original BERT.\", \"linguistic_interpretation_of_proper_noun\": \"There is one salient ribbon on the upper left of the macroscopical t-SNE visualization of word embeddings in either the original BERT or the pruned BERT through RPP. Each point in the ribbon represents a year number in annals. There is also one salient short line on the lower left of the macroscopical t-SNE visualization of word embeddings in either the original BERT or the BERT applied with RPP. Each point in most of the lines represents an age number. Other proper nouns also reveal similar characteristics. Our proposed RPP remains the embedding information of these proper nouns from the perspective of linguistic interpretation.\"}",
"{\"title\": \"General Response 2 to all reviewers, regarding questions on the issue of previous pruning method\", \"comment\": \"Dear Reviewers,\\n\\nThanks so much for the valuable comments. We make the responses to the questions on the issue of previous pruning method as below:\\n\\nIn the revision, we have updated the figure in Appendix D for better visualization, and have provided more details about the issue of previous methods. We summarize our main points below.\\n\\na) The previous method, such as Iterative Pruning (IP) and one-shot pruning, relies on directly optimizing the $\\\\ell_1$ / $\\\\ell_2$ penalized training loss to conduct DNN pruning (this is discussed in the NeurIPS 2015 paper by Han et al on iterative pruning, Section 3.1). As a result, a simultaneous backpropagation (for updating model weights) is conducted over both the original training loss as well as the non-smooth sparsity regularizer. When the penalty term is backpropagated together with the loss function, this affects the convergence direction of the original loss function. The convergence performance is significantly degraded for extremely large DNN model like BERT. This phenomenon is also observed in the training of BERT (Adam Weight Decay) that decouples the regularization term with the original loss function, instead of using an overall backpropagation.\\n\\nOur updated figures in Appendix D helps to illustrate this issue. IP and one-short pruning easily leads to non-convergence (we use the same hyperparameters as our NIP). Moreover, we observe that previous algorithms with directly optimizing the $\\\\ell_1$ penalty on TPU will easily lead to the gradient NaN. Our NIP method converges much better and serves as the new baseline method.\\n\\nb) We proposed New Iterative Pruning (NIP) as our worked baseline. As a fix of IP, NIP simplifies the training objective by removing the non-smooth sparsity regularizer. This simple fix improves the convergence of the training process, and make new iterative pruning doable for BERT.\\n\\nc) To further improve the pruning performance, we need to find a better pruning method that exploits our composite objective structure (original training loss + sparsity regularization), so that the backpropagation is not affected for the original training objective of BERT. Motivated by that, the proximal gradient provides an elegant solution, which splits the updating rule into a) gradient descent over the original training loss, and b) proximal operation over non-smooth sparsity regularizers. Moreover, reweighted $\\\\ell_{1}$ minimization serves as a better sparsity generalization method, which self-adjusting the importance of sparsity penalization weights. Furthermore, the incorporation of reweighted $\\\\ell_1$ will not affect the advantage of the proximal gradient algorithm. Thanks to the closed-form solution (equation 8) of proximal operation on a weighted $\\\\ell_1$ norm, Reweighted Proximal Pruning (RPP) is a desired pruning method on BERT model. We hope RPP proves to be effective in more kinds of DNN models in the future.\"}",
"{\"title\": \"General Response 1 to all reviewers, regarding questions about the original Figure 2\", \"comment\": \"Dear Reviewers,\\n\\nThanks so much for going through the paper carefully and providing such positive feedback. We make the responses to the questions about original Figure 2 as below:\\n\\nQuestions about original Figure 2 (in the revised paper, the original Figure 2 corresponds to the updated Figure 3)\", \"response\": \"In the revision, we have updated the original Figure 2 (updated as Figure 3) for better visualization, and have provided more analysis and insights from the visualization. We summarize our main points below.\\n\\n The original Figure 2 and the new one (updated Figure 3) are used to demonstrate the pattern of non-zero weights in every pruned transformer block of the pruned BERT model. More specifically, we found that the pruned Query and Key matrices within each transformer yield interesting group-wise structures (column-wise non-sparsity for Query matrix and row-wise non-sparsity for Key matrix). Interestingly, we obtained these structured sparse patterns from our irregular pruning method (namely, no group-wise sparsity is penalized). This is different from the irregular pruning on image classifiers, and thus shows the specialty of pruning on language models. We also believe that the usage of the reweighted $\\\\ell_1$ approach matters to find these fine-grained sparse patterns. \\n\\nFor better visualization, in the revised paper, we present the ratio of # non-zero weights at each row/column of a key or value matrix. Based on our updated Figure 3, we make the following analysis:\", \"structured_pattern\": \"we observe that most of the non-zero weights in the Key matrix obey a column-wise non-sparsity pattern, while there exists a column-wise non-sparsity pattern for a value matrix to some extent. The observation is consistent among multiple transformer blocks. Note that the structured sparsity pattern is more friendly to hardware implementation and acceleration than the non-structured pattern.\", \"semantic_interpretation\": \"the structured pattern found by RPP (visualized in updated Figure 3) has the following semantic interpretation. What might the large-scale language representation learn? The answer becomes clear after the language representation is pruned by the desired pruning algorithm, RPP. In the perspective of attention mechanism, the Query matrix $Q$ (column-wise non-sparsity) mainly models the attention information inside each sequence, while the Key matrix $K$ (row-wise non-sparsity) mainly models the attention information between different sequences in the context.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes a new approach to prune weights that is designed keeping large scale pre-trained language representations like BERT. Such a method is desirable for deploying such models on devices with limited memory like phones etc. Experiments on Squad and Glue datasets show that a pruned version of the model maintains high accuracy for these tasks.\\n\\nPros\\n1. Pretty high pruning ratios (80%) can be used for many datasets (except Squad). Its an encouraging result for low-memory requirement scenarios.\", \"weakness\": \"1. Modest technical contribution. The approach description also requires elaboration. Unclear what weights participate in the pruning objective. \\n2. Figure 2 is difficult to understand. The paper says \\\"The sparse attention pattern exhibits obvious structured\\ndistribution.\\\", but I do not know why that is desirable/useful. \\n3. The t-SNE visualization appears perfunctory. What should I take away from this analysis?\\n4. The baseline approach NIP was derived from the IP approach of Han et al. (2015). Explanation for not using IP is that it does not converge to a \\\"viable solution\\\". This needs more elaboration.\\n5. Why not compare to teacher-student distillation approaches like DistilBERT? These approaches have the same motivation of compressing model size, though different approach than what the paper adopted.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"Models such as BERT are pretrained language models which provide significant improvement for different tasks, however they suffer from high huge size and complexity. This paper has proposed using proximal gradient descent to find sparse weights for BERT to reduce the number of parameters and make the model smaller. They concentrate on the drawbacks of the previous sparse-based approaches and claimed that they have convergence issues (they have provided some evidence in the appendix). therefore, they propose to use reweighed sparse method and optimise it using proximal gradient descent which provides a closed form solution for sparse constraint. \\n\\nALthough proposing a minor novelty (reweighted sparse optimization ), they have provided interesting results for both pretrained structure and fine-tuning for several different tasks. they have also provided some visualisation for the weight matrices after sparsification.\\n\\nTheir results are notably stronger than simply adding the L1 regularizer to the optimisation method. \\n\\nThe paper is well written and easy to follow with nearly comprehensive related work.\\n\\nHowever, there are some drawbacks:\\n\\n1. They have claimed that \\u201c To the best of our knowledge, we are the first to apply reweighted l1 and proximal algorithm in the DNN weight pruning domain, and achieve effective weight pruning on BERT. \\u201d, however proximal optimization has been used for DNN in works like \\u201cCombined Group and Exclusive Sparsity for Deep Neural Networks, 2017\\u201d. \\n2. It should be explained clearly about all the matrices included in the sparsification steps, despite only saying \\u201cparameters of the model\\u201d.\\n3. More analysis is required on the results, specially the diagrams for fine-tuning over different datasets.\\n4. It is essential to compare the method with other related works for Bert and transformer compression, including quantisation-based, factorisation-based, pruning, knowledge distillation papers such as:\\n--Prato, Gabriele, Ella Charlaix, and Mehdi Rezagholizadeh. \\\"Fully Quantized Transformer for Improved Translation.\\\" arXiv preprint arXiv:1910.10485 (2019).\\n--Tang, Raphael, et al. \\\"Distilling Task-Specific Knowledge from BERT into Simple Neural Networks.\\\" arXiv preprint arXiv:1903.12136 (2019).\\n--Sanh, Victor, et al. \\\"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.\\\" arXiv preprint arXiv:1910.01108 (2019).\\n--Ziheng Wang, et al. \\\"Structured Pruning of Large Language Models.\\\"\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a way to compress Bert by weight pruning with L1 minimization and proximal method. This paper is one of the first works aiming at Bert model compression.\\nThe authors think the traditional pruning ways can not work well for Bert model, so they propose Reweighted Proximal Pruning and conduct experiments on two different datasets. According to their results, they successfully compress 88.4% of the original Bert large model and get a reasonable accuracy.\", \"strong_points\": \"1. The authors propose a new method RPP for Bert model compression.\\n2. The authors design experiments to show their RPP can get a very good prune ratio with reasonable accuracy.\", \"weak_points\": \"1. The authors should provide a detailed and rigorous explanation for the drawback of existing pruning methods.\\n2. In the experiments, the authors only compare RPP with self-designed method NIP instead of any existing pruning method. The reason they said is \\u201cthese methods do not converge to a viable solution''. It would be better if they are also compared and analyzed in detail.\\n3. In the CoLA and QNLI datasets of Bert_large experiments, RPP can get a higher accuracy even than the original Bert_large model without pruning? This is counter-intuitive. \\n4. About the metrics, the authors use F1 score and accuracy, the standard metrics in the GLUE benchmark for different tasks, except for CoLA. It might make sense to also keep the metrics for CoLA consistent with GLUE benchmark for better comparison.\\n5. It is not clear what the authors want to express in Figure 2. The generation of the figure needs more explanation, and the results need to be better interpreted.\"}"
]
} |
H1gNOeHKPS | Neural Arithmetic Units | [
"Andreas Madsen",
"Alexander Rosenberg Johansen"
] | Neural networks can approximate complex functions, but they struggle to perform exact arithmetic operations over real numbers. The lack of inductive bias for arithmetic operations leaves neural networks without the underlying logic necessary to extrapolate on tasks such as addition, subtraction, and multiplication. We present two new neural network components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction; and the Neural Multiplication Unit (NMU), which can multiply subsets of a vector. The NMU is, to our knowledge, the first arithmetic neural network component that can learn to multiply elements from a vector when the hidden size is large. The two new components draw inspiration from a theoretical analysis of recently proposed arithmetic components. We find that careful initialization, restricting the parameter space, and regularizing for sparsity are important when optimizing the NAU and NMU. Our proposed units NAU and NMU, compared with previous neural units, converge more consistently, have fewer parameters, learn faster, can converge for larger hidden sizes, obtain sparse and meaningful weights, and can extrapolate to negative and small values. | [
"nmu",
"subtraction",
"nau",
"vector",
"complex functions",
"exact arithmetic operations",
"real numbers",
"lack",
"inductive bias"
] | Accept (Spotlight) | https://openreview.net/pdf?id=H1gNOeHKPS | https://openreview.net/forum?id=H1gNOeHKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9-OmkaOVen",
"ryxPvIZiiH",
"HJx7vjkjiB",
"H1gWAnqFsr",
"SyxSQuqFjS",
"rklWB8gQor",
"ryxm9VxmiS",
"B1eJmNlmoS",
"BJx8Y7x7sS",
"ByxDNmx7oB",
"ryglXQgXir",
"Skg7-Qe7sB",
"S1gJpwkpqB",
"B1l3vxgLcS",
"BkgVknOCtB",
"HkxOeWlCYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748060,
1573750367044,
1573743450952,
1573657801258,
1573656604803,
1573221944607,
1573221514552,
1573221399183,
1573221246196,
1573221166918,
1573221144239,
1573221114778,
1572825015458,
1572368483784,
1571879899946,
1571844336138
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2395/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2395/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2395/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2395/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper extends work on NALUs, providing a pair of units which, in tandem, outperform NALUs. The reviewers were broadly in favour of the paper given the presentation and results. The one dissenting reviewer appears to not have had time to reconsider their score despite the main points of clarification being addressed in the revision. I am happy to err on the side of optimism here and assume they would be satisfied with the changes that came as an outcome of the discussion, and recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for a productive dialog\", \"comment\": \"Thanks for a productive dialog, both at ICLR and at the previous venue.\\n\\n- Regarding the observation in the original NALU paper on division on interpolation and extrapolation, that is a good catch. Would you please comment on it in the appendix?\\n\\nThis is already mentioned in Appendix C.7.1. Let us know if you find it the explanation unclear.\\n\\nFrom section C.7.1.: >>Division does not work for any model, including the $\\\\mathrm{NAC}_{\\\\bullet}$ and NALU models. This may seem surprising but is actually in line with the results from the NALU paper (Trask et al., table 1) where there is a large error given the interpolation range. The extrapolation range has a smaller error, but this is an artifact of their evaluation method where they normalize with a random baseline. Since a random baseline will have a higher error for the extrapolation range, errors just appear to be smaller. A correct solution to division should have both a small interpolation and extrapolation error.<<\\n\\n- In addition, please comment similarly (but short) on the findings of the workshop paper in the appendix.\", \"we_have_added_to_appendix_c\": \">>Our \\u201carithmetic task\\u201d is identical to the \\u201csimple function task\\u201d in the NALU paper (Trask et al.,2018). However, as they do not describe their setup in details, we use the setup from Anonomous(2019), which provide Algorithm 3, an evaluation-criterion to if and when the model has converged, the sparsity error, as well as methods for computing confidence intervals for success-rate and the sparsity error.<<\\n\\n- I would suggest (shortly) commenting the choice of MNIST multiplication vs counting and arithmetic tasks in the paper in the appendix to clarify it to future readers.\\n\\nThanks, we now elaborate on this in appendix D.1.: >>The sequential MNIST task takes the numerical value of a sequence of MNIST digits and applies a binary operation recursively. Such that $t_i = Op(t_{i-1}, z_t)$, where $z_t$ is the MNIST digit's numerical value. This is identical to the ``MNIST Counting and Arithmetic Tasks'' in Trask et al. [2018, section 4.2]. We present the addition variant to validate the NAU's ability to backpropagate, and we add an additional multiplication variant to validate the NMU's ability to backpropagate.<<\\n\\nWe have also added references to Appendix D.2. in the results section.\"}",
"{\"title\": \"Response + update\", \"comment\": \"Thank you very much for the very detailed reply both to my review and all the other reviews. I\\u2019ve check the paper and read the entire correspondence.\", \"my_comments_are_the_following\": [\"Thank you for the C.5 experiments. Nice to see that the multiplication with gated NAU/NMU works better than with NALU, but still too bad that it seems that gating is a bad option for this model. This definitely adds to your claim that gating by itself is not a good choice here.\", \"Regarding the observation in the original NALU paper on division on interpolation and extrapolation, that is a good catch. Would you please comment on it in the appendix? In addition, please comment similarly (but short) on the findings of the workshop paper in the appendix.\", \"I would suggest (shortly) commenting the choice of MNIST multiplication vs counting and arithmetic tasks in the paper in the appendix to clarify it to future readers.\", \"Also, please do another grammar/style iteration over the paper as there are at least newly introduced issues, such as:\", \"for subtraction a linear transformation can not -> cannot\", \"while NAU and NAC_+ solves -> solve\", \"as NMU and NAU uses -> use\", \"where an unknown function is often done by overparameterization -> overparameterization cannot fit functions, the model can\\u2026overparameterization in a property of the model wrt the function that needs to be fit!\", \"My comments aside, I really think authors went out of their way to address our concerns in great detail. What I particularly like is that they did that at ICLR, i.e. on openreview so that this whole discussion will stay here out in the open for good as I see it as an important addition to the paper.\", \"I understand other reviewers\\u2019 concerns that the model presented in this paper is incremental, but I don\\u2019t see the strength of this paper to be just the model itself but the whole informed theoretical + experimental analysis leading to improvements of the model plus the open code which is there to stay (hopefully with the experimental setup too?), as opposed to the original paper. This paper nicely reads as a try to work with a recently presented models, the failure of the presented model and then a detailed process of analyzing and fixing the model and the benchmarks. The paper directly confronts the reproducibility issue with the original model, and improves drastically upon it. That is why I think this paper should definitely be accepted. I\\u2019m not sure whether the presented model will make a big change in the area, but the approach might influence and inspire other researchers to do more thorough analyses.\", \"Consequently, I\\u2019m increasing my score to accept.\"]}",
"{\"title\": \"Summary of revision\", \"comment\": \"Dear reviewers, we appreciate your feedback. A lot of minor changes have been added in the last revisions and we hope we have addressed all of your concerns. To clarify what have been changed, we have made the following overview of major changes.\\n\\n- Elaborated in the assumptions behind the sparsity bias.\\n\\nAdded in section 2.2.: This bias is desired as it restricts the solution space to exact addition, and in section \\\\label{sec:method:nmu} also exact multiplication, which is an intrinsic property of an underlying arithmetic function. However, it does not necessarily restrict the output space as a plain linear transformation will always be able to scale values accordingly. The bias also adds interpretability which is important for being confident in a model\\u2019s ability to extrapolate.\\n\\n- Elaborated on results for NAU.\\n\\nAdded in section 4.1.2.: For addition, NAU is comparable to a linear transformation in success-rate and convergence speed but is more sparse. However, for subtraction a linear transformation cannot consistently solve the task, while NAU and $\\\\mathrm{NAC}_{+}$ solve it.\\n\\n- Balanced conclusion regarding division and gating.\\n\\nAdded to section 5.: A natural next step would be to extend the NMU to support division and add gating between the NMU and NAU, to be comparable in theoretical features with NALU. However we find, both experimentally and theoretically, that learning the division is impractical, because of the singularity when dividing by zero, and that a sigmoid-gate that chooses between two functions with vastly different convergences properties, such as a multiplication unit and an addition unit, cannot be consistently learned.\\n\\n- Added short summary of workshop publication for \\\"Measureing Arithmetic Extrapolation Performance\\\":\\n\\nAdded to section C.: Our \\u201carithmetic task\\u201d is identical to the \\u201csimple function task\\u201d in the NALU paper (Trask et al.,2018). However, as they do not describe their setup in details, we use the setup from Anonomous(2019), which provide Algorithm 3, an evaluation-criterion to if and when the model has converged, the sparsity error, as well as methods for computing confidence intervals for success-rate and the sparsity error.\\n\\n- Added Gated version of NAU/NMU, similar to NALU\\n\\nAdded to section C.5.: Furthermore, we also introduce a new gated unit that simply gates between our proposed NMU and NAU, using the same sigmoid gating-mechanism as in the NALU. This combination is done with separate weights, as NMU and NAU use different weight constrains and can therefore not be shared.\\n\\nAdded to section C.5.3. Which operation the gate converges to appears to be mostly random and independent of the task. These issues are caused by the sigmoid gating-mechanism and thus exists independent of the used sub-units.\\n\\nAdded to section C.5.: updated figure and results.\\n\\n- Added results for sequential addition of MNIST digits\\n\\nPrevious section D.2. is now D.4.\\nNew section D.2.: contains main results for \\\"sequential addition of MNIST digits\\\"\\nNew section D.3.: contains an ablation study of $R_z$ the regularizer used in section D.2.\\n\\nAdded to section D.1.: The sequential MNIST task takes the numerical value of a sequence of MNIST digits and applies a binary operation recursively. Such that, where is the MNIST digit's numerical value. This is identical to the ``MNIST Counting and Arithmetic Tasks'' in Trask et al. 
[2018, section 4.2]. We present the addition variant to validate the NAU's ability to backpropagate, and we add an additional multiplication variant to validate the NMU's ability to backpropagate.\"}",
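A rough sketch of the gated NAU/NMU variant summarized above, assuming `nau` and `nmu` are the two sub-units with separate weights and the gate follows the NALU-style sigmoid construction; the exact gate input used in the paper may differ:

```python
import torch

def gated_unit(x, nau, nmu, G):
    # NALU-style sigmoid gate: g = sigmoid(x G), then a convex combination
    # of the addition unit (NAU) and the multiplication unit (NMU).
    g = torch.sigmoid(x @ G)
    return g * nau(x) + (1 - g) * nmu(x)
```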
"{\"title\": \"Added sequential addition of MNIST digests results\", \"comment\": \"Dear reviewer. We have now also added results for the sequential addition of MNIST digests under Appendix D. We hope this will satisfy your concerns.\"}",
"{\"title\": \"Deleted comments\", \"comment\": \"We apologize for the confusion with the deleted response messages. We had to delete them due to formatting errors related to the 5000 character limit. No text has been deleted. The new comments contain all the text from earlier.\"}",
"{\"title\": \"Response to reviewer #2 - thank you for your review\", \"comment\": \"Dear reviewer #2, we thank you for your review and in particular your feedback on our experimental section. As is often the case with foundational research, the applications are not always immediately clear. We belive that multiplication is useful, however as both NALU and NMU are very recent additions to the field of neural networks, the best applications have yet to emerge. We elaborate on what applications we think multiplication can be applied to below. We would appreciate further feedback and hope that we can employ some of your concerns to strengthen our experimental section.\\n\\n- divisions being difficult to handle does not constitute a sufficient justification for choosing to exclude them: the authors should at the very least propose a plausible way forward for future work.\\n\\nWe understand your concerns, it has been challenging for us to write our results. To be honest we don\\u2019t believe that division actually works for NALU.\\n\\nThat division doesn\\u2019t work is apparent when carefully inspecting Table 1 in the NALU paper. Here the results shows that division on interpolation doesn\\u2019t work, but it does work for extrapolation. Given the construction of NALU, it should be clear that if the model had truly found a correct solution, it should work for both interpolation and extrapolation. Unfortunately, due to the reporting of results in NALU [table 1] bad models can appear to be correctly converged as their comparison is based on a relative improvement over a random baseline model (details in our reviewer #4 response). This is mentioned in Appendix C.7.1.\\n\\nThis motivated us to change the evaluation criteria. We have published an in-depth explanation of these issues as well as a reproduction-study of NALU (shows the same results) in the SEDL workshop at NeurIPS 2019. We have shared this reproduction-study (which includes a table showing that division doesn\\u2019t work) with the authors of NALU, where the first author Andrew Trask publicly responded \\u201cGreat work! We can\\u2019t improve without good benchmarks.\\u201d. We have made an anonymized version of the paper available here: https://www.dropbox.com/s/e03kd4x9j0l7b5b/Measuring_Arithmetic_Extrapolation_Performance.pdf?dl=0 (please respect the double-blinded process, as the non-anonymous is on arXiv).\\n\\n- More generally, the proposed unit needs to be exposed to at least 10K examples to learn a single expression with fewer than 10 inputs (and the success rate already drops to under 65% for 10 inputs).\\n\\nThe complexity of the problem (hidden size, Figure 3) is indeed illusive. A good way to understand the complexity of these problems is to linearize them, such that they can be solved with a linear regression. Take for example the simple case from section 1.1, (x_1 + x_2) * (x_1 + x_2 + x_3 + x_4). An alternative way to learn this problem would be to expand the input vector to include all possible combinations. In this case it would be [x1, x2, x3, x4, x1*x1, x1*x2, x1*x3, x1*x4, x2*x2, x2*x3, x2*x4, x3*x3, x3*x4, x4*x4]. A linear regression could then learn to sum the correct values. For 10 hidden size in Figure 3, this is much more complex as the input size is 100 and we allow up to 10 subsets to be multiplied. To compute the linearized size use Sum(choose(100 + i - 1,i), i=1..10) = 46897636623980, which is a huge input size for a linear regression. 
We hope that this gives some intuition as to why it is such a challenging problem.\\n\\n - What would be the use case for such a unit? Even the NMU is only proposed as a step on the way to a more modular, general-purpose, or efficient architecture, its value is difficult to gauge without some idea of what that would look like.\\n\\nWhen building a basic component, SOTA results on a commonly known benchmark always help the story! However, we believe that the subject of arithmetic extrapolation is still in its infancy and might need more time before it is used ubiquitous. As explicit arithmetic and logical constructs are rarely present in the type of datasets commonly used for evaluating machine learning models (e.g. NLP), we would need to work with individuals that knowledge and access to such data, in order to better understand how we should integrate the NMU with common deep learning contraptions such as the LSTM. In particular, we think that unknown differential equations, or physical models, might be a good application of the NMU. However, in this work, our main concern has been to uncover and overcome some of the theoretical concerns of the NALU and build a component that can work with high number of hidden states, which is necessary in deep neural networks.\"}",
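The linearized-size figure quoted above can be checked directly; each term counts the multisets of size i drawn from 100 inputs:

```python
from math import comb

# Sum(choose(100 + i - 1, i), i = 1..10)
linearized_size = sum(comb(100 + i - 1, i) for i in range(1, 11))
print(linearized_size)  # 46897636623980
```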
"{\"title\": \"Response to reviewer #3, thanks for your thorough review\", \"comment\": \"Dear reviewer #3, thank you for your kind words and thorough review, it is most appreciated.\\n\\n- should provide an explanation of the row in Table 2 showing that a simple linear transformation is able to achieve accuracy and convergence times comparable to those of the NAU\\n\\nThanks, we have added \\u201cFor addition NAU is comparable to a linear transformation in success-rate and convergence speed, but is more sparse.\\u201d\\n\\n- inconsistent captioning in Figure 2c, missing \\\"NAC\\u2022 with\\\"\\n\\nThanks, this has been corrected.\\n\\n- should clarify in Section 4.1 that the \\\"arithmetic dataset\\\" task involves summing only *contiguous* vector entries; this is implied by the summation notation, and made explicit in Appendix Section C, but not specified in Section 4.1\\n\\nThanks, this has been added. Although note that as the first layers is a linear layer, it is invariant to the order of the elements.\\n\\n- it is unclear what experiments you performed to obtain Figure 3, and the additional explanation in Appendix Section C.4 regarding interpolation/extrapolation intervals only adds to the confusion; please clarify the explanation of Figure 3, or else move it to the Appendix\\n\\nThanks, the extrapolation ranges need to be changed to not overlap with the interpolation range also reflect the scale of the interpolation range. We have made this more explicit in both the results section and appendix. All other parameters are unchanged. Let us know if this is still confusing.\\n\\n- should provide an explanation of the universal 0% success rate on the U[1.1,1.2] sampling interval in Figure 3\\n\\nThanks for pointing out the add behaviour of 0% success rate on U[1.1, 1.2]. We simply do not know why that is the case. However, as it was a part of our original testing for interpolation-range sensitivity we have kept it in the plot. We added the following comment in our result section discussing the figure: Interestingly, none of the models can learn the $\\\\mathrm{U}[-1.1,1.2]$, suggesting that certain input distributions might be troublesome to learn.\\u201d\\n\\n- the ordering of some of the sections/figures is confusing and nonstandard: Section 1.1 presents results before explaining what exactly is being measured, Figure 1 shows an illustration of an NMU 2 pages before it is defined, Section 3 could be merged with the Introduction\\n\\nThis is true, we use 1.1 to provide the reader with an example on what we are trying to solve and to highlight the challenges with NALU which motivates why we are looking at multiplication. We focus mainly on what the data input is and what the optimal solution is. We believe this problem introduction is important to give the reader a softer introduction before we begin the more formal mathematical descriptions in our method section. Keep in mind that not everybody is as familiar with arithmetic extrapolation as compared to other more typical subjects.\\n\\nWe do acknowledge that this is not the usual way of presenting a problem. If you believe that this negatively impacts the reading experience and your review, we would gladly either change, replace or completely remove this sub-section.\\n\\n- Grammatical/Typesetting errors:\\n\\nThanks, we truly appreciate your thoroughness.\"}",
"{\"title\": \"Response to reviewer #1 - we appreciate your feedback\", \"comment\": \"Dear Reviewer #1, thank you for your valuable comments and insight. Writing this paper was not easy as we had to juggle theoretical findings, making a new evaluation criterion, finding appropriate tasks, a reproduction study of the NALU, and providing evidence of our new methods. We are grateful for your feedback and would love to collaborate with you on how to best present our findings. Below we have taken snippets of your comments and either modified our submission or provided further questions to get clarification. We appreciate your time and effort.\\n\\n- The proposed neural addition unit uses a linear weight design and an additional sparsity regularizer. However, I will need more intuitions to see whether this is a good design or not.\\n\\nA linear function is always easier to fit than a non-linear function (for example NALU\\u2019s tanh(x)sigmoid(x) weights). We try to elaborate upon this in section 2.2, where we attempt to provide a theoretical analysis of the gradients from the tanh(x)sigmoid(x) weights construct provided by the original NALU paper. Our findings suggest that optimal initialization causes the gradients to be zero. The sparsity regularizer bias the weights to {-1,0,1} which tanh(x)sigmoid(x) unfortunately does not.\\n\\nA sparse solution is often an intrinsic property in the problem domain of arithmetic extrapolation. For example, all the experiments in the NALU paper have this property. Furthermore, even when it is not an intrinsic property, say for example we need to learn 1.5*x1*x2, the arithmetic rules of addition and multiplication mean that these constants can always be learned by another more traditional layer. In this example, a linear transform can learn to multiply x1 by 1.5, or simply add a constant as one of its hidden outputs. Therefore, the bias restricts our optimization space allowing us to find exact solutions but does in combination with traditional layers not restrict what solutions can be found.\\n\\nWe have added the following to section 2.2: \\u201cThis bias is desired as it restricts the solution space to exact addition, and in section 2.5 also exact multiplication, which is an intrinsic property of an underlying arithmetic function. However, it does not necessarily restrict the output space as a plain linear transformation will always be able to scale values accordingly. The bias also adds interpretability which is important for being confident in a model\\u2019s ability to extrapolate.\\u201d\\n\\n- I have to go through the NALU paper over and over again to understand some claims of this paper\\u201d, \\u201cI think the paper can be made more self-contained\\u201d\\n\\nIn the tradeoff between describing the NALU paper and focussing on our own contributions we have chosen to restrict the description of NALU to section 2.1. We are happy to update the paper, so please suggest changes that would help reading.\\n\\n- Overall, I think the paper makes an useful improvement over the NALU, but the intuition and motivation behind is not very clear to me.\\nI think the authors can strengthen the paper by giving more intuitive examples to validate the superiority of the NAU and NMU.\\n\\nIncluding division in NALU means that there is a singularity in the optimization space. As you can see in Figure 2, this leads to a dangerous optimization space where unwanted minimas are close to singularities. 
You will also see that when division is removed the NAC performs significantly better for a hidden size of 2.\\n\\nInitialization is not very important if the hidden size is 2. However, when the hidden size becomes larger optimal initialization is often important. We understand that optimal initialization is not an intuitive subject, but we hope that it is clear that NAC_mul cannot be optimally initialized and that NMU can, and that this is what gives much better performance for a larger hidden size.\\n\\nWe hope that this clarifies things. As we do believe this is already described we it would be very helpful if you could pinpoint which paragraphs lack intuition.\"}",
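As a concrete illustration of the two weight constructions discussed above (a sketch, not the paper's code; the exact form of the sparsity regularizer may differ):

```python
import torch

# NAC-style weights (Trask et al., 2018): W = tanh(W_hat) * sigmoid(M_hat).
# Near the target values {-1, 0, 1}, tanh and sigmoid saturate, so the
# gradients w.r.t. W_hat and M_hat shrink toward zero.
W_hat = torch.randn(100, 2, requires_grad=True)
M_hat = torch.randn(100, 2, requires_grad=True)
W_nac = torch.tanh(W_hat) * torch.sigmoid(M_hat)

# NAU-style weights: plain linear weights clipped to [-1, 1], plus a
# sparsity term biasing entries toward {-1, 0, 1}.
W = torch.randn(100, 2, requires_grad=True)
with torch.no_grad():
    W.clamp_(-1.0, 1.0)
sparsity = torch.mean(torch.minimum(torch.abs(W), 1 - torch.abs(W)))
```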
"{\"title\": \"Response to reviewer #4 - general thanks and comments\", \"comment\": \"Dear reviewer #4, thank you for your thorough review! We have tried our best to conform our paper to the feedback from our previous conference submission, in particular with elaborated results (MNIST), a better connection between theoretical findings and experimental design (testing increased hidden size), and a more fair comparison by excluding division from NAC_mul. Your comments are most useful, and we have updated our paper and responded to your questions using snippets of your review below. We are looking forward to a great discussion and hope to solve all the concerns you might have.\"}",
"{\"title\": \"Response to reviewer #4 - addressing specific comments\", \"comment\": \"- The conclusion of the paper is biased towards the introduced models, but it should clearly define the limitations of these models wrt NALU/NAC\\n\\nThanks for pointing this, we have updated our conclusion with: \\u201cOur study shows that gating behaves close to random for both NALU and a gated NMU/NAU variant. However, when the gate correctly selects multiplication our NMU converges much more consistently.\\u201d\\n\\nFurthermore, as part of our gating-analysis (Appendix C.5) we have added a unit that gates between NMU and NAU similarly to NALU. The results show that a sigmoid gating-mechanism between the NMU and NAU has similar gating-converges results (close to random). We hope that this adds clarity to what is the limitations of NAU and NMU and what are the limitations of sigmoid gating.\\n\\n- The performance of NALU on multiplication is in stark contrast to the results in the original paper (Table 1). This should be commented in the paper why that is, as the original model presents no issues of NALU with multiplication, whereas this paper essentially says that they haven\\u2019t gotten a single model (out of 100 of them) to do multiplication.\\n\\nOriginally we wanted to use the NALU for building NLP applications. However, as we investigated the unit we found that it was difficult to train and very fragile, which the main author agreed on over email correspondence. Deep diving into the result section of the NALU we found that their results, [Table 1], are easily misinterpreted. For example, the table shows that division on interpolation doesn\\u2019t work, but it does work for extrapolation. Given the construction of NALU, it should be clear that if the model had truly found a correct solution that should work for both interpolation and extrapolation. However, due to the reporting of results in NALU [table 1] bad models can appear to be correctly converged as their comparison is based on a relative improvement over a random baseline model. E.g. if the random baseline model has a loss of 1e10, a loss of 1e7 would be 0.001 using their reporting method. We choose to compare our results against a successful model instead, which is more interpretable and allows confidence intervals. Furthermore, our analysis of the convergence stability of a single NALU-gate (Appendix C.5) shows that convergence of gating is difficult and suggests cherry-picking of results.\\n\\nThis is what motivated us to keep the experiment but change the evaluation criteria from a single-instance relative MSE to a success-criterium summarized over multiple seeds. We have published an in-depth explanation of these issues as well as a reproduction-study of NALU (shows the same results) in the SEDL workshop at NeurIPS 2019. We have shared this reproduction-study with the authors of NALU, where the main author Andrew Trask publicly responded \\u201cGreat work! We can\\u2019t improve without good benchmarks.\\u201d. We have made an anonymized version of the paper available here: https://www.dropbox.com/s/e03kd4x9j0l7b5b/Measuring_Arithmetic_Extrapolation_Performance.pdf?dl=0\\n\\n- Could you explicitly comment on the paper why is the parameter sparsity such a sought-after quality of these models? ...\\n\\nThe linear model on the subtraction problem is just the NAU without regularization and weight-clipping. Its success-rate is only 14%, while NAU and NAC_+ are 100%. This should justify that constraining to [-1, 1] is necessary. 
The sparsity regularizer itself is necessary as seen in the ablation study (Appendix C.3), although in that example both the regularizer and the clipping can be removed, with only a minor loss in success-rate and convergence speed.\\n\\nGenerally speaking for Arithmetic Extrapolation, the fundamental assumption of the problem domain is that there is an exact solution of arithmetic operators that can be found. Because is always possible for a linear transformation to scale its output or add a learned constant output (linear bias), the bias towards {-1, 0, -1} does not restrict the output space, only the optimization space, when other traditional layers (that are based on linear transformations) are present.\\n\\nFurthermore having a sparse solution is much more interpretable which by manual inspection can add confidence in the model\\u2019s ability to extrapolate. We elaborated on this in section 2.2.\\n\\n- Throughout the text you say that NMU supports large hidden input sizes? Why hidden??\\nUsing the term \\u201cinput size\\u201d could be misinterpreted as the network\\u2019s input size (the dimensions of the dataset).\\n\\n- Figure 4 is identical to figure in D.2\\nWe can understand the confusion! But by closely examining the values of the plot we find that the NMU has 100% success-rate in figure 4, but only 80% success-rate in figure D.2.\\n\\n- Repetition that E[z] = 0 is a desired property in 2.2, 2.3, 2.4\\nThis is correct, our goal has been to describe the issues independently of one another. Would you rather prefer that we reword this?\"}",
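The reporting pitfall described above, using the numbers from the example in the response:

```python
loss_random_baseline = 1e10
loss_model = 1e7  # still far from an exact arithmetic solution
print(loss_model / loss_random_baseline)  # 0.001 -- appears "solved"
```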
"{\"title\": \"Response to reviewer #4 - regarding other NALU experiments\", \"comment\": \"- Why did you introduce the product of the sequential MNIST experiment but did not presents results on the original sum / counting of digits? ...\\n\\nThe major argument of using the NALU is extrapolation, multiplication, and plug-in integration with neural networks. 4.1 tests extrapolation and 4.2 can, without major modifications, test multiplication in integration with a larger network (CNN).\\n\\nWhile we do propose the NAU, the main focus of our paper is the NMU. We do not think investigating the NAC+ is particularly interesting as it works. As a result our experiments focuses on multiplication.\\n\\nBelow we have elaborated on the tasks of the NALU paper and why we believe that they fit/do not fit the purpose of: extrapolation, plug-in integration with neural networks, and multiplication.\\n\\n4.1 Simple Function Learning Tasks\\n-Extrapolation: Numeric extrapolation can be achieved by increasing the input/output range\\n-Integration with neural networks: By increasing the hidden-size we can assess the theoretical modeling capacity of these units.\\n-Multiplication: is explicitly tested.\\n\\nIn the original paper dataset hyperparameters are not reported, which is why we choose to extensively test various combinations.\\n\\n4.2 MNIST Counting and Arithmetic Tasks\\n-Extrapolation: This experiment does not test value-extrapolation (the primary goal of NALU), as the network needs to see all digits. The sequential extrapolation is a different type of extrapolation that relates more to getting precise sparse values, as minor errors will accumulate exponentially.\\n-Integration with neural networks: It does not integrate with a neural network, but by placing the arithmetic component after a CNN we can the arithmetic units capabilities and how well gradient signal travels through the arithmetic units.\\n-Multiplication: While it is called \\u201cArithmetic tasks\\u201d, they only test for addition.\\n\\nWe choose to extend this to multiplication as it is the focal point of our paper. We will run the tasks with our NAU for comparison, which we will report in the appendix when the results are ready.\\nTo further elaborate, we added the following description to our introduction \\u201cWe propose the MNIST multiplication variant as we want to test the NMU's and 's ability to learn from real, noisy data where the numeric input has to be learned from features.\\u201c\\n\\n4.3 Language to Number Translation Tasks\\n-Extrapolation: This task does not pose any extrapolation requirements, as the test set consists of numbers in the training range.\\n-Integration with neural networks: The arithmetic layer could be placed in the recurrent connection and use the operands to choose between gating. However, when contacting the main author about their architecture we find that they do not use their arithmetic components in the recurrent layer. Instead they use it to modify the final output, which means that all arithmetic modeling is performed by the LSTM (here is an anonymous link to the architecture that the main author has agreed on in our email correspondence: https://ibb.co/x7J1FZg).\\n-Multiplication: Multiplication is not required. 
This may be counter-intuitive, but the network does not need to learn multiplication to produce 7*100+2 = 702, as the network always multiplies by 100 and therefore it can be learned using a linear layer.\\n\\nGiven this does not test for extrapolation or multiplication, we have chosen not to include this task.\\n\\n4.4 Program Evaluation\\n-Extrapolation: is tested\\n-Integration with neural networks: also tested\\n-Multiplication: no multiplication is tested. They describe the experiment as \\u201cThe first consists of simply adding two large integers, and the latter involves evaluating programs containing several operations (if statements, +, \\u2212).\\u201c\\n\\nGiven this does not test multiplication, we have chosen to not include this experiment.\\n\\n4.5 Learning to Track Time in a Grid-World Environment \\n-Extrapolation: This task requires extrapolation when testing on numeric \\u201cwaiting\\u201d ranges above the training range.\\n-Integration with neural networks: As detailed by the authors, arithmetic components needs to be integrated into the architecture.\\n-Multiplication: This task concerns counting, counting can be solved by an LSTM as shown in formal language work (https://arxiv.org/abs/1805.04908).\\n\\nBecause this task only tests counting, we do not think it is interesting. Furthermore, it is difficult to implement and the authors provide no code for this. We have asked the authors for the code, but they were not able to help us.\\n\\n4.6 MNIST Parity Prediction Task & Ablation Study\\n-Extrapolation: As mentioned explicitly in the NALU paper, this task is designed for interpolation.\\n-Integration with neural networks: The arithmetic unit integrates with a larger network as described in Segu\\u00ed et al.\\n-Multiplication: This task is designed for addition and not multiplication.\\n\\nBecause of the lack of extrapolation and multiplication we have chosen not to include this task.\"}",
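The point about 702 in one snippet: with the multiplier fixed at 100, the mapping is linear in the digit values, so no multiplication between learned inputs has to be performed:

```python
def number_from_digits(hundreds, units):
    # 100 is a constant coefficient, so this is a linear map of its inputs.
    return 100 * hundreds + units

print(number_from_digits(7, 2))  # 702
```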
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"DISCLAIMER: I reviewed a previous version of this paper at another venue.\\n\\nThis paper introduces Neural Addition Units (NAU) and Neural Multiplication Units (NMU), essentially redeveloped models of the Neural Arithmetic Logic Unit (NALU). The paper presents a strong case that the new models outperform NALUs in a few metrics: rate of convergence, learning speed, parameter number and model sparsity. The performance of NAU is better than NAC/NALU, as is the performance of NMU with a caveat that the presented NMU here cannot deal with division, though it can deal with negative numbers (as opposed to NALU).\\n\\nWhat this paper excels at is a thorough theoretical and practical analysis of NALU\\u2019s issues and how the authors design the two new models to overcome these issues. The presented issues of NALU are numerous, including unstable optimization space, expectations of gradients converging to zero, the inability of NALUs gating to work as well as intended and its issues with division, and finally, the intended values of -1, 0, 1 in NALU do not get as close to these values as intended. \\n\\nThe paper is easy to read, modulo a number of typos and admittedly some weirdly written sentences (see typos and minor issues later) and I would definitely recommend another iteration over the text to improve the issues with it as well as the style of writing. I am quite fond of the analysis and the informed design of the two new models, as well as the simplicity of the final models which are fairly close to the original models but have been shown both theoretically and practically that they work.\\nIt is great to see that the paper improved since my last review and stands stronger on its results, but there are still a few issues with it that make me hesitant to fully accept the paper:\\n- The conclusion of the paper is biased towards the introduced models, but it should clearly define the limitations of these models wrt NALU/NAC\\n- The performance of NALu on multiplication is in stark contrast to the results in the original paper (Table 1). This should be commented in the paper why that is, as the original model presents no issues of NALU with multiplication, whereas this paper essentially says that they haven\\u2019t gotten a single model (out of 100 of them) to do multiplication.\\n- Could you explicitly comment on the paper why is the parameter sparsity such a sought-after quality of these models?\\n- You \\u2018assume an approximate discrete solution with parameters close to {1-, 0, 1} is important\\u2019. What do you have to back this assumption? Would it be possible to learn the arithmetic operations (and generalize) even with parameters different than those?\\n- Why did you introduce the product of the sequential MNIST experiment but did not presents results on the original sum / counting of digits? The change makes it hard to compare with the results in the original paper, and you do not present the reason why. This also makes me ask why didn't you compare to NALU on more tasks presented in the paper?\\n\\nTo conclude, this paper presents a well-done experimental and theoretical analysis of the issues of NALU and ways to fix it. 
Though the models presented outperform NALU, they still come with their own issues, namely they do not support division, and (admittedly, well corroborated with analysis) are not joined in a single, NALU-like model, that can learn multiple arithmetic operations. The paper does a great analysis of the models\\u2019 issues, with an experimental setup that highlights these issues, however, it does that on only one task from the original paper, and a(n insufficiently justified) modification of another one (multiplication of MNIST digits)---it does not extensively test these models on the same experimental setup as the original paper does.\", \"typos_and_smaller_issues\": [\"Throughout the text you say that NMU supports large hidden input sizes? Why hidden??\", \"Figure 4 is identical to figure in D.2\", \"Repetition that E[z] = 0 is a desired property in 2.2, 2.3, 2.4\", \"In Related work, binary representation -> one-hot representation\", \"Found empirically in () - remove parentheses and see\", \"increasing the hidden size -> hidden vector size?\", \"NAU and NMU converges/learns/doesobtains -> converge/learn/do/obtain\", \"hard learn -> hard to learn ?\", \"NAU and NMU ...and improves -> improve\", \"Table 1 show -> shows\", \"Caption Table 1: Shows the - quite unusual caption (treating Table 1 as part of the sentence), would suggest to rephrase (e.g. Comparison/results of \\u2026 on the \\u2026 task). Similarly for Table 2...and Figure 3\", \"experiemnts -> experiments\", \"To analyse the impact of each improvements\\u2026.. - this sentence is missing a chunk of it, or To should be replaced by We\", \"Allows NAC_+ to be -> remove be\", \"can be express as -> expressed\", \"The Neural Arithmetic Expression Calculator () propose learning - one might read this as the model proposes, not the authors / paper / citation propose\\u2026(also combine or combines in the next line)\", \"That the NMU models works -> model works? models work?\", \"We choice the -> we choose the\", \"hindre -> hinder\", \"C.5 seperate -> separate\", \"There\\u2019s a number of typos in the appendix\", \"convergence the first -> convergence of the first?\", \"Where the purpose is to fit an unknown function -> I think a more appropriate statement would hint at an often overparameterization in practice done when fitting a(n unknown) function\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper aims to address several issues shown in the Neural Arithmetic Logic Unit, including the unstability in training, speed of convergence and interpretability. The paper proposes a simiplification of the paramter matrix to produce a better gradient signal, a sparsity regularizer to create a better inductive bias, and a multiplication unit that can be optimally initialized and supports both negative and small numbers.\\n\\nAs a non-expert in this area, I find the paper interesting but a little bit incremental. The improvement for the NAC-addition is based on the analysis of the gradients in NALU. The modification is simple. The proposed neural addition unit uses a linear weight design and an additional sparsity regularizer. However, I will need more intuitions to see whether this is a good design or not. From the experimental perspective, it seems to work well.\\nCompared to NAU-multiplication, the Neural Multiplication Unit can represent input of both negative and positive values, although it does not support multiplication by design. The experiments show some gain from the proposed NAU and NMU.\\n\\nI think the paper can be made more self-contained. I have to go through the NALU paper over and over again to understand some claims of this paper. Overall, I think the paper makes an useful improvement over the NALU, but the intuition and motivation behind is not very clear to me. I think the authors can strengthen the paper by giving more intuitive examples to validate the superiority of the NAU and NMU.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The authors extend the work of Trask et al 2018 by developing alternatives to the Neural Accumulator (NAC) and Neural Arithmetic Logic Unit (NALU) which they dub the Neural Addition Unit (NAU) and Neural Multiplication Unit (NMU), which are neural modules capable of performing addition/subtraction and multiplication, respectively. The authors show that their proposed modules are capable of performing arithmetic tasks with higher accuracy, faster convergence, and more theoretically well-grounded foundations.\", \"The new modules modules are relatively novel, and significantly outperform their closest architectural relatives, both in accuracy and convergence time. The authors also go to significant lengths to demonstrate that the parameters in these modules can be initialized and learned in a more theoretically well-grounded manner than their NAC/NALU counterparts. For these reasons I believe this paper should be accepted.\", \"General advice/feedback:\", \"should provide an explanation of the row in Table 2 showing that a simple linear transformation is able to achieve accuracy and convergence times comparable to those of the NAU\", \"should provide an explanation of the universal 0% success rate on the U[1.1,1.2] sampling interval in Figure 3\", \"inconsistent captioning in Figure 2c, missing \\\"NAC\\u2022 with\\\"\", \"should clarify in Section 4.1 that the \\\"arithmetic dataset\\\" task involves summing only *contiguous* vector entries; this is implied by the summation notation, and made explicit in Appendix Section C, but not specified in Section 4.1\", \"it is unclear what experiments you performed to obtain Figure 3, and the additional explanation in Appendix Section C.4 regarding interpolation/extrapolation intervals only adds to the confusion; please clarify the explanation of Figure 3, or else move it to the Appendix\", \"the ordering of some of the sections/figures is confusing and nonstandard: Section 1.1 presents results before explaining what exactly is being measured, Figure 1 shows an illustration of an NMU 2 pages before it is defined, Section 3 could be merged with the Introduction\", \"Grammatical/Typesetting errors:\", \"\\\"an theoretical\\\" : bottom of pg 2\", \"\\\"also found empirically in (see Trask et al. (2018)\\\" : top of pg 4\", \"\\\"seamlessly randomly\\\" : middle of pg 5\", \"\\\"We choice\\\" : middle of pg 6\", \"inconsistent typesetting of \\\"NAC\\\" : bottom of pg 6\", \"\\\"hindre\\\" : middle of pg 8\", \"\\\"to backpropergation\\\" : bottom of pg 8\", \"\\\"=\\u2248\\\" : top of pg 17\", \"\\\"mathcalR\\\" : bottom of pg 23\", \"\\\"interrest\\\" : bottom of pg 24\", \"\\\"employees\\\" : bottom of pg 24\", \"\\\"models, to\\\" : bottom of pg 24\", \"\\\"difference, is\\\" : bottom of pg 24\", \"\\\"consider them\\\" : bottom of pg 24\", \"\\\"model, is\\\" : top of pg 25\", \"\\\"task, is\\\" : top of pg 25\", \"\\\"still struggle\\\" : top of pg 25\", \"\\\"seam\\\" : top of pg 27\", \"\\\"inline\\\" : top of pg 27\", \"inconsistent typesetting of \\\"NAC\\\" : top of pg 27\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose the Neural Multiplication Unit (NMU), which can learn to solve a family of arithmetic operations using -, + and * atomic operations over real numbers from examples. They show that a combination of careful initialization, regularization and structural choices allows their model to learn more reliably and efficiently than the previously published Neural Arithmetic Logic Unit.\\n\\nThe NALU consists of two additive sub-units in the real and log-space respectively, which allows it to handle both additions/subtractions and multiplications/divisions, and combines them with a gating mechanism. The NMU on the other hand simply learns a product of affine transformations of the input. This choice prevents the model from learning divisions, which the authors argue made learning unstable for the NALU case, but allows for an a priori better initialization and dispenses with the gating which is empirically hard to learn. The departures from the NALU architecture are well justified and lead to significant improvements for the considered applications, especially as far as extrapolation to inputs outside of the training domain.\\n\\nThe paper is mostly well written (one notable exception: the form of the loss function is not given explicitly anywhere in the paper) and well executed, but the scope of the work is somewhat limited, and the authors fail to properly motivate the application or put it in a wider context.\\n\\nFirst, divisions being difficult to handle does not constitute a sufficient justification for choosing to exclude them: the authors should at the very least propose a plausible way forward for future work. More generally, the proposed unit needs to be exposed to at least 10K examples to learn a single expression with fewer than 10 inputs (and the success rate already drops to under 65% for 10 inputs). What would be the use case for such a unit? Even the NMU is only proposed as a step on the way to a more modular, general-purpose, or efficient architecture, its value is difficult to gauge without some idea of what that would look like.\"}"
]
} |
rJe4_xSFDB | Lipschitz constant estimation of Neural Networks via sparse polynomial optimization | [
"Fabian Latorre",
"Paul Rolland",
"Volkan Cevher"
] | We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks. The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming. We show how to use the sparse connectivity of a network to significantly reduce the complexity of computation. This is especially useful for convolutional as well as pruned neural networks. We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the $\ell_\infty$-Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.
| [
"robust networks",
"Lipschitz constant",
"polynomial optimization"
] | Accept (Poster) | https://openreview.net/pdf?id=rJe4_xSFDB | https://openreview.net/forum?id=rJe4_xSFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"t3ThYgE2pw",
"ryly93ZjjS",
"r1e5jsZsoS",
"SygL05bjiB",
"B1emwTnzsH",
"B1grEtzMsr",
"ryePC-_y5r",
"BklFCPgRKH",
"Skxd_pz5Yr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748030,
1573751942595,
1573751713905,
1573751502353,
1573207386701,
1573165356625,
1571942863419,
1571846096824,
1571593584029
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2394/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2394/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2394/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2394/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2394/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2394/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2394/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2394/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper improves upper bound estimates on Lipschitz constants for neural networks by converting the problem into a polynomial optimization problem. The proposed method also exploits sparse connections in the network to decompose the original large optimization problem into smaller ones that are more computationally tractable. The bounds achieved by the method improve upon those found from a quadratic program formulation. The method is tested on networks with random weights and networks trained on MNIST and provides better estimates than the baselines.\\n\\nThe reviews and the author discussion covered several topics. The reviewers found the paper to be well written. The reviewers liked that tighter bounds on the Lipschitz constants can be found in a computationally efficient manner. They also liked that the method was applied to a real-world dataset, though they noted that the sizes of the networks analyzed here are smaller than the ones in common use. The reviewers pointed out several ways that the paper could be improved. The authors adopted these suggestions including additional comparisons, computation time plots, error bars, and relevant references to related work. The reviewers found the discussion and revised paper addressed most of their concerns.\\n\\nThis paper improves on existing methods for analyzing neural network architectures and it should be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updated version with new experiments\", \"comment\": \"Dear Reviewer1, we have uploaded a revised version of the paper addressing your concerns:\\n\\n1. In section 7.1 we have added computation time plots. We see the time is lower for LipOpt when sparsity is increased. LiPopt with higher degree and less sparsity can take more time than SDP BUT it obtains bounds which are more tight. i.e.\\nif one is ok with less tight bounds, LiPopt with low degree is faster than SDP, but if time is available LiPopt allows to trade more computation for accuracy.\\n\\nWe also remark that before our work, the SDP method was limited to one-hidden-layer networks. As part of our contribution we generalize this method to arbitrary number of layers\\n\\n2. Also in section 7.1, have added error bars (time and Lipschitz constant estimation). \\n\\nWe hope this will reassure you about the quality of the work and that you consider raising your score. We thank you again for your contribution to the improvement of the paper.\"}",
"{\"title\": \"Updated version with new experiments\", \"comment\": \"Dear Reviewer3, we have uploaded a revised version of the paper addressing your concerns:\\n\\n1. We have included a new section (7.3) comparing the Local Lipschitz constant bounds that we can obtain, with the\\nglobal constant, we find that they can differ quite a bit. Hence, they potentially provide larger certified regions around samples, in the context of certified robustness of networks.\\n\\n2. In section 7.1 we have added computation time plots. We see the time is lower for LipOpt when sparsity is increased. LiPopt with higher degree and less sparsity can take more time than SDP BUT it obtains bounds which are more tight. i.e.\\nif one is ok with less tight bounds, LiPopt with low degree is faster than SDP, but if time is available LiPopt allows to trade more computation for accuracy.\\n\\nWe also remark that before our work, the SDP method was limited to one-hidden-layer networks. As part of our contribution we generalize this method to arbitrary number of layers.\\n\\n\\nWe hope this will reassure you about the quality of the work and that you consider raising your score. We thank you again for your contribution to the improvement of the paper.\"}",
"{\"title\": \"Updated version with new experiments\", \"comment\": \"Dear Reviewer2, we have uploaded a revised version of the paper addressing your concerns:\\n\\n1. In equation (4), the parameters over which the max is taken are in the description of the set, they are the variables 0<= s_i <= 1, and -1 <= t_i <=1.\\n\\n2. We have included a new section (7.3) comparing the Local Lipschitz constant bounds that we can obtain, with the\\nglobal constant, we find that they can differ quite a bit. Hence, they potentially provide larger certified regions around samples, in the context of certified robustness of networks.\\n\\nWe will consider adding an experiment on how the bounds change as the depth increases for the final version of the paper.\\n\\nWe hope this will reassure you about the quality of the work and that you consider raising your score. We thank you again for your contribution to the improvement of the paper.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Many thanks for the changes to the draft, which have improved it substantially.\\n\\nI am quite happy with the response as well. I imagine that you could perhaps just vary the seed to produce the error bars:\\n seed = 7\\n np.random.seed(seed)\\n torch.manual_seed(seed)\\nrather than necessarily vary the neural network otherwise. This would already account for the randomisation in the stochastic gradient descent or similar.\"}",
"{\"title\": \"Author response to Reviewers\", \"comment\": \"We thank the reviewers for their feedback, and we address their concerns:\\n\\nFirst, we have uploaded a new version of the paper correcting the following:\\n\\n1. Fixed some typos pointed out by Reviewer2 and Reviewer1\\n\\n2. As was asked by Reviewer1, we have added references in the statement of Theorem 2, which is a classical result in algebraic geometry and we only include for completeness (we do not claim it is our original work) as well as\\nTheorem 3, as our work leverages such results to implement the algorithm for upper bounding the Lipschitz constant. We remark that they are adapted to our particular setting, as the fully general result is not needed and we think it might hurt the readability and accessibility of the paper to a broad audience.\\n\\nWe have made this more explicit in the new version and we hope that Reviewer1 agrees it is better understood now that Thm2 and Thm3 are not presented as original work.\\n\\n3. At the end of section 4, we added more references to the use of sparsity in polynomial optimization (which we don't claim to be part of our contribution), as suggested by Reviewer1. Our work shows that the sparsity of the neural network is directly linked to the sparsity of its norm-gradient polynomial, which in turn allows the use of the sparse-polynomial optimization methods. We have made this more clear in the last paragraph in section 4.\\n\\n4. We included references to other applications of sparse polynomial optimization in safety verification, as suggested by Reviewer1. We are thankful for improving the completeness of our bibliography. \\n\\n5. We added the bound obtained by LiPopt in our MNIST example, with relaxation degree 4. The bound is tighter.\\n\\nSecondly, we will add the following as soon as possible:\\n\\n1. We will plot how the estimate of Local Lipschitz constant improves over the global Lipschitz constant, when we evaluate over an l_infinity ball centered at some particular point x_0, with varying radius epsilon. In this way we aim to address the interest of Reviewer2 and Reviewer3 in this application of our proposed method.\\n\\n2. We have found that the naive upper bound degrades considerably when the depth is increased, while the upper bounds we compute degrade at a much slower rate. So overall, the improvement over the naive upper bound that we can obtain with our method becomes more pronounced with increased depth. We will include an experiment and discussion on this phenomenon. In this way we aim to address a concern of Reviewer2.\\n\\n3. We will add error bars on our plots. However, this needs a small change in the plots as we now explain. When randomizing over the neural network's weights, the Lipschitz constant naturally becomes a random variable, and the error bars would show this. In order to provide meaningful error bars we want to plot the variability of the approximation error, but this would require access to the true lipschitz constant, which we can not obtain. We will use the lower bound obtained by sampling as an estimate of the error. This should provide a better sense of how much of an improvement does the LP method has over the SDP. In this way we will address the concern of Reviewer1 about the NeurIPS reproducibility checklist.\\n\\n4. We will compare average solving time for LiPopt with relaxation degree 2, 3 and 4, for one and two hidden layers, compared to the SDP approach. 
At lower degrees of relaxation we have observed it is faster to solve an LP rather than an SDP and it can obtain better bounds. If we consider higher degrees of relaxation we observe the LP can take more time to solve BUT it obtains increasingly tighter bounds on the Lipschitz constant, thus providing a way to trade off more computation time with accuracy. We will also asses the memory consumption.\\n\\nWith this we will hopefully improve our paper and make our claims and contributions stronger, as suggested by the reviewers.\", \"a_final_remark\": \"The SDP method, although is not the one we suggest to use due to the scalability of commercial SDP solvers, we consider also part of our algorithmic contribution. Although first proposed by Raghunathan et al. 2018a, In its original form it was limited to the one hidden layer case. In our work we show how it extends also to the multilayer case. Note that the limitation to the one hidden layer case is pointed out as a main drawback by subsequent work (Raghunathan et al. 2018b) among other works, so we think that lifting this drawback is a valuable contribution.\\nWe make this point in section 5 (page 7).\", \"references\": \"Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses\\nagainst adversarial examples. In International Conference on Learning\\nRepresentations, 2018a. URL https: //openreview.net/forum?id=Bys4ob-Rb.\\n\\nAditi Raghunathan, Jacob Steinhardt, and Percy S Liang. Semidefinite\\nrelaxations for certifying robustness to adversarial examples. In Advances in\\nNeural Information Processing Systems, pp. 10877\\u201310887, 2018b.\"}",
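The sampling lower bound mentioned in point 3 can be computed with autodiff; this is a sketch under our assumptions, not the authors' code, using the fact that the $\ell_\infty$-Lipschitz constant of a scalar-valued differentiable $f$ equals $\sup_x \lVert \nabla f(x) \rVert_1$:

```python
import torch

def sampled_lipschitz_lower_bound(f, dim, n_samples=1000, scale=1.0):
    # Lower-bounds sup_x ||grad f(x)||_1 by sampling random inputs.
    best = 0.0
    for _ in range(n_samples):
        x = scale * torch.randn(dim, requires_grad=True)
        (g,) = torch.autograd.grad(f(x), x)
        best = max(best, g.abs().sum().item())
    return best
```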
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors introduce a framework for computing upper bounds on the Lipschitz constant for neural nets. The main contribution of the paper is to leverage the sparsity properties of typical feed-forward neural nets to reduce the computational complexity of the algorithm. Through experiments, the authors show that the proposed algorithm computes tighter Lipschitz bounds compared to baselines.\\n\\nThe approach proposed in the paper looks interesting. Although the presentation can be made clearer in places. For example, in equation (4), it would be helpful to explicitly state over which parameters the max is taken. There's also a number of small typos that need to be fixed. For example: \\\"We refer to d as the depth, and we we focus on the case where fd has a single real value as output.\\\" on page 1.\\n\\nI found the proposed algorithm and the discussions in Section 2 and 3 interesting, although I am not familiar enough with the literature on polynomial optimization to evaluate whether there is any significantly new idea presented in these sections. I found section 4 very interesting too, and very important towards making the algorithm actually computationally tractable. I have a couple of concerns with the rest of the paper however, which `I describe below:\\n\\n1. It is nice that upper bounds for the local Lipschitz constant can be incorporated easily into the formulation. I would have liked to see some experiments on evaluating local Lipschitz constants though, and how they compare with other methods, since this is a very popular setting in which such techniques are used nowadays.\\n\\n2. The paper overall I think would benefit from a better experimental evaluation. It would be interesting to see how much the sparsity pattern in convnets affect results compared to other baselines. It would also be interesting to see how the bound degrades as the network grows bigger, and in particular as the depth increases.\\n\\nGiven the lack of thorough experiments in the paper, I am giving the paper a borderline rating. I am however willing to increase my score based on discussions with the authors and other reviewers.\\n\\n===================================\", \"edit_after_rebuttal\": \"The latest draft addresses most of my concerns, and I am happy to recommend accepting this paper now.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a general approach for upper bounding the Lipschitz constant of a neural network by relaxing the problem to a polynomial optimization problem. And the authors extend the method to fully make use of the sparse connections in the network so that the problem can be decomposed into a series of much smaller problems, saving large amount of computations and memory. Even for networks that don't have high-level sparse connections, the proposed method can still help to reduce the size of the problem. This paper also compares the proposed LiPopt method with another solution derived from a quadratically constrained quadratic program reformulation. Compared with this method, the LiPopt method can handle cases with more parameters efficiently.\\n\\nCalculating a TIGHT upper bound of a neural network efficiently is very valuable and useful in many areas in deep learning community. And I really like the potential to use this LiPopt method to upper bound local Lipschitz constant in a given neighboring region, which will be very useful in certificated robustness application, etc..\\n\\nI also like that the authors present results on networks trained on real-world dataset (MNIST). My only suggestion is that I'd like to see LiPopt's computation time and memory usage compared to its counterparts, as the authors argue the proposed method can fully exploit the sparse connections to reduce the problem size.\\n\\n=======\", \"update\": \"I am satisfied with the authors' solid response and would like to raise my score.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors study the problem of estimating the Lipschitz constant of a deep neural network with ELO activation function. The authors formulate the problem as a polynomial optimisation problem, which is elegant. Subsequently, they utilise an LP hierarchy based on the Krivine-Vasilescu-Handelman\\u2019s Positivstellensatz and suggest exploiting sparsity therein. The computational results are clearly not sufficient to apply this approach to real-world neural networks, but are still respectable.\\n\\nSection 3 (Theorem 2) is not original work, as leaving the theorem without a reference would imply: the authors cite Section 9 of Lasserre's 2015 book later, so they are clearly aware of this, and there are many application even within verification, e.g.,\", \"https\": \"//ieeexplore.ieee.org/document/8493559\\n\\nThe suggestions as to the exploitation of sparsity (Section 4) are not original work either. The authors could cite, e.g., JB Lasserre: Convergent SDP-relaxations in polynomial optimization with sparsity (SIAM Journal on Optimization, 2006), as one of the early proponents of the exploitation of sparsity.\", \"in_section_7\": \"-- The claim \\\"We observed clear improvement of the Lipschitz bound obtained, compared to the SDP method\\\" is not supported by the results the authors present. \\n-- The authors do not present the run-time. This needs to be included, considering they imply that the key improvement over the traditional SDP is that this works with smaller variables and should be faster. \\n-- The presentation of the experimental results should be improved, so as to follow the NIPS reproducibility checklist, or at least have error bars at one standard deviation and standard deviation in the table. \\n\\nOther than that, the paper is well written (modulo Section missing in \\\"Section 5\\\" at the top of Section 7), and I would recommend its acceptance.\"}"
]
} |
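A note for readers skimming the LiPopt record above: the quantity under discussion is the Lipschitz constant of a network, and the two simplest reference points are the norm-product upper bound and a sampled gradient-norm lower bound; tighter methods such as the paper's LP hierarchy live between them. The sketch below illustrates only those two baseline quantities, not LiPopt itself; the toy ELU network, its sizes, and the sampling budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network f(x) = w2 . elu(W1 x); ELU is 1-Lipschitz.
W1 = rng.normal(size=(50, 10)) / np.sqrt(10)
w2 = rng.normal(size=50) / np.sqrt(50)

def grad_f(x):
    # Gradient of f at x: W1^T (elu'(W1 x) * w2), with elu'(z) = 1 for z > 0
    # and exp(z) otherwise.
    pre = W1 @ x
    dact = np.where(pre > 0, 1.0, np.exp(pre))
    return W1.T @ (dact * w2)

# Naive upper bound: product of layer operator norms (often loose).
upper = np.linalg.norm(w2) * np.linalg.norm(W1, 2)

# Sampled lower bound: max gradient norm over random inputs.
lower = max(np.linalg.norm(grad_f(rng.normal(size=10))) for _ in range(2000))

print(f"norm-product upper bound: {upper:.3f}")
print(f"sampled lower bound:      {lower:.3f}")
```

The gap between the two printed numbers is what polynomial-optimization hierarchies of the kind reviewed above aim to close.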
SJx4Ogrtvr | Random Bias Initialization Improving Binary Neural Network Training | [
"Xinlin Li",
"Vahid Partovi Nia"
] | Edge intelligence, especially binary neural networks (BNNs), has attracted considerable attention from the artificial intelligence community recently. BNNs significantly reduce the computational cost, model size, and memory footprint. However, there is still a performance gap between successful full-precision neural networks with ReLU activation and BNNs. We argue that the accuracy drop of BNNs is due to their geometry.
We analyze the behaviour of the full-precision neural network with ReLU activation and compare it with its binarized counterpart. This comparison suggests random bias initialization as a remedy to activation saturation in full-precision networks and leads us towards improved BNN training. Our numerical experiments confirm our geometric intuition. | [
"Binarized Neural Network",
"Activation function",
"Initialization",
"Neural Network Acceleration"
] | Reject | https://openreview.net/pdf?id=SJx4Ogrtvr | https://openreview.net/forum?id=SJx4Ogrtvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0UJ0v0KhtO",
"BJeE7ZOaKr",
"Byg1CKm9KH",
"H1lAIDb-Kr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747992,
1571811611700,
1571596742899,
1570998102224
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2393/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2393/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2393/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The article studies the behaviour of binary and full precision ReLU networks towards explaining differences in performance and suggests a random bias initialisation strategy. The reviewers agree that, while closing the gap between binary networks and full precision networks is an interesting problem, the article cannot be accepted in its current form. They point out that more extensive theoretical analysis and experiments would be important, as well as improving the writing. The authors did not provide a rebuttal nor a revision.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method to initialize the bias terms in neural network layers, and argues that the proposed method improve the performance of binary neural networks (BNNs). The paper justifies the proposed method by analyzing the geometric properties of the ReLU and the hard tanh (htanh) activation functions, as well as by empirical results on the CIFAR-10 dataset using the (binary variants) of VGG-7 and ResNet.\\n\\nWhile closing the performance gap between BNNs and their full-precision counterparts is an interesting problem of practical importance, this paper has several limitations: \\n\\n(1) the analysis of geometric properties of ReLU/htanh is not sufficiently precise and clear;\\n(2) the paper does not clearly present the connections between the htanh activation function and the straight-through estimator employed in back-propagating the gradients in training a BNN;\\n(3) the experimental results are too limited on just one dateset, and only error rate on validation set is reported, however, lower error rate on validation set won't guarantee better performance on test set;\\n(4) the presentation is imprecise and unpolished.\", \"minor_comments\": \"\", \"section_2\": \"\\\"Tang et al. replaced replacing ReLU\\\" -> \\\"Tang et al. replaced ReLU\\\"\\n\\\"many relaated works\\\" -> \\\"many related works\\\"\", \"section_3\": \"please define the symbols used in Equation (1)\", \"title_of_figure_2\": \"\\\"behavior of ReLu\\\" -> \\\"behavior of ReLU\\\"\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper tries to improve the training for the binary neural network.\", \"weaknesses\": \"[-] A lack of related works. There have been many related works about BNN in these years (after 2017), but the authors do not have a quick summary of them.\\n[-] More reference. e.g, when authors mention 'many related works require to store the full-precision activation map during the inference stage', some reference is necessary.\\n[-] Weak Motivation: The authors argue 'We analyze the behaviour of the full-precision neural network with ReLU activation' in the abstract. However, in Section 3, I cannot find any analysis. Only writing down the backward and forward cannot be called analysis. Initialization is different from the training dynamics. Assumptions and theorems should be highlighted. \\n[-] Poor writing: A lot of typos. Only in the last paragraph in Section 2, I find many typos, e.g. 'replaced replacing ReLU activation', 'any relaated works'.\", \"questions\": \"[.] In experiments, what structure is used for ResNet? ResNet-18-like or ResNet-110-like? (The results for these two kinds of structure are totally different for binary neural network, as the difference in the number of channels) \\n[.] In experiments, the performance of the baselines seems lower than related papers? Do the authors increase the number of channels in each layer as the other people do? It can improve the result a lot, and I wonder whether the improvement still exists in this setting.\\n[.] In experiments, only CIFAR10 results have been reported, but I wonder what is the error bar looks like? (Do the authors run the experiments several times and calculate the variance?)\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a method for bias initialization and shows that it improves training for BNN.\\n\\nI vote to reject the paper. Main points against are: (1) is no theory and very limited experiments (2) Bad writing.\", \"detailed_remarks\": [\"The level of english is not good enough all over the paper, example: \\\"It is more common to use low-bit quantized networks such as Binary Neural Networks (BNNs)\\u05f4 more common then what? (I also disagree on the scientific claim)\", \"The authors claim that XNOR nets and such have \\\"memory occupation is significantly larger than the pure 1-bit solution like the vanilla BNN.\\\". While this is true for training, it is not true for inference which is in many cases where one needs to use limited hardware.\", \"The paper main claim is the data equality and hyperplane equality are the main strengths of ReLU, but doesn't give any justification or even intuition into why this is the case. I am not convinced that these points are important, and the paper did nothing to try to persuade me.\", \"Data point equality shouldn't hold for ReLU networks with non-zero bias initialization as well.\", \"The experiments show promising results but only on cifar10 and only with the outdated BNN, also as a necessary baseline it would be important to show the effect of the bias initialization on ReLU networks.\", \"I believe the paper shows promising initial results but needs to strengthen them considerably. It also needs to improve the writing. A better justification for the method, even if it only at an intuitive level would help considerably.\"]}"
]
} |
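For context on the BNN record above: vanilla BNN training binarizes pre-activations with a sign function in the forward pass and back-propagates through a hard-tanh surrogate (the straight-through estimator), so a unit only receives gradient while its pre-activation lies in [-1, 1]. The sketch below is a minimal illustration of how a random, rather than zero, bias initialization shifts each unit's non-saturated region; the uniform(-0.5, 0.5) bias scale and the layer sizes are assumptions for the example, not the paper's prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

def sign_forward(pre):
    # Forward pass of the binary activation used in vanilla BNNs.
    return np.where(pre >= 0, 1.0, -1.0)

def ste_backward(pre, grad_out):
    # Straight-through estimator: use the derivative of hard tanh, so the
    # gradient passes only where the pre-activation lies in [-1, 1].
    return grad_out * (np.abs(pre) <= 1.0)

fan_in, width = 128, 64
W = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(width, fan_in))
b = rng.uniform(-0.5, 0.5, size=width)   # random (non-zero) bias init

x = rng.normal(size=fan_in)
pre = W @ x + b
out = sign_forward(pre)
grad_in = ste_backward(pre, np.ones(width))

# Units with |pre| > 1 are saturated and receive zero STE gradient.
print("non-saturated units:", int(grad_in.sum()), "of", width)
```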
B1xmOgrFPS | Meta-RCNN: Meta Learning for Few-Shot Object Detection | [
"Xiongwei Wu",
"Doyen Sahoo",
"Steven C. H. Hoi"
] | Despite significant advances in object detection in recent years, training effective detectors in a small data regime remains an open challenge. Labelling training data for object detection is extremely expensive, and there is a need to develop techniques that can generalize well from small amounts of labelled data. We investigate this problem of few-shot object detection, where a detector has access to only limited amounts of annotated data. Based on the recently evolving meta-learning principle, we propose a novel meta-learning framework for object detection named "Meta-RCNN", which learns the ability to perform few-shot detection via meta-learning. Specifically, Meta-RCNN learns an object detector in an episodic learning paradigm on the (meta) training data. This learning scheme helps acquire a prior which enables Meta-RCNN to do few-shot detection on novel tasks. Built on top of the Faster RCNN model, in Meta-RCNN, both the Region Proposal Network (RPN) and the object classification branch are meta-learned. The meta-trained RPN learns to provide class-specific proposals, while the object classifier learns to do few-shot classification. The novel loss objectives and learning strategy of Meta-RCNN can be trained in an end-to-end manner. We demonstrate the effectiveness of Meta-RCNN in addressing few-shot detection on the Pascal VOC dataset and achieve promising results. | [
"Few-shot detection",
"Meta-Learning",
"Object Detection"
] | Reject | https://openreview.net/pdf?id=B1xmOgrFPS | https://openreview.net/forum?id=B1xmOgrFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"F8FXr3itAs",
"Bkeg5r3sir",
"rJe_tNnijS",
"rygUTX2isH",
"ryg6pPtQ5r",
"HJeEHAb6tS",
"BygTZ2FdYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747963,
1573795207975,
1573794943771,
1573794749536,
1572210628559,
1571786299617,
1571490820891
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2392/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2392/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2392/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2392/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2392/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2392/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper develops a meta-learning approach for few-shot object detection. This paper is borderline and the reviewers are split. The problem is important, albeit somewhat specific to computer vision applications. The main concerns were that it was lacking a head-to-head comparison to RepMet and that it was missing important details (e.g. the image resolution was not clarified, nor was the paper updated to include the details). The authors suggested that the RepMet code was not available, but I was able to find the official code for RepMet via a simple Google search:\", \"https\": \"//github.com/jshtok/RepMet\\nReviewers also brought up concerns about an ICCV 2019 paper, though this should be considered as concurrent work, as it was not publicly available at the time of submission.\\nOverall, I think the paper is borderline. Given that many meta-learning papers compare on rather synthetic benchmarks, the study of a more realistic problem setting is refreshing. That said, it's unclear if the insights from this paper would transfer to other machine learning problem settings of interest to the ICLR community.\\nWith all of this in mind, the paper is slightly below the bar for acceptance at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for highlighting potential issues! We think they can be addressed.\", \"comment\": \"Thanks for the comments! We do agree with some of your concerns, but do think that most of the suggested issues are addressable.\\n\\n1. Implementation details with high-res images\\n\\nThanks for the suggestions, and we apologise for lack details.\\n\\nDuring meta-training, we train the model using 5-way-1shot tasks, and only 5 query images (1 query image per class). This results in a total of 10 images for one task. With this, implementing the meta-training is not too difficult. Using this trained model, we evaluate performance on various settings (e.g. 5-way 1-shot, and 5-way 5-shot) meta test tasks. We apologize for the lack of clarity in the first RPN - this is not a critical component, and just a minor trick we use to make the prototype more robust (instead of constructing prototypes of the support objects by directly using ground truth bounding boxes and labels, we also use the proposals generated by the RPN, and if it has sufficient overlap with the ground truth, it is used for constructing the prototype). The main contribution in RPN is the one that is trained to generate support(class)-specific proposals.\\n\\nWe will definitely release the code.\\n\\n\\n2 & 3. Regarding RepMet\\n\\nThanks for these comments regarding RepMet. \\ni) We have improved the presentation to not call it RepMet, but to call it FRCN-PN(baseline), and have changed the written section describing the relation of FRCN-PN with RepMet.\\nii) FRCN-PN shares a similar principle as RepMet (traditional detector training + replacing the object classifier with a meta-learner), and thus is a baseline we considered for our work.\\niii) We would have liked to reproduce RepMet and compare directly with the original method, however, we were not able to find the code for it. As a result, we decided to implement the method based on this principle ourselves as a baseline. \\niv) The code for RepMet: The arxiv version and the published version do not have a working link for the code being available online. We found a not well tested/incomplete version of the code ( https://github.com/HaydenFaulkner/pytorch.repmet ) done by a third party, which has not yet reproduced the results. \\n\\n\\n4. Comparison with Meta-RCNN in ICCV2019\\n\\nThanks for suggesting the reference. We believe this work was done in parallel with our work. We would like to highlight that this work was made available on arxiv (28th September) a few days after the ICLR submission deadline (25th September). Moreover, it appeared in ICCV even more recently (27th October).\\n\\nThis work does share some similarities as our work (principle of class-attentive module), however, there is a fundamental difference in the training approach, specifically for the RPN. In contrast to the reference paper, our RPN is meta-trained and is tailored to generate proposals for the few-shot setting. \\n\\nWe train the RPN in the meta-learning paradigm (meta-RPN), whereas the RPN training in the ICCV paper is trained using the traditional setting. This difference is extremely crucial for few-shot detection. Traditional RPNs will detect all objects in the image (including objects not of interest, i.e., they will even detect objects that are not available in given support set). Our meta-trained RPN generates proposals for an object from classes only belonging to the support set (i.e., it generates class-specific proposals). 
\\n\\nFinally, we would also like to highlight that following the meta-learning literature, we have evaluated the performance of the object detector on \\u201cmultiple\\u201d few-shot detection tasks. Our reported few-shot performance is average performance over these tasks, in contrast to the existing reference which evaluates result on exactly one few-shot task.\\n\\nAs regards empirical comparisons, it would be slightly time consuming to do this given different settings (e.g. hyperparameters, backbone, data splits, different approach for using the meta-train dataset, etc.). We do aim to do this in the future.\"}",
"{\"title\": \"We offer clarifications on why the experiment setting is fair\", \"comment\": \"Thanks for the comments! We agree with your concerns, and would like to offer clarifications for a clearer understanding.\\n\\nTo do a novel few-shot detection task, a prior needs to be acquired from some base data (e.g. meta train data in our case). To acquire this prior, we can follow two approaches: 1) Train a traditional model (e.g. a detector or classifier), and then fine tune on the novel few-shot task; OR 2) Acquire a prior via meta-learning on the base data, and learn a model that is trained to do few-shot learning. \\n\\nLSTD follows the first paradigm, while our proposed Meta-RCNN follows the second paradigm. Note that both methods have access to the exact same base data, i.e., they have access to the same information. They differ only in the learning algorithm. Then, a novel few-shot task is given to the algorithm, and the algorithm makes the prediction.\\n\\nSince both models have access to the same information, and make predictions on the same few-shot test task, the comparison is fair.\\n\\nData Split Difference\\nMeta-learning literature (Vinyals et al. 2016, Finn et al. 2017, Snell et al. 2017, etc.) evaluates few-shot performance over multiple tasks drawn from a test task distribution, i.e., the few-shot performance is measured and averaged over multiple tasks. This is a more reliable metric than evaluating performance on only one few-shot task. LSTD data split considers evaluation on only one few-shot task in their data split. We train LSTD on appropriate base data, and then evaluate its performance over multiple tasks, and compare this performance with our method.\"}",
"{\"title\": \"Thanks for the positive comments and the concerns!\", \"comment\": \"Thank you for your review! We were delighted with your comments!\\n\\nAs regards novelty, we would like to highlight that it is not trivial to adapt meta-learning for object detection, and to the best of our knowledge, ours is the first work that trains both the object classifier and the RPN in a meta-learning paradigm, making all the components tailored for few-shot detection.\\n\\nThanks for identifying the writing issues, we have fixed them in the current version.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper is about the task of object detection in the setting of few-shots dataset. The problem is addressed in the learning scheme of meta-learning paradigm: the proposed meta-rcnn trains the popular faster-rcnn on several tasks of few shots object detection while the RPN and the object classification networks are meta-learned among the tasks. Compared to previous work the paper introduces the meta learning framework and several changes to the faster rcnn detector. A prototype representation is derived from the standard RPN network and its proposed bounding box. An attention mechanism choose the object of interest and is used to train the final RPN and classification network. Experiments on the popular Pascal Voc 2007 and ImageNet-FSOD show that the proposed system have state of the art performance.\\n\\nThe paper is very well written, easy to read and of excellent presentation. The introduction of the meta learning paradigm and its use to learn the RPN and classification networks are incremental in novelty but interesting. The experiments are solid and show state of the art performance. As a result I recommend this paper to be accepted.\", \"minor_issues\": [\"in caption of Fig1: avialable -> available\", \"in 4.1: \\u201cCompared to other variants...\\u201d please add a reference to the specific methods you are comparing to.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a method for few-shot object detection (FSOD), a variant of few-shot learning (FSL) where using a support set of few training images for novel categories (usually 1 or 5) not only the correct category labels are predicted on the query images, but also the object instances from the novel categories are localized and their bounding boxes are predicted. The method proposes a network architecture where the sliding window features that enter the RPN are first attenuated using support classes prototypes discovered using (a different?) RPN and found as matching to the few provided box annotations on the support images. The attenuation is by channel wise multiplication of the feature map and concatenation of the resulting feature maps (one per support class). After the RPN, ROI-pooling is applied on the concatenated feature map that is reduced using 1x1 convolution and original feature map (before attenuation) being added to the result. Following this a two FC layer classifier is fine-tuned on the support data to form the final\\nRCNN head of the few-shot detector. The whole network is claimed to be meta-trained end to end following COCO or ImageNet (LOC? DET?) pre-training. The method is tested on a split of PASCAL VOC07 into two sets of 10 categories, one for meta-training and the other for meta-testing. In addition, experiments are carried out on ImageNet-LOC animals subset. In both cases, the result are compared to some baselines, and some prior work.\\n\\nAlthough FSOD is an important emerging problem, and advances on it are very important, I believe there are still certain gaps in the current paper that need to be fixed before it is accepted. Specifically:\\n\\n1. Some important details are missing from the description. For example, detectors are usually trained on high resolution images (e.g. 1000 x 1000) and hence are problematic to train with large batches, yet in the proposed approach it is claimed that the proposed model is meta-trained with batch size 5 on 5 way tasks with 10 queries each, so even in 1-shot case, does it mean that 5 x 15 = 75 high resolution images enter the GPU at each batch? I doubt that even in parallel mode with 5 GPUs and 15 high res image per GPU it is possible for claimed backbone architectures (ResNet-50 and VGG16).\\nAs another example, the details of fine-tuning during meta-training seem to be left out, is the model optimized with an inner loop? Details of the RPN that is used to select the support categories prototypes are not specified, where it comes from and how is it trained (clearly as the \\\"main\\\" RPN relies on attenuated features, it cannot be it)? Some additional technical details are not very clear and hinder the reproducibility of the paper (no code seem to be promised?), in general I suggest the authors to improve the writing and clarity of the paper.\\n\\n2. In VOC07 experiment, FRCN-PN is very vaguely described and being claimed that it stands for RepMet (Karlinksy et al., CVPR 2019). 
It is not clear what it is and its training procedure on VOC07 is not clearly described.\\nIt is also claimed in ImageNet experiment that the real RepMet is \\\"more carefully designed then FRCN-PN\\\" and has a better backbone, hence it is not clear why FRCN-PN should stand for it.\\nI suggest the authors to either do a direct comparison or remove their claim of comparison.\\n\\n3. RepMet paper has proposed an additional benchmark on ImageNet-LOC with 5-way 1/5/10-shot episodes, and afaik it is reproducible as its code is released, so I am wondering as to why it was not used for \\nevaluation given that the authors made the effort of reproducing another ImageNet-LOC test on the same categories? It should be evaluated for a fair comparison.\\n\\n4. Although they don't strictly have to compare to it, I am wondering if the authors would be willing to relate to a similar approach that was proposed for the upcoming ICCV 19: \\n\\\"Meta R-CNN : Towards General Solver for Instance-level Low-shot Learning\\\", by Yan et al. Their approach is more similar to RepMet in a sense that the meta-learning is done in the classifier head,\\nand better results are reported on VOC07 benchmark (and except for 1-shot, higher results are reported for the 3 and 5 shot FRCNN fine-tuning).\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, authors propose a meta-learning based approach for low-shot object detection. Specifically, they use prototype in the support set as attention guidance, and learn the category-specific representation for each query image. Subsequently, they use the style of Faster RCNN for object detection.\\n\\nIt is an OK paper with good structure. The idea is somewhat novel, in terms of meta-learning based low-shot detection framework. My main concern is about experiment. First, the data setting is branch new. Why not use the data setting in the literature, e.g., COCO to VOC in LSTD (Chen et al., 2018)? As a result, how to make a fair comparison bothers me a little. Furthermore, LSTD is a non-episodic approach. How to make it in a meta-learning way? Please clarify the implementation details for all other related works in the comparison.\"}"
]
} |
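To make the architecture summarized in Review #3 above concrete: the method builds per-class prototypes from support features, attenuates the query feature map channel-wise with each prototype, and concatenates the results before the (meta-trained) RPN. The sketch below is a hedged rendering of that attention step only, with random arrays standing in for backbone and ROI-pooled support features; the tensor sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_way, C, H, W = 5, 256, 32, 32   # support classes and feature-map size

# Stand-ins: query-image backbone features and per-class prototypes that the
# method would build from ROI-pooled support features.
feat = rng.normal(size=(C, H, W))
prototypes = rng.normal(size=(n_way, C))

# Class-attentive maps: channel-wise multiplication of the query features by
# each support-class prototype, concatenated along the channel axis.
attended = np.concatenate(
    [feat * prototypes[k][:, None, None] for k in range(n_way)], axis=0)

print(attended.shape)  # (n_way * C, H, W) -> (1280, 32, 32)
```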
SJeQdeBtwB | Adversarially learned anomaly detection for time series data | [
"Alexander Geiger",
"Alfredo Cuesta-Infante",
"Kalyan Veeramachaneni"
] | Anomaly detection in time series data is an important topic in many domains. However, time series are known to be particularly hard to analyze. Based on the recent developments in adversarially learned models, we propose a new approach for anomaly detection in time series data. We build upon the idea of using a combination of a reconstruction error and the output of a Critic network. To this end, we propose a cycle-consistent GAN architecture for sequential data and a new way of measuring the reconstruction error. We then show in a detailed evaluation how the different parts of our model contribute to the final anomaly score and demonstrate how the method improves the results on several data sets. We also compare our model to other baseline anomaly detection methods to verify its performance. | [
"anomaly detection",
"gan"
] | Reject | https://openreview.net/pdf?id=SJeQdeBtwB | https://openreview.net/forum?id=SJeQdeBtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DiprJvmxlN",
"BJe1BfZoor",
"SJxVCg-siB",
"ryeOhk-osr",
"Sked-RZaFr",
"r1xrGN9sFr",
"ryevebv_Yr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747933,
1573749302632,
1573748940013,
1573748655822,
1571786239880,
1571689484901,
1571479790715
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2391/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2391/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2391/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2391/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2391/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2391/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a cycle-consistent GAN architecture with measuring the reconstruction error of time series for anomaly detection.\\n\\nThe paper aims to address an important problem, but the current version is not ready for publication. We suggest the authors consider the following aspects for improving the paper:\\n1. The novelty of the proposed model: motivate the design choices and compare them with state-of-art methods\\n2. Evaluation: formalize the target anomalies and identify datasets/examples where the proposed model can significantly outperform existing solutions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": [\"Thank you very much for your detailed comments. We updated the draft and included many of the mentioned points.\", \"In particular, we want to respond to some of the points that were mentioned:\", \"While the GAN architecture itself is not very novel, we want to emphasize that this is not our main contribution and we never claimed that it is a novel approach. However, we have adopted the architecture and applied it to time series signals, which is a rather novel method as far as we know. Furthermore, we introduce two similarity measures instead of a point-wise reconstruction error and show it can improve the scores and we demonstrate in much detail how a combination of the Critic and the reconstruction error can give much more robust anomaly detection scores.\", \"We have added some comments to the paper that try to clarify some of the design choices. And while the used similarity measures are unusual, we want to emphasise that this is exactly the claim of the paper, as we not only introduced those in the time series anomaly detection segment, but also show how these measures can improve the anomaly detection scores (see section 5.4.1 for the in-depth analysis). We are also not aware of any scenarios where this choice will give much worse scores than a standard L1 or L2 norm, therefore we would be happy if these pitfalls would be pointed out to us. We rather think that the experiments are actually a good indication that the proposed measures work very well in practice.\", \"We have rewritten parts of the experiments section such that it becomes clearer what the individual steps are. We also added a short description of the baseline methods in the appendix.\", \"We are not completely sure what is meant by \\u2018flagging the entire sequence as anomalous\\u2019 is the safe approach. Once we computed the anomaly scores for each time step, we try to identify any outliers by statistical analysis. If we find outliers, we flag them as an anomaly based on the location. We are currently not sure what benefit it would have to flag the entire sequence. We have added some more explanations to the corresponding section of the paper and hope that this clarifies any open points.\", \"We compare our method conceptually with Li et al\\u2019s approach in the related work section. Since we modified the structure to a CycleGAN, but the proposed ideas are similar when it comes to the use of the Critic and the point-wise reconstruction error, we did not include this method in our benchmarks since our evaluation in section 5.4.1 already includes these features.\", \"Thank you for pointing out the additional typos and citation errors, we fixed them in the revised draft.\"]}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you very much for the in-depth response. We have updated our draft and included many of the mentioned points.\\nSpecifically, we have added a small description of the baseline methods and marked the ground truth anomalies in the plots.\", \"we_also_want_to_address_some_specific_issues_that_were_pointed_out\": \"At first we would like to clarify some things about the novelty of our method. We think our approach is slightly more novel than mentioned by the reviewer, since it\\u2019s the first use of a CycleGAN in the time series domain as well it\\u2019s the first use of other similarity measures as a reconstruction error. We are not aware of any work that uses DTW in the sense of a reconstruction error. Also, there is no other work known to us that investigates the performance of these different reconstruction errors in this depth (see section 5.4.1 where we discuss the different reconstruction methods in detail and how our proposed similarity measures outperform the point-wise error in the experiments).\\n\\nFurthermore, we did not include AnoGAN and ADGAN in our experiments as they are GAN based anomaly detection methods for a different domain and not necessarily time series. Therefore, we consider our model as an adaption of these methods to the time series domain and only compare our model to other time series anomaly detection methods.\\n\\nWe also think that the experiments of the paper are enough to draw conclusions since they provide a good comparison between the methods on a large number of different signals and it becomes clear that the proposed similarity measures as well as our combination with the Critic are performing very well. As we already mentioned in our response to reviewer #3, we want to emphasise that labeled time series anomaly detection data sets are rare to find and we want to point out that many of the related work also uses only a very limited number of data sets (e.g. Li et al., Zhou et al. (2019) and Hundman et al. (2018)).\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you very much for the thorough feedback. We will try to address the mentioned points from our point of view and hope that this clarifies some issues.\\n\\n1. While it is true that the paper is not going into the details when it comes to the Critic output and the justification behind using it as an anomaly score, we see in the experiments that the output of the Critic can practically be used as an anomaly score, since the anomalous regions receive different scores than the normal regions. This is based on the same intuition that was already used in some of the cited previous papers which use GANs as an anomaly measure (e.g. Li et al (2018)). Basically, the Critic is trained to separate the generated time series sequences from the real ones, meaning it learns to assign scores of how real a time series sequence is. Using this trained Critic, we can detect anomalies since it will assign different scores to those sequences compared to the normal ones as shown in related literature and in our experiments.\\nRegarding the work of Chang et al. (2019), we are not sure at the moment how much it actually relates to our work since they train an \\u2018auxiliary generative model\\u2019 as emphasised in their appendix, which has some different constraints. We will look into this paper in more detail and try to identify any overlaps with our work.\\nLastly, we do not use early stopping in our current implementation and we are not sure if this would help in our case since the model is currently already performing as expected and provides quite convincing scores as shown in the experiments.\\n\\n2. It is also true that the number of data sets is rather limited in this paper. However, in our opinion the experiments already show very convincing results by reaching better scores in the majority of our experiments. It is also the case that labeled time series anomaly detection data sets are rare to find and we want to point out that many of the related work also use only a very limited number of data sets (e.g. Li et al., Zhou et al. (2019) and Hundman et al. (2018)).\\nIn addition, it is also confirmed in the plots how our method is improving the anomaly scores by increasing the distance between the scores of anomalies and non-anomalies, leading to much more robust anomaly scores. \\nRegarding the large number of combinations, we wanted to give a comprehensive overview of how the different similarity measures perform on the data sets. The main result of this is that the point-wise error, which is used in many previous papers (e.g. Hundman et al., 2018; Malhotra et al., 2015, Li et at., 2018), might not be the best option in most cases and we show how two different similarity measures can improve the scores (see section 5.4.1 where we discuss the different reconstruction methods in detail and how our proposed similarity measures outperform the point-wise error in the experiments).\\n\\n3. Thank you for pointing it out, we have improved the related literature section accordingly.\\n\\nIn summary, we think that our approach is justified through previous work and the results of the experiments. Even though there are not more data sets used in this work, we think that the results are already conclusive in the sense that they provide solid results and a comprehensible evaluation of different methods. 
They show how similarity measures can be used in time series anomaly detection to give much better reconstruction errors, as well as how they can be combined with a Critic to give robust overall anomaly scores.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: The paper propose a cycle gan variants combined with RNN for time series anomaly detection. The setting is assuming training model on a given normal data, then applying the trained model to detect anomalies. Different detecting criteria are studied in the experiments.\\n\\n1. It is not clear the advantage of using GANs in anomaly detection of the proposed algorithm, and it is questionable if using GANs is really useful. The authors only provide hand-waving explanation. For example, the papers says \\\" Therefore, once the Critic is trained, it should assign more or less stable scores to the normal sequences and a significantly different score to an anomalous sequence.\\\" in p5. I'm doubt if it is true. The GAN objective only says the score on x~p(x) and g(z) are similar. Also, the score for normal samples can also have a distribution since we only do mean matching in W-1 distance. I think a stronger analysis with certain assumptions is necessary for making this statement and justifying the proposed algorithm. Some possible route can be found in \\n\\nChang et al., Kernel Change-Point Detection with Auxiliary Deep Generative Models, ICLR 2019. \\n\\nAlthough their setting is slightly different, where they focus on change point detection. They provide a testing power lower bound explanation of using the critic of GANs, but they require some early stopping. Otherwise the guarantee won't hold. I'm wondering if the analysis can be extended to here. Also, if the proposed algorithm requires the early stopping or not? If not, why? \\n\\n2. The experiments are not conclusive. The simple LSTM predictions beats the proposed algorithms in some cases. Given there are only two datasets, it's hard to say if the proposed algorithm is really better. Also, the author proposed so many combinations, it is also not clear which one should be favored based on the table. \\n\\n3. Many related works are missing. In addition to Chang el al (2019) mentioned above, there are many related works of using GANs in time series detection problem, e.g. BeatGAN: Anomalous Rhythm Detection using Adversarially Generated Time Series, and many others. \\n\\n\\nTo summarize, the motivation and the advantage of using GANs is not well justified in addition to experiments. Also, the authors only study two datasets, and the results are not very conclusive.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a GAN model with cycle-consistent loss function for anomaly detection on timeseries. The loss function is the combination of reconstruction error (L2 norm) and both discriminators (generator and decoder) loss. Once trained, the anomaly score is computed as the mean-var-normalized product of the reconstruction score and the discriminator score. The framework is adapted to timeseries by using LSTM networks for the generator and the encoder and 1D CNN for the discriminators (critics). For the calculation of the reconstruction error, the authors study three approaches: point-wise, dynamic time warping and area difference. The method is applied to two datasets (NASA Spacecraft telemetry, Yahoo traffic) and is compared to simple baselines (LSTM, ARIMA, DenseAE). The results shows an improvement over the baselines. Variations of the models are also studied (with and without critic, different similarity measures for the reconstruction errors). The results show that the discrimination score (critic) does overall provide an improvement over the reconstruction error alone. And for the reconstruction score, the area differences is best, followed by DTW and point-wise.\", \"pros\": [\"A somewhat novel approach to anomaly detection in timeseries is proposed that combines GAN discriminator score with a simple area-based similarity measure for the reconstruction error.\", \"A solid adaptation of CycleGAN to timeseries.\", \"The simple area-based similarity measure is interesting and seems to be a good match for timeseries reconstruction as slight shifts are not penalized.\", \"The paper is well written, well organized and easy to follow.\", \"The technical content is sound and the math correct.\"], \"cons\": [\"not really novel (CycleGAN for timeseries) nor DTW as reconstruction error.\", \"no comparison to State-of-the-art GAN models for anomaly detection, such as AnoGAN and ADGAN.\", \"Too few datasets are used (2) for the experiments, making it hard to draw conclusions as to which variation is better.\", \"Baseline approaches are just mentioned by name. A minimal description would be desirable.\", \"The ground-truth anomalies and the anomaly threshold are not marked in the plots, making it hard to evaluate them.\", \"Overall the paper proposes a method that improves over baseline but is not compared to other GAN-based SOT models. The novelty is not very high as the main architecture is from CycleGAN and the proposed similarity measures for the reconstruction error are not really novel.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper trains a GAN on univariate time series data and uses reconstruction errors in combination with the critic's output to predict anomalous subsequences. The method is applied on two real-world data sets and compared to three simple baselines.\", \"i_have_several_reservations_about_this_manuscript\": [\"The methodology isn't very original: The GAN architecture is essentially a slightly modified CycleGAN trained with Wasserstein loss.\", \"Many design choices appear to be ad-hoc: there has been no principled selection of the GAN's hyperparameters, nor has their effect on the experimental results been studied. For computing the reconstruction error the authors use the integral over the difference between two time series without taking the absolute value, which is a very unusual choice. (Standard choices would be e.g. L1 or L2 norm.) While this turns out to have worked \\\"surprisingly well\\\" on the studied datasets, it is not difficult to construct scenarios where this choice will fail.\", \"Another design choice that should be discussed in more detail is the smoothing of the time series data using a moving average filter; in the context of anomaly detection this can have a significant impact, and it may not work equally well for different data sets, so a principled approach for determining the level of filtering is paramount. Same goes for the de-trending, and the actual parameters both of the moving averages and the de-trending functions should be reported.\", \"It is not clear to me how exactly the anomaly scores (supposedly for different sequence lengths l?) are used to predict the subsequence(s) containing an anomaly. It seems there is no incentive to keep the predicted subsequences as short as possible, i.e. if the anomaly score indicates there might be an anomaly present, then the safe approach is to just flag the entire sequence as anomalous (it doesn't hurt the sensitivity they way it's computed)?\", \"The description of the experiments requires more detail. Are both datasets labelled? After dividing the datasets into rolling chunks of length 100, how many samples do the training and test sets contain? How many of the test set samples contain anomalies?\", \"The baselines need to be described in more detail. What method was used, e.g. to select and fit the ARIMA models? What type of reconstruction loss was used? How were the anomalous subsequences predicted?\", \"Is there any way to compare the proposed method with Li et al's (who also used a GAN for anomaly detection in time series data)?\"], \"detailed_comments\": [\"abstract: \\\"particular hard\\\" -> \\\"particularly hard\\\"\", \"p.3, paragraph starting with \\\"To support...\\\": shouldn't this be \\\"E(x) with x ~ P_X\\\"?\", \"p.4 and p.6: broken citations \\\"?\\\"\", \"------------------\", \"I acknowledge that I have read the authors' response, but it doesn't change my assessment that several major revisions are needed:\", \"further motivating the design choices and comparing them to the state-of-the-art;\", \"formalizing (ideally providing a model for) the sort of anomalies that the proposed method aims to detect;\", \"discussing limitations of the proposed approach.\"]}"
]
} |
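As a concrete rendering of the scoring pipeline debated in the record above: Review #2 summarizes the anomaly score as the mean-variance-normalized product of a reconstruction error and the critic output, and the responses describe the windowed area difference as the best-performing reconstruction measure. The sketch below is a hedged reconstruction of that computation only; the window length, the injected anomaly, and the random stand-in critic scores are assumptions, and a real run would use the trained encoder/decoder and critic.

```python
import numpy as np

rng = np.random.default_rng(0)

def area_error(x, x_hat, window=10):
    # Signed sum of (x - x_hat) over sliding windows, then absolute value:
    # a discrete stand-in for the "area difference" reconstruction error.
    d = x - x_hat
    return np.abs(np.array([d[i:i + window].sum()
                            for i in range(len(x) - window + 1)]))

def zscore(s):
    return (s - s.mean()) / (s.std() + 1e-8)

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
x[1200:1250] += 1.5                      # injected anomalous segment
x_hat = np.sin(t)                        # stand-in for the GAN reconstruction

rec = area_error(x, x_hat)
critic = rng.normal(size=rec.shape)      # stand-in for per-window critic output

score = zscore(rec) * zscore(critic)     # normalized product, as in Review #2
print("highest-scoring window starts at index", int(np.argmax(score)))
```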
rkgfdeBYvH | Effect of Activation Functions on the Training of Overparametrized Neural Nets | [
"Abhishek Panigrahi",
"Abhishek Shetty",
"Navin Goyal"
] | It is well-known that overparametrized neural networks trained using gradient based methods quickly achieve small training error with appropriate hyperparameter settings. Recent papers have proved this statement theoretically for highly overparametrized networks under reasonable assumptions. These results either assume that the activation function is ReLU or they depend on the minimum eigenvalue of a certain Gram matrix. In the latter case, existing works only prove that this minimum eigenvalue is non-zero and do not provide quantitative bounds which require that this eigenvalue be large. Empirically, a number of alternative activation functions have been proposed which tend to perform better than ReLU at least in some settings but no clear understanding has emerged. This state of affairs underscores the importance of theoretically understanding the impact of activation functions on training. In the present paper, we provide theoretical results about the effect of activation function on the training of highly overparametrized 2-layer neural networks. A crucial property that governs the performance of an activation is whether or not it is smooth:
• For non-smooth activations such as ReLU, SELU, ELU, which are not smooth because there is a point where either the first order or second order derivative is discontinuous, all eigenvalues of the associated Gram matrix are large under minimal assumptions on the data.
• For smooth activations such as tanh, swish, polynomial, which have derivatives of all orders at all points, the situation is more complex: if the subspace spanned by the data has small dimension then the minimum eigenvalue of the Gram matrix can be small leading to slow training. But if the dimension is large and the data satisfies another mild condition, then the eigenvalues are large. If we allow deep networks, then the small data dimension is not a limitation provided that the depth is sufficient.
We discuss a number of extensions and applications of these results. | [
"activation functions",
"deep learning theory",
"neural networks"
] | Accept (Poster) | https://openreview.net/pdf?id=rkgfdeBYvH | https://openreview.net/forum?id=rkgfdeBYvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"GwTubrGbT",
"rkl1-KdEsH",
"H1gD4vdVsH",
"HyeTpizhqr",
"r1g6d9LCFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798747903,
1573320950949,
1573320495355,
1572772805286,
1571871349498
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2390/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2390/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2390/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2390/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The article studies the role of the activation function in learning of 2 layer overparaemtrized networks, presenting results on the minimum eigenvalues of the Gram matrix that appears in this type of analysis and which controls the rate of convergence. The article makes numerous observations contributing to the development of principles for the design of activation functions and a better understanding of an active area of investigation as is convergence in overparametrized nets. The reviewers were generally positive about this article.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Dear reviewer,\\n\\nThank you very much for your constructive review and for your time and effort. \\nWe have uploaded a revised version.\\n\\nResponses to suggestions/comments:\\n\\n1. We have changed the statement of Theorem 4.7 to be the informal statement of Cor. J.4.2 (Cor. I.4.2 in the revised version). We have dropped references to results whose informal statements do not appear in the main paper. \\n\\n2. If the dimension of span of input is of the order n^{\\\\gamma}, the lower-bound from Corollary J.4.2 (Cor. I.4.2 in the revised version) is approximately n^{-2/\\\\gamma}, while the upper bound from Theorem F.9 is approximately (n^2 e^{-1/\\\\gamma}). We will improve theorem statements to make comparisons easy. \\n\\n3. We agree. Please see the first paragraph of our response to Reviewer #4.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Dear reviewer,\\n\\nThank you very much for your constructive review and for your time and effort. \\n\\nWe share your concern about the appendix being too long. We have added a table of contents (at the beginning of the appendix) to make it easier to navigate. For better organization, we have slightly changed the order of some sections (previous Sec. I (LOWER BOUND ON LOWEST EIGENVALUE FOR NON-SMOOTH FUNCTIONS) is now Sec. K) and improved section names. We are sure there are many more ways of further improving the readability of the paper (e.g. by adding more explanations in some proofs) and are working on it. \\n\\nHaving said that, it seems to us that your specific concerns are already addressed in the paper as we now explain. We think a lengthy appendix is unavoidable largely due to the nature of the results; but perhaps the length issue is mitigated somewhat as we have tried to make the paper easy to navigate (e.g., the main paper provides a roadmap of the appendix). For focused presentation, we have highlighted the simplest case of one-hidden layer networks where only the input layer is trained so that the main ideas are clear. Large part of the paper is confined to this case. Section 5 on extensions provides a list of extensions along with pointers to the sections in the appendix for the results proven in the paper and mentions some others that are not proven. Please correct us if we have misunderstood your concerns. We welcome any further suggestions that might help improve readability further.\\n\\nIt is perhaps relevant to mention here that another reason for the length of the paper is that reviewers of a previous shorter version of this submission asked for proofs of extensions and related results that we mentioned without proof. While the extension to SGD and one or two other results turn out to be straightforward adaptation of previous work, others are more substantive and together try to present a reasonably complete picture. \\n\\nWe have uploaded a revised version.\", \"responses_to_minor_concerns\": \"(i) This was an error on our part. DZXP should have been DZPS after the authors of Du et al. 2019a [1]. Fixed in the revised version.\\n\\n(ii) We have made the suggested change in the revised version.\\u00a0\\n\\n(iii) Our Assumptions 1 and 2, while not identical to those of Allen-Zhu et al. [2], are essentially equivalent. Our Assumption 2 is about the lower bound on angles between the data points, and Assumption 2.1 in Allen-Zhu et al. [2] is about the lower bound on the distance between data points. The requirement of the final coordinate being 1/sqrt(2) in footnote 5, page 4 of their paper, along with their Assumption 2.1 leads to the lower bound on angles. Thus our statements make the role of angles more explicit. We have added some clarification in our paper. \\n\\n(iv) We agree and have amended the wording in the new version from \\\"does not evolve from its initial value\\\" to \\\"remains close to its initial value\\\". Your interpretation is correct. \\n \\nIf our responses satisfactorily address the weaknesses you mentioned, we hope that you would consider raising your score.\\n\\n\\n[1] Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In ICLR, 2019.\\n[2] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. arXiv preprint arXiv:1811.03962 (2018).\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary: The authors of the paper examine how different activation functions affect training of overparametrized neural networks. They do their analysis in a general way such that it includes most activation functions such as ReLU, swish, tanh, polynomial, etc.\\n\\nThe main point of their analysis is that they examine a matrix called the G-matrix which is described in equation (2), and this (positive semi-definite) G-matrix can determine the rate of convergence to zero of the training error. Namely, the minimum eigenvalue of the G-matrix is inversely proportional to the time required to reach a desired amount of error (Theorem 3.1 (Theorem 4.1 from Du et al. (2019a)).\", \"the_main_results_separate_into_two_cases\": \"(1) activation functions with a kink (i.e. if the activation function is NOT in C^{r+1}, the space of r+1 continuously differentiable functions, for some finite r) and (2) smooth activation functions.\\n\\nIn the first case, the authors show that the minimum eigenvalue of the G-matrix is large, i.e. bounded away from zero after a few assumptions.\\n\\nIn the second case, the authors show that polynomial activations have many zero eigenvalues, and sufficiently smooth activations such as tanh or swish have many small eigenvalues, if the dimension of the span of data is sufficiently small.\\n\\nThe author\\u2019s initial problem setup works on a one-hidden-layer neural network where only the input layer is trained, but provide some extensions in the appendix.\", \"the_authors_also_provide_some_empirical_experiments\": \"one synthetic data, and on CIFAR10. The synthetic data experiments agreed with theory, but the experiment on CIFAR10 did have some gap between theory and experiment, although the CIFAR10 with ReLU experiment agreed with theory.\", \"stengths\": \"I appreciate the author\\u2019s effort in providing needed theoretical analysis on how activation affects training error for deep neural networks. The authors also provide an extensive appendix that provide seemingly full proofs of the theorems (although this reviewer did not go into detail for most of the appendix). The authors also provide experiments that confirm the theory and also provide examples highlighting the gap (which this reviewer sees as a strength).\", \"weaknesses\": \"A clear weakness of this paper is that the appendix is too long. The authors do provide a proof sketch of the main results and refer to the appendix, but I would have liked to have seen a more focused paper. Having extensions of the main results is nice, but it\\u2019s sometimes unclear what is being extended. Perhaps a list of extension would make clear what\\u2019s in the appendix.\", \"other_comments\": \"(i) I\\u2019d like to an explanation to why it\\u2019s called the DZXP setting.\\n(ii) On page 4, when explaining the M matrix, I think it should be grad_W F (instead of L)\\n\\n(iii) On page 4, I think there is more to assumption 1, after looking at Allen-Zhu et al. (2019) (https://arxiv.org/pdf/1811.03962.pdf page 4, footnote 5)\\n\\n(iv) the wording from page 4, \\u201cthe matrix G^(t) does not evolve from its initial value G^(0)\\u201d is a bit awkward. 
Do you mean that G^(t) does not change much from G^(0), and that as t goes to infinity G^(t) goes to G^(0)?\"}",
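For readers following the G-matrix discussion above, here is a minimal numpy sketch of the finite-width Gram matrix in the one-hidden-layer ReLU setting where only the input layer is trained (the setting highlighted in the authors' response). The normalisation follows the standard form in Du et al. (2019a); the paper's exact equation (2) is not reproduced in this thread, so treat the constants as illustrative.

```python
import numpy as np

def g_matrix_relu(X, W):
    """Finite-width G-matrix for a one-hidden-layer ReLU network:
    G_ij = (x_i . x_j) / m * sum_r 1{w_r.x_i >= 0} 1{w_r.x_j >= 0}."""
    A = (X @ W.T >= 0).astype(float)        # n x m activation patterns
    return (X @ X.T) * (A @ A.T) / W.shape[0]

rng = np.random.default_rng(0)
n, d, m = 50, 10, 4096                      # samples, input dim, width
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
W = rng.normal(size=(m, d))                 # random initialisation
lam_min = np.linalg.eigvalsh(g_matrix_relu(X, W)).min()
print("minimum eigenvalue:", lam_min)       # bounded away from 0 for ReLU
```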
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper aims to characterize for different activation functions the minimum eigenvalue of a certain gram matrix that is crucial to the convergence rate for training over-parameterized neural networks in the lazy regime (small learning rate). On this front, the paper shows that for non-smooth activations the minimum eigenvalue is large under separation assumptions on the data improving on prior work. However, for smooth activations the authors show that the minimum eigenvalue can be exponentially small (or even 0 in case of polynomials) if the data has low-dimensional structure or the network is sufficiently deep. The authors experimentally validate the observations on synthetic data.\\n\\nOverall, I vote to accept the paper. The paper does a thorough theoretical study of the behavior of the eigenvalues of the matrix corresponding to NTK which is crucial to the NTK analysis. The paper successfully makes the case for non-smooth activations versus smooth activations in the lazy regime. The authors use polynomial approximations and low-dimensionality in an interesting way to show an upper bound on the min eigenvalue for activations approximable by sufficiently low-degree polynomials. The paper is well written, self-contained and well-referenced.\\n\\nSuggestions/Comments:\\n1. Please avoid referencing theorems in the appendix that do not have informal statements in the main paper. For example \\\"We sketch the proofs of Theorem J.3, Theorem J.4 and Corollary J.4.1 showing that our results about the limitations of smooth activations are essentially tight when the data is smoothed.\\\": Theorems/Corollary are not mentioned in the main text.\\n2. In real data as the authors point out, the dimension of the data is much larger than log n. In the setting of dimension being greater than log n, could you discuss how far the lower-bound from Theorem J.4.2 is from the upper bound from Theorem F.9. It would be useful to write it in similar notation.\\n3. The paper, in its current form, is long and probably hard for a general audience to parse. It would be useful to organize the appendix to emphasize the main techniques and ideas.\"}"
]
} |
H1xzdlStvB | Multi-Precision Policy Enforced Training (MuPPET) : A precision-switching strategy for quantised fixed-point training of CNNs | [
"Aditya Rajagopal",
"Diederik A. Vink",
"Stylianos I. Venieris",
"Christos-Savvas Bouganis"
] | Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity, one approach to reducing training time is the use of low-precision data representation and computations during the training stage. However, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach employing two different precision levels, FP32 (32-bit floating-point precision) and FP16/FP8 (16-/8-bit floating-point precision), leveraging the hardware support of recent GPU architectures for FP16 operations to obtain performance gains. This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions including low-precision fixed-point representations. The training strategy, named MuPPET, combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition between different precisions. Overall, the proposed strategy tailors the training process to the capabilities of the utilised hardware architecture and yields improvements in training time and energy efficiency compared to state-of-the-art approaches. Applying MuPPET on the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, the proposed method achieves the same accuracy as standard full-precision training with an average training-time speedup of 1.28× across the networks. | [
"training",
"muppet",
"strategy",
"cnns",
"policy",
"quantised",
"networks",
"training time",
"use",
"precision"
] | Reject | https://openreview.net/pdf?id=H1xzdlStvB | https://openreview.net/forum?id=H1xzdlStvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"VHxYD5BXGe",
"Hkee75Q3jr",
"rkgKa_mhsB",
"rkgs7O7hsS",
"r1xFvatycr",
"BJeQTHBTtr",
"r1gJesMatS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747875,
1573825048210,
1573824704667,
1573824546728,
1571949921264,
1571800507195,
1571789543369
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2389/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2389/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2389/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2389/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2389/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2389/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission presents an approach to speed up network training time by using lower precision representations and computation to begin with and then dynamically increasing the precision from 8 to 32 bits over the course of training. The results show that the same accuracy can be obtained while achieving a moderate speed up.\\n\\nThe reviewers were agreed that the paper did not offer a signficant advantage or novelty, and that the method was somewhat ad hoc and unclear. Unfortunately, the authors' rebuttal did not clarify all of these points, and the recommendation after discussion is for rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments.\", \"major_points\": \"We would like to clarify that the core idea of the manuscript is not to argue that dynamic switching between precision levels is a necessary condition for good classification results, but rather that knowing when to switch the precision of the computations can bring runtime benefits to the training process. Towards this, we demonstrate that MuPPET leads to identification of points for precision switching that can enable the inclusion of extreme precision regimes for training that were not considered before that lead to its acceleration with no loss in the final accuracy. The manuscript has been revised in points to better reflect the above. Furthermore, we have demonstrated that the framework is agnostic to network and dataset which allows for a single generalisable approach. \\n\\nSection 4.4 has been updated with Fig. 3 which is an accuracy-time trade-off plot. Fig. 3 shows that, for a given time-budget, MuPPET outperforms runs that have 1) randomly chosen switching points or 2) switching points borrowed between networks and datasets, thus justifying a need for a framework that adapts at run-time to both network and dataset. \\n\\nSimilar to the tuning of hyperparameters for training CNNs in general, the hyperparameters for MuPPET (threshold parameters) were explored in an empirical manner. In the revised manuscript, the choice of p is addressed in the paragraph beginning \\u201cAs long as the gradients \\u2026\\u201d in Section 3.3 which discusses how these choices make p and hence MuPPET agnostic to dataset and networks. Furthermore, the paragraph beginning \\u201cThe likelihood of observing r gradients\\u2026\\u201d in Section 3.3 discusses the reasoning behind using gradient diversity as part of the metric. Nonetheless, the key point to take away here is that the further tuning of the hyperparameters for p is not crucial for the performance of MuPPET as it\\u2019s generalisability across datasets and networks has been demonstrated through our results. \\n\\nSection 3.2.1 has been updated with Eq. (4), (5) and (6) to make the quantisation strategy more explicitly defined. Furthermore, following your suggestion, the introduction has been made more concise and further emphasis has been put on describing the novel contributions of MuPPET.\", \"minor_points\": \"Equation 3 has been made more explicit in terms of what the representable range of q^i means. \\nAll figures have been additionally added to Appendix B at a larger scale to make them more readable. \\nThis was a typo and has now been fixed. \\n\\u201cDistribution approach\\u201d referred to how the framework distributed the computations across GPUs, but has now been removed due to space considerations as this detail was not essential to the underlying principles of MuPPET. \\nTable 1 has been replaced with the new discussion in Section 4.4. It was originally there to motivate the need for precision switching. \\nWe used the standard process described in each model's implementation for data augmentation and preprocessing, such as scaling and cropping the input image, horizontal flipping with a probability of 50% and normalisation by subtracting each channel mean and dividing by the standard deviation.\\nClarification of \\u201ctheoretical limit\\u201d has been addressed in Section 4.3. All timings include computations at 8-, 12-, 14- and 16-bit fixed-point.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your review.\\nRegarding point 1), we have edited the sentence towards the end of the introduction to better phrase the impact and purpose of this work. \\n\\nRegarding point 2), it has been added to Section 3.3.1 that these quantisation levels were empirically chosen. The reasoning behind this is that we want to increase the utilised word length as little as possible in order to gain the most performance (runtime) from the computation platform, but moving too little will not result in \\u201cenough\\u201d information gain and will force the system to switch regimes too often leading to the waste of computational resources.\\n\\nRegarding point 3), Section 3.3 has been updated to address all the mentioned points. \\n\\nRegarding point 4), this never occurs, however, Fig. 2 will be updated to highlight the exact points at which the threshold is violated. A short discussion has also been added in Sec.4.1 to clearly indicate switching points. \\n\\nRegarding point 5), we feel that a difference of < 1% in Top-1 Validation Accuracy on ImageNet is not considered \\u201cmuch lower\\u201d and fluctuations at these levels can be seen between identical training runs. Furthermore, with respect to GoogLeNet, for the exact same hyperparameters, we achieve a +4.55% improvement in validation accuracy which is significant. Compared to [2], as has been added to the discussion in Section 4.3, this work pushes this boundary even further and opens the possibility for performing computations at wordlengths much lower than 16-bit, and at fixed-point instead of floating-point. With the availability of native hardware (e.g. 8-bit fixed-point computations in NVIDIA\\u2019s Turing GPUs), being able to perform training at these precisions without compromising accuracy (as shown in this paper) carries significant advantages. \\n\\nRegarding point 6), we have added these graphs to Appendix A with a short description of what they show.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your review. Further details on the motivation behind the switching mechanism has been added to Section 3.3 in the revised version of the paper, particularly at the paragraph beginning \\u201cThe likelihood of observing r gradients\\u2026\\u201d.\\nAdditionally we would like to state that when the observed p-value is high, this could be due to either true co-alignment of the gradients, or due to information loss from quantisation the gradients appear to be co-aligned. We find that it is unlikely to observe multiple minibatches across 3 epochs to have similar gradients. As a result, seeing this behavior would indicate that information is being lost through quantisation producing the observed low gradient diversity. \\n\\nThe gradients being used are those obtained from the last minibatch of each epoch. This has been updated in point 1) of Section 3.3. \\n\\nIt was not our intention in the original version for it to sound as an adhoc justification of MuPPET for AlexNet, more so just an observation of the experiment that we ran. However, Section 4.4 has been revised with more thorough and appropriate experiments and analysis in the current version of the paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Overall an interesting paper, though I wished a more detailed presentation of the reasoning behind the algorithm would have been provided. As it stands it feels a bit heuristic.\\n\\nIn particular I don't understand the motivation between the switching mechanism. Basically it says if the gradients are co-aligned between epochs it means there is not much to learn anymore!? Why? Intuitively if the gradients would go to 0 or become very small maybe you would want to increase precision. Or if you have high variance you could argue that the expected gradient would be 0 and hence you are not really making progress, i.e. you are just moving left-right. But if all gradients agree on a moving direction, why is that a bad thing? I know the heuristic is borrowed from a different work, but since it feels as such an integral part of MuPPET I think you should explain it better. \\n\\nI guess a few details about the algorithm as well. When you say you look at the diversity of the gradients over the epochs, is this the batch gradient !? \\n\\nThere are some small typos (e.g. FP23 instead FP32). \\n\\nI find the justification for AlexNet to be adhoc (it switched at the wrong time, but that allowed to take more advantage of computation in the low precision hence it was faster). The switching mechanism should only care of when the gradients are not informative anymore, not how much compute you are wasting .\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper proposes a training strategy called Multi-Precision Policy Enforced Training(MUPPET). This strategy aims to reduce training time by low-precision data representation and computations during the training stage. According to the gradient diversity, the authors introduce a precision-switching mechanism which chooses the best epoch to increase the precision. The validation accuracy and training time across several networks and datasets are shown in the experiments. However, the results are not superior enough compared with the state-of-the-art.\\n\\nMy detailed comments are as follows.\", \"positive_points\": \"1. This paper proposes a new reduced-precision training scheme to speed up training by progressively increasing the precision of computations from 8-bit fixed-point to 32-bit floating-point. This scheme moves to reduced-precision fixed-point computations while updating an FP32 model in order to push the boundaries of reduced-precision training. \\n\\n2. The authors propose a metric to decide when to switch the precision inspired by gradient diversity introduced by [1]. In this paper, the gradient diversity is enhanced by considering gradients across epochs instead of mini-batches. The proposed metric can be seen as a proxy for the amount of new information gained in each training step. Therefore, the metric can decide the most appropriate epoch at run time to increase the precision.\\n \\n3. The proposed low-precision CNN training scheme is orthogonal and complementary to existing low-precision training techniques.\", \"negative_points\": \"1. The proposed approach does not match the description in this paper. The authors describe \\u201cThis approach enables the design of a policy that can decide at run time the most appropriate quantization level for the training process\\u201d. In fact, this approach just decides which epoch to increase the quantization level while the levels of quantized precisions are fixed, rather than deciding the most appropriate quantization level. \\n\\n2. The setting of quantized precision levels (8-, 12-, 14- and 16-bit precisions) is confusing. Please illustrate how to choose the number of quantized bit and the number of quantized precision levels.\\n\\n3. The presentation of the precision switching policy is confusing and the notations are unclear. For example, in section 3.3, the ratio \\u201cp\\u201d needs more description because it is a key value in the policy, but lacks an explanation in this section. So please explain more about the motivation of ratio \\u201cp\\u201d in this section. \\tIn section 3.3, in step 5 of the proposed precision switching policy, the authors do not explain the meaning of \\u201cy\\u201d.\\n\\n4. In figure 2, the precision switch is not triggered even though the value of p violates the threshold more than 2 times, which mismatches the description in section 3.3.\\n\\n5. The proposed strategy has no obvious advantages. There are some scenes that the proposed strategy does not perform well. For example, the Top-1 validation accuracy on ImageNet of AlexNet and ResNet with MuPPET strategy is much lower than FP32 baseline. Compared with [2], the proposed method is more complex but not superior enough.\\n\\n6. 
The authors do not show the training and validation curves. However, the training and validation curves are common used to show more details of the training process, such as in [2] and [3]. Please show and analyze the training and validation curves of the proposed scheme and the baseline.\", \"minor_issues\": \"Some spelling and grammar mistakes.\\n\\n\\nReference\\uff1a\\n[1] Dong Yin, Ashwin Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, and Peter Bartlett. Gradient Diversity: a Key Ingredient for Scalable Distributed Learning. In 21st International Conference on Artificial Intelligence and StatiZZstics (AISTATS), pp. 1998\\u20132007, 2018.\\n[2] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed Precision Training. In International Conference on Learning Representations (ICLR), 2018.\\n[3] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep Learning with Limited Numerical Precision. In 32nd International Conference on Machine Learning (ICML), pp. 1737\\u20131746, 2015.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The article presents an approach to reduce the precision of weights, activations and gradients to speed up the training of deep neural networks. The precision of these values is increased according to a dynamic schedule such that the original classification accuracy is reached after training.\\n\\nThe manuscript is in most parts well written and the addressed topic is of general interest for the research community represented at ICRL. Still, I recommend a weak reject, since the core idea of the manuscript, i.e. the dynamic switching between precision levels, is not shown to be a necessary condition for good classification results.\", \"major_points\": \"\\u2022\\tThe introduction does not give a clear statement about the novel contribution of the paper. Only the very last paragraph is specific about the paper.\\n\\u2022\\tYour results support that step-wise increasing the resolution speeds up training without significant losses in accuracy. However, the impact of the gradient diversity, choice of p and threshold parameters on the performance of the trained networks are unclear. What is the isolated impact of every of these choices? According to Figure 2, pre-defined switching points between precision levels may also generalize between networks and datasets.\\n\\u2022\\tThe description of the quantization scheme is not clear enough in order to reproduce the results:\\no\\tPlease give details about every step from FP32 to FPx values or cite appropriate literature.\", \"oequation_4_and_5\": \"How are the scaling factors SC determined?\\no\\tPlease clarify the difference/relation between n and WL.\", \"minor_points\": \"\\u2022\\tEquation 3: What does \\u201crepresent. range(q^i)\\u201d mean?\\n\\u2022\\tText in Figure 1 and 2 is far too small and barely readable\\n\\u2022\\tStep 5 in Algorithm in Section 3.3: What does \\u201cp violates y more than gamma times\\u201d mean? What is y?\\n\\u2022\\tPlease clarify \\u201cdistribution approach\\u201d. Distribution of what?\\n\\u2022\\tTable 1: For the baseline experiments, the precision is switched from 8 to 32 bits, for MuPPET from 8 to 12 bits (see main text). What is the motivation behind these different choices?\\n\\u2022\\tDo you use any type of data augmentation?\\n\\u2022\\tTable 3: Please clarify \\u201ctheoretical limit\\u201d. Does this limit include 12 and 14 bit quantisation. What do you mean by \\u201coptimized quantization implementation\\u201d in main text?\"}"
]
} |
S1eZOeBKDS | Deep Spike Decoder (DSD) | [
"Emrah Adamey",
"Tarin Ziyaee",
"Nishanth Alapati",
"Jun Ye"
] | Spike-sorting is of central importance for neuroscience research. We introduce a novel spike-sorting method comprising a deep autoencoder trained end-to-end with a biophysical generative model, biophysically motivated priors, and a self-supervised loss function. The encoder infers the action potential event times for each source, while the decoder parameters represent each source's spatiotemporal response waveform. We evaluate this approach in the context of real and synthetic multi-channel surface electromyography (sEMG) data, a noisy superposition of motor unit action potentials (MUAPs). Relative to an established spike-sorting method, this autoencoder-based approach shows superior recovery of source waveforms and event times. Moreover, the biophysical nature of the loss functions facilitates interpretability and hyperparameter tuning. Overall, these results demonstrate the efficacy and motivate further development of self-supervised spike-sorting techniques. | [
"self-supervised",
"deep learning",
"spike sorting",
"EMG",
"sEMG",
"autoencoder",
"inductive bias"
] | Reject | https://openreview.net/pdf?id=S1eZOeBKDS | https://openreview.net/forum?id=S1eZOeBKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_UYc9KyXZ",
"r1gvTRU7jB",
"BJeR3p3xir",
"rJxSU_FbqB"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798747845,
1573248703002,
1573076405969,
1572079692777
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2388/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2388/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2388/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper presents a model for learning spiking representations. The basic model is a a deep autoencoder trained end-to-end with a biophysical generative model and results are presented on EMG and sEMG data, with the aim to motivate further research in self-supervised learning.\\n\\nThe reviewers raised several points about the paper. Reviewer 1 raised concerns about lack of context on surrounding work, clarity of the model itself and motivating the loss. Reviewer 2 pointed out strengths of the paper in its simplicity and the importance of this problem, but also raised concerns about the papers clarity, again motivations on the loss function and sensibility of design choices. The authors responded to the feedback from reviewer 1, but overall the reviewer did not think their scores should be changed.\\n\\nThe paper in its current form is not yet ready for acceptance, and we hope there has been useful feedback from the reviewing process for their future research.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Discussion of Review #4\", \"comment\": [\"Thanks a lot for the review. In response to your points:\", \"We avoided extensive literature review and explanation on KiloSort mostly due to page limitations. In the future revised version of the paper, we\\u2019ll try to provide more background information and also include a summary of KiloSort in the appendix.\", \"We chose L4 over L2 largely due to ease of hyperparameter tuning purposes. EMG signal contains both action potentials and noise. Sparsity and reconstruction loss terms, by creating a trade-off, are tuned such that only the signal corresponding to action potentials are reconstructed. Here, we use the L4 norm because it differentiates between action potential signals from EMG noise more successfully and makes the hyperparameter tuning (see total loss equation) easier.\", \"Parsimony loss is based on the L1-L2 norm. The L1-L2 norm (and more generally the L1-Lq norm) is used as a block sparsity inducing penalty in optimization algorithms (see [1]). The critical part in the parsimony loss function is the tensor partitioning G over which L1 norm (i.e. the sum operation) is performed. By applying the L1-L2 norm over different partitionings of a tensor, one can induce different patterns of block sparsity. Here, our partitioning is based on spatial neighborhoods (specified by the number of consecutive electrodes) for each individual motor unit. This loss then minimizes both the number of motor units and their spatial footprints. The reason why we apply this loss on the first time-derivative of the spatiotemporal waveforms tensor is to get the temporal smoothness effect as well. We\\u2019ll try to complement our explanation with visualizations in the revised paper.\", \"The uniqueness loss (and also refractory period loss) are optional terms we use to tackle particular problems we observed during our experiments. We\\u2019ll move the experimentations with these terms to the experiments section and explain them in relation to the particular problems they\\u2019re trying to address.\", \"[1] Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, et al. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1\\u2013106, 2012.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a new algorithm for spike-sorting. It is implemented by a deep autoencoder with biophysically motivated loss functions. The encoder is the main module which conducts the spike-sorting task while the decoder is only used for training the model in an end-to-end fashion.\", \"this_paper_should_be_rejected_due_to_the_following_arguments\": [\"The paper lacks a section on literature survey, to let the reader know how/where the proposed method fills the gap in the current state-of-the-art. They do compare their results with the KiloSort (Pachitariu et al., 2016) algorithms, however, no discussion is provided on how it works and why their method outperforms it.\", \"It is unclear why the reconstruction loss is chosen to be an L4 norm as opposed to L2.\", \"The authors claim that the parsimony loss as defined in Eq. (7) forces \\u201cthe network to be parsimonious with respect to the set of MUAP proposals it uses to explain a given sEMG signal.\\u201d My understanding, however, is that the only functionality of the loss defined in Eq. (7) is to enforce temporal smoothness. More elaborate explanation is needed to support the authors claim.\", \"I could not understand the functionality of the uniqueness loss. Specifically, why should \\u201cthe joint occurrence of the temporal similarity between MU spike trains and the spatial similarity between their MUAP waveforms\\u201d be penalized? Isn\\u2019t that the case that same stimuli should result in similar response? It is unclear what this has to do with forcing to explain different phenomena.\"], \"things_to_improve_the_paper_that_did_not_impact_the_score\": [\"The method (and the paper) is named deep spike \\u201cdecoder\\u201d (DSD) while in fact the \\u201cencoder\\u201d part of the learned deep autoencoder actually conducts the spike-sorting task. This could be confusing!\", \"Page 2, Sec. 3.1, line 2: Should use \\\\times in inline equations in Latex for the multiplication symbol, not character x. Fix everywhere in the text.\", \"Page 6, Par. -2, line -2: The word \\u201creplicate\\u201d is repeated.\", \"Non-legible plots axes.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper describes a machine learning model for learning spiking representations from EMG (electromyography) like data. The basic model is an autoencoder, where the bottleneck layer is designed to project the waveform inputs into \\\"spike\\\" representations. Specifically, there is a DSD (Deep Spike Decoder) encoder and decoder, where the encoder is implemented as either a DenseNet[1]-like or Tiramisu[2]-like architecture. The bottleneck seems to be implemented by a linear layer binarized with a Gumbel-Softmax activation. The decoder is a linear layer. Several losses are considered, including a L4-norm (!), a sparsity loss, a parsimony loss, a uniqueness loss, and a refractory period loss. The model is validated qualitatively on real EMGs, and quantitatively on synthetic data.\", \"the_strengths_of_the_paper_are_the_following\": [\"Spiking models are very interesting class of models, which if attained would have a great impact on several other areas of machine learning.\", \"I like the straightforward design that is fit to the purpose of generating spiking representations. A Gumbel-Softmax has proven its validity and is a logical fit to the problem setup.\", \"I like the simpler setup the paper chooses.\", \"-- First, the paper chooses not to take the route of trying to learn backpropagation through infinite time steps, as often happens in spiking methods. This is a beast on each own, which would be nice to set as an ultimate goal (in my opinion). For now, however, it's ok if we forego this requirement.\", \"-- Second, the paper chooses not to assume that signals come asynchronously, which again makes things unnecessarily complex, given the state of the field.\"], \"the_weaknesses_of_the_paper\": [\"While the problem setup is simple, perhaps it makes some assumptions that are too strong. In my opinion, the strongest one is that of batch-learning (for the lack of a better name). Often, spiking models are studied in an online setup where the data comes online (continuously), and then spikes are generated when sufficient changes in the inputs warrant a delta-difference (spike). When in batch mode, however, it should be quite much easier to obtain spikes that lead to good reconstructions, as there is no much need for a \\\"memory\\\" mechanism. While one could argue whether an online setup is necessary or not, in my opinion it is necessary and would make a spiking model challenging and interesting to learn. Otherwise, it looks like a conventional autoencoder, only with spikes instead of dense representations.\", \"The model is unclear and writing vague.\", \"First of all, after reading the abstract and the introduction I get the feeling that the model is probabilistic, as there is mention of priors and autoencoders. Also, later on a Gumbel-Softax is mentioned for binarization. Gumbel-Softmax is a continuous function and binarization makes sense when sampling, that is when assuming a generative process. However, the rest of the paper seems not to explain a probabilistic or generative model. There is no explicit prior discussed. There is no sampling procedure discussed. The losses are explained but not within a particular probabilistic or stochastic framework. 
If the model is not stochastic, one way or the other, how are the discrete spikes obtained and how is the system is trained?\", \"All the loss functions, albeit logical when taken one by one, they do look ad hoc and appearing out of the blue. This perhaps relates to the previous point, where it is not clear if the model is stochastic or deterministic. If it is deterministic, it is ok to have all the loss functions appearing like that, but I would expect some more explanation on what purpose do they fill w.r.t. the spiking model. In the end, what does the deep spiker model try to achieve? Learn spikes as representations? Recover the original spikes? Be sparse? If yes, why and how sparse? Be energy-efficient?\", \"Some design choices are quite unclear. Generally, it is fair to say that the experimental and design setups are rather simple: multiple 1-D waveforms and not much noise (from what I get). In that context, it is not clear why DenseNets, or even Tiramisu-Nets are used as an encoder; especially when the decoder is a simple linear model.\", \"Also, phrases like \\\"what good combinations are useful ... the hyperparametres of DSD\\\" do not add to the clarity. There is no exploration of hyperparameters in the experiments and no individual examination of the contribution of each loss (unless I missed it somewhere).\", \"Similarly, what does \\\"For the DSD ... between 10 minutes to an hour depending on ...\\\". Such statements should be more precise, for instance plotting wall clock time vs training loss.\", \"All in all, while I like the motivation and the original direction, I believe there exist a lot of questions unanswered before acceptance.\"]}"
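For readers unfamiliar with the binarization step discussed in this review, here is a minimal numpy sketch of Gumbel-Softmax sampling over two classes (no-spike / spike); the temperature value and the exact wiring of the DSD bottleneck are assumptions, and training would pair this forward pass with a straight-through gradient in an autograd framework.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Forward pass of Gumbel-Softmax over the last axis. At low
    temperature tau the samples approach one-hot (binary) spikes."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1)
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)

# 10 time steps, 2 classes per step: (no-spike, spike) logits
logits = np.stack([np.zeros(10), np.random.randn(10)], axis=-1)
spikes = gumbel_softmax(logits)[..., 1]        # soft spike indicator
print(spikes.round(2))
```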
]
} |
r1eWdlBFwS | Isolating Latent Structure with Cross-population Variational Autoencoders | [
"Joe Davison",
"Kristen A. Severson",
"Soumya Ghosh"
] | A significant body of recent work has examined variational autoencoders as a powerful approach for tasks which involve modeling the distribution of complex data such as images and text. In this work, we present a framework for modeling multiple data sets which come from differing distributions but which share some common latent structure. By incorporating architectural constraints and using a mutual information regularized form of the variational objective, our method successfully models differing data populations while explicitly encouraging the isolation of the shared and private latent factors. This enables our model to learn useful shared structure across similar tasks and to disentangle cross-population representations in a weakly supervised way. We demonstrate the utility of our method on several applications including image denoising, sub-group discovery, and continual learning. | [
"variational autoencoder",
"latent variable model",
"probabilistic graphical model",
"machine learning",
"deep learning",
"continual learning"
] | Reject | https://openreview.net/pdf?id=r1eWdlBFwS | https://openreview.net/forum?id=r1eWdlBFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"uMu1stIs41",
"HJxnlUtN9B",
"HJg6uOEatH",
"r1lMN7j9FS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747815,
1572275699847,
1571797109206,
1571627818419
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2387/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2387/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2387/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a hierarchical Bayesian model over multiple data sets that\\nhas both data set specific as well as shared parameters. \\nThe data set specific parameters are further encouraged to only capture aspects \\nthat vary across data sets by an addition mutual information contribution to the \\ntraining loss. \\nThe proposed method is compared to standard VAEs on multiple data sets. \\n \\nThe reviewers agree that the main approach of the paper is sensible. However, \\nconcerns were raised about general novelty, about the theoretical justification \\nfor the proposed loss function and about the lack of non-trivial baselines. \\nThe authors' rebuttal did not manage to full address these points. \\n \\nBased on the reviews and my own reading, I think this paper is slightly \\nbelow acceptance threshold.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes to model multiple datasets from differing distributions with shared latent structure and private latent factors. The main techniques include architecture design which encourages the isolation of shared and private latent factors and a mutual information-based regularizer. The paper is clearly written and easy to follow. I enjoyed reading it. The experiments support the claim of learned population-specific representations.\\n\\nHowever, I found that the paper has some weaknesses:\\n1. The novelty is not enough. All the techniques involved in the paper are not new but from existing literature. The idea is not new. The authors also mentioned several previous works in Section 3, e.g. Multi-level VAEs, oi-VAEs.\\n\\n2. More importantly, I did not see any baselines in the experiments except vanilla VAE. As far as I understand, previous methods can be easily adapted to these tasks. For example, [1] tried continual generative modeling for a sequence of distinct distributions. Many important baselines are missing in the experiments, which makes it hard for me to evaluate how significant the work is.\\n\\n3. What if the populations are not exclusive? The regularizer enforces them to be isolated but they are not in fact.\\n\\n4. How did you choose the annealing schedule of $\\\\alpha$ in Section B.1?\", \"minor\": \"page 2 eq (1) z_i -> z_{ki}\\n\\npage 3 last paragraph \\u201cit may desirable\\u201d\", \"references\": \"[1] https://arxiv.org/pdf/1705.08395.pdf\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a novel model called \\\"Cross-Population Variational Autoencoder (CPVAE), which is designed to model data from different distributions sharing some common structure. The proposed generative model utilizes both shared and private per-population latent variables. In order to restrict shared latent variables from \\\"leaking\\\" into private representations, the authors introduce an information-theoretic regularizer. This regularizer forces private population representations to: (a) maximize mutual information with input samples from their population, and (b) minimize mutual information with input samples from other populations. In other words, private representations are forced to be \\\"meaningful\\\" on the corresponding population alone.\", \"quality\": \"The paper is well-written. I find the proposed method to be quite interesting. The derivations appear to be correct except for possibly the cancellation in Eq. 5, which I originally missed.\", \"significance\": \"In my opinion, if sound, the approach discussed in this paper may lead to interesting practical applications and may inspire other methods based on similar ideas.\", \"originality\": \"Even though, as authors point out, there is a substantial amount of work in this field, I believe that their approach is novel and has its own merits.\", \"clarity\": \"The paper is well written and the material is presented with clarity. In my opinion, the only exception is Section 4.4, which could definitely benefit from a few additional sentences describing the training procedure in more detail. Right now I find it a bit confusing. It would appear that the shared encoder / decoder continue to be trained as new populations arrive. Would this mean that catastrophic forgetting can actually impact this shared representation? And if it changes by a sufficient degree, can it reduce the quality of the generated samples for older populations? If so, I think these points should be mentioned in the text.\", \"questions_and_suggestions\": \"Experiments described in the paper are sufficiently convincing, but there are a few questions that could potentially be better clarified in the paper.\\n\\n1. After reviewing the text again and seeing the comment of Reviewer #1, I am also confused about the cancellation in I_q(x_k; t_k) - I_q(x_{-k}; \\\\tilde{t}_k). Is it not true that marginal distributions q_\\\\phi(t) and q_\\\\phi(\\\\tilde{t}) in the KL divergence term in Eq. 5 are different? Unfortunately, the final optimization objective relies on the cancellation of these terms and if they do not cancel, the approach may not be theoretically justified despite producing interesting and compelling results. (This affected the final rating. I will be able to change the rating once this point is clarified.)\\n\\n2. Another issue is related to the special case when there are several very similar populations. Consider, for example, a case when there are two nearly-identical populations out of many. Using very similar latent variables for two similar populations would be penalized by the regularizer (not too significantly though). 
I assume that depending on the embedding sizes and the value of alpha (which authors introduce in Section 2.3), the model would either choose to use shared latent variables to encode these populations, or would allow for two nearly-identical private latent representations to exist. I think this is a conceptually important special case that could be mentioned and possibly explained in the paper.\\n\\n3. I think the paper would also benefit from a clarification regarding the \\\"mixing\\\" function g. Choosing this function to be a simple sum of arguments seems restricting and may be insufficient for some datasets. It does not appear to be the case, but are there any restrictions on g? Can it come from a parametrized function family with parameters being optimized during training?\\n\\n4. I think the paper would benefit from a more detailed discussion in Section 4.4 (see above).\"}",
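For reference, the cancellation questioned in the review above can be made explicit by writing the mutual information terms in their entropy form (notation follows the review; this is a sketch of the reviewer's concern, not a derivation from the paper):

```latex
I_q(x_k; t_k) - I_q(x_{-k}; \tilde{t}_k)
  = \bigl[ H(t_k) - H(t_k \mid x_k) \bigr]
  - \bigl[ H(\tilde{t}_k) - H(\tilde{t}_k \mid x_{-k}) \bigr]
```

The marginal entropy terms H(t_k) and H(\tilde{t}_k) (equivalently, the marginal KL terms referred to in Eq. 5) cancel only if the marginals q_\phi(t) and q_\phi(\tilde{t}) coincide, which is exactly the condition being questioned.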
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studied the problem of learning the latent representation from a complex data set which followed the independent but not identically distributions. The main contributions of this paper are to explicitly learn the commonly shared and private latent factors for different data populations in a unified VAE framework, and propose a mutual information regularized inference in order to avoid the \\u201cleaking\\u201d induced by the shared representations across different populations. The isolation of the commonly shared and population specific latent representations learned by the proposed are empirically demonstrated on several applications. However, I have some concerns regarding this paper as follows.\\n(1) It is not clear why the private representation exhibits latent features from the shared space when using equation 3 and how this phenomenon hurts this CPVAE model.\\n(2) In equation 1, how to define the isotropic diagonal covariance matrix in the Gaussian distribution p? Is it parameterized by g?\\n(3) In equation 3, what is the prior distribution of p(z_ki, t_ki)?\\n(4) In equation (4)(5), why could the marginal KL term be canceled out when using I_q(x_k; t_k) - I_q(x_-k; \\\\tilde{t}_k)?\\n(5) The mutual information regularized inference involved the KL term between any two private factors from different populations. It might be not efficient for optimization. Thus, it will be helpful if the authors provide the model efficiency analysis compared with other baseline methods.\", \"minor_comments\": \"(1) what is the symbol \\u201cn_k\\u201d? Did it denote the number of examples for the k-th population?\\n(2) For mutual information regularized inference, it used two different notations: \\u201cI_q(x_k; t_k) - I_q(x_-k; t_k)\\u201d and \\u201cI_q(x^k; t^k) - I_q(x^-k; t^k)\\u201d.\"}"
]
} |
BJxbOlSKPr | Learning Compact Embedding Layers via Differentiable Product Quantization | [
"Ting Chen",
"Lala Li",
"Yizhou Sun"
] | Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings. Despite their effectiveness, the number of parameters in an embedding layer increases linearly with the number of symbols and poses a critical challenge on memory and storage constraints. In this work, we propose a generic and end-to-end learnable compression framework termed differentiable product quantization (DPQ). We present two instantiations of DPQ that leverage different approximation techniques to enable differentiability in end-to-end learning. Our method can readily serve as a drop-in alternative for any existing embedding layer. Empirically, DPQ offers significant compression ratios (14-238x) at negligible or no performance cost on 10 datasets across three different language tasks. | [
"efficient modeling",
"compact embedding",
"embedding table compression",
"differentiable product quantization"
] | Reject | https://openreview.net/pdf?id=BJxbOlSKPr | https://openreview.net/forum?id=BJxbOlSKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"II4lbZQObA",
"BJg9ZaL2iB",
"ByxFark3sB",
"r1eBRjEooB",
"S1erwsNjoH",
"BkeJQoNooH",
"B1eqa9EsoS",
"Hyl2WZpf5H",
"r1lYKM7SKB",
"Hyet4X5p_S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747783,
1573838081550,
1573807552625,
1573764045064,
1573763932984,
1573763862636,
1573763777827,
1572159748021,
1571267201496,
1570771761228
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2386/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2386/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2386/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2386/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2386/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2386/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2386/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2386/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2386/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The presented paper gives a differentiable product quantization framework to compress embedding and support the claim by experiments (the supporting materials are as large as the paper itself). Reviewers agreed that the idea is simple is interesting, and also nice and positive discussion appeared. However, the main limiting factor is the small novelty over Chen 2018b, and I agree with that. Also, the comparison with low rank is rather formal: of course it would be of full rank , as the authors claim in the answer, but looking at singular values is needed to make this claim. Also, one can use low-rank tensor factorization to compress embeddings, and this can be compared.\\nTo summarize, I think the contribution is not enough to be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the prompt response!\", \"comment\": \"Dear reviewer,\\n\\nThanks for the prompt response!\\n\\nWe think it is reasonable that our method can achieve better task performance than full embedding in certain tasks/datasets, because DPQ can implicitly regularize the model with more efficient parameterization. We have observed this phenomenon consistently on some tasks (LM on PTB, Wikitext-2) with multiple runs and different random seeds (excluding the possibility of noises).\\n\\nThanks again for another nice suggestion. We manually checked the rank of the reconstructed embedding matrix with numerical method on the NMT/WMT-19(EN-DE) and it is indeed full rank (512 for 32000x512 matrix).\\n\\nWe firmly believe our work is novel and can make a positive contribution to the community. The major novelty is to **formulate discrete codes via Product Quantization and making it generally end-to-end differentiable**.\\n\\nIt only bears resemblance to (Chen et al, 2018b) in terms of using discrete codes to tackle the embedding compression problem. However our formulation and training techniques are very different. The product quantization formulation is far more flexible and efficient than (Chen et al, 2018b): it allows optimization with various approximation techniques (SX, VQ), and also allows one-pass end-to-end training whereas (Chen et al, 2018b) is still constrained by an extra distillation procedure. This new formulation led to SOTA empirical results by a large margin. Potentially it can also be applied beyond embedding layer compression (e.g. for dense or conv layers via end-to-end product quantization of weights).\\n\\nPlease feel free to let us know if there are any remaining concerns, we are happy to further clarify!\"}",
"{\"title\": \"Thanks for resolving my comments\", \"comment\": \"Dear authors,\\n\\nThanks for resolving my comments. \\n\\nI think the additional empirical results is more convincing than before to reveal the empirical performance of the new end-to-end embedding compression approach. \\n\\nThe only thing that requires some elaboration is the performance of DPQ-SX in PTB in table 9. It is pretty surprising to me DPQ-SX with high compression rate achieves observably better performance than uncompressed embedding. Is it because performance variation due to random seeds? Typically, it requires multiple random seeds to present statistically meaningful results. If the surprising performance is from randomness (such as using only 1 random seed), I would suggest using multiple seeds to enhance the results.\\n\\nIn terms of the theorem, I think now the statement makes sense. One typo there is that condition 2) should be V^j \\\\in \\\\mathbb{R}^{K x d / D}. \\n\\nIf I understand correctly, the authors aim to say that the newly proposed representation can achieve high rank and preserve more information than low-rank approximation. To enhance the theory, I would suggest the authors to validate that on the embeddings generated by the proposed method, the representation indeed achieves relatively high rank by checking the singular values of the representation. (But as this work is not theory-focused, this suggestion would not dominate my rating.)\\n\\nGiven the above, I would raise the rating to boarder line if there is such an option. The limiting factor on rating is the incremental novelty over the existing works such as Chen et al. 2018.\"}",
"{\"title\": \"Revision\", \"comment\": \"We have updated the paper with the following changes:\\n\\n 1) We provided detailed analytical comparisons between our work and Chen et al 2018b and other traditional compression techniques in Appendix F as suggested by Reviewer #1 and #2.\\n 2) We added more empirical comparisons of our work with a broader set of baselines on more tasks in Appendix G as suggested by Reviewer #2 and #3.\\n 3) We revised Theorem 1 and made some clarifications as suggested by Reviewer #3.\\n 4) We added new results on BERT compression in Appendix H.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your time, careful evaluation and valuable suggestions!\\n\\n[Theorem 1]\\n\\nThank you for pointing this out! We have revised the statement of Theorem 1. Conditioned on B and the sub-matrices of V being full rank, the reconstructed matrix H would be full rank. The original conclusion stays unchanged: despite that DPQ uses fewer bits in its parameterization than the conventional full embedding, its resulting embedding matrix can still be full rank, which is more expressive than factorization methods via low-rank.\\n\\n[Experiments]\\n\\nWe added more comparisons and ablations in the revision which we hope will address the concerns.\\n\\n**Post-training embedding compression.** \\nAs suggested by the reviewer, one could first train the full embedding model and then compress the full embedding table into discrete codes. There are two approaches:\\n 1) Fix the discrete codes and re-train the model parameters. This is the \\\"pre-train\\\" method in Paragraph 2 of Section 3.1. Table 4 shows that the performance is not as good as DPQ.\\n 2) Use the discrete codes and the reconstructed embeddings directly with the original model (we believe this is what the reviewer suggested). We tested this idea on the NMT task and presented the results in Table 11 in Appendix G. For the same compression ratios, task performance is notably worse than DPQ. This is most likely due to small approximation errors in the embedding layer accumulating over layers.\\n\\n**Decoupling K and V for DPQ-SX.**\\nFigure 12 (Appendix G4) shows that with K=V in DPQ-SX, it incurs a tiny performance (PPL) loss, but still performs slightly better than DPQ-VQ in LM task. Intuitively, we don\\u2019t think we should tie K and V for DPQ-SX, as K is used to compute the probability (using dot product as proximity metric) over elements on V. However, it seems the downstream model may be able to adapt to this change of parameterization (and therefore only a slight performance loss). \\n\\n**More baselines; more tasks.** \\nIn Table 9 and 10 we show comparison between DPQ and more existing methods on LM and text classification tasks. Our methods outperforms baselines quite significantly and consistently.\\n\\n**Applying DPQ to BERT.**\\nTo further verify our method we added experiments on BERT (Appendix H). Without any hyperparameter tuning, DPQ could compress the embedding layer 37x with negligible loss in performance. \\n\\n[Other]\\n\\nWe also revised the paper to incorporate the minor comments provided by the reviewer. We are happy to address any unresolved concerns or provide more clarifications!\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your time and the detailed comments! Please see below for clarifications and more empirical evidence.\\n\\n[Significance and Novelty]\\n\\nWe believe in the significance of embedding compression for inference/deployment time, because embedding parameters make up a large part of model parameters in a wide variety of models. For example, 95% of parameters in the medium-sized LSTM (refer to Paragraph 1 of Section 1) are embedding parameters; 99% in the MLP-based model for text classification, 22% in the BERT-base model, etc.\\n\\nTo our best knowledge, previous efforts offer up to ~50x compression ratios without performance loss, while our work achieve SOTA compression ratios of up to ~200x on 10 datasets across three different language tasks (we also have new results on BERT in Section H of the Appendix). We argue that these are empirically significant results.\\n\\nNovel contributions are made compared to Chen et al., 2018b's, and these contributions led to performance improvement by very large margins (Table 3 and 4). Their work and our work present very different methodologies. Here we list the major differences:\\n\\n 1) In their work, discrete codes are directly associated with each of the symbols; in this work, discrete codes are computed as outcome of product quantization. \\n 2) Our formulation of discrete codes with product quantization allows us to derive two variants with different approximation techniques (softmax-based and vector quantization-based).\\n 3) Our method uses a novel composition function (inspired by product quantization), which is much more efficient than before (smaller memory footprint and less computation time overhead, Figure 4).\\n\\n[Experiments and Why Discrete Codes (aka KD codes)]\\n\\nRe \\u201cthere's been a lack of explanation as to why this should be done only through KD codes\\u201d.\\n\\nFirst of all, we have added more comparisons to conventional methods in the Appendix G. Table 9 shows comparisons with scalar quantization, product quantization, low-rank factorization, as well as other discrete code baselines (Shu and Nakayama 2017 & Chen 2018b) for language modeling. Table 10 shows similar comparisons on text classification tasks. These results show that previous methods struggle to maintain task performance when trying to achieve good compression ratios, and our method DPQ outperforms them by large margins.\\n\\nThen, we provide more analysis on why our methods work better. Unlike traditional quantization techniques that accumulate and amplify quantization error, our method makes it end-to-end differentiable so that the neural nets can adapt to quantization error. DPQ also relates to factorization-based method, but DPQ can produce high-rank embedding table with sparse factorization, so it is more expressive (Theorem 1 in Section 2.1). More analysis with conventional approaches is in Appendix F.\\n\\nRe \\u201cIt would be great to show the change in PPL according to the compression ratio of DPQ models.\\u201d\\n\\nFigure 3, 7 and 8 show the trade-offs between task performance (PPL, BLEU or accuracy) and compression ratios for different sizes of K and D.\\n\\nRe \\u201cCan we apply it to pre-trained models like BERT?\\u201d\\n\\nYes. We added experiments on BERT (Appendix H). Without any hyperparameter tuning, DPQ could compress the embedding layer 37x with negligible loss in performance. \\n\\nRe \\u201cDid you run all experiments just one time? 
There is no confidence interval.\\u201d\\n\\nSome of our experiments are computationally expensive (e.g. days-long). However we did repeat experiments where resources allowed, e.g. the PTB language modeling experiments and the WMT\\u201919 En-De experiments, and found that the results were stable (e.g. std=0.6 over 4 runs for PPL in PTB LM). In the paper, we follow recent evaluation protocol in these tasks (e.g. [Shu and Nakayama 2017, Vaswani et al 2017]) and left out confidence intervals.\\n\\n[Other comments]\\n\\nRe \\u201cwhy the distilling in Chen et al., 2018b is a problem?\\u201d\\n\\nDistillation leads to more computation cost and in practice a more complex pipeline. Training with distillation requires pre-training of the embedding layer, which means the same model has to be trained twice (more computation). We also have to maintain two embedding tables for distillation (more memory).\\n\\nRe \\u201cIt is usually not necessary to train the entire embedding vector on GPU, so it would not be a big issue in the actual learning process.\\u201d\\n\\nOur goal is to reduce the embedding table size at the inference/deployment stage. E.g. we would be able to compress big models so that they can be deployed on mobile devices. It is not the goal of this work to improve training of embeddings.\\n\\nWe hope we have provided better explanation and more evidence for the contribution of this work, and are happy to address any further concerns!\"}",
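For readers unfamiliar with the two approximation techniques mentioned in point 2) above, the following rough PyTorch sketch (our illustration, not the paper's implementation) shows one sub-space of a product-quantized embedding under a softmax-based (DPQ-SX-style) and a vector-quantization-based (DPQ-VQ-style) relaxation. All shapes, names, and the dot-product proximity metric are assumptions.

```python
import torch
import torch.nn.functional as F

n, d_sub, K = 32, 16, 64
query = torch.randn(n, d_sub, requires_grad=True)  # raw sub-embedding
key = torch.randn(K, d_sub)                        # centroids for code lookup
value = torch.randn(K, d_sub)                      # centroids composing output

logits = query @ key.t()                           # dot-product proximity

# Softmax-based (DPQ-SX-style): hard one-hot code in the forward pass,
# straight-through gradient via the softmax probabilities.
probs = F.softmax(logits, dim=-1)
hard = F.one_hot(probs.argmax(dim=-1), K).float()
code_sx = hard + probs - probs.detach()
out_sx = code_sx @ value

# VQ-based (DPQ-VQ-style): nearest centroid in the forward pass, with the
# gradient copied straight through to the query (VQ-VAE-style estimator).
idx = logits.argmax(dim=-1)
out_vq = query + (value[idx] - query).detach()
```

In both variants only the integer codes (`argmax` outputs) and the small codebooks need to be stored at inference time, which is where the compression comes from.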
"{\"title\": \"Response\", \"comment\": \"Thanks for your time and the constructive feedback! While both this work and [1] are based on the idea of representing symbols with discrete codes, the two present very different methodologies. Here are the major differences:\\n\\n 1) In their work, discrete codes are directly associated with each of the symbols; in this work, discrete codes are computed as outcome of product quantization.\\n 2) Our formulation of discrete codes with product quantization allows us to derive two variants with different approximation techniques (softmax-based and vector quantization-based).\\n 3) Our method uses a novel composition function (inspired by product quantization), which is much more efficient than before (smaller memory footprint and less computation time overhead, Figure 4).\\n 4) With these improvements, DPQ can be trained in a truly end-to-end fashion to achieve an order of magnitude higher compression ratios at negligible or no performance cost.\\n\\nWe have also elaborated these differences in our related work section. Thank you for this suggestion.\\n\\nRegarding the comparisons between DPQ-SX and DPQ-VQ, they represent two ways of approximating discrete code learning. Each has its advantages and drawbacks (Table 1). In our experiments, we found DPQ-SX performance marginable better than DPQ-VQ for more tasks/datasets, while DPQ-VQ is more computationally efficient during training. There are potential ways to improve compression results for DPQ-VQ in the future, so we believe both variants have their merits.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper considers the problem of having compact\\u00a0yet expressive KD code for NLP tasks. The authors claim that the\\u00a0proposed differentiable product quantization framework has better compression but similar performance compared to existing KD codes.The authors present two instances of the DPQ framework: DPQ-SX using softmax to make it differentiable, and DPQ-VQ using centroid based approximation. While DPQ-SX performs better in terms of performance and compression, DPQ-VQ has the advantage in scalability.\\n\\n- Significance\\nIt's understandable that the size of the embedding is important, but there's been a lack of explanation as to why this should be done only through KD codes. Hence, it is doubtful how big the impact of the proposed framework is.\\n\\n- Novelty\\nJust extending and making Chen et al., 2018b's distilling method to be differentiable has limited novelty.\\n\\n- Clarity\\nThe paper is clearly written in most places, but there were some questions about the importance and logic of statements.\\n\\n- Pros and cons\\nCompared to Chen et al., 2018b, there is no need to use expensive functions, and performance is better. But, the baseline consists only of algorithms using KD codes; there might be many disadvantages compared to other types of algorithms.\\n\\n- Detailed comments and questions\\n1. It is true that the parameters for embedding make up a large part of the overall parameters, but I would like some additional explanation of how important they are to learning. It is usually not necessary to train the entire embedding vector on GPU, so it would not be a big issue in the actual learning process.\\n2. In a similar vein, it would be nice to show which of the embedding vector size or the LSTM model size contributes significantly to\\u00a0performance improvements.\\u00a0If LSTM model size contributes more, the motivation would be weakened.\\n3. It would be nice to add more baselines such as Nakayama 2017 as well as the standard compression/quantization methods used in other deep networks. And please explain why we should use KD codes to reduce embedding size. Also, why the distilling in Chen et al., 2018b is a problem?\\n4.\\u00a0Did you run all experiments just one time? There is no confidence interval.\\n5. DPQ models have different compression ratios depending on the size of K and D. It would be great to show the change in PPL according to the compression ratio of DPQ models.\\n6. Can we apply it to pre-trained models like BERT?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this manuscript, authors improve the work in [1] by simplifying the reverse-discretization function. The empirical study demonstrates the effectiveness of the proposed algorithm.\\nI\\u2019m not familiar with the area. The differences between this work and [1] should be elaborated more in the related work, since they are closely related.\\nBesides, for the technical part, DPQ-SX outperforms DPQ-VQ while the softmax approximation seems identical to that developed in [1].\\n\\n[1] ICML\\u201918: Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper works on methods for compressed embedding layers for low memory inference, where the compressed embedding are learned together with the task-specific models in a differentiable end-to-end fashion. The methods in the paper build on a slight variant of the K-way D-dimensional discrete code representation proposed by Chen et al.. Specifically, the two methods in the paper are motivated by the idea that the K-way D-dimensional code can be viewed as a form of product quantization. The first proposed method (DPQ-SX) uses softmax-based approximation to allow for differentiable learning, while the second proposed methods uses clustering centroid-based approximation. Empirically, the authors demonstrate that the proposed methods for generating compressed embedding can achieve matching inference accuracy with the corresponding uncompressed full embedding across 3 NLP tasks; these proposed approach can outperform the pre-trained word embedding and the K-way D-dimensional code baselines in language modeling tasks.\\n\\nThis paper builds on an existing embedding representation approach---K-way D-dimensional code. But I think the perspective on viewing k-way D-dim approach as product quantization (which motivate the differentiable learning approach in the paper) is very interesting. Also I think the empirical performance of the proposed method is promising. I gave weak rejection because 1) the proof of theorem 1 is flawed; 2) The experiment might need additional validation to fully support the merit of the proposed methods.\\n\\nI list below the major concern / questions I have. I am happy to raise the score if the following questions are properly resolved in the rebuttal:\\n\\n1. Correct me if I am wrong, I think the proof of theorem 1 is wrong---if the integer based code book C is full rank , it does not necessarily imply the one-hot vector based code book B is full rank. E.g. assume K = 2 D = 2,\\n\\nC = [1 1;1 2;1 1; 1 2] is full-rank (rank = 2), but the corresponding B = [1 0 1 0 ; 1 0 0 1; 1 0 1 0 ; 1 0 0 1] is not full rank (rank < 4).\\n\\n2. As the proposed methods advocate training the inference-oriented compressed embedding together with the task models (such as translation models), I think the following naive baseline is necessary to fully evaluate the merit of the proposed approach: one can train the full embedding with the task model as usual, compress the task-specific full embedding using the K-way D-dim approach by Chen et al. or using the deep compositional code learning approach by Shu et al., and then use it for the inference. This provides an alternative way to use product quantization based approach for embedding with low inference memory, without training together with task models. Without this, I can not evaluate the necessity to use the train-together approach the author proposed.\\n\\n3. The proposed DPQ-SX approach performs better in the two proposed approaches. However this approach uses different K and V matrix where in the original K-way D-dim approach we have K = V. This makes it hard to say if the better performance is due to the decoupling of K and V, or because of the training method inspired from the product quantization perspective. 
It needs ablation study here.\\n\\n4. In Table 4, the authors only compare to baselines on the LM task, I am wondering how it compares to the the baselines on the other two translation and text classification models.\\n\\nFor improving the paper, the relatively minor comments are as the following:\\n\\n1. In equation 4, the partition function Z is not explicitly defined.\\n\\n2. In the second paragraph of section 3.1, it is not clear what exactly is the pre-trained embedding used as baseline.\\n\\n3. For better readability, it is better to inflate the caption of figure and tables to provide useful take-away message there.\"}"
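The counterexample in point 1 of this review can be checked mechanically; the few lines of Python below verify the ranks the reviewer states (the code-to-one-hot expansion follows the reviewer's K = 2, D = 2 construction).

```python
import numpy as np

# Integer code book C (4 symbols, D = 2 groups, codes in {1, 2}).
C = np.array([[1, 1], [1, 2], [1, 1], [1, 2]])

# One-hot expansion B: each code value k becomes the unit vector e_k,
# concatenated across the D groups.
B = np.concatenate(
    [np.eye(2)[C[:, j] - 1] for j in range(C.shape[1])], axis=1)

print(np.linalg.matrix_rank(C))  # 2 -> full rank for a 4x2 matrix
print(np.linalg.matrix_rank(B))  # 2 < 4 -> B is not full rank
```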
]
} |
HkxedlrFwB | Accelerating First-Order Optimization Algorithms | [
"Ange Tato",
"Roger Nkambou"
] | Several stochastic optimization algorithms are currently available. In most cases, selecting the best optimizer for a given problem is not an easy task. Therefore, instead of looking for yet another ‘absolute’ best optimizer, accelerating existing ones according to the context might prove more effective. This paper presents a simple and intuitive technique to accelerate first-order optimization algorithms. When applied to first-order optimization algorithms, it converges much more quickly and achieves lower function/loss values when compared to traditional algorithms. The proposed solution modifies the update rule based on the variation of the direction of the gradient during training. Several tests were conducted with SGD, AdaGrad, Adam and AMSGrad on three public datasets. Results clearly show that the proposed technique has the potential to improve the performance of existing optimization algorithms. | [
"Neural Networks",
"Gradient Descent",
"First order optimization"
] | Reject | https://openreview.net/pdf?id=HkxedlrFwB | https://openreview.net/forum?id=HkxedlrFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DFXmJ0Bk3S",
"H1eOoyPTFr",
"HklyD34pKS",
"Hkgw-7intr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747752,
1571807135892,
1571798102730,
1571758846865
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2385/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2385/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2385/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All reviewers recommend rejection, and the authors have not provided a response.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThis paper presents an adaptive method which can be used alongside existing accelerated gradient methods. The paper is difficult to read due to mistakes and poorly defined mathematical notation. I believe that the paper is missing reference to related methods. The theoretical analysis in the paper is difficult to follow and provides little insight into the benefits of the proposed approach.\", \"overview\": \"There are many mistakes throughout the paper which have made it difficult to read.\\n\\nOverall, I felt that this paper was missing a discussion of the effect of stochasticity on the proposed method. The issue with measuring the variation in the gradient direction is that in regimes where the gradient noise is dominating the signal the gradient direction at each time step is poorly correlated with overall optimization progress --- thus it seems intuitively ineffective to rely on the gradient direction to adjust the algorithm.\\n\\n1) At the bottom of page 2, the authors write \\\"knowing that we do not have any knowledge of what this function looks like\\\". While minor, I would point out that we are able to compute local statistics of the function and so certainly we have _some_ knowledge.\\n\\n2) The authors claim that no techniques exist which use the variation of the direction of the gradient. One such example is in [1] which uses (in one case) the variation of the gradient direction to determine and appropriate time to restart the momentum computation. \\n\\n3) In section 3.2, the Adam moment computation is missing a \\\"diag\\\". Assuming that AMRSGrad is AMSGrad (mistyped), then this term is incorrect and matches Adam.\\n\\n4) There are many mistakes in the Algorithm 1 box.\\n\\n- The wrong $F$ is used in the input (should be $\\\\mathcal{F}$).\\n- The algorithm takes as input a sequence of functions ($\\\\phi, \\\\psi$) which are not used.\\n- Within the if statement, $gm_t = g_t + m_t$. I believe this should be an $m_{t-1}$. It is not clear what the vector $gm$ is exactly, and then $\\\\dot{g}$ is used afterwards which is also not defined.\\n- The algorithm checks for $|m_{t-1} - g_t| > S$ while the text uses $|g_{t-1} - g_t| > S$.\\n\\n5) The first line of section 3.3 is quite worrying: \\\"We assume that if we are able to prove that modifying one optimizer with the proposed method does not alter its convergence, then the same applies for the other optimizers\\\". This seems like a dangerous assumption to make and should at the very least be carefully verified empirically. Following this, I am not sure what the authors mean by \\\"deterministic\\\" and \\\"non-deterministic\\\" methods.\\n\\n6) I do not understand the claim above Theorem 2 that $\\\\nabla f(x_{T-1}) = k \\\\nabla f(x_T)$. Under what conditions does this hold and how is $k$ computed? If I understand correctly, the bound provided in Theorem 2 is worse than that given for gradient descent. Moreover, the bound does not depend on the hyperparameter $S$ introduced in Algorithm 1 and provides limited insights into the method. 
I could not find a proof of Theorem 2 in the paper or appendix.\\n\\n7) There are serious flaws with the experimental evaluation in this paper.\\n\\na) There is no tuning over hyperparameter settings for any of the optimizers.\\n\\nb) The basic problems are very limited, even for toy problems. The 1D deterministic quadratic tells us very little about the performance of the optimizer. And the 1D cubic problem is particularly confusing. Unless I am mistaken, the gradient will always have the same sign (3x^2) and thus the acceleration condition will never be triggered.\\n\\nc) I believe that Figure 2 explores stochastic optimization problems which as discussed at top is a crucial evaluation. Unfortunately, due to lack of parameter tuning it is difficult to infer much about the comparison between the methods.\\n\\nd) Figure 4 compares performance variation over changing the threshold. The y-axis scale across each plot changes making the comparison unnecessarily difficult --- the scale should be the same.\", \"minor\": [\"TYPO Line 2, \\\"minimize ---,\\\"\", \"End of intro, MNIST and CIFAR not cited while IMDB is. Citation uses citet not citep.\", \"Bottom of page 2, \\\"\\\"\", \"Top of section 3.2, \\\"The pseudo code of our the method\\\"\"], \"references\": \"[1] Adaptive Restart for Accelerated Gradient Schemes, Brendan O'Donoghue and Emmanuel Candes\"}",
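Since the reviews disagree about the paper's exact trigger (|m_{t-1} - g_t| vs. |g_{t-1} - g_t|), here is a purely hypothetical Python reconstruction of the mechanism under discussion: SGD whose step is amplified while consecutive gradients still differ by more than the threshold S, falling back to the plain update near the optimum. The `boost` factor and the precise condition are our assumptions, not the paper's rule.

```python
import numpy as np

def accelerated_sgd(grad, x0, lr=0.1, S=1e-3, boost=2.0, steps=100):
    x, g_prev = np.asarray(x0, dtype=float), None
    for _ in range(steps):
        g = grad(x)
        step = lr * g
        # While consecutive gradients still differ by more than S, amplify
        # the step; near the optimum the plain SGD update is used instead.
        if g_prev is not None and np.abs(g_prev - g).max() > S:
            step = boost * step
        x = x - step
        g_prev = g
    return x

print(accelerated_sgd(lambda x: 2.0 * x, x0=[5.0]))  # minimizes f(x) = x^2
```

Note how this toy version makes the reviewers' concerns concrete: the behavior depends on the scale of the gradients relative to S, and with noisy mini-batch gradients the condition would fire essentially at random.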
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors present an acceleration technique for first-order optimization algorithms by comparing the directions of gradients in consecutive steps, which works for SGD, Adam, and AMSGrad. Empirically it seems to work well with some standard evaluations with CNN for MNIST and CIFAR10 and LSTM for IMDB, beating the non-accelerated versions in convergence speed. However there are some issues with the parameter choice and proofs. Below are my specific comments:\\n\\n1. Setting the parameter S seems to be difficult and problem-dependent. S controls the size of the region near the optimum where the algorithm falls back to the non-accelerated version. But S depends on the size of the gradient, which is problem-dependent. If we need to tune S for the algorithm to work well on a particular dataset, then it defeats the purpose of acceleration in the first place. \\n\\n2. The setting of S also depends on batch size if mini-batch stochastic gradient algorithms are used. In the update rules S is compared against |g_t-1 - g_t|, and this quantity is directly related to the variance of gradients, which in term depends on the batch size. This makes it even more difficult to set a priori. \\n\\n3. What is k in Theorem 2? In the line above Theorem 2, why is it the case that the gradient at x_T-1 is k times the gradient at x_T? Also, if we compare equations 2 and 3, the regret bound for the `accelerated' version is k times worse than the original non-accelerated SGD. How could this happen? \\n\\n4. In the proofs in the Appendix I see no mention of the parameter S, which is very strange since it is part of the update condition. The size of S affects the convergence, as shown in Figure 4. It is odd to have a regret bound in Theorem 3 that is completely independent of S. \\n\\nUnless the authors can address these issues I don't think the current paper is suitable for publication yet.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper describes a technique to speed up optimizers that rely on gradient\\ninformation to find the optimum value of a function. The authors describe and\\njustify their method and show its promise in an empirical evaluation.\\n\\nThe proposed method sounds interesting and promising, but the empirical\\nevaluation is unclear. In particular, details are missing on the exact\\nexperimental setup and some of the presented results are unconvincing. I refer\\nto the results of the basic function optimization (Figure 1), which shows that\\nseveral of the considered optimizers are unable to even get close to the optimum\\nof x^2 after several hundred iterations. It seems that this is extremely easy\\nfunction to optimize -- why are the considered optimizers performing so poorly\\non it? How were the hyperparameters of the optimizers set? This presumably\\naffects the other results presented in the paper as well, and puts the\\nimprovement of the proposed method in question.\"}"
]
} |
BylldxBYwH | Physics-Aware Flow Data Completion Using Neural Inpainting | [
"Sebastien Foucher",
"Jingwei Tang",
"Vinicius da Costa de Azevedo",
"Byungsoo Kim",
"Markus Gross",
"Barbara Solenthaler"
] | In this paper we propose a physics-aware neural network for inpainting fluid flow data. We consider that flow field data inherently follows the solution of the Navier-Stokes equations and hence our network is designed to capture physical laws. We use a DenseBlock U-Net architecture combined with a stream function formulation to inpaint missing velocity data. Our loss functions represent the relevant physical quantities: velocity, velocity Jacobian, vorticity, and divergence. Obstacles are treated as known priors, and each layer of the network receives the relevant information through concatenation with the previous layer's output. Our results demonstrate the network's capability for physics-aware completion tasks, and the presented ablation studies show the effectiveness of each proposed component. | [
"neural inpainting",
"fluid dynamics",
"flow data completion",
"physics-aware network"
] | Reject | https://openreview.net/pdf?id=BylldxBYwH | https://openreview.net/forum?id=BylldxBYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"G_AN7-BgEc",
"HygM1raHcS",
"r1gggNzRYH",
"B1ehNl2iYS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747721,
1572357337805,
1571853288510,
1571696692323
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2384/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2384/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2384/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors present a physics-aware models for inpainting fluid data. In particular, the authors extend the vanilla U-net architecture and add losses that explicitly bias the network towards physically meaningful solutions.\\n\\nWhile the reviewers found the work to be interesting, they raised a few questions/objections which are summarised below:\\n\\n1) Novelty: The reviewers largely found the idea to be novel. I agree that this is indeed novel and a step in the right direction.\\n2) Experiments: The main objection was to the experimental methodology. In particular, since most of the experiments were on simulated data the reviewers expected simulations where the test conditions were a bit more different than the training conditions. It is not very clear whether the training and test conditions were different and it would have been useful if the authors had clarified this in the rebuttal. The reviewers have also suggested a more thorough ablation study.\\n3) Organisation: The authors could have used the space more effectively by providing additional details and ablation studies.\\n\\nUnfortunately, the authors did not engage with the reviewers and respond to their queries. I understand that this could have been because of the poor ratings which would have made the authors believe that a discussion wouldn't help. The reviewers have asked very relevant Qs and made some interesting suggestions about the experimental setup. I strongly recommend the authors to consider these during subsequent submissions. \\n\\nBased on the reviewer comments and lack of response from the authors, I recommend that the paper cannot be accepted.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"I am not an expert in recent Navier-Stokes approaches, but note that there is a lot of recent work in physics aware modeling. Specifically the sections on e.g. loss seem to have a lot of prior work. It\\u2019s difficult for me to judge the exact amount of novelty in this paper with respect to the physics. With respects to the DL part it looks like it\\u2019s mainly minor modifications to the known U-net architecture.\", \"With respect to the introduction of the stream function branch, in table 1 one can observe that the error is actually higher than simply without it. The authors argue that \\u201cthe synthetic velocity field data has discretisation errors and it is not truly divergence free. Therefore, the approach with a single stream function branch cannot capture the divergent modes present on the original data\\u201d. I\\u2019m not sure exactly what to think of this .. in my opinion this means they should use a more accurate flow solver for their simulation as otherwise it is hard to draw any definite conclusion here and only speculation remains.\", \"I guess the inpainting works decently well, as is expected from previous image inpainting literature and the problem is essentially treated as image completion. However, I can\\u2019t really visually tell much of a difference between the images shown for the different approaches/components in figures 3-6. Again, this ties back to my previous component about lack of clarity of the improvement of the introduced individual terms.\", \"I find it a little bit weird that there is exactly one reference prior to 2012. Fluid dynamics isn\\u2019t exactly a field that was introduced 5 years ago. Also their paper ends before the 8 page limit. I think the authors could have used the remaining space more efficiently.\", \"I generally like the idea of including physical consistency when training to train a neural network for a respective task where this matters. I\\u2019m just not sure I have a clear take-away from this paper as the results don\\u2019t seem to carry a clear message of the proposed approaches resulting in definite improvements over more naive approaches or including only partial physical consistency.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper proposes a physics-aware variant of a U-Net network for completing missing flow field data. Most notably, the loss functions are motivated by fluid dynamics, which forces the network to remain more consistent with the governing laws.\", \"decision\": \"I found the paper and the idea very exciting. Injecting domain knowledge by forcing the output (or differentiable transformations thereof) to be consistent with the physics is a quite relevant and appealing idea, for which this paper constitutes a nice proof-of-concept. Framing the problem of completing missing flow data as an inpainting task is also original. However, the evaluation of the method does not study important aspects regarding its generalization. The description of the experimental protocol is also missing important information.\", \"further_arguments\": [\"I found the discussion in Section 4.2 around method (b) not convincing. I do not understand why the network should be penalized if it does not reproduce the mistakes of the original simulator/solver. Since `div u` should be 0, why not simply penalizing ||div u|| instead of having the loss of Eqn. 8? Isn't approach (b) the approach which is most 'physics-aware' and correct from a physics point of view?\", \"The experiments do not highlight whether the network actually just learn the training distribution or generalize by \\\"understanding\\\" the physics of the problem. A compelling experiment would have been to evaluate whether a network trained on a prior family of obstacles transfer properly to a different family (e.g., training on 6 spherical and 6 rectangle obstacles, but testing on fewer/more obstacles with other shapes).\", \"Similarly, could the network generalize to larger/smaller inputs? Once trained, can it work on grids smaller/larger than 128x96? If not, what do you recommend to do in practice?\", \"The description of the experimental protocol does not specify whether the method was evaluated on independent test data. More worrisome, section 4.2 even states that the MAE for Figure 2 is computed \\\"over the whole dataset\\\".\", \"The method is not compared against any domain specific baseline.\", \"While quite exciting, I am not confident the contribution is original enough from an ML point of view for ICLR, although it is certainly novel for fluid dynamics.\"], \"additional_feedback\": [\"Some results reported in Table 1 are quite close of each other. It would have made the experiments much stronger if uncertainties were also reported and discussed.\", \"Fig 3: I would have liked seeing the error maps of the methods. This would have been quite helpful to better tell them apart.\", \"Given my comments above, I am confident an 8th page could be put to good use.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"In this paper the authors adopt prior work in image inpainting to the problem of 2d fluid velocity field inpainting by extending the network architecture and using additional loss functions. Specifically, the U-net network is extended with a DenseBlock in the middle, and a separate branch of the network is added which predicts the stream function (a different representation of the velocity field which guarantees incompressibility). The additional losses are L1 for various derivatives of the flow field (Jacobian, divergence, vorticity). Experiments presented in the paper show that these new elements improve the flow field error compared to a baseline model originally developed for image inpainting. The suggested application for this model is filling gaps in experimental measurements that are missing or impossible to obtain, and where such a model could be computationally cheaper than an actual fluid solver.\", \"The paper discusses prior work at the appropriate level of depth, and specifies hyperparameters that were used for training.\", \"As is, I do not believe the paper fully meets the bar for novelty and potential impact for acceptance at ICLR (but it could perhaps be a good fit for a more specialized venue). I am specifically not convinced about the practical applicability of the results. The novelty of the approach also seems quite limited, and the findings do not seem particularly surprising or insightful (i.e. that extending the vanilla U-net architecture and adding losses that directly bias the network towards physically meaningful solutions is better than the baseline).\", \"Since all training and testing data was generated from simulation, it is unclear how well the networks would cope with real world measurement noise. Furthermore, the authors note that the flow field from the simulation is sometimes not divergence-free. It is surprising that this could be a problem to a level where it could impact evaluations. If it indeed is, then perhaps it would make sense to either simulate the flows with a denser grid (inpainting could still be done on sparsified results). I found it surprising that the authors chose instead to use Eq. 8 and \\\"force\\\" the network to learn a flow field from the solver which is known to not be strictly physically valid.\", \"It is also unclear to me that the masking schemes used in the experiments are relevant to actual measurements -- for instance, 2nd row of Fig. 3 or 1st row of Fig. 4 seem completely artificial. I appreciated the ablation study variants (but see also comments below for at least one more configuration that I believe should be discussed in addition to the existing ones). It would however be more informative (and important for potential practical applications) to include some sort of breakdown by mask type and flow field configuration/structures (steady, unsteady, wakes, jets, vortices, etc). The included images show the model does not capture some finer details of the velocity field (e.g. vortex structures in the last row of Fig. 3 and right column of Fig. 4; wakes behind small obstacles in Fig. 5-6, long, narrow, and fast jets in Fig. 
6), but it is unclear what impact the proposed extensions (DenseBlock, losses, and stream function branch) have on these structures. Similarly, interpolating missing data points with a fairly dense input where no points are more than a few pixels apart seems like a much easier problem than filling large empty spaces, and it would make sense to do separate analyses for these cases.\", \"Questions & suggestions for improvements:\", \"Has the impact of different weight combinations in A.2. been investigated? How was the ratio of 6:1 for empty:valid determined?\", \"The text says that in the stream function pathway, the features are passed \\\"through 4 densely connected convolution layers\\\". I was confused the first time I read that sentence, and only later I realized that this refers to a DenseNet-like pattern of connectivity. A reference to Huang et al. here or some other clarification would help.\", \"In the figures, please consider also showing the difference between the predictions and ground truth to make it easier to see which features of the flow field are predicted accurately, and which are not.\", \"Is there an L1 loss applied as well directly to the output of the stream prediction branch?\", \"What are the Reynolds numbers used in the simulations? How far from the original Re does the network generalize?\", \"When computing MAE, what are the units? What are the velocity magnitudes? Consider also reporting mean relative error.\", \"Impact of the various L1 deriv. losses seems negligible when the stream function is used, but is more visible when only velocity is being directly predicted. Please comment on why that might be.\", \"Why does (f) (Jacobian only) work better than (e) and (g)?\", \"It is not clear that that the effect of the additional L1 losses is truly cumulative. Please consider testing just (u, DenseBlock, L_div).\", \"It would be an interesting extension to include a non-ML baseline, which could be compared against the current results in terms of flow field quality and computational cost.\", \"Can the network predictions deviate from the known (non-masked) values in the input? If so, please consider including a breakdown of the evaluation that shows how much the error varies between the \\\"valid\\\" and \\\"empty\\\" areas.\"]}"
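As background to the stream-function questions raised in both reviews: in 2D, deriving the velocity as u = dψ/dy, v = -dψ/dx makes the field divergence-free by construction, since div(u, v) = ψ_yx - ψ_xy = 0. A short numerical check (our illustration, independent of the paper's network):

```python
import numpy as np

H, W = 96, 128
y, x = np.mgrid[0:H, 0:W].astype(float)
psi = np.sin(0.1 * x) * np.cos(0.07 * y)   # an arbitrary smooth stream field

u = np.gradient(psi, axis=0)               # u = d psi / d y (rows = y)
v = -np.gradient(psi, axis=1)              # v = -d psi / d x (cols = x)

div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
print(np.abs(div).max())                   # essentially zero by construction
```

This is exactly why a single stream-function branch cannot reproduce the small divergent modes present in the synthetic data, as the authors argue in the quoted rebuttal passage.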
]
} |
BkeyOxrYwH | Imagine That! Leveraging Emergent Affordances for Tool Synthesis in Reaching Tasks | [
"Yizhe Wu",
"Sudhanshu Kasewa",
"Oliver Groth",
"Sasha Salter",
"Li Sun",
"Oiwi Parker Jones",
"Ingmar Posner"
] | In this paper we investigate an artificial agent's ability to perform task-focused tool synthesis via imagination. Our motivation is to explore the richness of information captured by the latent space of an object-centric generative model - and how to exploit it. In particular, our approach employs activation maximisation of a task-based performance predictor to optimise the latent variable of a structured latent-space model in order to generate tool geometries appropriate for the task at hand. We evaluate our model using a novel dataset of synthetic reaching tasks inspired by the cognitive sciences and behavioural ecology. In doing so we examine the model's ability to imagine tools for increasingly complex scenario types, beyond those seen during training. Our experiments demonstrate that the synthesis process modifies emergent, task-relevant object affordances in a targeted and deliberate way: the agents often specifically modify aspects of the tools which relate to meaningful (yet implicitly learned) concepts such as a tool's length, width and configuration. Our results therefore suggest that task relevant object affordances are implicitly encoded as directions in a structured latent space shaped by experience. | [
"Affordance Learning",
"Imagination",
"Generative Models",
"Activation Maximisation"
] | Reject | https://openreview.net/pdf?id=BkeyOxrYwH | https://openreview.net/forum?id=BkeyOxrYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BSKqR4y2a-",
"SyeF38jhjH",
"SygCHuJssB",
"Syx08TSOiB",
"Ske2zlrdjH",
"BklfCyrdiS",
"SJgplR4usr",
"HklqFh4diH",
"SJlp49V_sr",
"rygLWwJ6Yr",
"rJe7toL2Fr",
"SJxPoldPtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747690,
1573856944610,
1573742662348,
1573571925701,
1573568532080,
1573568457899,
1573567988725,
1573567617971,
1573567029029,
1571776254040,
1571740539089,
1571418271507
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2383/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2383/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2383/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2383/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2383/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2383/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2383/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2383/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2383/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2383/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2383/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper investigates the task of learning to synthesize tools for specific tasks (in this case, a simulated reaching task). The paper was reviewed by 3 experts and received Reject, Weak Reject, and Weak Reject opinions. The reviews are very encouraging of the topic and general approach taken by the paper -- e.g. R3 commenting on the \\\"coolness\\\" of the problem and R1 calling it an \\\"important problem from a cognitive perspective\\\" -- but also identify a number of concerns about baselines, novelty of proposed techniques, underwhelming performance on the task, whether experiments support the conclusions, and some missing or unclear technical details. Overall, the feeling of the reviewers is that they're \\\"not sure what I am supposed to get out of the paper\\\" (R3). The authors posted responses that addressed some of these issues, in particular clarifying their terminology and contribution, and clearing up some of the technical details. However, in post-rebuttal discussions, the reviewers still have concerns with the claims of the papers. In light of these reviews, we are not able to recommend acceptance at this time, but I agree with reviewers that this is a \\\"cool\\\" task and that authors should revise and submit to another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for a detailed reply to the comments.\", \"comment\": \"Thank you for your detailed explanations to the concerns.\\nI think this is certainly an interesting topic. However, I still think the results demonstrated in the current work is not strong enough to convince people. So, I would stick with the current score.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thanks for your detailed responses to my comments. As I mentioned in my review, I do think this is a very interesting and important research direction and I would love to see robust planning for synthesizing tools, so I hope you continue this line of research. However, I still think this particular set of results needs some work.\\n\\nBased on your response (and the paper), it seems to me that you'd like to be able to make two claims:\\n\\n1. MONet learns about which properties of objects make them useful tools when it is trained to perform a classification task about which one out of multiple tools will solve a task. (\\\"task relevant object affordances are implicitly encoded as directions/trajectories in a structured latent space shaped by experience\\\").\\n\\n2. Not only does the latent representation encode information about affordances, this information can be effectively used for planning (\\\"the synthesis process modifies emergent, task-relevant object affordances in a targeted and deliberate way\\\").\", \"my_concerns_with_these_claims_are\": \"1. This is a relatively weak hypothesis. If an agent is trained to perform the classification task, and it can do this task well, what would it look like to *not* have information about object affordances encoded in its latent space? It is interesting that it seems to ignore information such as color, but that does not seem not be the main focus of the paper (and is only shown qualitatively, not quantitatively). If you would like to make this claim more strongly, I think more detailed analysis of the latent space itself is warranted, as would be comparisons to other models which represent the latent space differently.\\n\\n2. In my opinion, this is the more interesting hypothesis (and the one that the paper seems to be most concerned with). However, it does not seem to me that the experimental results support it. Specifically, the fact that the experimental results are quite weak---for example, that the model performs worse in the case when a feasible tool is already given, and that it does not generalize well even to cases where the tools are familiar---suggests to me in fact that the learned representations are not actually that useful for planning (or at least, not for planning via activation maximization).\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Q: Overall, I am not quite sure what I am supposed to get out of the paper. Is it that \\u201ctask relevant object affordances are implicitly encoded as directions in a structured latent space shaped by experience\\u201d? If so, then the results do not support this claim and so I am not sure what to take away. Is it that the latent space encodes information about what makes a tool feasible? If so, then this is a bit of a weak argument---of *course* it must encode this information if it is able to do the classification task at all. Is it that tool synthesis is a challenging problem? If so, then the lack of strong or canonical baselines makes it hard to evaluate whether this is true (and the navigation-only synthesis task also undermines this a bit).\", \"a\": \"We thank the reviewer for these constructive suggestions, we will clarify these notions in the next iteration.\", \"s\": \"It would be helpful to more clearly explain scene types. Here is some suggested phrasings: in-sample = familiar scenes with familiar tools, interpolation = novel scenes with familiar tools, extrapolation = novel scenes with novel tools. In Table 1 it would be helpful to indicate which scene types are which (in-sample, interpolation, extrapolation).\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Q: Second, given that the \\u201csynthesis\\u201d task is more like a navigation task, the results are somewhat disappointing. When provided with a feasible solution, the model actually gets *worse* even in some of the in-sample scenes that it has seen during training (e.g. scene types C and D) which suggests that it hasn\\u2019t actually learned a good generative model of tools. Generalization performance is pretty bad across the board and is only slightly better than random, which undermines the claim in the abstract that \\u201cOur experiments demonstrate that the synthesis process modifies emergent, task-relevant object affordances in a targeted and deliberate way\\u201d. While it\\u2019s clear there is successful synthesis in some cases, I am not sure that the results support the claim that the synthesis is \\u201ctargeted\\u201d or \\u201cdeliberate\\u201d given how poor the overall performance is.\", \"a\": \"Thank you for this insightful comment. At the time, we aimed to keep comparisons limited to ablations in order to verify the efficacy of the proposed architecture and to avoid confounders. A solution that uses ground-truth symbolic/physical representations of objects and tasks would be a good upper-bound baseline. We note that the Pix2Pix model can also be used to generate realistic feasible tools if we synthesis the corresponding feasible tools as additional supervision although it can not turn an infeasible tool to a feasible one. We will evaluate these in the next iteration.\", \"q\": \"While I do appreciate the comparisons that are in the paper (to a \\u201cRandom\\u201d version of TasMON that moves in a random direction in the latent space, and to \\u201cFroMON\\u201d agent which is not allowed to backpropagate gradients from the classification loss into MONet), these comparisons are not particularly meaningful. The difference between FroMON performance and TasMON tool imagination performance (I didn\\u2019t test tool utility) across tasks is not statistically significant (z(520, 544)=-0.8588, p=.38978), so I don\\u2019t think it is valid to claim that \\u201ca task-aware latent space can still provide benefits.\\u201d The Random baseline is a pretty weak baseline and it would be more interesting to compare to an alternative plausible architecture (for example, which doesn\\u2019t use a structured latent space, or which doesn\\u2019t have a perceptual frontend and operates directly on a symbolic representation of the tools/scene).\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We would like to thank Reviewer 3 for their review and constructive suggestions. Our responses:\", \"q\": \"First, while the task can be construed as a tool synthesis task, it doesn\\u2019t come across to me as very ecologically valid. In fact, the task seems to be more like a navigation task than a tool synthesis task: what\\u2019s required is simply to draw an unbroken line from one part of the scene to another, rather than actually generate a tool that has to be manipulated in an interesting way. Navigation has been studied extensively, while synthesis of tools that can be manipulated has not, which makes this task both not very novel and disappointing in comparison to what more ecologically-valid tool synthesis would look like. For example, consider a variation of the task where you would have to start the tool at the red region and move it to the green region. Many of the tools used here would become invalid since you wouldn\\u2019t actually be able to fit them through the gaps (e.g. Figure 2E).\", \"a\": \"While our work is firmly rooted in the literature on tool use (e.g. [1][2][3]) we agree that our problem setup is also reminiscent of planning and navigation tasks. We see this as an opportunity to apply our approach to these fields rather than an indication that our work is framed in an inappropriate context. Also, our work shows that the appearance of an object can be planned. The task-relevant variations can be captured precisely and manipulated deliberately via a performance predictor.\\n\\nPlease see our discussion 1) about the explanation of valid tool.\\n\\nWe are unaware of other papers using our approach, exploiting a performance predictor to optimise a latent code and generate potential solutions having been explored in these domains. We would be grateful if the reviewer could point us to such work so we can incorporate it into our related work section.\\n\\nWe note that the dataset is designed to contain three task-relevant variations (affordances), e.g. length, width, shape (hook-length) and some other task-irrelevant variations, e.g. colour and location of the tool. The model is expected to capture the task-relevant variations and neglect the irrelevant ones. Moreover, given a specific task that exposes constraints on one particular or a combination of affordances, the model should be able to not only understand which kind of affordance needs to modified but also guide the traversal in the latent space along a trajectory. Traversing a \\u201ctrajectory\\u201d (what we previously called a \\u201cdirection\\u201d) in the latent space corresponds exactly to the modification of one kind of affordance in the image space, as depicted in Fig 4. To the best of our knowledge, we are the first to link the concept of affordance in this way to following trajectories in latent space.\\n\\n[1] Nathan J. Emery and Nicola S. Clayton. Tool use and physical cognition in birds and mammals. Current Opinion in Neurobiology, 19(1):27 \\u2013 33, 2009.\\n\\n[2] Jackie Chappell and Alex Kacelnik. Tool selectivity in a non-primate, the New Caledonian crow (Corvus moneduloides). Animal Cognition, 5(2):71\\u201378, 2002. \\n\\n[3] Jackie Chappell and Alex Kacelnik. Selection of tool diameter by New Caledonian crows Corvus moneduloides. Animal Cognition, 7(2):121\\u2013127, 2004.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We would like to thank Reviewer 2 for their review and constructive suggestions. Our responses:\", \"q\": \"Why MoNet and multiple tools in the toolkit. A simplified version could drive the point as well. Using MoNet to decompose tools from a toolkit is nice. However, is it really necessary to drive the main point (an auxillary loss of success prediction can shape the latent space of a VAE model) in this paper. In a simplified version, where there is only one tool in the toolkit, one may not need MoNet (maybe still need it for object-background separation?) May the authors comment why multiple tools in the toolkit is important?\", \"a\": \"This is a misunderstanding. The main point of our paper is in fact task relevant object affordances are implicitly encoded as [trajectories] in a structured latent space shaped by experience and that we can access them with optimisation of the latent encoding via a high-level performance predictor. Our results from Tables 1 and 2 indicate that TasMON and FroMON have similar performance. We feel that the following statement in our paper might have caused this misunderstanding: \\u201cTasMON outperforms FroMON in tool utility prediction (Table 2) and tool imagination in most tasks, suggesting that, although the predictor is powerful enough to guide the imagination through an already existing underlying structure of toolkit-representations, a task-aware latent space can still provide benefits.\\u201d We will clarify this in the next version. Thank you for helping us to refine our presentation.\\n\\nWe also thank the reviewer for pointing out the typo; we have corrected it.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We would like to thank Reviewer 1 for their review and constructive suggestions. Our responses:\", \"q\": \"The experimental results, as just mentioned, are not very impressive, especially given the simplified setup. There are no results on other tasks except reaching. In addition, comparisons with published methods are missing. For example, what if I just train a Pix2Pix from the inputs to successful reaches? That sounds like a reasonable baseline and should be compared with.\", \"a\": \"We investigate reaching tasks because recent work [1] in cognitive science shows that crows do not only use tools but also improvise better ones to reach the food in a puzzle box. Our reaching-task design tries to model affordance-learning with a similar experiment, inspired by but not exactly the same as that confronted by crows. To the best of our knowledge, ours is the first work to perform tool-imagination utilising only learning signals from reconstruction and weak supervision from true/false-feasibility of the toolkit given a specific task, and thus were unable to compare with other published baselines. Specifically, in order to compare with a conditional generative model like Pix2Pix we would need paired data of infeasible tools and corresponding feasible ones. Our approach, in contrast, explicitly does not require such alignment.\\n\\nWe could manually add feasible tools to the dataset but that would change the task. Also, human-defined pairs of feasible/infeasible tools would introduce additional (and, we posit, unnecessary) inductive-biases, as compared to the weakly supervising true/false-task feasibility signal that we used. There are many ways in which an infeasible tool might be made feasible. So hand-designing feasible correspondents to infeasible tools would restrict the problem space. In contrast, our approach does not rely on hand-defined correspondence - i.e. it creates novel tools, based on the affordance knowledge gleaned via only unsupervised learning and success/failure signals.\", \"the_suggested_comparison_with_conditional_vae_models_and_conditioning_activation_maximization_naturally_leads_to_the_question\": \"Why can't we use conditional VAE/GAN models on this problem?\\n\\nWe think the answer is that conditional VAE/GAN models only learn a single generative function which maps condition $x$ to the target $y$, i.e. $p(y \\\\mid z , x)$. Conditioning activation maximization, in contrast, learns a mapping from the condition $x$ to a function $f$ that takes any tool as input and outputs a feasible tool for that conditioned task $x$. This latter case is reminiscent of meta-learning modulo the fact that it is not few-shot. Therefore, conditional VAE/GAN must be trained with stronger supervision.\\n\\n[1] Bayern, A.M.P.v., Danel, S., Auersperg, A.M.I. et al. Compound tool construction by New Caledonian crows. Sci Rep 8, 15676 (2018) doi:10.1038/s41598-018-33458-z\"}",
"{\"title\": \"General comments to the reviewer's questions\", \"comment\": \"Thanks to the reviewers for their detailed comments.\\n\\nFirst, we would like to address a number of misunderstandings and general concerns. Upon reflection, it is clear we could have done better at communicating some of the key ideas in our paper, and we apologise for any confusion this may have caused; thank you for helping us improve on future iterations. With regard to each of the following points, we shall endeavour to articulate our positions more clearly. \\n\\n1) What is our reaching task?\\nThe task is not about moving an object in a plane, nor is it exactly a navigation task aimed at finding a path from one point to another (although it can certainly be framed this way). Our inspiration was as a \\u2018puzzle-fitting\\u2019 task, involving a robot, standing behind a red line, placing a tool onto the floor from above in such a way that it avoids any blue obstacles while touching the green dot. We note that this top-down view is distinct from threading the tool around or through any obstacles in 2D. Additionally, the task was to find the object that could reach the goal in this way, not to actually find the optimal pose that would result in a successful reach.\\n\\n2) What do we mean by \\u2018structured latent space\\u2019?\\nWe mean that the distribution of embeddings in the latent space is influenced by architectural design choices and constraints imposed by the network\\u2019s loss functions. To be more precise, in latent spaces that have a bottleneck and use Gaussian prior, the semantics represented in the embeddings tend to vary smoothly, as the embeddings themselves change.\\n\\n3) What do we mean by \\u2018shaped by experience\\u2019?\\nBy \\u2018experience\\u2019 we mean not just the success or failure feasibility targets, but also the toolkit and task inputs. So the generator psi\\u2019 learns from the experience of seeing and reconstructing tools, while the performance predictor learns from the \\u2018experience\\u2019 of succeeding or failing at tasks. \\n\\n4) What do we mean by \\u2018directions in latent space\\u2019?\\nThis term in particular seems to have caused confusion. We do not mean \\u2018directions\\u2019 to denote straight-line vectors in latent space from infeasible tools to feasible tools. Instead our use of this term was as a \\u2018local direction\\u2019. That is, in the region around an embeded point, there appears to be vectors that modify some semantic properties of the embedded tool (again \\u201clocal directions\\u201d). Globally, a better term might be \\u2018trajectory\\u2019 or even \\u2018path\\u2019. More abstractly, we demonstrate that by using a task-based performance predictor to optimise this embedding we can implicitly discover and modify a subset of semantic properties that are useful for task performance while ignoring the others. \\n\\nIn hindsight, some of our writing in the paper can be misunderstood as stating that we see straight-line paths in latent space from infeasible to feasible tools. 
In situations where we mean or imply the complete path, we will change the word to \\u2018trajectory\\u2019.\\n\\n5) What is our main contribution?\\nWe demonstrate that (i) task-relevant object affordances are implicitly encoded as trajectories in the latent space and (ii) that these can be leveraged by traversing along a trajectory driven by a task-classifier performing conditioning activation maximization.\\n\\nImportantly, our work shows that the appearance of an object can be smoothly modulated according to discrete, high-level task descriptions (e.g. a classifier representing task success).\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an algorithm that learns to synthesize tools for the task of reaching. The main idea is to first use unsupervised segmentation algorithm to decompose an image of a set of tools into individual objects; then, with a trained feasibility model, search through the latent space of encoded tools to 'imagine' new ones for reaching.\\n\\nThis paper has clear strengths and weaknesses. It's studying an important problem from a cognitive perspective; it also proposes a novel model for the task, building upon SOTA models. However, the current problem formulation and experiment setup are not well justified, and the experiments are quite limited. I lean toward rejection.\\n\\nMost importantly, while this paper argues for the importance of an object-centric representation, it conducts most of its search in the pixel space (both as input to the model, and as the output of the imagination). This leads to some unnatural and unphysical results: in the teaser figure, it's true that the final, imagined tool reaches the target; however, the tool itself shouldn't be able to pass the gap/hole on the wall, due to its angular shape. Objects, in essence, have shapes and physical occupancy. Without modeling physics, it's unclear how useful the object-centric representation is.\\n\\nImagination is done by searching over the latent space, which limits the model's generalization power to novel tools or new configurations. This is revealed in the results on case H, where the model doesn't work at all.\\n\\nThe experimental results, as just mentioned, are not very impressive, especially given the simplified setup. There are no results on other tasks except reaching. In addition, comparisons with published methods are missing. For example, what if I just train a Pix2Pix from the inputs to successful reaches? That sounds like a reasonable baseline and should be compared with.\\n\\nDue to all these limitations, I lean toward rejection.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an architecture for synthesizing tools to be used in a reaching task. Specifically, during training the agent jointly learns to segment an image of a set of three tools (via the MONet architecture) and to classify whether one the tools will solve the given scene. At test time, one of the three tools is selected based on which seems most feasible, and then gradient descent is used to modify the latent representation of the tool in order to synthesize a new tool to (hopefully) solve the scene. The paper demonstrates that this approach can achieve ok performance on familiar scenes with familiar tools, but that it fails to generalize when exposed to unfamiliar scenes or unfamiliar tools. The paper reports a combination of the quantitative results showing that optimizing the latent space can lead to successful synthesis in some cases, and qualitative results showing that the synthesized tools change along interpretable dimensions such as length, width, etc. The combination of these results suggest that the model has learned something about which tool dimensions are important for being able to solve the types of reaching tasks given in the paper.\\n\\nWhile I think this paper tackles a very interesting, important, and challenging problem, I unfortunately feel it is not ready for publication at ICLR and thus recommend rejection. Specifically, (1) neither the particular task, results, or model are not very compelling, (2) there are no comparisons to meaningful alternatives, and (3) overall I am not quite sure what conclusions I should draw from the paper. However, given the coolness of the problem of tool synthesis, I definitely encourage the authors to continue working on this line of work!\\n\\n1. The task, results, and model are not very compelling. Any of these three things alone would not necessarily be a problem, but given that all three are true the paper comes across as a bit underwhelming.\\n \\n- First, while the task can be construed as a tool synthesis task, it doesn\\u2019t come across to me as very ecologically valid. In fact, the task seems to be more like a navigation task than a tool synthesis task: what\\u2019s required is simply to draw an unbroken line from one part of the scene to another, rather than actually generate a tool that has to be manipulated in an interesting way. Navigation has been studied extensively, while synthesis of tools that can be manipulated has not, which makes this task both not very novel and disappointing in comparison to what more ecologically-valid tool synthesis would look like. For example, consider a variation of the task where you would have to start the tool at the red region and move it to the green region. Many of the tools used here would become invalid since you wouldn\\u2019t actually be able to fit them through the gaps (e.g. Figure 2E).\\n \\n- Second, given that the \\u201csynthesis\\u201d task is more like a navigation task, the results are somewhat disappointing. When provided with a feasible solution, the model actually gets *worse* even in some of the in-sample scenes that it has seen during training (e.g. 
scene types C and D) which suggests that it hasn\\u2019t actually learned a good generative model of tools. Generalization performance is pretty bad across the board and is only slightly better than random, which undermines the claim in the abstract that \\u201cOur experiments demonstrate that the synthesis process modifies emergent, task-relevant object affordances in a targeted and deliberate way\\u201d. While it\\u2019s clear there is successful synthesis in some cases, I am not sure that the results support the claim that the synthesis is \\u201ctargeted\\u201d or \\u201cdeliberate\\u201d given how poor the overall performance is.\\n \\n- Third, the model/architecture is a relatively straightforward combination of existing components and is highly specialized to the particular task. As mentioned above, this wouldn\\u2019t necessarily be a problem if the task were more interesting (i.e. not just a navigation task) and if the results were better. I do think it is cool to see this use of MONet but I\\u2019m skeptical that the particular method of optimizing in the latent space is doing anything meaningful. While there is prior work that has optimized the latent space to achieve certain tasks (as is cited in the paper), there is also a large body of work on adversarial examples which demonstrate that optimizing in the latent space is also fraught with difficulty. I also suspect this is the reason why the results are not particularly good.\\n \\n2. While I do appreciate the comparisons that are in the paper (to a \\u201cRandom\\u201d version of TasMON that moves in a random direction in the latent space, and to \\u201cFroMON\\u201d agent which is not allowed to backpropagate gradients from the classification loss into MONet), these comparisons are not particularly meaningful. The difference between FroMON performance and TasMON tool imagination performance (I didn\\u2019t test tool utility) across tasks is not statistically significant (z(520, 544)=-0.8588, p=.38978), so I don\\u2019t think it is valid to claim that \\u201ca task-aware latent space can still provide benefits.\\u201d The Random baseline is a pretty weak baseline and it would be more interesting to compare to an alternative plausible architecture (for example, which doesn\\u2019t use a structured latent space, or which doesn\\u2019t have a perceptual frontend and operates directly on a symbolic representation of the tools/scene).\\n \\n3. Overall, I am not quite sure what I am supposed to get out of the paper. Is it that \\u201ctask relevant object affordances are implicitly encoded as directions in a structured latent space shaped by experience\\u201d? If so, then the results do not support this claim and so I am not sure what to take away. Is it that the latent space encodes information about what makes a tool feasible? If so, then this is a bit of a weak argument---of *course* it must encode this information if it is able to do the classification task at all. Is it that tool synthesis is a challenging problem? If so, then the lack of strong or canonical baselines makes it hard to evaluate whether this is true (and the navigation-only synthesis task also undermines this a bit).\", \"some_additional_suggestions\": \"It would be good to include a discussion of other recent work on tool use such as Allen et al. (2019) and Baker et al. (2019), as well as on other related synthesis tasks such as Ha (2018) or Ganin et al. 
(2018).\\n \\nThe introduction states that \\u201ctool selection and manufacture \\u2013 especially once demonstrated \\u2013 is a significantly easier task than tool innovation\\u201d. While this may be true, it is a bit misleading in the context of the paper, as the agent is doing something more like tool selection and modification rather than tool innovation (and actually the in-sample scenes are more like \\u201cmanufacture\\u201d, which the agent doesn\\u2019t always even do well on).\\n \\nIt would be helpful to more clearly explain the scene types. Here are some suggested phrasings: in-sample = familiar scenes with familiar tools, interpolation = novel scenes with familiar tools, extrapolation = novel scenes with novel tools.\\n \\nI was originally confused about how psi\\u2019 knew where to actually place the tool and at what orientation, and about the background part of the rendering process shown in Figure 1. I realized after reading the supplemental that this is not done by the agent itself but by separate code that tries to find the orientation and position of the tool. This should be explained more clearly in the main text.\\n \\nIn Table 1 it would be helpful to indicate which scene types are which (in-sample, interpolation, extrapolation).\\n \\nAllen, K. R., Smith, K. A., & Tenenbaum, J. B. (2019). The Tools Challenge: Rapid Trial-and-Error Learning in Physical Problem Solving. arXiv preprint arXiv:1907.09620.\\n \\nBaker, B., Kanitscheider, I., Markov, T., Wu, Y., Powell, G., McGrew, B., & Mordatch, I. (2019). Emergent tool use from multi-agent autocurricula. arXiv preprint arXiv:1909.07528.\\n \\nGanin, Y., Kulkarni, T., Babuschkin, I., Eslami, S. M., & Vinyals, O. (2018). Synthesizing programs for images using reinforced adversarial learning. arXiv preprint arXiv:1804.01118.\\n \\nHa, D. (2018). Reinforcement learning for improving agent design. arXiv preprint arXiv:1810.03779.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors constructed an interesting dataset named reaching task, where the model need to predict if the given toolkit is able to solve the corresponding task or not. They showed that combining variational auto-encoding with an auxiliary loss (in this case, the predictor of solving the tasks, could help shaping a latent space where affordance of the tool is encoded as directions in such latent space.) Using the activation maximisation technique (which was phrased as the imagination process), they are able to modify the input tools into the ones that is suitable to solve the corresponding task. I found the idea of using an auxiliary loss when training a VAE may cause the latent space coding direction change novel and interesting. However, I do not find the authors has a strong case of proven it is the case in this manuscript.\\n\\n1. The performance difference between FroMON and TasMON is not clear.\\n The most critical control model in this paper is the FroMON (frozen MoNet). In this control model, the gradient from the success predictor is not flowing back into the VAE encoder. So, based on the author's assumption, it should not be benefit of having the tool affordance directions in the latent space. However, in the main results in Table 1. We found the performance between FroMON and TasMON is not quite clear. This is particularly true for the Scenario E, F, G (the interpolation tasks), which is more about generalization and is more important.\\n\\n2. Are the affordance 'directions' in the latent space?\\n The authors used activation maximisation approach to travel in the latent space. My understanding of the approach is it follow the gradient to maximise the predictor's success prediction in an iterative approach. So, at each optimization step, the z_im can move in different direction. This seems to not fit as a sense of 'direction', as I would assume it is moving along a particular line (not necessarily axis aligned.). Maybe this does explain whey FroMON and TasMon perform equally well. As long as the possible shapes is encoded in a smooth way in the latent space, the activation maximisation could find a path toward the target object. Unfortunately, is that a 'direction'? Would it be possible to train an optimization algorithm that is only allow to move in a linear direction, and see how well that work?\\n\\n3. Why MoNet and multiple tools in the toolkit. A simplified version could drive the point as well.\\nUsing MoNet to decompose tools from a toolkit is nice. However, is it really necessary to drive the main point (an auxillary loss of success prediction can shape the latent space of a VAE model) in this paper. In a simplified version, where there is only one tool in the toolkit, one may not need MoNet (maybe still need it for object-background separation?) May the authors comment why multiple tools in the toolkit is important?\", \"minor\": \"1. typo: page 1, (2nd to the last line). '...that habitual tool use cannot in and off itself ..' --> of\\n2. A simple video showing how the tool shape change sequentially during the activation maximisation process would be interesting.\"}"
]
} |
BJxkOlSYDH | Provable Filter Pruning for Efficient Neural Networks | [
"Lucas Liebenwein",
"Cenk Baykal",
"Harry Lang",
"Dan Feldman",
"Daniela Rus"
] | We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network. Our algorithm uses a small batch of input data points to assign a saliency score to each filter and constructs an importance sampling distribution where filters that highly affect the output are sampled with correspondingly high probability.
In contrast to existing filter pruning approaches, our method is simultaneously data-informed, exhibits provable guarantees on the size and performance of the pruned network, and is widely applicable to varying network architectures and data sets. Our analytical bounds bridge the notions of compressibility and importance of network structures, which gives rise to a fully-automated procedure for identifying and preserving filters in layers that are essential to the network's performance. Our experimental evaluations on popular architectures and data sets show that our algorithm consistently generates sparser and more efficient models than those constructed by existing filter pruning approaches. | [
"theory",
"compression",
"filter pruning",
"neural networks"
] | Accept (Poster) | https://openreview.net/pdf?id=BJxkOlSYDH | https://openreview.net/forum?id=BJxkOlSYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"UuAtqjBE4",
"SkgCaS9hsB",
"rkejpAh_iB",
"HylTqAh_iS",
"HyeztAh_jB",
"BJg5eChdsr",
"B1ehHpGjqS",
"S1e1rr3e9S",
"B1gqAUOxqr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747661,
1573852614299,
1573600963109,
1573600916781,
1573600890333,
1573600754022,
1572707652486,
1572025655008,
1572009681759
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2382/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2382/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2382/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2382/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2382/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2382/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2382/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2382/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a sampling-based approach for generating compact CNNs by pruning redundant filters. One advantage of the proposed method is a bound for the final pruning error.\\n\\nOne of the major concerns during review is the experiment design. The original paper lacks the results on real work dataset like ImageNet. Furthermore, the presentation is a little misleading. The authors addressed most of these problems in the revision.\\n\\nModel compression and purring is a very important field for real world application, hence I choose to accept the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General Response -- ImageNet Results\", \"comment\": \"We have updated our paper with a revised version containing the results of our ImageNet evaluations. We believe that the new results highlight the versatility of our approach in that it is readily applicable to large-scale problems out-of-the-box. In other words, our new results on ImageNet were obtained by running our algorithm without any tuning of the hyper-parameters; this stands in contrast to existing approaches that generally require tedious, task-specific intervention or manual parameter tuning [1-2].\\n\\nMore specifically, we evaluated and compared our algorithm in two scenarios on ResNet18/50/101 models trained on ImageNet: (i) prune-only experiments where the retraining step is omitted after pruning (see Fig. 6 in Sec. E.4) and (ii) iterative prune-retrain with a limited number (i.e., 2-3) of iterations (see Table 6 in Sec. E.4). For the former prune-only scenario, our algorithm significantly outperformed competing pruning approaches in all of the considered models. This suggests that our algorithm\\u2019s baseline pruning effectiveness is better than that of the competing methods. \\n\\nFor the latter prune-retrain scenario (with limited iterations), we observe from Table 6 that our approach is competitive with the results obtained by the various state-of-the-art pruning algorithms. We would like to highlight that our algorithm\\u2019s performance was competitive despite significant time (2-3 days) and resource constraints (8 NVIDIA Tesla V100). We conjecture that given more time and resources to conduct additional train-prune iterations for ImageNet, the relative performance of our algorithm would be even more favorable and would resemble the comparisons on CIFAR10 (see Table 8).\\n\\n\\n[1] Pruning Filters for Efficient Convnets\\n(https://arxiv.org/abs/1608.08710)\\n[2] Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks\\n(https://arxiv.org/abs/1808.06866)\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your consideration of our paper and for your thoughtful comments. We would like to first clarify that the Lottery Ticket Hypothesis [1] does not claim that the predictive accuracy of the pruned network is necessarily better than the unpruned network (pg. 2, para. 2 of [1]). In fact, from the figures of [1] (e.g., Fig. 3), we can see that the performance of the pruned network may improve at moderate sparsities (i.e., pruning ratios). On the other hand, in the regime of extreme sparsities the significantly compactified network may not perform as well as the original network [1]. This makes intuitive sense, since if it were the case that the pruned network\\u2019s performance simply improves as we prune-and-retrain iteratively, we would end up with pruned networks with only a handful of parameters that outperform the original network.\\n\\nThis phenomenon occurs for, e.g., ResNet56, ResNet110, VGG16, and DenseNet22 in Fig. 3 of our revision. The unifying trend in all of these plots is that the pruned model tends to outperform the original model in the regime of moderate sparsities, however, as we compress the model further down and consider higher sparsities, we see that the pruned model\\u2019s performance can no longer match (or outperform) the original model\\u2019s performance.\\n\\nThe reported test error of the pruned model in Tables 1 and 2 of our submission are slightly above that of the original model because these tables focused on reporting the sparsest model possible that was within 0.5% of the original model\\u2019s accuracy (i.e., commensurate accuracy). To further contextualize the performance of our method, we have added an additional table (Table 8 in the appendix) that reports statistics pertaining to the pruned model that (i) is within 0.5% of the original model\\u2019s accuracy (akin to Tables 1 and 2), (ii) matches the original accuracy, and (iii) achieves the highest accuracy possible -- which in some cases, is higher than that of the original model. As our results show, the trade-off between predictive accuracy of the model and its size is an application-dependent choice that is up to the practitioner to decide.\\n\\n[1] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (https://arxiv.org/pdf/1803.03635.pdf)\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your in-depth review of our paper and helpful suggestions in improving the clarity and quality of our work. Our revision contains a streamlined and improved Sec. 2 that includes further clarifying text \\u2014 such as the equation of a layer \\u2014 and additional explanation of the symbols in Sec. 2.1. We have also improved Fig. 1 and have made explicit references to it in Sec. 2 in order to better illustrate the denotations of the symbols used and the pruning pipeline.\\n\\nWe also thank the reviewer for the references to related work. We have added additional text in the introduction and related works sections of our paper that further relate back to the work of [1] and [2].\\n\\n[1] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (https://arxiv.org/pdf/1803.03635.pdf)\\n[2] The State of Sparsity in Deep Neural Networks \\n(https://arxiv.org/abs/1902.09574)\"}",
"{\"title\": \"Response to AnonReviewer5\", \"comment\": \"Thank you for your insightful and constructive feedback. Please find our specific comments below.\\n\\n1) \\nOur initial experimental results were computed on a desktop computer to highlight that the method we propose is practical and easy to use for a wide range of data sets and network models. Since the initial submission, we have conducted more extensive experiments on the Cloud with a refined iterative prune-retrain strategy including larger models for CIFAR10 (see Fig. 3 and Tables 2 and 8 of our revision). We are currently running experiments on the ImageNet data set evaluated on various ResNet architectures. \\n\\nOur preliminary results (single prune-retrain cycle) of the experiments conducted on ResNet101 show that we can prune 55.40% (PR) of the network and achieve a FLOP Reduction (FR) of 50.80% with only a 0.69% increase in the Top-5 classification error on ImageNet. We are currently running refined ImageNet experiments with iterative prune-retrain cycles (as outlined in Sec. 4.1) and we will update our submission within the next couple of days with our results. We would also like to note that VGG and ResNets are commonly used for baseline evaluations and comparisons in contemporary network pruning literature [1-3]. Please see our general response for a list of updates to the results section.\\n\\nThe network used for the real-time regression task (see Sec. 4.5) of [4] is a lightweight model for use in autonomous driving scenarios. Our paper contains evaluations of our compression algorithm on this lightweight network (see Fig. 4). We would be happy to include results for other (lightweight) models as time permits. Please let us know if you have any particular models in mind for evaluation.\\n\\n2)\\nWe have clarified the exposition to highlight that our original (and revised) submission already contains comparisons for selecting the top-k filters with the highest norms. The Filter Thresholding (FT) and SoftNet algorithms that are plotted in the figures and reported in the tables in Sec. 4 correspond precisely to this approach (also see Sec. E.1 - Comparison Methods of the appendix). In particular, the FT algorithm picks the k filters with the largest \\\\ell_2-norm (i.e., top-k \\\\ell_2 norm), whereas SoftNet picks the k filters with the largest \\\\ell_1 norm.\\n\\n3)\\nOur main compression theorem (Theorem 8 in the appendix of the original submission) establishes bounds on the overall accumulative error, not just layer-wise error. The proof of the error propagation through all of the layers follows from the application of our layer-wise error bound (Theorem 2 of Sec. 2) and the error-propagation analysis of [4] (see Lemmas 2,3 and Theorem 4 in [4]). We agree with the reviewer that the reconstruction error of the previous layer has an affect on the variance of our estimator in the subsequent layers, however, this is taken care of in the error propagation analysis by iteratively conditioning on the fact that the previous layer is well-approximated, implying that our bounds are not affected by more than a constant.\\n\\nNevertheless, we understand that the placement of Theorem 8 in the appendix may have caused some unintended confusion regarding the scope of our theoretical guarantees \\u2014 which, to be clear, are not just layer-wise guarantees, and in fact do ensure that the output of the network is well-preserved. 
To minimize confusion, we have moved our main compression theorem (Theorem 8) to the main body of the paper (see Theorem 3 of our revision) and have made the application of the error propagation bounds explicit. Thank you again for pointing out this source of confusion.\\n\\n\\n[1] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (https://arxiv.org/pdf/1803.03635.pdf)\\n[2] SNIP: Single-shot Network Pruning Based on Connection Sensitivity ( https://arxiv.org/pdf/1810.02340.pdf )\\n[3] Importance Estimation for Neural Network Pruning (http://openaccess.thecvf.com/content_CVPR_2019/papers/Molchanov_Importance_Estimation_for_Neural_Network_Pruning_CVPR_2019_paper.pdf)\\n[4] Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds (https://arxiv.org/abs/1804.05345)\"}",
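For readers unfamiliar with the baselines named in this exchange, the following sketch contrasts the deterministic top-k-norm selection of FT (l2 norms) and SoftNet (l1 norms) with saliency-driven importance sampling; Review #1 below raises the near-uniform-probability concern this illustrates. The filter tensor and saliency scores are hypothetical inputs, not the paper's exact procedure.

```python
import numpy as np

def top_k_by_norm(filters, k, ord=2):
    # FT picks the k filters with the largest l2 norms (ord=2);
    # SoftNet uses l1 norms (ord=1).
    norms = np.linalg.norm(filters.reshape(len(filters), -1), ord=ord, axis=1)
    return np.argsort(norms)[-k:]

def importance_sample(saliency, k, seed=0):
    # Sample filters proportionally to their data-informed saliency scores.
    # If the scores are near uniform, this degenerates toward random
    # sampling; if skewed, it concentrates on filters that most affect
    # the network output.
    p = saliency / saliency.sum()
    rng = np.random.default_rng(seed)
    return rng.choice(len(saliency), size=k, replace=True, p=p)
```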
"{\"title\": \"General Response\", \"comment\": \"We thank all the reviewers for their careful consideration of our paper and helpful suggestions. We have submitted a revised version that contains an improved exposition of our work thanks to the reviewers\\u2019 constructive feedback. Since our original submission, we have also conducted and included additional empirical evaluations in our revision. In particular, we:\\n\\n1) re-ran our experiments with a refined, iterative prune-retrain scheme \\u2014 as is standard in literature \\u2014 with a larger number of retraining epochs,\\n\\n2) included additional evaluations and figures with VGG16 with Batch Normalization, ResNet56, and ResNet110 (please see updated Fig. 3 and Table 2), and\\n\\n3) added a new table to the appendix (Table 8, Sec. E.6) containing extensive evaluations and comparisons to state-of-the-art pruning results (as reported in the respective papers) published within the last couple of years. The latest results show that our approach outperforms competing filter pruning methods in virtually all of the considered network architectures and pertinent metrics, especially when the overall quality (i.e., sparsity, efficiency, and accuracy) of the pruned model is taken into account.\\n\\nMoreover, we are currently running experiments on various ResNet architectures trained on ImageNet and will upload another revision within the next couple of days. In addition to conducting and including the results of additional empirical evaluations, we have also refined the presentation of our paper for ease of readability and understanding. Overall, we believe that the quality and exposition of our paper have significantly improved thanks to the reviewers\\u2019 suggestions and we welcome any additional feedback. We hope that the results in this paper, pruning with performance guarantees backed by extensive experiments, highlight that we can improve the efficiency and storage requirements of modern neural networks in a practical and theoretically-grounded way.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n \\nIn this paper, the author propose a provable pruning method, and also provide a bound for the final pruning error. Among most heuristics prune method, pruning with mathematics guarantee is indeed more convincing. We expect this work can help people devoting some effort into more solid theoretical study in understanding the over-parameterized training.\\n\\nIntuitively speaking, the sensitive neuron has greater contribution for the final output, reusing the corresponding filter and carefully rescale its value require many empirically attempts. To achieve a more reasonable algorithm, author prune the redundant channel by controlling the deviation of the summation statistically small, and reusing the filter by important sampling the given channel. Experiment show that this method can reach a competitive prune radio against other pruning algorithm, and show robustly in retained parameters vs error experiment.\", \"weakness\": \"1. experiment is too weak\\n\\nImageNet model has great impact on most CV problem, and the current release models are flooding in the open source world. Author should at least provide a imagenet model and make this work more convincing. Besides, Author should also consider an experiment in modern lightweight network, vgg and resnet like model are out of fashion and so big that any one can make a sound result on it. \\n\\n2. lack of a comparing experiment for random select the top-k norm. \\n\\nImportant sampling require an input of probability [p1, p2, p3, ... pn], if those probabilities are nearly uniform, important sampling will behave like a random sampling method. In most case, if we want to prune the large channel network, picking the top-1 significant filter or random sampling top-k filter will almost do the same thing. \\n\\n3. lack of further theory consideration\\n\\nauthor only consider the single layer reconstruction, without discussing the overall accumulative error. Unlike the other deterministic method, sampling skill suffer variance propagation problem, the pre-layer variance will affect the sampling probability of next layer, how this pruning work if we change status of the pre-layer, I didn't find any theoretical guarantee and only find a proof of single layer reconstruction bound.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper attacks the problem of pruning neural networks to obtain sparser models for deployment. In introduces a principled importance sampling approach for which independence of samples allows one to obtain bounds easily. These bounds can be used to control the accuracy of the method.\\n\\nThe proposal mechanism is very smart. The authors use a measure of the sensitivity of the network outputs to the channels in a particular layer (eqn 1).\\n\\nThe paper is very well written, but it would help to add a picture where all the symbols in section 2.1 appear. At times it is hard to keep track of the channels and features. It might alternatively be a good idea to specify the equation of a layer (eg what eventually ends up happening at the bottom of page 3) in section 2.1 and then explain the symbols in the equation. This will make life easier for anyone reading the paper for the first time. \\n\\nThe experiments are well execute and include reasonable baselines. I would in addition recommend this recent paper:\", \"https\": \"//openreview.net/forum?id=rJl-b3RcF7\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the tasks of pruning filters, a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs). This paper gives rise to a fully-automated procedure for identifying and preserving the filters in layers that are essential to the network\\u2019s performance. In general, this paper is very well written and organized.\\n\\n1, The key concerns come from the Lottery papers [1,2]. One can find sparse structure from an overparameterized model. The results of pruned network should be improved, rather than getting worse, since some redundant filters/params are removed from original network. In contrast, all the results of this method gets worse results; this is less desirable.\\n\\n2, the theoretical analysis is very good; it is worthy publishing themselves. But ever since the \\\"lottery\\\" papers, I think it makes sense in locating the sparse and representative pruned structure, which can achieve better performance than full overparameterized model. \\nso it\\u2019s quite a borderline paper. \\n\\n\\n[1] THE LOTTERY TICKET HYPOTHESIS: FINDING SPARSE, TRAINABLE NEURAL NETWORKS. ICLR 2019.\\n[2] RETHINKING THE VALUE OF NETWORK PRUNING. ICLR 2019.\"}"
]
} |
HJeRveHKDH | ADAPTIVE GENERATION OF PROGRAMMING PUZZLES | [
"Ashwin Kalyan",
"Oleksandr Polozov",
"Adam Tauman Kalai"
] | AI today is far from being able to write complex programs. What type of problems would be best for computers to learn to program, and how should such problems be generated? To answer the first question, we suggest programming puzzles as a domain for teaching computers programming. A programming puzzle consists of a short program for a Boolean function f(x) and the goal is, given the source code, to find an input that makes f return True. Puzzles are objective in that one can easily test the correctness of a given solution x by seeing whether it satisfies f, unlike the most common representations for program synthesis: given input-output pairs or an English problem description, the correctness of a given solution is not determined and is debatable. To address the second question of automatic puzzle generation, we suggest a GAN-like generation algorithm called “Troublemaker” which can generate puzzles targeted at any given puzzle-solver. The main innovation is that it adapts to one or more given puzzle-solvers: rather than generating a single dataset of puzzles, Tro | [
"program synthesis",
"reasoning",
"math problems"
] | Reject | https://openreview.net/pdf?id=HJeRveHKDH | https://openreview.net/forum?id=HJeRveHKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Kg-j6npq7E",
"BygJYh9njH",
"SygfM2lOjB",
"HkeVtsgOor",
"S1xh1sl_jr",
"S1xQslvMiH",
"HJgzFWw7qS",
"SJgZe8sAYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747632,
1573854327177,
1573551113540,
1573550972404,
1573550820054,
1573183643207,
1572200825844,
1571890665068
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2381/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2381/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2381/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2381/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2381/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2381/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2381/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors introducing programming puzzles as a way to help AI systems learn about reasoning. The authors then propose a GAN-like generation algorithm to generate diverse and difficult puzzles.\\n\\nThis is a very novel problem and the authors have made an interesting submission. However, at least 2 reviewers have raised severe concerns about the work. In particular, the relation to existing work as pointed by R2 was not very clear. Further, the paper was also lacking a strong empirical evaluation of the proposed ideas. The authors did agree with most of the comments of the reviewers and made changes wherever possible. However, some changes have been pushed to future work or are not feasible right now. \\n\\nBased on the above observations, I recommended that the paper cannot be accepted now. The paper has a lot of potential and I would strongly encourage a revised submission addressing the questions/suggestions made by the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes made during rebuttal period\", \"comment\": [\"Thank you for your feedback --- it has helped us improve the paper immensely! In particular, we have uploaded a paper with several changes including:\", \"We have expanded on the related work section situating our contributions; and moved it earlier in the paper.\", \"We have added a game-theoretic analysis of the Problem-generation game, putting our algorithm on more solid foundations.\", \"We have added an analysis of the behavior of the generator and the learnable solver as a function of training time.\", \"We have added another learnable solver that updates weights for each rule, similar to the pCFG based solver proposed in the paper (and corresponding results).\", \"Additionally, we performed an analysis of the trainable solver on randomly generated problems. We notice that it does not fare well (worse than enumerative solver) \\u2014 we note this observation in the results section \\u2014 somewhat expected as the solver is tuned towards puzzles produced by the generator.\", \"We have added a short conclusion section.\"]}",
"{\"title\": \"Evaluation of Guided Generator\", \"comment\": \"Thank you for the great review and improvement suggestions! We are currently working on the new revision, and will incorporate your suggestions and other improvements into it in a few days. This will also include more details on the training process, experiments, and references to the used solvers embedded in Related Work.\", \"q\": \"If you train the guided search solver on the generated puzzles and evaluate it on a random set of puzzles, would you see an improvement? This is indeed an interesting question, and would be a useful additional metric for evaluating the generator\\u2019s quality. At the moment, the focus of this work is generating a hard and diverse dataset of puzzles for *assessing* artificial solver algorithms. That said, generating training curricula would be a reasonable extension of the same framework and a good focus for future work.\\n\\nAs an additional evaluation question, we will be adding results on the trained solver\\u2019s performance on randomly sampled problems from the grammar to the new paper revision.\"}",
"{\"title\": \"Advantages of programming puzzles, problem size vs. hardness and non-trainable solvers\", \"comment\": \"Thank you for your great review! We are currently working on the new revision, and will incorporate your suggestions and other improvements into it in a few days.\", \"q\": \"Why do non-trainable solvers also increase their processing time over the course of training? Because the Troublemaker generator discovers their weaknesses (i.e. patterns in the puzzle construction that the solver is not well-equipped to handle) over time. This is the key feature of our adaptive algorithm. Non-trainable solvers, by construction, are designed to solve a class of puzzles well (and often all other classes poorly or not at all). The fact that Troublemaker automatically discovers this class over time serves as a validation of its core capabilities.\\n\\nWe will also add detailed discussions on the construction of the solvers to the appendix.\", \"they_are_objective\": \"an answer is easy to validate and score automatically.\\nThey require abstract reasoning but not real-world knowledge, NLP, or spatio-temporal biases.\", \"this_class_of_reasoning_problems_occupy_a_sweet_spot_in_complexity\": \"often, humans can solve programming puzzles easily, but current reasoning systems fail or take a long time to solve.. Thus, it\\u2019s a great next milestone for advancing the capabilities of artificial reasoning.\"}",
"{\"title\": \"Related work (curriculum learning) and other empirical evaluations\", \"comment\": \"Thank you for the detailed review and great suggestions! We are currently working on the new revision, and will incorporate your suggestions and other improvements into it in a few days.\", \"relation_to_curriculum_learning\": \"The contribution of our work is two-fold -- we introduce programming puzzles as a useful class of reasoning problems and then propose \\u201ctroublemaker\\u201d algorithm to generate hard puzzles for a given solver. We agree that generating training curricula is an interesting application of our (or similar) framework; however, it is not the focus of this work. The goal of the troublemaker algorithm in this work is to generate a set of hard puzzles that shed light on the weaknesses of existing solvers. When the solver is trainable, we propose an adaptive version of the troublemaker algorithm that improves the puzzle generator iteratively.\\n\\nThat said, we agree that the curriculum learning literature is highly relevant, even if this work does not explore curriculum learning per se. We are including an overview of the relevant ideas into Related Work in our next revision (and will move it earlier in the PDF, as you rightly suggest).\\n\\nEvaluation of Trainable Solver. As you rightly point out, we do not focus on developing new solvers, trainable or otherwise. In this work, we evaluate the trainable solver against two generation mechanisms -- 1) Probabilistic context-free grammar based and 2) Neural Guided. In both cases, the weights (rule weights for pCFG and neural network parameters in the case of guided generators) are updated based on the solver performance. While table 1 provides a comparison, we were unable to add more empirical analysis due to time and space constraints. \\n\\nOn the float grammar, we find that the guided generator produces about 200/1000 unsolvable problems (same when 1000 puzzles are randomly sampled from the grammar) and after about 1000 iterations saturates to produce ~700/1000 unsolvable problems. On the other hand, the pCFG based generator saturates at only ~350/1000 unsolvable problems after 500 iterations; nearly half the number of unsolvable problems produced by the guided generator on average. We will add detailed analysis/plots to the next updated draft.\\n\\nFurther, we will add a more detailed analysis of the learning solver in the next version. We will add a comparison of the solver\\u2019s performance at various stages of training and compare the effect of training time vs. solver performance (# puzzles solved). Additionally, we will evaluate a trainable solver and evaluate its performance on randomly sampled puzzles from our grammar. \\n\\nWe *may* be able to add another flavor of a trainable solver in time for the discussion period, to show how the Troublemaker generation influences different kinds of training processes (as you rightly point out).\\n\\nQ. How is the initial trainable solver trained? \\nThe solver must be pre-trained on puzzles from the same grammar/distribution we consider in this paper, thus it cannot be lifted from some prior work on other datasets (e.g. Saxton et al). For the first iteration of the Troublemaker algorithm, we sample puzzles from the grammar uniformly at random and use the traces of an enumerative solver to train it. Following this phase, it can be directly trained via REINFORCE as the puzzles we generate are \\u201ccheckable\\u201d i.e. the validity of a solution can be verified. 
As adaptive training progresses, the solver is continuously retrained on the puzzles sampled from the Troublemaker generator (as well as all the previously generated puzzles). We will clarify this better in the paper.\\n\\nQ. Is there a way to quantify the hardness of a puzzle? Size? This is an interesting question that is non-trivial to answer. For the purpose of this work, we define hardness based on the time taken to solve a puzzle. For example, when we state that our algorithm finds \\u201chard\\u201d problems for a given solver we mean that the solver is unable to solve the problem within some time limit. \\n\\nFrom preliminary experiments, we find that problem size is not a good measure of hardness, i.e. the number of unsolvable problems (for a given solver) does not increase with the size of the problems generated. We will update the draft with detailed analysis.\\n\\nQ. Training time vs. puzzle hardness. We find that both the pCFG-based and neural guided generators tend to produce a rapid increase in the number of unsolvable problems before saturating at a certain value. We find that the guided generator produces nearly twice the number of unsolvable problems as the pCFG-based generator (and ~4 times that of random sampling). We will add these plots in the next revision.\"}",
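A hedged sketch of the REINFORCE-style generator update outlined in this response: the generator samples a puzzle, the (frozen) solver's difficulty in solving it supplies the reward, and the log-probability of the sampled derivation is reinforced. `generator.sample`, `solver.try_solve`, and the reward shaping are hypothetical stand-ins, not the authors' exact training loop.

```python
import torch

def troublemaker_step(generator, solver, optimizer, baseline=0.5, time_limit=1.0):
    puzzle, log_prob = generator.sample()        # puzzle and log p(puzzle)
    t = solver.try_solve(puzzle, time_limit)     # wall-clock solve time, capped
    reward = 1.0 if t >= time_limit else t / time_limit  # harder -> higher reward
    loss = -(reward - baseline) * log_prob       # REINFORCE with a constant baseline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```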
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper, the authors propose a new class of programs they call programming puzzles. The authors argue that this class of programs is ideal for helping learn AI systems to reason. The second contribution of the paper is an adaptive method of puzzle generation inspired by GAN-like generation that can generate a diverse and difficult set of programs. The paper shows that the generated puzzles are reasonably difficult to solve (using the time to solve as a measure of difficulty) and reasonably diverse.\\n\\nI found the paper well-written and easy to understand. The methodology to generate programs is convincing. I am not sure time to solve is the best way to measure the complexity of the program, but it seems a reasonable proxy. Did the authors study if the program length is correlated to the time to execute? If the correlation holds, then can complex programs not be created by simply having a bias towards longer programs? That would be a strong baseline to compare against. \\n\\nI may have missed something, but I understand that only the Guided Solver is trainable. If that is the case, then why do we see increase in solving time for other solvers (Table 1). In only the case of the Guided solver can the generator adaptively increase the complexity of the programs. \\n\\nOverall, I feel that the paper puts forth an interesting class of programs. But there are some gaps in the evaluation and the baselines. I am also not sure how this class of programs can help advance artificial reasoning.\", \"feedback\": [\"The authors should provide more references to the solvers used in Section 5.\", \"The paper ends abruptly. A summary/conclusion would be useful.\", \"I am not an expert in this area, and I am willing to revise my recommendation if the authors can address these issues.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a trainable 'puzzle' program synthesizer that outputs a program f with a specific syntax. These 'puzzles' are structured as boolean programs, and a program solver solves the puzzle by finding an input x such that f(x) = True. The authors motivate this task by making a case that puzzles of this sort are a good domain for teaching computers how to program.\\n\\nThe paper is fairly clear overall. There is some repetition in the early parts, so this could be restructured a bit, but these are minor points. A more significant restructuring however, is that this work would benefit from the related work being present the beginning of the work. Since this work is so similar in many ways to previous work I think the overall clarity of the paper would be improved, and the contributions clearer, if the work was better situated with respect to related work.\\n\\nThe experiments demonstrate that the trainable puzzle generator is able to produce harder (i.e. takes longer time to solve) puzzles than a random or probabilistic generator of the same grammar. While this does show that the program generator is learning something useful, these results are insufficient to show the utility of this approach in any real context. It seems the most interesting solver to assess is a trainable solver. Yet only 1 of the 4 solvers they assess is trainable. I know the authors make a point that they are not putting forward any new solver algorithms. That ok, however, taking existing trainable solvers and assessing how they perform with this guided puzzle generation vs. some other puzzle generation approach is a critical empirical study. Furthermore, it would be helpful to have more discussion of the baseline methods of generating puzzles. When the trainable puzzle solver was originally proposed, how was it trained? Where did the data come from? How does that compare to this approach. I am not very familiar with this literature, and I imagine this paper would be of interest to folks outside the program synthesis space, so it would be very helpful to better explain this (also we note about related work). There are several additional empirical analysis that could be added to improve this work. For example, for the trainable solver, a plot of (training time) vs (time to solve puzzle) would be interesting. Beyond looking at training time, does a solver trained with the guided puzzle generator end up being a 'better' solver in some way? Are the resulting puzzles harder but still being solved? Is there a way of quantifying the 'hardness' of a puzzle? Perhaps a proxy like size? Then it would be cool to plot (training time) vs (approx puzzle hardness) to demonstrate that . the puzzle generator is really developing a reasonable curriculum.\\n\\nFinally, there is a bunch of related work that I think is missing. Again, I'm not super familiar with this work, but I think there is a lot of curriculum learning stuff within RL that seems super relevant. Of particular relevance is the Alice/Bob framework from \\\"Intrinsic motivation and automatic curricula via asymmetric self-play\\\" seems very similar to the work at hand. 
Something that is interesting in the Alice/Bob framework that could be transferred over here is the notion of the generator wanting to make a puzzle hard, but not too hard, i.e. make it just outside the solver's current capabilities. \\n\\nOverall my assessment is that this paper doesn't quite meet the standard for ICLR. My two major critiques are (1) the related work is seriously lacking, making it difficult to situate this work in a broader context. The authors also seem to miss the entire curriculum learning literature. (2) The empirical evaluations are lacking. In particular, more thorough analysis of how the generator is behaving, the type of curriculum it learns, and the resulting impact this has on a trainable solver are all missing. Furthermore, more focus on trainable solvers would improve this work. \\n\\nI'm not an expert in this area so it is possible I misjudged the significance of this work. I'm certainly open to revising my assessment if the authors are able to address (2) in a meaningful way.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method for generating hard puzzles with a trainable puzzle solver. This is an interesting and important problem which sits at the intersection of symbolic and deep learning based AI. The approach is largely GAN inspired, where the neural solver takes the role of a discriminator, and the generator is trained with REINFORCE instead of plain gradient descend.\\n\\nAlthough I'm not an expert in this area, I have found this paper well written and easy to follow. The problem is well motivated, and the approach is sensible. As this is a novel problem, the paper also defines their own metric, namely the average time taken to solve the puzzle by given solvers, and the diversity of generated puzzles. It is nice to see that the generator indeed learns to generate puzzles that are significantly harder than random counterparts, while maintaining reasonable diversity. Although I think these are convincing results, my question to the authors is: have you tried or considered other ways of evaluating the generated puzzles? E.g., if you train the guided search solver on the generated puzzles and evaluate it on a random set of puzzles, would you see an improvement? I think this would be interesting to see, which can serve as an alternative evaluation metric.\", \"my_other_comments_are_regarding_the_experiment_section\": \"1. It would be useful to provide references to the solvers used, both in the adversarial training phase and the evaluation phase, if there is any.\\n2. More details of the training process would also be valuable. E.g., the training time and stability, common failure modes if any.\", \"minors\": \"1. Figure f3 should be s.count(\\\"A\\\")==1000 and s.count(\\\"AA\\\")==0 \\n2. First sentence under Fig 1, one is give -> one is given\\n3. Figure 5, f2: 2**(x**2)) == 16 -> 2**(x**2) == 16\"}"
]
} |
ryeRwlSYPH | Learning transitional skills with intrinsic motivation | [
"Qiangxing Tian",
"Jinxin Liu",
"Donglin Wang"
] | By maximizing an information theoretic objective, a few recent methods empower the agent to explore the environment and learn useful skills without supervision. However, when considering to use multiple consecutive skills to complete a specific task, the transition from one to another cannot guarantee the success of the process due to the evident gap between skills. In this paper, we propose to learn transitional skills (LTS) in addition to creating diverse primitive skills without a reward function. By introducing an extra latent variable for transitional skills, our LTS method discovers both primitive and transitional skills by minimizing the difference of mutual information and the similarity of skills. By considering various simulated robotic tasks, our results demonstrate the effectiveness of LTS on learning both diverse primitive skills and transitional skills, and show its superiority in smooth transition of skills over the state-of-the-art baseline DIAYN. | [
"transitional skills",
"skills",
"lts",
"intrinsic motivation",
"diverse primitive skills",
"information theoretic objective",
"recent methods",
"agent",
"environment",
"useful skills"
] | Reject | https://openreview.net/pdf?id=ryeRwlSYPH | https://openreview.net/forum?id=ryeRwlSYPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fbgK9NG51",
"HkefttE2iH",
"SkgFrtNhoS",
"rye7zKEnoH",
"B1ghdRtJcr",
"Hyx7VMW19B",
"SJeB8gyy5r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747604,
1573828985850,
1573828929466,
1573828875373,
1571950195956,
1571914283422,
1571905612533
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2380/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2380/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2380/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2380/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2380/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2380/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission has two issues, identified by the reviewers; (1) the description of the proposed method was found to be confusing at times and could be improved, and (2) the proposed transitional skills were not well motivated/justified as a solution to the problem the authors propose to solve.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"We thank the reviewer for the thoughtful and constructive feedback. It has greatly helped us improve the paper, which involves the following details:\\n\\n1. Q: The writing is in general not particularly clear and the notations are hard to follow, the symbols are often bloated with superscripts that are not clearly defined, and mixing capital and small letters for random variables and their realization.\", \"a\": \"We have revised the following details according to your comments:\\n\\n(1) Revising $\\\\omega \\\\in p(\\\\omega)$ to $\\\\omega \\\\sim p(\\\\omega)$.\\n(2) Mending the expression of mutual information in footnote 1 on page 2.\\n(3) Removing $S^t_{i,j,1}$ on page 3.\\n(4) Mending 'muture' to 'mutual'.\\n(5) We are sorry to admit that our previous experiments are irrelevent to MoJoCo. Hence, we have revised our mention into 'The tasks of CartPole, MountainCar, Pendulum and HalfCheetah-v3 are based on OpenAI gym \\\\footnote{http://gym.openai.com/}'.\\n\\nMoreover, we have added supplementary experiments in the revised paper. For more details about our experiments, please visit our site for video demonstrations: https://sites.google.com/view/lts-skill\"}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thank you for your thoughtful review and questions, we very much appreciate the time you took to review our work. We reply to your points below.\\n\\n1. Qs: (1)How would the feature engineering used in Appendix E affects the performance? (2)How would polices learned perform compared to standard RL? (3)Whether the learned skills are useful for down-stream tasks?\", \"as\": \"(1)Here we note that the feature value and its statistical characteristics is just used to visualize the trajectories and effectiveness of skill transition (i. e. show the performance). The states used to discriminate the skills by our discriminator are the whole states of the agent instead the single feature value. Hence, the feature value never affects the performance of our proposed algorithm.\\n(2)The standard RL algorithm is less effective to accomplish the tasks in this paper (such as the acrobatics of half-cheetah), where the comparison with standard RL can be achieved in our baseline DIAYN. In standard RL, the update of value function is based on a dedicated extrinsic reward , however, in some complex tasks, such as the acrobatics of half-cheetah, it is nearly impossible to design a perfect reward function. While in our setting, we need the agent to explore the environment learning primitive skills, then adopting a meta-policy to choose these low-level skills. In our work, we focus on the transition process of different primitive skills instead of collecting rewards from the environment. \\n(3)The learned skills are useful for down-stream tasks. In the revised paper, we add supplementary experiments of down-stream tasks to show the effectiveness of skills. In the HalfCheetah task, the agent is able to execute several specifical movement thanks to learning different skills. For more details, please refer to Section 6.5 (Transition with Hierarchical Framework) in the revised paper.\\n\\n2.Q: What are these learned primitive skills for each benchmark task?\", \"a\": \"Thank you for your reviews. We are sorry to admit that there do exit lots of typos in the previous paper. Hence, we have revised and updated our paper, changing the typos, description and adding a new experiment for further analysis. Please let us know if you have any other questions or concerns.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"Thanks for your comments and questions. We very much appreciate the time you took to review our work. We reply to your points below.\\n\\n1. Q: Lack of experiments for specific tasks.\", \"a\": \"We are sorry to admit that there exits a misleading in Figure 5, and we have modified the interpretation of Figure 5 in the revised paper. In fact, the value of the y-axis denotes the mean of feature values (which is also the statistical characteristic), whose sharp fluctuation illustrates large probabilities of failure of skill transition. For DIAYN's primitive skills, there exists a sharp leap which notes harder transition between the two skills, compared with the smooth line of LTS's transition. Hence, the sharp leap of DIAYN does not mean it has fast skill transition.\\n\\nMoreover, we have rewrite and update our paper, adding more experiments for further analysis.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"What is the specific question/problem tackled by the paper?\\nThis paper addresses the problem of learning and using useful skills through unsupervised RL. While past work (VIC, DIAYN) found skills by maximizing the mutual information between skills and states, this paper (LTS) aims to find transitional skills to move between the primitive skills.\\n\\nThe proposed method is, if the primitive skills found by DIAYN are labeled w_i, to find skills z_[i,j,k] which corresponds to the kth state while transitioning from w_i to w_j. These transition skills are learned jointly with the primitive skills using the mutual information objective.\\n\\nThe results in the paper are on mountain car, cart pole, and pendulum. The plots show that distinct primitive skills can be learned in these environments, and that the transition skill transitions between them. The results do not show the actual performance of using these skills for any specific task, which was the motivation presented in the introduction. The \\n\\nIs the approach well motivated?\\nThe approach is not well motivated. The paper claims that transitional skills are necessary, but does not show any evidence of an agents behavior improving by using transitional skills. \\n\\nI argue to reject the paper in its current state; Sections 3 and 4 need to be rewritten to be more easily understandable, and the need for transitional skills should be more empirically motivated.\", \"clarity_issues\": \"- The method is presented in a very confusing way. I am not confident that I understand how the transitional skills are defined. \\n- Figure 2: What is the \\u201cstatistical characteristic\\u201d and \\u201cfeature value\\u201d? The vague terms are not defined. Which value? What statistic?\\n- Figure 5: I see that LTS transitions more smoothly between skills, but why is DIAYN\\u2019s fast transition bad? Clearly it still succeeds at reaching the other state.\\n\\n--------------------------------\\nUpdate\\nDue to the additional experiments that show the benefit of LTS, I have increased my score to weak reject.\\nSection 6.5: Thank you for adding this additional experiment. This section lacks details about how the experiment was run. How was the meta-policy trained? In \\u201c3) LTS-master: Transfer policy with optimal weights learned by meta-policy\\u201d, why is there transfer? Transfer from what to what? \\n\\u201cMoreover, we conduct hierarchical framework to weight (or choose) the action modeled by different primitive skills.\\u201d What does this sentence mean?\\n\\nReading over the paper again, the paper needs several editing passes to fix the grammar, as a number of sections (like above) are not understandable. While the paper has improved and is on a good track, I do not think it is yet ready for publication.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper aims to learn transitional skills in addition to creating diverse primitive skills without a reward function by using an information theoretic objective, borrowing ideas from instrinsic motivation. The paper addresses the challenge of learning diverse primitives and also learning to transition between any pair of them, which is valuable for real-world complex problem solving with sparse rewards. The proposed LTS approach is compared againt a state-of-the-art DIAYN approach on several benchmark control tasks.\\n\\nI have several comments/concerns regarding the experimental results.\\n\\n1. How would the feature engineering used (Appendix E) affect the performance of tasks solved using the primitive skils learned using LTS and how `would policies learned with this framework perform compared to standard RL policies learned individually with external rewards? This is to establish whether the learned skills and transitions between them are useful for any down-stream tasks. In the DIAYN paper, experiments on down-stream tasks were provided.\\n\\n2. What are these learned primitive tasks for each benchmark task? A visualization (or some interpretation of the meaning of the skills learned) would prove useful in understanding the primitive skills learned. Again, the DIAYN paper visualizes the primitive skills learned in their tasks.\\n\\n3. I don't understand how the results in 6.3 demonstrate the generalization ability of the method. The authors clearly state that using 3 transtional skills fails when evaluated on 50 transitional skills. Can this be elaborated on? \\n\\n4. Related to 1 above, I found it difficult to relate the features used for the tasks with the actual tasks themselves. More background information and motivation for the features is required (as opposed to saying it makes distinguishing between skills better -- what does better mean here?).\\n\\n5. Lastly, the paper is full of silly typos, such as the incorrect usage of singular and plural nouns.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"General description\\nThe paper tackles the problem of transition between different skills in hierarchical reinforcement learning. \\nIn particular, they follow the work of VIC and DIAYN and define an information-theoretic objective function that maximizes the mutual information between the future states and options, given the initial state, while minimizing the distance between the options and the so-called transitional states. The algorithm, LTS, is compared to DIAYN on three environments, namely CartPole, MontainCar, and Pendulum.\\n\\nGeneral remarks,\\nThe problem tackled and the proposed idea are interesting; however, I am not fully convinced by the derivation and the experiments of the paper. The writing is in general not particularly clear and the notations are hard to follow, the symbols are often bloated with superscripts that are not clearly defined, and mixing capital and small letters for random variables and their realization.\\n\\nOn the derivation, Eq 3. How is it possible to replace p(s^p | p^t) with p(omega|s_t)? I understand the connection between the two but what guarantees the mutual information is still maximized? (the whole derivation depends on that)\\n\\nThe experiments and the plots are interesting, showing a smoother transition between skills than DIAYN, however, it is still not clear how that can help solve the task at hand. Could the author elaborate on that please?\", \"some_details\": [\"Page 2 \\\\in should be \\\\sim. In general, the notation does not clearly distinguish a random variable (in capital) from a realization (in small letters). For instance, page 3, big \\\\Omega (the random variable I presume) is written with a subscript i to indicate the ith skill.\", \"Footnote 1 page 2, a log is missing in the MI definition.\", \"Page 3, what does t refer to in S^t_{i ,j, 1}?\", \"Page 3, muture -> mutual.\", \"the paper mentions the experiments are conducted on MuJoCo but the appendix mentions the classical OpenAI Gym experiments.\"]}"
]
} |
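The LTS record above builds on the DIAYN objective, in which a learned skill discriminator q_phi(w|s) yields the intrinsic reward r(s, w) = log q_phi(w|s) - log p(w). A minimal Python sketch of that reward is given below, assuming a discriminator network mapping states to per-skill logits and a uniform skill prior; the extra latent variable and similarity terms that LTS adds for transitional skills are not reproduced here.

import torch
import torch.nn.functional as F

def diayn_intrinsic_reward(discriminator, states, skill_ids, num_skills):
    # log q_phi(w | s): log-probability the discriminator assigns to the
    # skill that actually generated each state (skill_ids is a LongTensor).
    logits = discriminator(states)                       # (batch, num_skills)
    log_q = F.log_softmax(logits, dim=-1)
    log_q_w = log_q[torch.arange(states.shape[0]), skill_ids]
    # log p(w) under a uniform prior over num_skills skills.
    log_p_w = -torch.log(torch.tensor(float(num_skills)))
    return log_q_w - log_p_w                             # r(s, w) = log q - log p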
HyeAPeBFwS | Quantifying uncertainty with GAN-based priors | [
"Dhruv V. Patel",
"Assad A. Oberai"
] | Bayesian inference is used extensively to quantify the uncertainty in an inferred field given the measurement of a related field when the two are linked by a mathematical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to characterize mathematically. In this work we demonstrate how the approximate distribution learned by a generative adversarial network (GAN) may be used as a prior in a Bayesian update to address both these challenges. We demonstrate the efficacy of this approach by inferring and quantifying uncertainty in inference problems arising in computer vision and physics-based applications. In both instances we highlight the role of computing uncertainty in providing a measure of confidence in the solution, and in designing successive measurements to improve this confidence. | [
"Bayesian inference",
"Uncertainty quantification",
"Generative adversarial networks"
] | Reject | https://openreview.net/pdf?id=HyeAPeBFwS | https://openreview.net/forum?id=HyeAPeBFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"3xiESHSF3l",
"rklmUH52sH",
"BkexChE3jS",
"rJxFqsV3jr",
"r1xXycN3jB",
"HJgn1YNnsr",
"SyxQvoKmjS",
"HJenV_FQjr",
"r1lA7pumoH",
"Hker4juXsH",
"SJlld01kcS",
"rJghfrNnKS",
"BygwEyIPKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747575,
1573852491428,
1573829832366,
1573829521056,
1573829083451,
1573828836115,
1573260123420,
1573259315708,
1573256485696,
1573255980783,
1571909223877,
1571730708052,
1571409711430
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2379/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2379/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2379/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper suggests a Bayesian approach to make inference about latent variables for image inference tasks. While the idea in the paper seems elegant and simple, reviewers pointed out a few concerns, including lack of comparisons, missing references, and requested for more extensive validations. While a few comments might have been misunderstandings (eg lack of quantification - seems to be resolved by author\\u2019s comments), other comments are not (eg equation (8) needs further justification even if the final results don\\u2019t use it). We encourage authors to carefully review comments and edit the manuscript (perhaps some appendix items should be in the main to reduce confusion) for resubmitting to future conferences.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"tl;dr\", \"comment\": \"Based on all the reviewer's response we have recognized that we were remiss in not clarifying our main contributions in the manuscript. We have done so in the revised version and repeat them below.\", \"the_main_contribution_of_this_paper_can_be_summarized_as_follows\": \"\\u2022\\tA novel method for performing Bayesian inference involving complex priors and high dimensional posterior. In the proposed method we utilize the distribution learned by a GAN as a surrogate for the prior distribution and reformulate the inference problem in the low-dimensional latent space of the GAN. \\n\\n\\u2022\\tA theoretical analysis of the weak convergence of the posterior density learned by the proposed method to the true posterior density.\\n\\n\\u2022\\t Novel unsupervised image denoising and inpainting algorithms with quantitative measures of uncertainty through pixel-wise variance. \\n\\n\\u2022\\tApplication of the proposed method to physics-based inference problems.\\n\\n\\u2022\\tDemonstration of the utility of uncertainty quantification to facilitate active learning.\"}",
"{\"title\": \"Final response to reviewer 1 -- Part (2/2)\", \"comment\": \"\\\"It is not clear how the HMC parameters are fixed.\\\"\\n\\n** We have included concise description of this in section 3 of the revised version. **\\n-----\\n\\\"The experiments do not have error bars (Figure 4.) This questions the significance of the results.\\\"\\n\\n** This was an oversight on our part. We have included error bars in the revised version. **\\n-----\\n\\\"My overall impression is that there is little novelty in the proposed approach. Namely, using a GAN to learn the prior distribution, and then very well known techniques to infer the original input image.\\\"\\n\\nWe agree that the approach is simple; on the other hand, it is quite novel. We are not aware of any other work that performs uncertainty quantification using this combination of techniques in an unsupervised fashion. In contrast to other approaches [3, 4, 5], which use pairs of desired and measured images (x,y) to train the network, our approach only requires desired images (x). Furthermore, our work uniquely demonstrates the use of quantified uncertainty in active learning setup - finding the optimal location of sensors (successive measurement location) in an unsupervised fashion, which again is not reported before and has many potential applications.\\n**We have made this clear in a new subsection titled \\\"Our contributions\\\"**\\n-----\\n\\\"I have missed some references to related work on inverse problems. An example is:\", \"https\": \"//arxiv.org/pdf/1712.03353.pdf\\\"\\n\\n** This is an interesting, related work. We have included a reference to it in the revised version. **\\n-----\\n\\\"Is the original figure contained in the training set used to infer the GAN. If so that can lead to biased results.\\\"\\n\\nNo, the original figures were not used in training the GAN in all examples. We studiously avoided this bias. \\n** We have mentioned in Section 3 of the revised version. **\\n-----\\nI have missed a simple baseline in which one simply finds the training image that is closest to the corrupted observed or partially observed image.\\n\\n** We have included this result (figure 10) in Appendix C. **\\n---\\n\\n[3]. A. Kendall and Y. Gal, \\u201cWhat Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?\\u201d, NIPS (2017).\\n [4]. Kohl, S.A., Romera-Paredes, B., Meyer, C., Fauw, J.D., Ledsam, J.R., Maier-Hein, K.H., Eslami, S.M., Rezende, D.J., & Ronneberger, O. \\u201cA Probabilistic U-Net for Segmentation of Ambiguous Images\\u201d, NeurIPS (2018).\\n[5]. Hu, S., Worrall, D., Knegt, S.J., Veeling, B., Huisman, H.J., & Welling, M., \\u201cSupervised Uncertainty Quantification for Segmentation with Multiple Annotations\\u201d. MICCAI (2019).\"}",
"{\"title\": \"Final response to reviewer 1 -- Part (1/2)\", \"comment\": \"Thank you for your valuable feedback! After carefully reviewing it, we have modified manuscript as discussed below (description of changes is enclosed within ** ... **).\\n\\n----\\n\\\"Eq. (8) is expected to give very bad results. The reason is that it is very unlikely to sample from the prior configurations for z that are compatible with y.\\\"\\n\\nWe agree that it is in general a bad idea to use this equation, especially when the likelihood is very informative. However, we have found that for likelihoods that are not very informative this method is still useful. Regardless, we would like to point out that we do not use this equation for the results shown in the manuscript, but rather use MCMC (eq. (9)) for all the results. \\n** We have included this discussion in the revised manuscript in subsection 2.1. **\\n-----\\n\\n\\\"The paper does not address learning any model parameters. e.g. the amount of noise.\\\"\\n\\nYou are right, we do not learn parameters associated with the noise in this work. We do however learn the parameters associated with the forward model by mapping them back to the latent space of the GAN. This is most clear in the context of the physics-based model (Section 3.2), where we parameterize forward model using pixel-wise values of the initial temperature, and then learn these parameters using the proposed method. \\n\\nWe note that the proposed approach can easily be extended to regime where likelihood is also unknown by incorporating likelihood-free inference methods like ABC or meta-learning approaches. \\n** We have included this discussion in the Conclusion section of the revised version **. \\n-----\\n\\\"A more principled approach would be to estimate the prior parameters using maximum likelihood estimation. That has already been done in the case of the variational autoencoder.\\nThe variational autoencoder is an already known method that can be used to solve the problem formulated by the authors. It also automatically proposes an inference network that can be used for recognition. If the likelihood is Gaussian and p(x|z) is also Gaussian, one can directly marginalize x and work with p(y|z) and p(z). The authors should at leas discuss the potential use of this method alongside with the BIGAN model which also provides a recognition model.\\\"\\n\\nWe agree that we were lacking a proof that demonstrated the convergence of the proposed method for computing point estimates of the posterior. \\n** We have now derived this proof and included it Appendix A** \\nIn a nutshell, this proof establishes that with increasing the expressivity of the generator and the discriminator (increasing number weights) the posterior density of the proposed method weakly converges to the true posterior density. \\n\\nWe agree that using a variational autoencoder (VAE) in lieu of a GAN is an interesting extension of the proposed approach and that this can be accomplished in different ways. However, we would like to point out that for image recovery tasks GANs have consistently demonstrated better performance than VAEs, as the latter tend to smear out images due to their maximum likelihood loss [1].\\n\\nWhile the idea of using corrupted images to train the VAE and inferring the latent variable, which would be the - un-corrupted image is very interesting and using VAE with max. likelihood loss is an intriguing option, there are some major drawbacks of using it in the proposed Bayesian inference setting. 
\\n\\u2022 It is well-known that image samples produced by VAEs are quite blurry and of poorer quality than GANs and fail to match the true data distribution. It is shown in earlier studies that they fail to match the marginal distribution not only in visible space but also in latent space [2]. Since the focus of our paper is to use these distributions as priors, we believe that it is better to select a model for these distributions, and hence a GAN is our preferred choice. \\n\\u2022 Furthermore, VAEs are explicit density models and we have to select a model family (like Gaussian) for the latent variables. Therefore, in a setting where we treat un-corrupted images as latent variables (as suggested by the reviewer), and use max. likelihood as our loss function, we are forcing the latent variables to be close to that chosen family of distributions. This might fail to capture the complex inferred joint probability distributions seen in the examples considered in this manuscript, which are far from Gaussian (or any other simple distribution). It is also against the spirit of this work, where we want to make as few assumptions as possible for our prior and use data to guide its final form.\\n\\n[1] R. A. Yeh, C. Chen, T. Yian Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, \\u201cSemantic image inpainting with deep generative models,\\u201d in Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, vol. 2017\\u2013January, pp. 6882\\u20136890.\\n[2] Rosca, M., Lakshminarayanan, B., & Mohamed, S. (2018). Distribution Matching in Variational Inference. ArXiv, abs/1802.06847.\"}",
"{\"title\": \"Final response to reviewer 2\", \"comment\": \"Thank you for your valuable feedback! After carefully reviewing it, we have modified manuscript as discussed below (description of changes is enclosed within ** ... **).\\n\\n\\\"Though the Bayesian inference using GAN is a natural idea, learning algorithms proposed in this paper are simple and are not intensively developed. In numerical experiments, there is no comparison with major competitors besides random sampling in the active learning setup. Hence, the effectiveness and advantage of the proposed methods are not clear.\\\"\\n\\nWe have addressed this by responding to the specific questions below. \\n\\n\\\"- In active learning, the proposed method should be compared with other methods such as Bayesian DNN using dropout, etc. \\\"\\n\\nWe are not aware of any other methods for computing uncertainty in recovered images that have been used to drive an active learning task in image inpainting. While methods based on dropout (\\\\cite{Kendall2017a, Kendall2019}) or variational inference (\\\\cite{Kohl2018a}) could be extended to accomplish this, this has not been done thus far.\\n**We have added this comment in Section 3.1**\\n\\nAnother big difference between the methods mentioned above and our approach is that while they require image pairs (true and corrupted images) for training, our approach only requires uncorrupted images. Thus while our algorithm relies on unsupervised learning, the other algorithms fall under the category of supervised learning. \\n**We have also clarified this within the \\\"Our Contributions\\\" Section**\\n\\n\\n\\\"- How does the estimation accuracy of GAN relate to the estimation accuracy of the proposed method? Showing a quantitative description would be nice.\\\"\\n\\nWe thank the reviewer for raising this important question.\\n**We have addressed it thoroughly in Appendix A. We have provided a proof that demonstrates the weak convergence of the posterior density calculated using our method to the true posterior density as the number of weights in the discriminator and generator components of the GAN is increased.**\"}",
"{\"title\": \"Final response to reviewer 3\", \"comment\": \"Thank you for your valuable and constructive feedback! After carefully reviewing it, we have modified the manuscript as discussed below (description of changes is enclosed within ** ... **).\\n\\n\\\"A big issue of this paper is the deviation of purpose and method. As the paper claims to quantify the uncertainty, the paper is supposed to give specific quantitative metric or values to probe the uncertainty. However, the paper demonstrates to us only the ability, not exactly 'quantification'. I\\u2019d like to see a specific metric of uncertainty that could only be calculated through the proposed method.\\\" \\n\\nYou are right, the main purpose of the method is to quantify uncertainty in the task of image inference. In fact, to our knowledge the method we describe is the only unsupervised learning method for quantifying uncertainty in a deep-learning based image-recovery task. We treat the inference as a stochastic problem, and develop an expression for the probability density function of the inferred image (i.e. joint pdf for each pixel of the inferred image). Once this is done, we sample from this distribution and compute any appropriate point estimate that can quantify the uncertainty in the inference. In our work, we have chosen the \\\"pixel-wise\\\" variance as this metric. Note that this metric is a field and not a scalar quantity and is plotted as an image. We have computed this metric for every example in the manuscript (see last row of figure 2, 3, 5 etc). \\n** However, we have been remiss in not highlighting, or bringing the reader\\u2019s attention to it. In the revised version of the paper we have done this by highlighting this field in the images and its description in the text. **\", \"some_more_things_to_note\": \"1. In one example (Figure 4) we compute a scalar metric (that is the average variance/per pixel over the entire image) for the inferred images, and show that this measure increases with increasing noise in the input, as it should. \\n** In the revised version, we have drawn the reader's attention to this example. **\\n\\n2. We note that our method of inferring the desired image from the measured image is an unsupervised method; in that for training we only need a set of desired images to construct the prior. We are not aware of any other unsupervised learning approach for solving these types of problems with quantified uncertainty. In that regards, the calculation of pixel-wise variance (our metric of uncertainty) in an unsupervised setting is possible only using our approach. \\n** We have clarified this unique aspect in the revised version of the manuscript by listing it under a new subsection titled \\\"Our Contribution\\\" ** \\n\\n3. We note that there has been recent work on computing the uncertainty in an inferred image within a supervised learning framework where pairs of measured and desired images are used for training the network. In these articles the authors have used methods like Bayesian dropout to compute uncertainty in the inferred images [1]. Similar to what we have done, these authors have also plotted the point-wise variance as a quantitative metric of uncertainty. \\n** We have referred to these works in the \\\"Related Work\\\" subsection of the revised version of the manuscript. **\\n\\n4. 
We note that we have gone beyond just computing the metric of uncertainty (point-wise variance) and also described how it might be useful in making the subsequent measurement in the context of an active learning approach, which to the best of our knowledge has not been done previously in Bayesian deep learning applied to image inference. \\n** We have clarified this unique aspect in the revised version of the manuscript by listing it under a new subsection titled \\\"Our Contribution\\\" ** \\n\\n\\\"There are some grammar issues in the paper. For example. '\\u2026we the MAP\\u2026' in the 7th page.\\\"\\n\\n** We have done a thorough scrub of the manuscript in order to catch these. **\\n\\n\\n[1] A. Kendall and Y. Gal, \\u201cWhat Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?\\u201d, NIPS (2017).\"}",
"{\"title\": \"Initial response to reviewer 1 -- Part (2/2)\", \"comment\": \"\\\"It is not clear how the HMC parameters are fixed.\\\"\\n\\n** We will include a concise description of this in the revised version. **\\n-----\\n\\\"The experiments do not have error bars (Figure 4.) This questions the significance of the results.\\\"\\n\\n** This was an oversight on our part. We will include error bars in the revised version. **\\n-----\\n\\\"My overall impression is that there is little novelty in the proposed approach. Namely, using a GAN to learn the prior distribution, and then very well known techniques to infer the original input image.\\\"\\n\\nWe agree that the approach is simple; on the other hand, it is quite novel. We are not aware of any other work that performs uncertainty quantification using this combination of techniques in an unsupervised fashion. In contrast to other approaches [2, 3, 4], which use pairs of desired and measured images (x,y) to train the network, our approach only requires desired images (x). Furthermore, our work uniquely demonstrates the use of quantified uncertainty in active learning setup - finding the optimal location of sensors (successive measurement location) in an unsupervised fashion, which again is not reported before and has many potential applications.\\n-----\\n\\\"I have missed some references to related work on inverse problems. An example is:\", \"https\": \"//arxiv.org/pdf/1712.03353.pdf\\\"\\n\\n** This is an interesting, related work. We will include it in the revised version, along with a description of how it relates to our approach. **\\n-----\\n\\\"Is the original figure contained in the training set used to infer the GAN. If so that can lead to biased results.\\\"\\n\\nNo, the original figures were not used in training the GAN in all examples. We studiously avoided this bias. ** We will mention this in the revised version. **\\n-----\\nI have missed a simple baseline in which one simply finds the training image that is closest to the corrupted observed or partially observed image.\\n\\n** We do not have this baseline but will include it in the revised version. **\\n---\\n\\n[2]. A. Kendall and Y. Gal, \\u201cWhat Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?\\u201d, NIPS (2017).\\n [3]. Kohl, S.A., Romera-Paredes, B., Meyer, C., Fauw, J.D., Ledsam, J.R., Maier-Hein, K.H., Eslami, S.M., Rezende, D.J., & Ronneberger, O. \\u201cA Probabilistic U-Net for Segmentation of Ambiguous Images\\u201d, NeurIPS (2018).\\n[4]. Hu, S., Worrall, D., Knegt, S.J., Veeling, B., Huisman, H.J., & Welling, M., \\u201cSupervised Uncertainty Quantification for Segmentation with Multiple Annotations\\u201d. MICCAI (2019).\"}",
"{\"title\": \"Initial response to reviewer 1 -- Part (1/2)\", \"comment\": \"Thank you for your valuable feedback! After carefully reading your comments we plan to modify the manuscript as discussed below (the planned changes are shown by ** \\u2026 **). We would appreciate it if you could let us know whether the proposed changes address your concerns, or whether we have misinterpreted your comments.\\n-----\\n\\\"Eq. (8) is expected to give very bad results. The reason is that it is very unlikely to sample from the prior configurations for z that are compatible with y.\\\"\\n\\nWe agree that it is in general a bad idea to use this equation, especially when the likelihood is very informative. However, we have found that for likelihoods that are not very informative this method is still useful. Furthermore, we would like to point out that we do not use this equation for the results shown in the manuscript, but rather use MCMC (eq. (9)) for all the results. ** We will include this discussion in the revised manuscript. **\\n-----\\n\\\"The paper does not address learning any model parameters. e.g. the amount of noise.\\\"\\n\\nYou are right, we do not learn parameters associated with the noise in this work. We do however learn the parameters associated with the forward model by mapping them back to the latent space of the GAN. This is most clear in the context of the physics-based model (Section 3.2), where we parameterize forward model using pixel-wise values of the initial temperature, and then learn these parameters using the proposed method. \\n\\nWe note that the proposed approach can easily be extended to regime where likelihood is also unknown by incorporating likelihood-free inference methods like ABC or meta-learning approaches. ** We will include this discussion the revised version **. \\n-----\\n\\\"A more principled approach would be to estimate the prior parameters using maximum likelihood estimation. That has already been done in the case of the variational autoencoder.\\nThe variational autoencoder is an already known method that can be used to solve the problem formulated by the authors. It also automatically proposes an inference network that can be used for recognition. If the likelihood is Gaussian and p(x|z) is also Gaussian, one can directly marginalize x and work with p(y|z) and p(z). The authors should at leas discuss the potential use of this method alongside with the BIGAN model which also provides a recognition model.\\\"\\n\\nWe agree that we were lacking a proof that demonstrated the convergence of the proposed method for computing point estimates of the posterior. ** We have now derived this proof and will include it in the Appendix. ** In a nutshell, this proof establishes that with increasing the expressivity of the generator and the discriminator (increasing number weights) the point estimates computed using the proposed approach converges to the true point estimates of the posterior. \\n\\nWe agree that using a variational autoencoder (VAE) in lieu of a GAN is an interesting extension of the proposed approach and that this can be accomplished in different ways. ** We are working on writing a concise description of these ideas and will include in the revised manuscript. 
** However, we would like to point out that for image recovery tasks GANs have consistently demonstrated better performance than VAEs, as the latter tend to smear out images due to their maximum likelihood loss.\\n\\nWhile the idea of using corrupted images to train the VAE and inferring the latent variable, which would be the un-corrupted image, is very interesting, and using a VAE with a max. likelihood loss is an intriguing option, there are some major drawbacks of using it in the proposed Bayesian inference setting. \\n\\u2022\\tIt is well-known that image samples produced by VAEs are quite blurry and of poorer quality than GANs and fail to match the true data distribution. It is shown in earlier studies that they fail to match the marginal distribution not only in visible space but also in latent space [1]. Since the focus of our paper is to use these distributions as priors, we believe that it is better to select a model for these distributions, and hence a GAN is our preferred choice. \\n\\u2022\\tFurthermore, VAEs are explicit density models and we have to select a model family (like Gaussian) for the latent variables. Therefore, in a setting where we treat un-corrupted images as latent variables (as suggested by the reviewer), and use max. likelihood as our loss function, we are forcing the latent variables to be close to that chosen family of distributions. This might fail to capture the complex inferred joint probability distributions seen in the examples considered in this manuscript, which are far from Gaussian (or any other simple distribution). It is also against the spirit of this work, where we want to make as few assumptions as possible for our prior and use data to guide its final form.\\n\\n[1] Rosca, M., Lakshminarayanan, B., & Mohamed, S. (2018). Distribution Matching in Variational Inference. ArXiv, abs/1802.06847.\"}",
"{\"title\": \"Initial response to reviewer 2\", \"comment\": \"Thank you for your valuable feedback! After carefully reading it, we plan to modify the manuscript as discussed below (the planned changes are shown by ** \\u2026 **). We would appreciate it if you could let us know whether the proposed changes address your concerns, or whether we have misinterpreted your comments.\\n\\n\\u2022\\tYou have raised an interesting question about how the accuracy of the GAN impacts the accuracy of the proposed method. In order to address this, we have developed analytical estimates for the error in the point estimates computed using the proposed approach and show that these are intimately tied to error in computing the point estimates for the prior using the GAN. We have also demonstrated that as the generator and the discriminator of the GAN become more expressive this error tends to zero, and the exact point estimates, for both the prior and the posterior, are recovered. ** In the revised manuscript, we will include this mathematical analysis in the Appendix and refer to it in the main text. **\\n\\n\\u2022\\tWe note that our method of inferring the desired image from the measured image is an unsupervised method; for training we only need a set of desired images to construct the prior. We are not aware of any other unsupervised learning approach for solving these types of problems with quantified uncertainty. In that regard, the calculation of point-wise variance (our metric of uncertainty) is possible only using our approach, and therefore a direct comparison is not possible, since other supervised methods (explained below) cannot work in this setting where only set of desired images are available. ** We will clarify this unique aspect in the revised version of the manuscript. **\\n\\n\\u2022\\tThere has been some work on computing the uncertainty in an inferred image within a supervised learning framework where pairs of measured and desired images are used for training the network [1, 2]. In these articles the authors have used methods like Bayesian dropout and variational autoencoder to compute uncertainty in the inferred images. ** We will refer to these works in the revised version to better orient reader. **\\n\\n\\n[1]. A. Kendall and Y. Gal, \\u201cWhat Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?\\u201d, NIPS (2017).\\n [2]. Kohl, S.A., Romera-Paredes, B., Meyer, C., Fauw, J.D., Ledsam, J.R., Maier-Hein, K.H., Eslami, S.M., Rezende, D.J., & Ronneberger, O. \\u201cA Probabilistic U-Net for Segmentation of Ambiguous Images\\u201d, NeurIPS (2018).\"}",
"{\"title\": \"Initial response to reviewer 3\", \"comment\": \"Thank you for your valuable feedback! After carefully reviewing it, we plan to modify the manuscript as discussed below (the planned changes are shown by ** ... **). We would appreciate it if you could let us know whether the proposed changes address your concerns, or whether we have misinterpreted your comments.\\n\\n\\\"A big issue of this paper is the deviation of purpose and method. As the paper claims to quantify the uncertainty, the paper is supposed to give specific quantitative metric or values to probe the uncertainty. However, the paper demonstrates to us only the ability, not exactly 'quantification'. I\\u2019d like to see a specific metric of uncertainty that could only be calculated through the proposed method.\\\" \\n\\nYou are right, the main purpose of the method is to quantify uncertainty in the task of image inference. Given this, we treat the inference as a stochastic problem, and develop an expression for the probability density function of the inferred image (i.e. joint pdf for each pixel of the inferred image). Once this is done, we sample from this distribution and compute any appropriate point estimate that can quantify the uncertainty in the inference. In our work, we have chosen the \\\"pixel-wise\\\" variance as this metric. Note that this metric is a field and not a scalar quantity and is plotted as an image. We have computed this metric for every example in the manuscript (see last row of figure 2, 3, 5 etc). ** However, we have been remiss in not highlighting, or bringing the reader\\u2019s attention to it. In the revised version of the paper we will do this. **\", \"some_more_things_to_note\": \"1.\\tIn one example (Figure 4) we compute a scalar metric (that is the average variance/per pixel over the entire image) for the inferred images, and show that this measure increases with increasing noise in the input, as it should. ** In the revised version, we will draw the reader's attention to this example. **\\n\\n2. \\tWe note that our method of inferring the desired image from the measured image is an unsupervised method; in that for training we only need a set of desired images to construct the prior. We are not aware of any other unsupervised learning approach for solving these types of problems with quantified uncertainty. In that regards, the calculation of pixel-wise variance (our metric of uncertainty) in an unsupervised setting is possible only using our approach. ** We will clarify this unique aspect in the revised version of the manuscript. ** \\n\\n3.\\tWe note that there has been recent work on computing the uncertainty in an inferred image within a supervised learning framework where pairs of measured and desired images are used for training the network. In these articles the authors have used methods like Bayesian dropout to compute uncertainty in the inferred images [1]. Similar to what we have done, these authors have also plotted the point-wise variance as a quantitative metric of uncertainty. ** We will refer to these works in the revised version to better orient the readers. **\\n\\n4.\\tWe note that we have gone beyond just computing the metric of uncertainty (point-wise variance) and also described how it might be useful in making the subsequent measurement in the context of an active learning approach, which to the best of our knowledge has not been done previously in Bayesian deep learning applied to image inference. \\n\\n\\n\\\"There are some grammar issues in the paper. 
For example. '\\u2026we the MAP\\u2026' in the 7th page.\\\"\\n\\n** We are doing a thorough scrub of the manuscript in order to catch these. **\\n\\n\\n[1] A. Kendall and Y. Gal, \\u201cWhat Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?\\u201d, NIPS (2017).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to use a trained GAN model as the prior distribution for Bayesian inference to quantify the uncertainty. As for me, the best application of this paper is to restore a corrupted image, which shares a lot of common properties in image restoration, denoising and image reconstruction. I do like the extension of applying the idea in physics problems. And the results demonstrate at some extent, the proposed method could evaluate some uncertainty.\\n\\nThe idea is pretty simple and the paper is easy to read. Nonetheless, there are some issues:\\n\\nA big issue of this paper is the deviation of purpose and method. As the paper claims to quantify the uncertainty, the paper is supposed to give specific quantitative metric or values to probe the uncertainty. However, the paper demonstrates to us only the ability, not exactly \\u201cquantification\\u201d. I\\u2019d like to see a specific metric of uncertainty that could only be calculated through the proposed method. \\n\\nThere are some grammar issues in the paper. For example. \\u201c\\u2026we the MAP\\u2026\\u201d in the 7th page.\\n\\nGiven my major issue seems to be quite problematic, I currently would weakly reject this paper. But I don\\u2019t have a full picture over this area, I\\u2019ll read the rebuttal and see if I could raise the score.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper studies the Bayesian inferences with the generative adversarial network (GAN). In the first half of the paper, the general framework of the Bayes estimation is introduced. Then, The authors proposed how to incorporate GAN to the Bayesian inference. Some computational methods for calculating the mean of the statistic under the posterior distribution are described. Then, numerical experiments using MNIST and Celeb-A datasets are presented.\\n\\nThough the Bayesian inference using GAN is a natural idea, learning algorithms proposed in this paper are simple and are not intensively developed. In numerical experiments, there is no comparison with major competitors besides random sampling in the active learning setup. Hence, the effectiveness and advantage of the proposed methods are not clear.\\n- In active learning, the proposed method should be compared with other methods such as Bayesian DNN using dropout, etc. \\n- How does the estimation accuracy of GAN relate to the estimation accuracy of the proposed method? Showing a quantitative description would be nice.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary of the paper:\\n \\n The paper proposes a Bayesian approach to make inference about latent variables such as un-corrupted images. The prior distribution plays a key role in this task. The authors use a GAN to estimate this prior distribution. Then, standard Bayesian techniques such a Hamilton Monte Carlo are used to make inference about the latent variables.\", \"detailed_comments\": \"Eq. (8) is expected to give very bad results. The reason is that it is very unlikely to sample from the prior configurations for z that are compatible with y.\\n\\nThe paper does not address learning any model parameters. e.g. the amount of noise.\\n\\nA more principled approach would be to estimate the prior parameters using maximum likelihood estimation. That has already been done in the case of the\\nvariational autoencoder.\\n\\nThe variational autoencoder is an already known method that can be used to solve the problem formulated by the authors. It also automatically proposes\\nan inference network that can be used for recognition. If the likelihood is Gaussian and p(x|z) is also Gaussian, one can directly marginalize x and work\\nwith p(y|z) and p(z). The authors should at leas discuss the potential use of this method alongside with the BIGAN model which also provides a recognition model.\\n\\nIt is not clear how the HMC parameters are fixed.\\n\\nThe experiments do not have error bars (Figure 4.) This questions the significance of the results.\\n\\nMy overall impression is that there is little novelty in the proposed approach. Namely, using a GAN to learn the prior distribution, and then very well known\\ntechniques to infer the original input image.\\n\\nI have missed some references to related work on inverse problems. An example is:\", \"https\": \"//arxiv.org/pdf/1712.03353.pdf\\n\\n\\nIs the original figure contained in the training set used to infer the GAN. If so that can lead to biased results.\\n\\nI have missed a simple baseline in which one simply finds the training image that is closest to the corrupted observed or partially observed image.\\n\\nMy overall impression is that there is not much novelty in the paper as it is simply a combination of well known techniques. E.g. GANs and Bayesian inference with Monte Carlo methods.\"}"
]
} |
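The exchanges above repeatedly refer to Eq. (9) of the paper: MCMC sampling of the posterior over the GAN latent vector z, with a Gaussian likelihood on the measurement y and the inferred image x = G(z), followed by pixel-wise mean and variance as the uncertainty metric. A minimal Python sketch under those assumptions follows; it uses a random-walk Metropolis sampler for brevity where the paper reports HMC, and G, forward_op (the measurement operator), sigma, and z_dim are placeholders.

import torch

@torch.no_grad()
def gan_prior_posterior(G, forward_op, y, sigma, z_dim, n_steps=5000, step=0.05):
    # Unnormalized log posterior over z: Gaussian likelihood on the
    # measurement y plus a standard-normal prior on the latent z.
    def log_post(z):
        resid = y - forward_op(G(z))
        return -0.5 * (resid ** 2).sum() / sigma ** 2 - 0.5 * (z ** 2).sum()

    z = torch.randn(1, z_dim)
    lp = log_post(z)
    samples = []
    for _ in range(n_steps):
        z_prop = z + step * torch.randn_like(z)          # random-walk proposal
        lp_prop = log_post(z_prop)
        if torch.log(torch.rand(())) < lp_prop - lp:     # Metropolis accept/reject
            z, lp = z_prop, lp_prop
        samples.append(G(z))                             # thinning omitted for brevity
    x = torch.stack(samples)
    # Pixel-wise posterior mean and variance: the uncertainty field
    # plotted in the paper's figures.
    return x.mean(dim=0), x.var(dim=0)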
rkxawlHKDr | End to End Trainable Active Contours via Differentiable Rendering | [
"Shir Gur",
"Tal Shaharabany",
"Lior Wolf"
] | We present an image segmentation method that iteratively evolves a polygon. At each iteration, the vertices of the polygon are displaced based on the local value of a 2D shift map that is inferred from the input image via an encoder-decoder architecture. The main training loss that is used is the difference between the polygon shape and the ground truth segmentation mask. The network employs a neural renderer to create the polygon from its vertices, making the process fully differentiable. We demonstrate that our method outperforms the state of the art segmentation networks and deep active contour solutions in a variety of benchmarks, including medical imaging and aerial images. | [
"differentiable",
"polygon",
"end",
"trainable active contours",
"vertices",
"image segmentation",
"iteration",
"local value",
"shift map",
"input image"
] | Accept (Poster) | https://openreview.net/pdf?id=rkxawlHKDr | https://openreview.net/forum?id=rkxawlHKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"pgOXDwUbau",
"CDwLynS9UX",
"saqlhjcpp",
"RRQ6G3k2_",
"e9Nw9mdqfH",
"OwGtF3nhon",
"S1gg0BrviH",
"HkxNP0b7oH",
"SJxyGCWQsB",
"ryx4aT-Xir",
"SJxs76WmjH",
"H1xhlGTQqH",
"SygWvXz19S",
"B1lq6PWaYr",
"Syl8NXg6tH",
"r1xFi7sFFS"
],
"note_type": [
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1578989715410,
1578871199596,
1578219660938,
1576973736694,
1576798747541,
1573504455711,
1573228123533,
1573228038945,
1573227963581,
1573227811341,
1572225523744,
1571918680702,
1571784642512,
1571779374020,
1571562401150
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2378/Authors"
],
[
"~Ali_Hatamizadeh1"
],
[
"ICLR.cc/2020/Conference/Paper2378/Authors"
],
[
"~Ali_Hatamizadeh1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2378/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2378/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2378/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2378/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2378/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2378/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2378/Authors"
],
[
"~Amlan_Kar2"
],
[
"ICLR.cc/2020/Conference/Paper2378/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2378/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"re\", \"comment\": \"Schedule interviews with shortlisted candidates to evaluate their technical skills, problem-solving abilities, and communication. 9. Check References: If possible, reach out to the candidate's references to get insights into their work ethics and performance. 10. NDA and Contract: If you decide to move forward with a candidate, ensure you have a well-defined contract that includes terms, project milestones, payment details, and any non-disclosure agreements (NDAs) if needed. 11. Collaboration Tools: Set up communication and collaboration tools to facilitate smooth project management and communication with the hired Android app developer https://mlsdev.com/blog/hire-android-developer. Remember that hiring the right Android app developer is crucial for the success of your project. Take your time, ask relevant questions, and ensure you find a candidate or team with the necessary skills and experience to deliver a high-quality Android application.\"}",
"{\"title\": \"Related work\", \"comment\": \"This is unfortunate and does not seem right. We will treat your paper as concurrent to our work and think that it should be treated as such by all future reviewers.\"}",
"{\"title\": \"Related Work\", \"comment\": \"Thank you. We originally submitted our paper, as it appears on arXiv, to ICCV 2019, back in March 2019, but it was ultimately rejected for perplexing reasons, despite the fact that our model significantly outperformed the then state-of-the-art building segmentation methods. We will cite your paper in the published version of our work.\"}",
"{\"title\": \"Post-submission related-work\", \"comment\": \"Thank you for letting us know about your paper, which we will cite in the next version.\\n\\nWe ask that you would also cite our work, noting that it was made public on open review before the publication date of your arxiv manuscript (obviously the two efforts are concurrent).\"}",
"{\"title\": \"More Related Work\", \"comment\": \"We appreciate your work and would like to bring to your attention our paper published as:\\n\\n@article{hatamizadeh2019endtoend,\\ntitle={End-to-End Deep Convolutional Active Contours for Image Segmentation},\\nauthor={Ali Hatamizadeh and Debleena Sengupta and Demetri Terzopoulos},\\njournal={arXiv preprint arXiv:1909.13359},\\nmonth={September},\\nyear={2019}\\n}\\n\\nIt introduced the first fully automatic, end-to-end trainable CNN-ACM combination, where the Active Contour Model (ACM) is defined implicitly, as a level set. This has important advantages relative to your use of the explicit ACM formulation. Among them is the fact that our DCAC model can simultaneously segment multiple object instances, as opposed to just a single instance, while dealing with arbitrary shapes and capturing sharp corners as necessary. Our DCAC model is implemented entirely in Tensorflow and is thus end-to-end differentiable and backpropagation trainable. It requires no user intervention either during training or during image segmentation. We trained and tested DCAC on the Vaihingen and Bing Huts datasets and our results established a new state-of-the-art performance by a wide margin at the time (March 2019). \\n\\nOur foregoing publication is highly relevant to your work and should be discussed in your Related Work section, such as in your paragraph on \\\"Building segmentation and recent active contour solutions \\\". Thanks.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The submission presents a differentiable take on classic active contour methods, which used to be popular in computer vision. The method is sensible and the results are strong. After the revision, all reviewers recommend accepting the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for the additional feedback and for upgrading the paper\\u2019s rating\", \"comment\": \"Thank you for the additional feedback and for upgrading the paper\\u2019s rating. We appreciate the timely response and apologize for neglecting to proofread the submitted version more carefully.\\n\\nRegarding the number of iterations. In the original (and revised) submission, Section 4.4 \\\"Number of Iterations\\u201d and Fig.8 (as numbered in the revised version), we experiment with different number of iterations. We noticed that a single iteration is less beneficial across all datasets, while 2-3 iteration results in higher performance. Nevertheless, as the reviewer hypothesised, a single iteration can already produce very good results, as can be seen from our experiments. \\n\\nWe have released a new revision, elaborating on two subjects: (i) The initial guess, and (ii) the effect of the number of iterations T.\"}",
"{\"title\": \"We have improved the manuscript considerably following the feedback\", \"comment\": \"Thank you for the detailed review.\\n\\nIndeed, our approach predicts the displacement field only once. We believe that this is a strength of our approach and is part of its simplicity. Similarly to RNNs with fixed weights, having the displacement map computed only once, does not mean that iterations are not beneficial. Note also that the error is backpropagated from all iterations. \\n\\nRegarding the use of the balloon and curvature term, please see the ablation study, which shows that while our method is extremely competitive even without these losses, the two losses contribute to the results.\\n\\n\\u201cWhy is ||M^t-M|| not evaluated per iteration\\u201d -- As mentioned in the text before Eq.5., M^t is evaluated at each iteration given the updated set of points. Therefore, ||M^t-M|| is evaluated per iteration. The backpropagation is done on the accumulated loss.\\n\\nClarity regarding M^t in Equation 4 - the mask M^t is a filled polygon rendered from the set of points P^t. We have further clarified this in the revision.\\n\\nWe did not search for the best initial diameter, and simply fixed it to the size of 16 pixels across all datasets. Please note that DARNet uses multiple initializations (circles) or different sizes for each dataset as can be seen in Fig.4, while we use only one fixed-size circle. This further supports the robustness of our method.\\n\\nThe caption of Fig.6 (of the original paper, 7 in the revised) is indeed a typo, and the colors were switched. We apologize for this and have fixed it in the revision. The quantitative results in the graphs of Fig.7 (of the original submission, now Fig. 8) support the fact that our method yields better segmentation for simple polygons as well.\\n\\nThe values of Lambda1 and Lambda2 were fixed early during the development process and used across datasets. These reflect the relatively smaller part that the ballooning force and the curvature loss play, in the optimization. This is further supported by the ablation analysis that demonstrates that our method is extremely competitive even without these. Similarly, the set of rotations was set without much thinking early on during training, and since it worked, we kept it as is. We believe that changing the augmentation would contribute little to the results, and does not justify the pitfalls of multiple hypothesis testing.\\n\\nOverall, we hope that the simplicity and elegance of our method are not interpreted as a disadvantage. We believe that the power of our method over previous work (as complicated as they\\u2019ll be) is in the straightforward approach. Following the reviews, we have provided an additional dataset for comparison, and clarified and fixed the relevant sections and figures.\\n\\nWith the CVPR deadline in a week, we would appreciate a timely response, in order for us to be able to plan our submission strategy.\"}",
"{\"title\": \"Thank you for the supportive review\", \"comment\": \"Thank you for the supportive review. We are sorry for the reference mistakes, these are all fixed in the new revised version.\\n\\nThere was a typo in the caption of Fig.6 (Fig. 7 in the revised version), which switched the association between the methods and the colors. We believe that this mistake has led to the remark concerning this figure.\"}",
"{\"title\": \"We have improved the manuscript considerably following the detailed feedback\", \"comment\": \"We thank the reviewer for the comprehensive review.\\n\\nWe apologize for the typos in the previous draft. These have been corrected.\", \"to_your_comments\": \"\", \"comparison_to_curv_gcn\": \"as noted by the reviewer, the differences between the methods are in the support of splines and working with an embedding space in Curve-GCN, vs. displacement map. To emphasize: our method employs a single learned network that produces a displacement image in a single forward pass. The CNN used by Curve-GCN predicts an embedding space of size 28x28 that is further processed by graph neural networks.\\n\\nFollowing the review, we have conducted experiments on Cityscapes, which is the only public dataset available from Curve-GCN experiments (their code is not available). In this dataset, our method obtains SOTA for 6/8 classes and SOTA, by a sizable margin that is larger than the difference between the performance of previous work, in the overall mean mIoU. We believe that this also directly addressed the reviewer\\u2019s comment regarding larger datasets.\\n\\nAll of our models are trained at the resolution of 64x64 pixels. As noted in the original submission when discussing the Vaihingen dataset \\u201cwe experiment with different resizing factors during training\\u201d. Following the review, we share these results in Tab.5 of the revised submission. As can be seen, there is a steep degradation in performance below 64x64, which we attribute to the lack of details. When doubling the resolution to 128x128, the performance slightly degrades, however, that model could improve with further training epochs.\\n\\n\\nThe recent learning-based approaches are either non-competitive or proven to be effective in the specific settings of building segmentation\\\" \\u2014 we have clarified in the text that we mean learning-based active contour methods and have limited the scope of the claim. \\n\\nWe have added a paragraph regarding the use of a 3D renderer for 2D maps. We simply fix the third coordinate and use the code of Kato as is.\\n\\nThe letter \\u201cF\\u201d (for faces) is defined at the beginning of the \\u201cMethod\\u201d section.\\n\\nWe believe that various issues raised by the reviewer were fully addressed in a way that considerably improved the manuscript. With the CVPR deadline in a week, we would appreciate a timely response, in order for us to be able to plan our submission strategy.\"}",
"{\"title\": \"A revised version, including new experiments comparing with Curve-GCN\", \"comment\": \"Following the reviews, we have revised our manuscript to correct the various typos, to clarify some issues and, most importantly, to update the related work section and to compare experimentally with the CVPR 2019 work of Ling et al. As detailed on open review, which this work indeed narrows our novelty claims, there are important differences and the two methods are very much different.\\n\\nWe are happy to report that on the public dataset on which the CVPR 2019 work has been tested, our method outperforms all previous work in 6/8 categories and shows a clear advantage in the mean performance. This, without performing any modification to our method and despite our method being considerably less involved than the other methods.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a straightforward method for end-to-end learning of active contours, based on predicting a dense field of 2D offsets, and then iteratively evolving the contour based on these offsets. A differentiable rendering formulation by Kato et al is employed to make the process of aligning a contour to a GT mask differentiable.\\n\\nThe model shows rather compelling results on small datasets, and is very simple, with very strong parallels to active contours, which is a strength. The results improve those of DARNet, which to the best of my knowledge is the main published work in the space other than Curve-GCN. One thing that would be helpful, is to have an experiment on a large dataset, such as Cityscapes -- right now all the datasets are testing the model in only the small-data regime. Perhaps in a supplement, it would also help to do ablation of how input image / dense deformation resolution affects the result quality -- the input can be subsampled by powers of 2 for the experiment. \\n\\nAs Amlan Kar helpfully points out, the work heavily overlaps with his approach \\\"Fast Interactive Object Annotation with Curve-GCN\\\", CVPR 2019, which is not cited or compared to. Curve-GCN similarly utilizes differential rendering (only a different variant) to match the GT masks. To me, the main difference wrt Curve-GCN is that explicit dense displacement fields are generated by the net and used directly for the iterative refinement steps, while Curve-GCN leverages implicit feature embeddings and uses GCN layers for their iterative updates. A second main difference is that Curve-GCN supports splines and interactive editing, while the proposed approach does not. Beyond these, there are multiple other differences that the authors point out, but those are more of a technical nature. Unfortunately, without a more direct comparison, it is very difficult to evaluate the design choices in the two approaches, which I feel is necessary for proper understanding of the paper.\", \"after_rebuttal\": \"The authors made additions that covered my concerns, so I have switched my recommendation.\\n\\nA few more minor clarity / presentation issues. \\n-- \\u201cThe recent learning-based approaches are either non-competitive or proven to be effective in the specific settings of building segmentation\\\". It's not exactly clear what the point is in the context. Which \\\"learning-based approaches\\\"? \\n-- Typo 'backpropogation'. \\n-- A little better explanation of how a differentiable renderer of Kato works would have been helpful. \\n-- Figure 3 is not referenced in the text, takes a little bit of thought why it is relevant (helps explain Fig 1, but maybe better to show it prior to Fig 1). \\n-- In Eq 4 it\\u2019s not clear what F is. (I see it is explained in Algorithm box, but that's much later)\"}",
"{\"title\": \"Thank you for pointing us to the missing related work\", \"comment\": \"Thank you very much for pointing us to [1], which is indeed related and would be cited appropriately. We would like to enumerate some of the important differences between the methods.\\n1. Supervision and training: We supervise with GT masks only during training, while [1] learns an additional edge branch and a vertex branch. Vertex-based supervision constraints their model to learn a specific location for each point. \\n2. Training: While we train our model end-to-end with a single supervision, [1] performs a two-phase learning, where they first train using edges and vertices supervision, followed by fine-tuning with GT masks. [1] also points out that their rendering process is too slow for training end-to-end using only GT masks (Sec. \\u201cTraining Details\\u201d), while we use a fast, fully differentiable renderer.\\n3. CNN role: Our method employs a single learned network that produces a displacement image in a single forward pass. The CNN used by [1] predicts an embedding space of size 28x28 that is further processed by other networks.\\n4. CNN architecture: We employ a fully convolutional CNN that produces an output that is the same size as the input image, while [1] scales a fixed-sized input to a spatially-limited 28x28 image.\\n5. To emphasize: our method is considerably more direct, and learns a 2-D displacement field in the scale of the input by a fully convolutional network. The update in [1] is by a learned GCN that is applied over graph nodes that employ the 28x28 embedding.\\n6. Loss: we incorporate two loss terms that are based on time-tested pulling forces from the classical active contour literature: the Balloon and Curvature terms. This allows us to work directly with the contour.\\n\\n[1] Fast Interactive Object Annotation with Curve-GCN: https://arxiv.org/abs/1903.06874 - CVPR 2019\"}",
"{\"comment\": \"Thanks for the nice work! I would like to point out our related work [1], published at CVPR '19 here, that also utilizes a differentiable renderer to render polygons into masks similar to your work. It would be nice to discuss contributions in light of our paper as well, thanks!\\n\\n[1] Fast Interactive Object Annotation with Curve-GCN: https://arxiv.org/abs/1903.06874 - CVPR 2019\", \"title\": \"Related Work\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper investigates an image segmentation technique that learns to evolve an active contour, constraining the segmentation prediction to be a polygon (with a predetermined number of vertices). The advantage of active contour methods is that some shapes (such as buildings) can naturally be represented as closed polygons, and learning to predict this representation can improve over pixelwise segmentation.\\n\\nThe authors propose to learn an image-level displacement field to evolve the contour, and a neural mesh renderer to render the resulting mask for comparison with the ground truth mask. The performance compared to prior learning-based active contour methods is impressive.\\n\\nIn section 4.3, there\\u2019s a reference to a \\u201cgap in performance\\u201d between the proposed method and DARNet and a reference to a \\\"low number of vertices,\\\" but a comparison between the two methods as the numbers of vertices is varied seems to only be present in Fig. 6 -- it would be interesting to see an explanation of the discrepancy for the lower number of vertices seen in this figure.\\n\\nOverall, due to the relative simplicity of the approach and impressive performance compared to prior learning-based approaches I recommend to accept.\", \"post_rebuttal\": \"I maintain my recommendation.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"EDIT: The rating changed from '1: Reject' to '6: Weak accept' after the rebuttal. See below for my reasoning.\\n\\nThe submission considers two-class image segmentation problems, where a closed-contour image region is to be specified as the 'object'/region of interest, vs. 'no-object'/background. The approach taken here is end-to-end learning with an active-contour type approach. The main loss, in contrast to other active contour approaches, contains a direct difference of the estimated polygon area vs. ground truth polygon area.\\n\\nThe applied method seems conceptually quite simple (as admitted by the authors in Section 5), and the neural rendering approach seems quite neat, but both method presentation (Section 3) and evaluation (Section 4) seem incomplete and leave significant open questions.\\n\\nOne of my main concerns is related to the fact that the displacement field is static and, according to Figure 1 and Algorithm 1, is evaluated only once per image.\\nIf the displacement field J is not conditioned on the current polygon shape (and this does not seem to be the case), then I am wondering why T iterations in the sampling/rendering part are necessary at all. When only considering L_seg, the optimal solution should be found within one iteration, since the displacement field will be able to provide the optimal answer. So maybe these iterations are only necessary when L_B and L_K are incorporated?\\nIn any case, it is unclear why even L_seg is accumulated (using unweighted mean) over all T iterations before being backpropagated. Does this mean that these iterations are not meant to yield shape improvements? Why is ||M^t-M|| not evaluated per iteration, for the purpose of minimization?\\nIt is also not sufficiently clear whether M^t in Equation 4 is a filled polygon mask, or if the mask is just related to the boundary (with a certain width). In absence of explanatory image material, I am assuming the former.\\nOverall the method description remains weak, since obvious questions/concerns such as the above are not addressed.\\n\\nThe experimental results look good from a quantitative point of view, and indeed, the strongest baselines, e.g. DARNet, are outperformed significantly in many cases.\\nSection 4 mostly focuses on quantitative evaluation and lots of picture examples, but fails to give insight into particular behaviors, failure cases, etc.\", \"the_evaluation_procedure_is_cast_a_bit_into_doubt_by_two_things\": \"1) In Figure 4, the initializations (blue circles) between the DARNet method and the proposed method are very different in size. I am wondering if this then still constitutes a fair comparison, and I have some doubts there. 2) In Figure 6, the proposed method consistently looks much worse than the DARNet baseline (and, in contrast to the baseline, completely fails for 4 vertices), unless the colors were swapped in the description.\\n\\nOverall, I do not think the submission is in a good enough shape for acceptance.\", \"minor_remarks\": [\"The values for lambda_1 and lambda_2 seem to come out of thin air, and they also seem quite small. 
It needs to be mentioned how they were determined.\", \"Data augmentation by rotation seems to be missing several values (between 270 and 260 degrees) and also not evenly spaced. Is this a typo or on purpose? In the latter case, an explanation is needed, since this seems weird.\", \"Section 4.3: There is no \\\"Figure 4.2\\\", I assume you mean Figure 6, which otherwise remains unreferenced.\", \"Section 4.3, Ablation Study: Don't use the word \\\"derivatives\\\" when you're talking about variations.\", \"Section 4.3, Ablation Study: \\\"even without no auxiliary loss\\\" -> remove \\\"no\\\" or change \\\"without\\\" -> \\\"with\\\"\", \"-------------\"], \"post_rebuttal_comments\": \"I have read the revised version, as well as the other reviews and all authors' comments. The inclusion of an evaluation on a larger-size data set is highly appreciated, and seems to indeed validate the robustness of the method. Typos were fixed, including the switched color descriptions in Figure 7 (which should not have passed initial submission in the first place, if the text had been proofread properly).\\n\\nSeveral of the open questions (e.g. \\\"Why is L_seg accumulated before backpropagation?\\\", \\\"Why is the algorithm iterative if the displacement map is computed only once, if not for the other loss terms?\\\", \\\"Choice of values for lambda_1, lambda_2\\\", Initial diameter of initialization\\\") have been somewhat addressed by the authors in the rebuttal comment, though not in great detail.\\n\\nBased on the quality of the results across data sets, and because I believe that the timely publication of this rather simple method can benefit further research in this area, I have adjusted my score to a 'Weak accept'. That said, I still do not think it is a good manuscript, and my score should be seen as a massive benefit of the doubt toward the authors.\\n\\nMost importantly, above questions have NOT been adequately addressed in the actual revised text. The authors claim they have \\\"improved the manuscript considerably\\\", but yet I see more reasoning for certain choices described in the comment here than in the actual manuscript. Most of the changes are in Section 2 and the new Section 4.3, but not much relevant to my comments changed in Section 3.\\n\\nFor example, balloon and curvature losses aside, it is still not clear why an iterative approach would be helpful past the first iteration. An ideal displacement map that is not conditioned on the polygon should point, for each pixel, straight to the closest contour pixel. It is clear to me that this may not be what is being learned when multiple iterations are forced, yet it is not addressed why multiple iterations should be beneficial. (I could see why they could be beneficial if the approach was conditioned on the polygon vertices, to avoid vertex collapsing, but it's not.)\\n\\nA good submission preempts these kinds of questions by addressing them carefully. What seems crystal clear to the authors will not be crystal clear to every reader. The authors should be more careful to include their reasoning in the actual text, which I believe this is essential for proper, easy understanding of the paper.\"}"
]
} |
Bye6weHFvB | Plan2Vec: Unsupervised Representation Learning by Latent Plans | [
"Ge Yang",
"Amy Zhang",
"Ari Morcos",
"Joelle Pineau",
"Pieter Abbeel",
"Roberto Calandra"
] | Creating a useful representation of the world takes more than just rote memorization of individual data samples. This is because fundamentally, we use our internal representation to plan, to solve problems, and to navigate the world. For a representation to be amenable to planning, it is critical for it to embody some notion of optimality. A representation learning objective that explicitly considers some form of planning should generate representations which are more computationally valuable than those that memorize samples. In this paper, we introduce \textbf{Plan2Vec}, an unsupervised representation learning objective inspired by value-based reinforcement learning methods. By abstracting away low-level control with a learned local metric, we show that it is possible to learn plannable representations that inform long-range structures, entirely passively from high-dimensional sequential datasets without supervision. A latent space is learned by playing an ``Imagined Planning Game" on the graph formed by the data points, using a local metric function trained contrastively from context. We show that the global metric on this learned embedding can be used to plan with O(1) complexity by linear interpolation. This exponential speed-up is critical for planning with a learned representation on any problem containing non-trivial global topology. We demonstrate the effectiveness of Plan2Vec on simulated toy tasks from both proprioceptive and image states, as well as two real-world image datasets, showing that Plan2Vec can effectively plan using learned representations. Additional results and videos can be found at \url{https://sites.google.com/view/plan2vec}. | [
"Unsupervised Learning",
"Reinforcement Learning",
"Manifold Learning"
] | Reject | https://openreview.net/pdf?id=Bye6weHFvB | https://openreview.net/forum?id=Bye6weHFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1YYTsovr96",
"B1gvgac3iH",
"Hylqshc2jr",
"BJxqc253iH",
"HJlrunc2sS",
"BJgEsgciiS",
"S1x9U4-I5r",
"HyeuCKuCtB",
"H1g_F7Z0KB",
"B1eMLIT1KS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798747511,
1573854446667,
1573854370128,
1573854354173,
1573854317056,
1573785755642,
1572373585692,
1571879375980,
1571849088035,
1570915913629
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2377/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2377/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2377/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2377/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2377/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2377/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2377/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2377/AnonReviewer3"
],
[
"~Aravind_Srinivas1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a representation learning objective that makes it\\namenable to planning, \\n\\nThe initial submission contained clear holes, such as missing related work and only containing very simplistic baselines. The authors have substantially updated the paper based on this feedback, resulting in a clear improvement.\\n\\nNevertheless, while the new version is a good step in the right direction, there is some additional work needed to fully address the reviewers' complaints. For example, the improved baselines are only evaluated in the most simple domain, while the more complex domains still only contain simplistic baselines that are destined to fail. There are also some unaddressed questions regarding the correctness of Eq. 4. Finally, the substantial rewrites have given the paper a less-than-polished feel.\\n\\nIn short, while the work is interesting, it still needs a few iterations before it's ready for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your thoughtful review! We have incorporated these into the updated draft, with three new experiments.\", \"comment\": \"Thank you for your thoughtful comment and constructive review, and this great opportunity to improve our paper.\\n\\nWe incorporated all the suggested improvements, clarifications with respect to the method, and details to enable reproducibility in an updated draft. Specifically, we have included a DQN baseline which learned from a fixed dataset, but does not explicitly build the graph. Our new experiment shows that by constructing a graph, Plan2vec is able to attain success with much less data than off-line Q learning. Please refer to Fig. 6 in the updated draft.\\n\\n* We decided to use DQN instead of SAC here because the action space is discrete. SAC would work better if the action space is continuous. Note our state space is continuous, and the action space is discretized in the simulated navigation domain.\\n\\nWe have also fixed the SPTM baseline to use the original, full method. We want to show a more complete picture, so in the updated draft, we added a new figure plotting the planning success rate of Plan2Vec, SPTM versus a random baseline, w.r.t. a range of planning budget. By increasing the plan-ahead horizon, both methods improve in success rate. But Plan2Vec gains a large gap due to its incorporation of an amortized value function as a planning heuristic, since the local simularity function that SPTM uses is limited to a small neighborhood.\\n\\nTo give an idea of how much distance evaluation Dijkstra needs, we also added a figure comparing the empirical inference time between Plan2vec and Dijkstra. We have also included further discussion into the benefits of using the learned representation space to get O(|E|) planning time as opposed to O(|E|+|V|log|V|) which Dijkstra\\u2019s gets.\\n\\nWe\\u2019ve also added an additional section in the Appendix detailing architecture and hyperparameter details for easier reproducibility.\\n\\n--------\", \"detailed_responses_for_each_specific_comment\": \"(A) What policy is used to collect the training data on each environment? \\n \\n We use a random policy to collect the passive dataset for each environment.\\n\\n(B) What is the relation between the \\\"global embedding\\\" \\\\Phi and the \\\"goal-conditioned value function\\\" V_\\\\Phi(x, x_prime) in Algorithm 2? \\n\\n\\tV_\\\\phi(s, g):=||\\\\phi(s) - \\\\phi(g)||_2, defined as the Euclidean distance between the two states in the learned embedding space.\\n\\n(C) What is the difference between the local metric function \\\\phi and the reward function in Algorithm 2? Are they the same? \\n\\n The local metric function is used as a cost function for the MDP, so yes they are the same.\\n\\n(D) If they are the same, how can the local metric accurately estimate rewards for states x and x_g that are far apart from one another as would naturally be the case when training the value function?\\n\\n\\tThe cost is a constant negative factor with each step taken, it is not a shaped reward of distance between states x and x_g.\\n\\n(E) What does the notation N(1, \\u03f5) in line 5 of Algorithm 2 mean?\\n\\n\\tWe meant the \\u03f5 neighborhood around 1, i.e. [1-\\u03f5, 1+\\u03f5], and updated the paper with the explicit definition.\\n\\n(F) At convergence, it takes roughly 8 planning steps. For each one of the planning steps, we search the neighborhood up to the 3rd neighbor. 
This corresponds to roughly 4-6 neighbors (2-3 on each side) in the Manhattan dataset. This is roughly 3 * 8 = 24 units, half of the 50 unit radius. On the Manhattan-tiny dataset, this is roughly 4 blocks in diameter.\"}",
"{\"title\": \"Response to R1\", \"comment\": \"Thank you for your thoughtful comments! We have completely re-written the method section of the paper, and have added three more experiments showing the improved sample complexity of our method compared with standard off policy method, and the improved planning performance of plan2vec versus SPTM under a range of planning budget. Please refer to the revised draft for the additional references, and the updated website for more qualitative results.\", \"detailed_responses\": [\"We apologize for the Methods section being unclear! Please refer to the updated draft for the revised version.\", \"It is indeed a mistake in Alg 2 for $\\\\Phi$ to take in $x$ and $x\\u2019$ and be called an embedding function.\", \"To clarify the notations, we decided to refer to all metric functions as $f_\\\\phi$, where the subscript $\\\\phi$ indicates the embedding we use. Both $\\\\phi$ and $\\\\Phi$ map states to a representation space. The value function is constructed by either taking the $L_p$ distance between the two latent vectors, or by passing them through a multi-layer perceptron. Plan2vec treats the local metric function $f_\\\\phi(x, x')$ as the negative reward, and the global metric function $f_\\\\Phi(x, x')$ as the negative value function. By running value iteration using the plans sampled on the graph, using this global metric function as a planning heuristic, plan2vec bootstraps the information in the local metric function to the global metric function. In doing so also obtaining a compact representation in the form of the global embedding $\\\\Phi$.\", \"We have experiments exploring different kernels and different type of (pseudo-) metric that one can impose on the global embedding, that we plan to introduce to later versions of the paper.\", \"\\u201cFind n\\u201d is done using the local metric, which outputs a continuous score denoting closeness in time steps. We can find all 1-step neighbors by passing in all pairs of states to f_\\\\phi, and taking those that are in neighborhood [1-\\\\epsilon, 1+\\\\epsilon].\", \"We have incorporated more of the relevant citations relating to learning for planning, as well as incorporating experiments comparing against the full SPTM setting, showing computational trade-offs with that method, and with Soft Actor-Critic, as a 1-step greedy policy method.\", \"We appreciate the references to additional generative models. The VAE visualizations intend to show how generative models do not learn representations that extend well to sequential data and planning. The referenced works do not extend to sequential data, which is the setting we are studying.\"]}",
"{\"title\": \"Thank you for your thoughtful comment!\", \"comment\": \"We have incorporated additional discussion with respect to gradient-based planning methods such as UPN and VIN. \\nThank you for the catch on Eq 4, it has been updated with the transition probability moved inside to act on the value estimate for next state.\\n\\nIncluded the Oord et al. paper citation. A dynamics model is not learned, the local metric is used to source neighbors from the dataset.\\n\\nOn navigation we used two times more negative samples than close neighbors. We did not see issues with performance when visually inspecting neighbors and checking error on a validation set. On the rope dataset, we adjust the ratio is larger.\\n\\nWe replace a dynamics model with the local metric sourcing next step neighbors. We build a lookup table of neighbors by passing all state pairs through the local metric and use a threshold to control number of neighbors. \\n We have removed the Dijkstra\\u2019s pseudocode to clarify focus on value iteration to learn a representation as our main method.\\n\\nWe have updated the methods section to clarify how neighbors are sourced using the local metric, and that plans are generated with a greedy policy over neighbors by taking the minimum over Euclidean distances from neighbors to the goal. \\n\\nWe\\u2019ve clarified the description of Figure 4. There are local rotations seen, but globally the representation is consistent.\\nThank you for the reference, we\\u2019ve added discussion on Laplacian methods [3] to related work. \\n\\nThe action spaces for these environments are all discrete for collecting the datasets except rope domain, but because the action space is abstracted away in our method, the action space can be discrete or continuous. These details are missing because they are not needed by our method, we only assume that a set of disjoint trajectories are given.\\n\\nThe StreetLearn dataset is generated from Google StreetView and consists of 3D photos from pedestrian level of the entire island of Manhattan. From this, we curated several independent randomly generated trajectories over several blocks and perform goal-based navigation from streetview images. There are a limited number of start and goal states in this dataset, and therefore almost all the start and goal states Plan2Vec is evaluated on are never seen before.\\n\\nWe have removed the final sentence in the conclusion for succinctness.\", \"questions\": \"(1) What do you see as the contribution of the work? Why is it new/different from existing literature?\\n\\nIn this work, we combine two ideas. The first is to train a local metric with a contrastive loss, and the second is to distill the graph defined by that local metric into a representation space that can perform O(1) planning per step. \\nThis is novel in that we can frame this in a self-supervised, passive setting with no assumptions made about the action space and without requiring actions in the dataset. While there has been similar work to leverage graph building, it either uses Dijkstra\\u2019s at inference time, leading to O(|E|+|V|log|V|) total planning time, or builds the graph using the replay buffer and compresses the graph with Laplacian methods, which does not apply to passive datasets.\\n\\n(2) How exactly do you generate a plan?\\n\\nAt training time we use value iteration to learn the optimal value function for each state and goal pair, and encode it as Euclidean distance between each of those pairs. 
Then, planning can be done via a 1-step greedy policy on the representation space by comparing Euclidean distances of neighboring states to the goal and choosing the shortest.\\n\\n(3) How is the UVFA V used?\\n\\nV is used to learn a representation that is globally consistent. The V is equivalent to Euclidean distance in the learned representation space, and therefore greedy policy in the representation space gives the 1-step greedy policy obeying V.\\n\\n(4) When to use UVFA? When to use Dijkstra? Why are the choices in the experiments as made?\\nUVFA is the main method of the paper, which learns a representation which can plan in O(1). Dijkstra\\u2019s is equivalent to optimal policy, which we use for analysis of optimal path lengths.\"}",
"{\"title\": \"Overall response\", \"comment\": \"We thank the reviewers for detailed comments and constructive suggestions to improve the paper.\\n\\nSince the review period, we re-wrote large sections of the draft. We have added missing citations from the control and RL side, clarified the method, added hyperparameters and implementation details, and relevant qualitative results on our website.\\n\\nIn response to the constructive reviews, we also added a number of new experiments, to highlight the benefit of combining the idea of 1. constructing a graph on top of a passive dataset, and 2. extrapolating a local metric function to an amortized value function, for planning. \\n\\nIn this updated draft, we incorporate:\\n\\n1. an additional baseline of DQN\\n2. a full SPTM baseline over a range of different planning budget. \\n3. Additional analyses of the computation at inference time of our method in comparison to Dijkstra\\u2019s. \\n4. We have also included empirical results of computation needed for various planning lengths of the two methods.\\n\\nFor these new experiments, please refer to Fig. 6 in the updated paper.\\n\\nFor improved clarity, we:\\n\\n1. clarified the introduction\\n2. added missing definitions for terms\\n3. re-written the methods section to explain how the local metric is used to generate neighbors in place of a dynamics model, how UVFA works in the learned representation space. \\n\\nWe hope the reviewers find the new draft of the paper to be much clearer. Please refer to the personalized responses below for detailed responses\"}",
"{\"title\": \"Addressing Related Work\", \"comment\": \"Hi Aravind, \\n\\nThank you for reading our paper! We have updated the draft to include these related works, and moved the related work section to the main text (it was placed in the appendix due to space constraints, a decision the authors regret). We will post it here as soon as it is ready for upload.\\n\\nIn comparison with VIN, GPPN and CMP [3-5], a major difference is that Plan2vec does not assume that the feature vectors live on a 2D grid world with known environment dynamics and well-defined local connectivity between states. Instead, Plan2vec is more akin to graph-based approaches such as SPTM, DeepWalk and diffusion maps. Plan2vec learns the local connectivity of the domain contrastively, and runs the value iteration through graph-convolution. This is numerically more difficult as reported in [5], and according to our experience. Also note the number of neighbors for each node is not a constant, but is conditioned on each sample.\\n\\nBoth UPN and DPN [1, 2] require expert observations paired with action data. They rely on supervised learning to get to the reward through a differentiable forward model, trained end-to-end [0] by grounding through expert actions. Plan2vec in this regard follows Causal InfoGAN, where the dataset (rope) only contains observations but not the actions. This is a common scenario with human demonstrations and deformable object manipulation tasks because actions are not explicitly observable. To get around the requirement of having an expert sampling policy, Plan2vec constructs a graph from the dataset, which allows it to sample segments from multiple different trajectories. This alleviates the need for each trajectory to be optimal in its entirety. Additionally, Plan2vec can also generalize to longer planning horizons beyond the sampled trajectories used during learning, as shown in the rope experiments; UPN and DPN on the other hand only guarantee generalization to trajectory lengths observed during training.\\n\\nAnother contribution of our work is to demonstrate that with unsupervised representation learning, we can go beyond conditionally generating samples in a small temporal window, and learn features that are not visually apparent. We demonstrate the first point by showing that our method can learn a plannable representation on the rope domain without image generation. Our result is superior to ones that appeared in [7] because the planning horizon is much longer (hundreds of steps vs just a few). The simulated rope domain in [2] has a similar difficulty level because the background is fixed for the entire simulated dataset, making it trivial for the convnet to learn to ignore [8].\\n\\nWe demonstrate the second point with results on the StreetLearn dataset, a complex real world dataset where the location is hard to identify from individual streetview alone. We provide quantitative results, and show that Plan2vec achieves good performance on this domain in comparison to baselines. 
Additionally, it generalizes to initial and goal state pairs that it has not seen during training.\\n\\n\\n[0] An On-line Algorithm for Dynamic RL and Planning, Schmidthuber et al\\n[1] Universal Planning Networks, Srinivas et al\\n[2] Unsupervised Visuomotor Control through Distributional Planning Networks, Yu et al\\n[3] Value Iteration Networks, Tamar et al\\n[4] Gated Path Planning Networks, Lee et al\\n[5] Cognitive mapping and planning for visual navigation, Gupta\\n[6] From Language to Goal, Fu et al\\n[7] Learning plannable representations with Causal InfoGAN, Kurutach\\n[8] Learning Robotic Manipulation through Visual Planning and Acting, Wang et al\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper aims at learning a latent representation of images in the setting of sequential decision making. The representation are learned such that there exists a metric that correctly reflects the difference in reachability between points in the neighborhood of the current observation and the goal.\\n\\nI'm struggeling to understand the core approach of the paper. More specifically, while learning the local metric (Alg. 1) seems clear, I can not understand the details of Alg.2 (which btw. is never referenced in the text). The surrounding paragraph is not detailed enough. Why is \\\\Phi(x, x') denoted a global embedding? \\\\Phi has two inputs, shouldn't that be some sort of a metric? How is \\\"find n\\\" done? There is a lot of talk about embeddings, but they are actually not occuring in section 3. What is a 'plannable representation'?\\n\\nSome of the experiments compare to VAE and its learned embedding space. Shouldn't the comparision be to models that think about the Riemannian geometry of a VAE, e.g. \\\"Latent space oddity: on the curva-ture of deep generative models\\\". There are a several citations missing in that direction, going back to at least \\\"Metrics for probabilistic geometries\\\" by Tossi et al. in 2014. As it was also pointed out in a public comment, relevant citations related to \\\"Learning for planning\\\" seem to be missing, too. Finally, a wider set of experiments needs to demonstrate the method (again, considering the scope of the as-of-now-not-cited papers).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"## Paper Summary\\n\\nWhile cast slightly differently in the intro, it seems to me that this paper learns a goal-conditioned value function that is used at test time to construct visual plans by selecting an appropriate sequence of data points from the training data. Similar to prior work they learn a local distance metric without supervision using a temporal proximity objective and then construct a graph of the training data points using this metric. The main novelty that this paper introduces seems to be the idea to distill the results of planning algorithms run at training time into a global, goal-conditioned value function, which allows to reduce the required planning time at test time. The authors perform experiments on constructing visual plans for a simulated toy navigation task, a robotic rope manipulation task and the StreetLearn navigation task. The paper reports favorable results under a time-constrained test setting but does not include strong baselines that were designed for this setting. \\n\\n## Strengths\\n\\n- bootstrapping a learned local distance metric to a global distance metric to reduce test-time planning cost is an interesting problem\\n- the paper has nice visualizations / analysis on the toy dataset\\n- the learning procedure for the local distance metric is clearly described\\n- the paper uses a large variety of different visualizations to make concepts and results clearer \\n\\n## Weaknesses\\n\\n(1) missing links to related work: the author's treatment of related work does not address the connections to some relevant papers (e.g. [1-3]) or is only done in the appendix (especially for [4]). It is not clearly delineated between techniques and ideas that are introduced in other papers (see (2) below) and the novel parts of this work. This makes it hard to understand the actual contributions of this paper. \\n\\n(2) only minor contribution: the core parts of this paper build heavily on prior work: time-contrastive objectives for distance learning have been introduced in [5] and also been used in a very similar setup as here in [4], further [4, 3] also use semi-parametric, graph-like representations for planning with a learned local distance metric. The major contribution seems to be to distill the plans derived with either (a) n-step greedy rollout or (b) Dijkstra graph-search into a value-function so that planning does not need to be performed at test time. This somewhat small contribution is in contrast to the claims from the introduction that this paper \\\"pose[s] the problem of unsupervised learning a plannable representation as learning a cognitive map of the domain\\\".\\n\\n(3) comparison to weak baselines: the main comparison in the experimental section is to a version of [4] where the authors constrain the planning horizon to a single step, which means effectively greedily using the local metric from Sec. 3.1. To be clear: this is in no way the method of [4]: they use Dijkstra-based planning at test time and it is clear that a \\\"version\\\" of [4] that does not use planning is not able to work. To me this seems rather like an ablation of the proposed method than a real baseline. 
The baseline that plans greedily with embeddings based on visual similarity has even less hope of working. The paper lacks thorough comparison to (a) baselines with the same semi-parametric structure that perform planning at test time (like the real method of [4]) and (b) methods that generate reactive policies without constructing a semi-parametric memory (e.g. off-policy RL). Only then a thorough comparison of pros and cons of planning at training/test time is possible (see detailed suggestions below).\\n\\n(4) lack of qualitative samples for generated plans: for both the rope and the StreetLearn domain the authors do not provide thorough evaluation. For the rope domain only a single qualitative rollout is shown, for the StreetLearn domain no qualitative samples are provided for either the proposed method or the comparisons. (see suggestions for further evaluation below)\\n\\n(5) explanation of core algorithmic part unclear: the explanation of how the local metric is used to learn the global value function is somewhat unclear and the used notation is confusing. Key problems seem to be the double-introduction of symbols for the local metric in Alg. 2 and the confusing usage of the terms \\\"global embedding\\\" and \\\"value function\\\" (see detailed questions below) \\n\\n(6) terms used / writing structure makes paper hard to follow: the connection between used concepts like \\\"global embedding\\\", \\\"plannable representation\\\" and \\\"goal-conditioned value function\\\" are not clear in the writing of the paper. The authors talk about concepts without introducing them clearly before (e.g. problems of RL are listed in the intro without any prior reference to RL).\\n\\n(7) lacks detail for reproducing results: the paper does not provide sufficient detail for reproducing the results. Neither in the main paper nor in the appendix do the authors provide details regarding architecture and used hyperparameters. It is unclear what policy was used to collect the training data, it is unclear how the baselines are working in detail (e.g. how the 1-step planning works) and how produced plans are checked for their validity.\\n\\n\\n## Questions\\n\\n(A) What policy is used to collect the training data on each environment?\\n(B) What is the relation between the \\\"global embedding\\\" \\\\Phi and the \\\"goal-conditioned value function\\\" V_\\\\Phi(x, x_prime) in Algorithm 2? \\n(C) What is the difference between the local metric function \\\\phi and the reward function in Algorithm 2? 
Are they the same?\\n(D) If they are the same, how can the local metric accurately estimate rewards for states x and x_g that are far apart from one another as would naturally be the case when training the value function?\\n(E) What does the notation N(1, \\\\eps) in line 5 of Algorithm 2 mean?\\n(F) What is the expectation over the length of trajectories between start and goal on the StreetLearn environment (to estimate what percentage of that the success horizon of 50 steps is)?\\n\\n\\n## Suggestions to improve the paper\\n\\n(for 1) please add a more thorough treatment of the closest related works on semi-parametric memory + learned visual planning + learned distance functions (some mentioned below [1-5]) to the main part of the paper, clearly stating differences and carving out which parts are similar and where actual novelty lies.\\n\\n(for 2) please explain clearly the added value of distilling the training-plans into a value function for O(1) test-time planning and point out that this is the main difference e.g. to [4] and therefore the main contribution of the paper. \\n\\n(for 3) in order to better understand the trade-offs between doing planning at test time (like [3,4]) or learning an O(1) planner contrast runtime and performance of both options (i.e. compare to the proper method of [4]). This will help readers understand how much speed they gain from the proposed method vs how much performance they loose. It might also make sense to include an off-policy RL algorithm (e.g. SAC) that uses the local metric as reward function (without constructing the graph) to investigate how much planning via graph-search can help at training time. Another interesting direction can be to investigate the generalization performance to a new environment (e.g. new street maze, new rope setup) after training on a variety of environment configurations. [3] showed that explicit test-time planning performs better than \\\"pure\\\" RL, it would be interesting how the proposed \\\"hybrid\\\" approach performs.\\n\\n(for 4) please add randomly sampled qualitative results for both environments and all methods to the appendix. It can additionally be helpful to add GIFs of executions to a website. It might also be interesting to add a quantitative evaluation for the plans from the rope environment as was performed in Kurutach et al. 2018.\\n\\n(for 5) please incorporate answers to questions (B-E) into the text in Sec 3.2 explaining Algorithm 2. It might also help to structure the text in such a way as to follow the flow of the algorithm.\\n\\n(for 6) restructure and shorten the introduction, clarify terms like \\\"inductive prior within image generation\\\" or \\\"non-local concepts of distances and direction\\\" or \\\"conceptual reward\\\" or \\\"planning network\\\", clarify how the authors connect the proposed representation learning objective and RL. Avoid sentences that are a highly compressed summary of the paper but for which the reader lacks background, like in the intro: \\\"training a planning agent to master an imagined \\u201creaching game\\u201d on a graph\\\".\\n\\n(for 7) add details for architecture and hyperparameters to the appendix, add details for how baselines are constructed to the appendix. add details about data collection and evaluation for all datasets to the appendix (e.g. how is checked that a plan is coherent in StreetLearn). 
It might also help to add an algorithm box for the test time procedure for the proposed method.\\n\\n\\n## Minor Edit Suggestions\\n- Fig 2 seems to define the blue square as the target, the text next to it describes the blue square as the agent, please make coherent\\n- for Fig 7: the numbers contained in the figure are not explained in the caption, especially the numbers below the images are cryptic, please explain or omit\\n\\n\\n[Novelty]: minor\\n[technical novelty]: minor\\n[Experimental Design]: Okay\\n[potential impact]: minor\\n\\n################\\n[overall recommendation]: weakReject - The exposition of the problem and treatment of related work are not sufficient, the actual novelty of the proposed paper is low and the lack of comparison to strong baselines push this paper below the bar for acceptance.\\n[Confidence]: High\\n\\n\\n[1] Cognitive Planning and Mapping, Gupta et al., 2017\\n[2] Universal Planning Networks, Srinivas et al., 2018\\n[3] Search on the Replay Buffer: Bridging Planning and Reinforcement Learning, Eysenbach et al., 2019\\n[4] Semi-Parametric Topological Memory for Navigation, Savinov et al., 2018\\n[5] Time-Contrastive Networks, Sermanet et al., 2017\\n\\n\\n### Post-rebuttal reply ###\\nI appreciate the author's reply, the experiments that were added during the rebuttal are definitely a good step forward. The authors added comparison to a model-free RL baseline as well as proper comparison to a multi-step planning version of SPTM. However, these comparisons were only performed on the most simple environment: the open room environment without any obstacle. These evaluations are not sufficient to prove the merit of the proposed method, especially given that it is sold as an alternative to planning methods. The method needs to be tested against fair baselines on more complicated environments; the current submission only contains baselines that *cannot* work on the more complicated tasks. I therefore don't see grounds to improve my rating.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to learn a representation space that is useful for planning. This is done by a 2-step process: (1) learn a metric to reflect locality, (2) utilize that metric to learn a global metric by utilizing UVFAs from Reinforcement Learning literature. While the idea may be an interesting direction to improve sample efficiency of planning, it is not the first work that proposes to combine planning with representation learning, and I do not think the work is clearly presented/motivated/validated sufficiently to deem it acceptable.\\n\\nAs presented, there are many details that are unclear or imprecise. I list them below.\", \"feedback\": \"(1) The contrast between learning playable representations vs. planning via representation learning (as proposed) is poorly motivated. The presentation ignores well recognized works in the latter in terms of UPNs [1], VINs [2].\\n(2) The RL background section is poorly presented -- as written Equation (4) seems incorrect. Further, is a dynamics model learned to be able to utilize the correct form of Equation (4)? It is never specified. How exactly is a plan extracted for any of the experiments?\\n(3) Possibly wrong citation for Equation (2) -- maybe cite the Oord et. al. paper? Further, c vs C -- the notation in general in the paper is inconsistent and poor.\\n(4) Algorithm 1, line 4 -- typos for x. Further for Contrastive learning methods to be effective, the negative sampling needs to be much higher. What ratio of samples is used for local metric learning?\\n(5) If an actual forward dynamics model is not used (Section 3 end), then as given in Algorithm 2 from pseudocode I'm completely unsure how a plan is extracted -- How is the UVFA used to extract a plan? What is N(1,\\\\epsilon)? P-norm?\\n(6) The Dijkstra psuedocode is rather incomplete.\\n(7) How is sampling done with the local metric function? How is a plan generated? These important details are missing.\\n(8) Figure 4 is poorly presented/annotated. In the description \\\"Plan2Vec correctly stretched out learned..\\\" -- no it doesn't, visually it seems wrapped too. \\n(9) There exists literature in RL to combine the 2 step metric learning process to 1 step. This is relevant. [3].\\n(10) What is the action space of these domains? Based on visualization in Figure 3 (2), are the actions continuous? What is the action space for Figure 7? These details details are missing.\\n(11) Description of the StreetLearn dataset would be useful. Further an example of why Plan2Vec generalizes (last para, Section 4.3) would be useful. Just statement based claims seem rather vacuous.\\n(12) The last statement in Conclusion - why? The paper has made an argument against utilizing generative modelling in an unsupervised manner. So why would including it improve it? Such unexplained statements reflect poorly.\", \"questions\": \"(1) What do you see as the contribution of the work? Why is it new/different from existing literature?\\n(2) How exactly do you generate a plan?\\n(3) How is the UVFA V used?\\n(4) When to use UVFA? When to use Dijkstra? 
Why are the choices in the experiments as made?\", \"some_typos_to_help_future_version_of_the_paper\": \"(1) Section 2 -- We now overview --> We now review.\\n(2) Section 2, UVFAs -- expected discounted future value --> expected sum of discounted future rewards.\\n(3) Section 2 completely ignores the discount factor/horizon of an MDP, although the utility here I suppose relies on the horizon aspect.\\n(4) Figure 6 explanation is very sloppy (description and body).\\n(5) The Zhang et. al. reference in Section 4.2 is unclear.\\n\\nWhile I am not from the planning community, I am from the RL community - and as presented the paper is ignoring a lot of details, and was extremely difficult to piece together for me.\\n\\n[1] Srinivas, Aravind, et al. \\\"Universal planning networks.\\\" arXiv preprint arXiv:1804.00645 (2018).\\n[2] Tamar, Aviv, et al. \\\"Value iteration networks.\\\" Advances in Neural Information Processing Systems. 2016.\\n[3] Wu, Yifan, George Tucker, and Ofir Nachum. \\\"The laplacian in rl: Learning representations with efficient approximations.\\\" arXiv preprint arXiv:1810.04586 (2018).\"}",
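To ground the review's questions about the contrastive local-metric step (its point 4 on negative-sampling ratios, and how Eq. 2 is optimized), here is a minimal InfoNCE-style sketch of learning a local metric from temporally adjacent observations. This is an illustrative reconstruction, not the submission's code: the encoder, temperature, and batch construction are all assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_local_metric(encoder, x_t, x_tp1, x_neg, temperature=0.1):
    """Contrastive local-metric loss: temporally adjacent frames (x_t, x_tp1)
    are positives, and x_neg holds K negatives per anchor (shape (B*K, ...))."""
    z_a = F.normalize(encoder(x_t), dim=-1)        # (B, D) anchor embeddings
    z_p = F.normalize(encoder(x_tp1), dim=-1)      # (B, D) positive embeddings
    z_n = F.normalize(encoder(x_neg), dim=-1)      # (B*K, D) negative embeddings
    B, D = z_a.shape
    z_n = z_n.view(B, -1, D)                       # (B, K, D)
    pos = (z_a * z_p).sum(-1, keepdim=True)        # (B, 1) anchor-positive similarity
    neg = torch.einsum('bd,bkd->bk', z_a, z_n)     # (B, K) anchor-negative similarities
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(B, dtype=torch.long)      # the positive sits at index 0
    return F.cross_entropy(logits, labels)
```

The review's point (4) about the negative-sampling ratio corresponds to the choice of K in this sketch.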
"{\"comment\": \"I find it quite necessary to point out the following facts:\\n\\n1. The paper tries to introduce plannable representation learning as a problem they study and contribute to without doing justice to some of the better papers on this topic such as\\na. Value Iteration Networks - Tamar et al 2016\\nb. Cognitive Mapping and Planning - Gupta et al 2017\\nc. Universal Planning Networks - Srinivas et al 2018\\nd. Gated Path Planning Networks - Lee et al 2018\\ne. Distributional Planning Networks - Yu et al 2019\\n\\n2. InfoNCE / NCE for learning distance metrics - a. Time Contrastive Networks - Sermanet et al 2018, b. Warde-Farley et al 2019 - have already done the same idea (arguably better versions) and not been cited.\\n\\n3. In fact, the notion of plannable representations (what does that even mean and why is it worth a problem studying, what does it mean to generalize to unseen tasks through plan-based priors, why generative approaches are ill-suited for this problem, why building effective abstract maps/distance-aware representations of the raw observation space is useful, how distance metrics can be used for planning through dense smooth rewards, etc have been extensively discussed in Tamar et al and Srinivas et al). The idea of building a cognitive visual map for path planning is the main contribution in Gupta et al.\\n\\n4. Finally, way more impressive results have been shown on more complex tasks than those considered in this paper (simple maze navigation and uncluttered rope images) in prior work.\", \"title\": \"Insufficient coverage of related work and re-introduction of existing ideas\"}"
]
} |
rklnDgHtDS | Compositional Language Continual Learning | [
"Yuanpeng Li",
"Liang Zhao",
"Kenneth Church",
"Mohamed Elhoseiny"
] | Motivated by the human's ability to continually learn and gain knowledge over time, several research efforts have been pushing the limits of machines to constantly learn while alleviating catastrophic forgetting. Most of the existing methods have been focusing on continual learning of label prediction tasks, which have fixed input and output sizes. In this paper, we propose a new scenario of continual learning which handles sequence-to-sequence tasks common in language learning. We further propose an approach to use label prediction continual learning algorithm for sequence-to-sequence continual learning by leveraging compositionality. Experimental results show that the proposed method has significant improvement over state-of-the-art methods. It enables knowledge transfer and prevents catastrophic forgetting, resulting in more than 85% accuracy up to 100 stages, compared with less than 50% accuracy for baselines in instruction learning task. It also shows significant improvement in machine translation task. This is the first work to combine continual learning and compositionality for language learning, and we hope this work will make machines more helpful in various tasks. | [
"Compositionality",
"Continual Learning",
"Lifelong Learning",
"Sequence to Sequence Modeling"
] | Accept (Poster) | https://openreview.net/pdf?id=rklnDgHtDS | https://openreview.net/forum?id=rklnDgHtDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"2bws3MYb5P",
"HkgYv3uhjB",
"rklSQhOnjB",
"HkxxasO2jr",
"rygs4o_3jS",
"S1xahryRFr",
"rke6W0h6YH",
"ryeyDTniFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747482,
1573846112544,
1573846044577,
1573845944122,
1573845811016,
1571841460645,
1571831301417,
1571700055134
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2375/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2375/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2375/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2375/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2375/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2375/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2375/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper addresses the task of continual learning in NLP for seq2seq style tasks. The key idea of the proposed method is to enable the network to represent syntactic and semantic knowledge separately, which allows the neural network to leverage compositionality for knowledge transfer and also solves the problem of catastrophic forgetting. The paper has been improved substantially after the reviewers' comments and also obtains good results on benchmark tasks. The only concern is that the evaluation is on artificial datasets. In future, the authors should try to include more evaluation on real datasets (however, this is also limited by availability of such datasets). As of now, I'm recommending an Acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to all reviews\", \"comment\": [\"Thank you for reviews. We summarized some updates based on the suggestions.\", \"Added flowchart in Figure 1.\", \"Added standard deviations in Figure 3.\", \"Revised the paper to improve writing and language.\", \"Made the overall logic more clear.\", \"Removed some redundant texts.\", \"Corrected typos and confusing notations.\"]}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thanks very much for your helpful comments and support.\", \"q1\": \"Evaluation on the more realistic dataset.\", \"a1\": \"Yes, we agree. This is the initial research for compositional continual language learning, aiming at finding fundamental mechanism for such problem, so we started with these instruction learning and machine translation datasets. Also, artificial data helps to clearly show that the core idea is valid.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thanks very much for your constructive comments and support.\", \"q1\": \"The novelty is somewhat limited. However, this might be okay since the experiments are quite compelling and the method is applied to a different setting.\", \"a1\": \"Thank you. We agree that this is okay. Our novelty is to propose continual language learning, and address it with compositionality to bridge LP-CL and S2S-CL.\", \"q2\": \"Notations and language in section 3.2-3.4.\", \"a2\": \"Thank you very much. We revised the paper and fixed notations and language.\\nFor the syntax embedding of a new word, the new word is still supposed to have syntactic information, but it is seen information, so that the model should still learn it to be close to some existing syntax embeddings. These syntax embeddings should correspond to empty syntactic information in instruction learning task. It might also be feasible to encode this to model design, but the current design is simpler.\", \"q3\": \"Freezing \\\\theta contradicts the claim that only \\\\phi is frozen.\", \"a3\": \"In Section 3.2, we freeze \\\\phi to keep syntax processing ability in continual learning, and leave learning semantic parameters to LP-CL algorithm. In this case, the non-parametric LP-CL algorithm freezes a part of \\\\theta. We now made it more clear in Section 3.2.\", \"q4\": \"The scalability without co-adapting the other embedding. Perhaps it could be alleviated if f_predict has extremely high capacity at initialization?\", \"a4\": \"This is the initial research for continual language learning, and we assume there is no new syntax information during continual learning. This assumption is reasonable to some extent, because syntax has less variations than semantics, and syntax does not change frequently. Since semantics and syntax are separated, increasing the capacity of f_predict at initialization may not address syntax continual learning, or at least it is not an efficient approach.\", \"q5\": \"Analysis on the decreasing performance in the instruction learning task.\", \"a5\": \"The performance may decrease during the end of the second phrase (discussion Section 5.2) of continual learning where the embeddings squeeze into the explored space, maybe because exploring becomes expensive with the dense population under regularization. We wrote the analysis more clearly in the discussion Section 5.2.\", \"q6\": \"Short-comings of the approach and how to overcome.\", \"a6\": \"One short-coming is that this work focuses on continual learning for semantics, but not for syntax. To make it possible, we may need to address hierarchical compositionality, maybe with stacked attention models such as transformer.\", \"q7\": \"Overall flow chart.\", \"a7\": \"We added an overall flow chart in Figure 1.\", \"q8\": \"Error bars to the plots.\", \"a8\": \"We added standard deviations in Figure 3.\", \"q9\": \"What characteristics of the 2 tasks cause the baselines to behave so differently?\", \"a9\": \"The 2 tasks are compositional continual language learning problems, with increasing numbers of vocabulary. The baselines do not handle such characteristics, while the proposed method does, so that the proposed method outperforms the baselines.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for the review, question and suggestions.\", \"q1\": \"Writing and backbone of the paper.\", \"a1\": \"We revised the paper and made the main points more clear. In this paper, we propose a new scenario of continual learning which handles sequence-to-sequence tasks common in language learning. We further propose an approach to use label prediction continual learning algorithm for sequence-to-sequence continual learning by leveraging compositionality.\", \"q2\": \"Problem/motivation, preliminary background knowledge\", \"a2\": \"The main problem is how to enable continual language learning with compositionality. The backgrounds are covered in introduction and related work sections (there are also pointers to references).\", \"q3\": \"classic or na\\u00efve methods?\", \"a3\": \"The classic or na\\u00efve continual learning algorithms (Kirkpatrick et al., 2017a; Aljundi et al., 2018) mostly focus on label prediction tasks (with fixed input and output sizes), but we address sequence to sequence tasks (with unfixed input and output sizes), common in language learning. We use these methods as experiment baselines.\", \"q4\": \"Main advantage/novelty compared to classic/na\\u00efve method?\", \"a4\": \"Our approach bridges the gap between label prediction and sequence-to-sequence learning by using compositionality in language. To our knowledge, this is the first work for applying compositionality to continual learning of sequence-to-sequence tasks. Experiments show that the proposed method has significant improvement over multiple state-of-the-art baselines.\", \"q5\": \"Does \\\"One of the key skills \\u2026 ability to produce novel composition\\u2019\\u2019 in abstract imply your method can continually learn new compositions? How does that reflect in the technical parts and the experiments?\", \"a5\": \"Yes, this skill is the compositional generalization skill, which is reflected in Section 3.3 for technical parts, and the transfer experiments in Section 4. We made it more clear in the abstract.\", \"q6\": \"Redundant paragraph before Section 3.\", \"a6\": \"We removed it.\", \"q7\": \"Typos.\", \"a7\": \"Thank you for pointing out. We revised the paper carefully.\", \"q8\": \"What\\u2019s the problem of S2S-CL? Increasing input number n and output number m?\", \"a8\": \"S2S-CL is Sequence-to-Sequence Continual Learning problem. It is not about increasing input number n and output number m, but about increasing input and output vocabulary sizes. (Section 3.1)\", \"q9\": \"What does \\\"COMPOSITIONALITY\\u2019\\u2019 mean in Section 3.2? What\\u2019s the relationship between the last two equations of Page 4?\", \"a9\": \"In Section 3.2, compositionality means the language property that syntax and semantics can be separated, and the output syntactic information depends only the input syntactic information, and (given output syntactic information,) the semantic output information depends only on the input semantic information. Please refer to (Li, 2019) for more details. The last two equations (Eq. 2 and Eq. 3) on Page 4 are probabilistic interpretation of the compositionality property.\", \"q10\": \"Simplifications in the first Equation of Page 5?\", \"a10\": \"This simplification is valid because we design the model in the way that for each output label, Y^f tells which x^p corresponds to the label, and X^p tells the value of x^p, so that this label can be inferred without knowing other labels. (Eq. 
4)\", \"q11\": \"Notation \\\"{0,1}^{Uxn}\\u2019\\u2019.\", \"a11\": \"We borrow the notation from (Li, 2019). We now do not use the notation, but explain it in the text.\", \"q12\": \"Entropy regularization with L2 norm.\", \"a12\": \"Adding noise and using L2 regularization together reduce channel capacity (amount of information it can contain) in each representation and thus entropy for the representations. Please see Section 2 in (Li et al 2019) for more information.\", \"q13\": \"What are the detailed settings of the demonstrated experiments?\", \"a13\": \"This is explained in the first paragraph of Section 4 and Appendix A provides more details. Please see Table 2 for simple examples. In summary, the experiments include two stages. The first stage is a standard process in which we train a model with combinations of multiple words in various sentence structures. In each continual stage, we add a new input word and corresponding new output symbol. The training dataset contains only one sample, whose input is a sentence with the new word, and output is a sequence with the new symbol. For each continual stage, we can only use the data in that stage, and have no access to data in previous or future stages.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper is about continual learning on NLP applications like natural language instruction learning or machine translation. The authors propose to exploit \\\"compositionality\\\" to separate semantics and syntax so as to facilitate the problem of interest.\\n\\nIn summary, the current manuscript is clearly not ready for publication. The writing is not good, as I cannot see clearly the backbone of the paper. Honestly, I got very confused by the presented contents. What\\u2019s the problem/motivation? No preliminary background knowledge? What\\u2019s the classic or na\\u00efve method to solve the problem of interest? What\\u2019s the main advantage/novelty of the presented method compared to that classic/na\\u00efve method?\\n\\nPlease see the detailed comments below.\\n\\nIn the Abstract, you mentioned \\\"One of the key skills \\u2026 ability to produce novel composition\\u2019\\u2019. Do you imply your method can continually learn new compositions? If so, how does that reflect in the technical parts and the experiments?\\n\\nThe paragraph before Section 3 might be redundant.\\n\\nMany typos exist. Such as the word \\\"iuput\\u2019\\u2019 in the 1st paragraph of Page 4.\\n\\nWhat\\u2019s the problem of S2S-CL? Increasing input number n and output number m?\\n\\nWhat does the word \\\"COMPOSITIONALITY\\u2019\\u2019 mean in Section 3.2? Also, what\\u2019s the relationship between the last two equations of Page 4?\\n\\nHow do you defend the simplifications adopted in the first Equation of Page 5?\\n\\nThe notation \\\"{0,1}^{Uxn}\\u2019\\u2019 usually represents a binary matrix of size Uxn. It is not suitable to use them to represent a matrix containing one-hot columns.\\n\\nAt the beginning of Page 6. Why entropy regularization can be introduced via L2 norm on the embedding matrixes p and f? Also why that L2 norm regularizations can `achieve disentanglement\\u2019? Please provide the detailed proof or the reference. \\n\\nWhat are the detailed settings of the demonstrated experiments?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"*Summary\\n\\nThe paper proposes a continual learning algorithm for label prediction to deal with sequence-to-sequence continual learning problems. The proposed method is designed to leverage compositionality. The key idea of the proposed method is to enable the network to represent syntactic and semantic knowledge separately. This allows the neural network to leverage compositionality for knowledge transfer while alleviating catastrophic forgetting.\\nThe experiments showed that their method performed significantly better results than baseline methods. The method was tested on two different datasets, e.g., instruction Learning and machine translation.\\n\\n\\n*Decision and supporting arguments\\n\\nI think this paper has enough quality to be be accepted as a conference paper.\\nThe main reasons of my decision are two-folds.\\nFirst, the proposal is quite insightful. The separation of semantics and syntax of an input sentence for using compositionality is an excellent idea.\\nSecond, the proposed method improved the performance on two dataset significantly. This supports the usefulness of the idea.\\n\\n\\n*Additional feedback\\n\\nMy concern is about evaluation. Table 1 shows the significant difference between the proposed method and the baseline methods. It looks to nice. But, this suggests that the datasets might be too artificial for this evaluation. To my understanding, both of the datasets are artificial to some extent. Hopefully, the method should be evaluated on the more realistic dataset.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes an approach for continual learning for applications in sequence-to-sequence continual learning (S2S-CL). More specifically, the paper addresses the problem of growing vocabulary. The overall approach is intuitive yet quite effective. The method assumes a split between the syntax (f) and semantics (p) of the sequence -- in other words, each token is associated with two labels. Furthermore, the syntax is assumed to be shared across all the training sequences and the sequences that are not encountered during the initial training. A sequence model (LSTM) is used for learning the syntax f over the initial training and is then frozen for all the downstream tasks (i.e. continual learning). The sequence model predicts the correspondence between the output token and input tokens. Another network (a 1-layer MLP) is used to predict the semantic label of the selected token in the target domain. Barring some notational confusion, I believe the method is very reasonable and should work well. They perform experiments on 2 datasets (Instruction learning & machine translation) in 3 continuous learning paradigms with varying difficulties and demonstrate that the proposed method significantly outperforms the baselines by an impressive margin.\\n\\nI am leaning towards accepting this paper since I believe the proposed approach can be a promising direction for continual learning in S2S settings and the empirical results are convincing.\\n\\nHowever, I have some concerns that might require clarification or additional experiments:\\n 1. The novelty of the proposed method is somewhat limited in my opinion. It seems it is a very simple adaptation of Li et al. 19 and the only addition to that work is the usage of a very simple label prediction scheme by fixing everything else. However, this might be okay since the experiments are quite compelling and the method is applied to a different setting.\\n\\n 2. Notations and language in section 3.2-3.4 are hard to follow:\\n - At the bottom of page 4, I believe some of the equations are not correct. For example, the first term of line 2 should be P(Y^f | X^f, X^p) rather than Y^p?\\n - In the second equation of page 6, the second term should be v_j = p\\u2019 \\\\cdot a_j\\n - In Figure 1, k_r, E_r, and W are never defined (although their meaning can be inferred).\\n - In section 3.4, it reads that new word embedding is appended to both semantic and syntactic embedding matrices but this doesn\\u2019t make sense because the syntactic network is already fixed so it shouldn\\u2019t be able to handle new symbols; therefore, I believe only new row is added to the semantic embedding.\\n\\n 3. I believe that f_predict here is parameterized by \\\\theta and \\\\theta is also frozen during the continual learning phase which contradicts the claim at the end of section 3.2 that only \\\\phi is frozen. Otherwise, it\\u2019s hard to see why f_predict does not suffer from catastrophic forgetting. From the provided source code, it seems that it is indeed the case that only the new embedding is being updated. In other words, the only thing happening at this stage is that the newly added embedding are optimized to adapt to the frozen f_predict.\\n\\n 4. 
Due to point 3, I have some doubts about how scalable this approach is if the other embedding is not allowed to co-adapt. However, perhaps this problem could be alleviated if f_predict has extremely high capacity at initialization? More on this in point 6.\\n\\n 5. Assuming all my assessment above is correct, it seems that the performance of the proposed method should *not* decrease at all so I would like to see more analysis on the decreasing performance in the instruction learning task (Figure 2, left column). Is this be due to the fact that f_predict or the other embedding is not allowed to update and the newly added embedding is mapped close to existing embedding? In that case, can increasing the model capacity solve this problem? I understand the paper makes argument about regularization but I believe this warrants thorough study for gauging the significance of this approach in more realistic settings.\\n\\n 6. I would like to see a discussion of the short-comings of the approach and possible ways to overcome them. For example, freezing the syntactic network seems limiting in machine translation settings if a new language, say Italian, is added. Intuitively, knowing how to translate English-French should help translating English to Italian but fixing the syntax prevents this. Another example is that prior knowledge about the syntax needs to be known about the language for labeling the f and p and this can be expensive and cannot handle words that have more than 1 usages (e.g. run can be used as a noun but also as a verb).\\n\\nI am willing to increase my score to accept if the revised manuscript can address the majority of, if not all of the concerns listed above.\\n\\n========================================================================================================\", \"minor_comments_that_did_not_affect_my_decision\": [\"These sections could greatly benefit from an overall flow chart of how everything fits together\", \"It says the experiments are repeated with 5 different random seeds so why not add error bars to the plots?\", \"What characteristics of the 2 tasks cause the baselines to behave so differently?\"]}"
]
} |
H1livgrFvr | Out-of-Distribution Image Detection Using the Normalized Compression Distance | [
"Sehun Yu",
"Donga Lee",
"Hwanjo Yu"
] | On detection of the out-of-distribution images, whose underlying distribution is different from that of the training dataset, we tackle to apply out-of-distribution detection methods to already deployed convolutional neural networks. Most recent approaches have to utilize out-of-distribution samples for validation or retrain the model, which makes it less practical for real-world applications. We propose a novel out-of-distribution detection method MALCOM, which neither uses any out-of-distribution samples nor retrain the model. Inspired by the method using the global average pooling on the feature maps of the convolutional neural networks, the goal of our method is to extract informative sequential patterns from the feature maps. To this end, we introduce a similarity metric which focuses on the shared patterns between two sequences. In short, MALCOM uses both the global average and spatial pattern of the feature maps to accurately identify out-of-distribution samples. | [
"Out-of-Distribution Detection",
"Normalized Compression Distance",
"Convolutional Neural Networks"
] | Reject | https://openreview.net/pdf?id=H1livgrFvr | https://openreview.net/forum?id=H1livgrFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-STvBNexM",
"ByxS1FYhsH",
"rJg_udYnoS",
"HkgLRLYhoB",
"BJgc_IKhjH",
"SJxip7thoH",
"BJxkDunM5S",
"H1lDwHtCtH",
"S1epit-TFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747454,
1573849309295,
1573849199965,
1573848782396,
1573848690391,
1573848002815,
1572157526940,
1571882335066,
1571785125381
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2374/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2374/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2374/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2374/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2374/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2374/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2374/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2374/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes an out-of-distribution detection (OOD) method without assuming OOD in validation.\\n\\nAs reviewers mentioned, I think the idea is interesting and the proposed method has potential. However, I think the paper can be much improved and is not ready to publish due to the followings given reviewers' comments:\\n\\n(a) The prior work also has some experiments without OOD in validation, i.e., use adversarial examples (AE) instead in validation. Hence, the main motivation of this paper becomes weak unless the authors justify enough why AE is dangerous to use in validation. \\n\\n(b) The performance of their replication of the prior method is far lower than reported. I understand that sometimes it is not easy to reproduce the prior results. In this case, one can put the numbers in the original paper. Or, one can provide detailed analysis why the prior method should fail in some cases.\\n\\n(c) The authors follow exactly same experimental settings in the prior works. But, the reported score of the prior method is already very high in the settings, and the gain can be marginal. Namely, the considered settings are more or less \\\"easy problems\\\". Hence, additional harder interesting OOD settings, e.g., motivated by autonomous driving, would strength the paper.\\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We are strongly convinced that out-of-distribution detection is important for AI safety, and we believe that there will be more studies on the task in the future. By the way, the settings you mentioned where the inputs are similar but very different in some aspects sound interesting. For now, the idea does not come to mind, but we think that it could be future work to propose the strict definition of \\u201csimilar to A, but different from B\\u201d, formulate the settings, and conduct experiments. Last but not least, the intrinsic importance of our work is to make the out-of-distribution detection task more practical, which is slightly different in that most previous work just tried to improve the performance in existing settings. The compressive-complexity pooling, which we introduced by using the normalized compression distance (NCD), successfully improves the performance even with the constraints that makes the method much practical.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We really appreciate your constructive comments. We found them helpful and the followings are our responses to the major comments.\\n\\n- You mainly pointed out that our justification about both the constraints is not enough. In case of the first constraint, validating the models by using out-of-distribution test samples does not make sense, because the main motivation of out-of-distribution detection is that we are not able to know (and assume) the test distribution. For this reason, the usage of the best hyperparameter values found by a few out-of-distribution samples eventually becomes identical to assume a specific test distribution (including out-of-distribution samples), which makes the experiments unfair. In case of the second constraint, retraining the models to make it effective to detect out-of-distribution samples not only seems unnatural in that they are already being deployed, but also usually degrades the in-distribution classification performance [3]. Simply employing the pretrained deep neural network to compute the confidence score does not compromise the performance of in-distribution classification, and it is the main reason why this kind of approach has gained much attention.\\n\\n- In addition to the justification about the strict constraints above, we totally agree that our method needs to be compared against existing methods which are not so tightly constrained. Thus, we conducted the additional experiments of out-of-distribution detection in case that a few out-of-distribution samples (or adversarial samples) are available for validation (please refer to Appendix B and C). For competing methods, we considered 1) ODIN [2] and 2) Mahalanobis [1] that require the validation to determine the hyperparameter values. For fair comparisons, we additionally build a variant of our proposed method, termed as MALCOM++, which uses the weighted sum of the multiple scores for the final confidence score and the weights should be validated. As shown in Table 4 and 5, MALCOM++ consistently outperforms all the other methods in most cases, especially when validating with the adversarial samples. From this observation, we can conclude that the proposed compressive-complexity pooling is effective and helps to accurately identify the out-of-distribution images even without the strict constraint related to the validation.\\n\\n- We further polished the writing and corrected several typos. We agree with your comment suggesting to add missing related work about NCD on image classification, so added the citation in the paper. \\n\\n[1] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. NeurIPS 2018.\\n[2] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In 6th International Conference on Learning Representations, ICLR 2018\\n[3] Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. NeurIPS 2018.\"}",
"{\"title\": \"Response to Reviewer #3 (2)\", \"comment\": \"2. As our method aims to additionally capture the spatial information of feature maps, it works well when both the global average pooling and the compressive-complexity pooling are used. We can explain this observation for both the cases 1) without the concatenation and 2) with the concatenation:\\n\\n2-(a). Without the concatenation: \\nThe feature vector obtained by the global average pooling on the last layer of a CNN model is very effective to characterize input images for classification. This is because the CNN model is already trained for classification based on the global average pooling layer, which feeds the processed data into the last linear layer. Thus, the CNN model focuses on training the feature vector obtained by global average pooling of the last convolutional layer. Therefore, the pooled feature vector already has rich information related to its target class, so that it can describe the image [2]. This is why the vanilla version of the Mahalanobis method works so well and our compressive-complexity pooling sometimes fails to extract other information.\\n\\n2-(b). With the concatenation: \\nExcept for the last convolutional layer, there is no guarantee that the global average pooling can capture much information related to the class. In fact, Table 1 shows that AUPR(In) of Mahalanobis-assemble is significantly low, which means that out-of-distribution samples are more recognized as in-distribution samples when we concatenate feature vectors average-pooled from hidden layers than when we do not. This indicates that the global average pooling on feature maps of lower layers cannot correctly capture the class information, because the spatial information becomes more important to capture low-level features of classes. For this reason, our compressive-complexity pooling that utilizes the spatial information of feature maps could be effective to discriminate out-of-distribution feature maps from in-distribution feature maps especially in lower layer, and it eventually helps to detect out-of-distribution samples.\\n\\n\\nIn summary, the performance of MALCOM could be comparable to the method using the global average pooling when only the last hidden layer is used as discussed in 2-(a). However, we think that the important part is the observation from 2-(b) in that MALCOM achieves the best performance by effectively capturing the spatial information in lower feature maps, which the existing global average pooling is not able to do.\\n\\n\\n* As you suggested, we added the ablation study about weighted averaging on the proposed method (please refer to Appendix C). We first build a variant of our proposed method, termed as MALCOM++, which uses the weighted sum of the multiple scores for the final confidence score and the weights are validated by adversarial samples. We observe that the weighted sum (utilizing the adversarial samples for the validation) performs better than the concatenation (not using adversarial samples at all) by the help of the adversarial samples in general. However, as presented in Table 6, MALCOM++ shows the best performance, which strongly indicates that the proposed compressive-complexity pooling is consistently effective regardless of whether the adversarial examples are used or not.\\n\\n[1] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detectingout-of-distribution samples and adversarial attacks. NeurIPS 2018\\n[2] Babenko, Artem, and Victor Lempitsky. 
Aggregating deep convolutional features for image retrieval. arXiv 2015\"}",
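Since the compressive-complexity pooling discussed above rests on the normalized compression distance, a minimal sketch may help; using zlib as the compressor and byte-quantized feature maps are assumptions made here for illustration, and the paper's exact pipeline may differ.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance with zlib as the compressor C:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    Values near 0 indicate that the two sequences share compressible patterns."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# e.g., quantize a flattened feature map to bytes, then compare it against
# class-wise reference sequences; a smaller NCD means more shared spatial pattern
```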
"{\"title\": \"Response to Reviewer #3 (1)\", \"comment\": \"We really appreciate your constructive comments. We found them helpful and the followings are our responses to the major comments.\\n\\n1-(a). First of all, we are not sure that utilizing generated adversarial examples for validation does make sense due to the intrinsic difference between the adversarial examples and out-of-distribution samples that we want to detect, and it could be an another research topic by itself. In case that we use the adversarial examples in place of the out-of-distribution samples for the validation, we have to carefully determine \\u201cwhat makes samples like out-of-distribution\\u201d. Also, the adversarial examples looks similar with their original images to our eyes, so it is not consistent with the basic assumption of out-of-distribution data, which might differ in some respect from the training (or in-distribution) data. Although Lee et al. [1] tried to validate the model by using generated adversarial examples as you mentioned, we wonder if the results would be robust with respect to how the adversarial examples are constructed. In this sense, we think that constructing such adversarial examples for out-of-distribution data detection is one of interesting research topics but slightly different from the problem that we want to tackle.\\n\\n1-(b). Nevertheless, it is true that adjusting hyperparmeters by utilizing adversarial examples does not violate our constraints as you mentioned. Thus, we conducted the additional experiments about this, and please refer to Table 5 in Appendix C. We could not figure out the details about how Lee et al. [1] generated the adversarial examples from the training set, but we tried to reproduce the experimental setting as much as possible. We observe that our method, MALCOM, shows the comparable performance to the Mahalanobis detector except for a few cases (e.g. AUROC 81.01% -> 93.01% for (id, ood)=(CIFAR-10, SVHN) using ResNet). Furthermore, as adversarial samples are available for validation, we extended our method to use the weighted summation of confidence scores from multiple layers similarly to Mahalanobis, and we named this method as MALCOM++. With the help of our proposed compressive-complexity pooling, MALCOM++ outperforms Mahalanobis in most cases.\"}",
"{\"title\": \"Overview of revision\", \"comment\": [\"Dear reviews,\", \"Considering the constructive comments from reviewers, we revised the paper as follows:\", \"We added Figure 1 to help you better understand the overall process of our compressive-complexity pooling and its difference from the global average pooling.\", \"We added Appendix B with Table 4, which is about the experiments in case that out-of-distribution samples are used for validation.\", \"We added Appendix C with Table 5 & 6, which is about the experiments in case that generated adversarial samples are used for validation.\", \"The performances of Mahalanobis-vanilla on the experiment (in-distribution:\\u201dCIFAR-100\\u201d, ResNet) have been wrongly reported and we corrected it.\", \"We corrected several typos.\", \"(+) For reproducibility, we upload the code although it needs to be refactored.\", \"(https://github.com/malcom2020/malcom)\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"** post rebuttal start **\\n\\nAfter reading reviews and authors' response, I decided not to change my score.\\nI remark that this paper still requires a lot of revision; comparison in the main paper is somewhat unfair and all new results are in the appendix.\\nAlso, the performance of their replication of the prior method is far lower than reported. In worst case, the performance gain from the compared method would be from their incorrect implementation on the prior works. In this kind of case, I suggest the authors to put {the numbers in the original paper} as well as {their replication} and claim that they fail to replicate the number. Ideally, if their method is evaluated in the same condition, it should outperform prior works in any case.\", \"detailed_comments\": \"1-(a). Adversarial attack and OOD (which is hard to detect) are closely related: they are both in off-manifold. Their main difference would be, while adversarial attack is very close to the clean data in the data space, OOD is relatively far from the in-distribution in the data space.\\nThough it does not talk about OOD, you may refer to [R1] for analysis in perspective of manifold. The difficulty of OOD detection can be considered to be coming from overlapped manifolds in the latent space.\\n\\n[R1] Stutz et al. Disentangling Adversarial Robustness and Generalization. In CVPR, 2019.\\n\\n\\n1-(b). Though the numbers are much lower than those reported in the original paper, I am happy to see the fair comparison. However, according to the original paper, simple FGSM is used for validation, so I am not sure such a huge difference can actually happen. In this kind of case, I suggest the authors to put {the numbers in the original paper} as well as {their replication} and claim that they fail to replicate the number.\\n\\n\\n2. I am happy to see that their revised method has better performance.\\n\\n** post rebuttal end **\\n\\n\\n- Summary:\\nThis paper proposes an out-of-distribution detection (OOD) method under constraints that 1) no OOD is available for validation and 2) model parameters should be unchanged. They specifically address a problem of the state-of-the-art method satisfying the constraints, and propose a new distance metric inspired by data compression. Experimental results on several benchmarks with different deep neural network architectures support their claim.\\n\\n\\n- Decision and supporting arguments:\\nWeak reject.\\n\\n1. The problem setting is clear and their approach is interesting and makes sense. However, the method for comparison is not properly set. As the authors addressed, Mahalanobis detector proposed by Lee et al. (2018b) requires validation to determine weights for feature ensembling, but the validation can be done without OOD data by generating adversarial samples as proposed in the same paper. Although Table 2 in Lee et al. (2018b) shows that the performance is not better than the case when we have an explicit OOD data for validation, it reasonably works well. 
Therefore, rather than comparing with the vanilla version (only using the last latent space) or the alternative \\\"assemble\\\" method (concatenating all average-pooled features), they had to compare their method with the model validated by adversarial samples, which essentially satisfies the constraints.\\n\\n2. Also, I wonder whether the main body of the proposed method itself is really effective or whether some minor tweak they made is essential. According to Table 2 in the submission, their method is better than \\\"Mahalanobis vanilla\\\" only when all components are applied. Though the idea is interesting, I am skeptical about the effectiveness of the proposed method.\\n\\n\\n- Comments:\\n1. As addressed by the authors, feature concatenation (\\\"assemble\\\") is not effective for the Mahalanobis method but it is for the proposed method. How about doing an ablation study on weighted averaging vs. concatenation for the proposed method as well? Again, weights can be validated by adversarial samples to satisfy the constraints.\"}",
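For reference on the FGSM-generated validation samples this review mentions (Lee et al., 2018b use them in place of real OOD data), a minimal sketch is below; the one-step signed-gradient attack itself is standard, but the epsilon and the exact attack configuration used in that paper are assumptions here.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.01):
    """Fast gradient sign method (Goodfellow et al., 2015): one signed gradient
    step on the classification loss, used to synthesize validation outliers."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```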
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"After reading the other reviews and comments, I appreciate the effort by the Authors, but it looks like the paper still needs some work before being ready. So, I have decided to maintain my rating.\\n\\n===================\\n\\nThe work proposes a system for detecting out-of-distribution images for neural networks under strict limitations of not retraining the network or tuning parameters with out of distribution validation data in mind using Compression Distance in a novel way. The authors evaluate the method broadly and against the state of the art and provide a thorough explanation of the background material and formulation. \\n\\nThis work is strong in it\\u2019s cleverness, novelty, and evaluation when limited to the class of solution stipulated in section 2; however, it is not clear or presented why this restrictive choice is or could be necessary. For this reason, I am borderline unless that caveat is addressed as described below, in which case I would be happy to accept. \\n\\nIt is unclear to me as it was not presented in the work when or if the problem that is being solved by the paper is particularly either important or frequent. Obviously, not having to retrain is more efficient than having to and not having to validate on out-of-distribution samples is helpful in times when we don\\u2019t know them ahead of time, but it is unclear to me if it is the case that we will have a situation in which both of the above are, for instance, not possible at all. To me, if the work could be changed to compare against works which are not so tightly constrained, not for the purposes of holding it to the same standard but to understand it\\u2019s relative standing, or to better justify the very strict constraints which somehow, despite out-of-distribution detection being a popular upcoming topic, apparently only has one other paper that matches it.\\n \\t\\t\\nThe paper could use some extra proofreading, for instance the first sentence of the abstract doesn\\u2019t make much grammatical sense especially with the first phrase included.\", \"it_may_be_nice_to_cite_works_such_as_http\": \"//citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.132.6389&rep=rep1&type=pdf and others in that vein as this is certainly not the first work to involve compressive principles in image classification related tasks.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new framework for out-of-distribution detection, based on global average pooling and spatial pattern of the feature maps to accurately identify out-of-distribution samples. MAHALANOBIS distance based methods were discussed, and the shortcomings of using mahalanobis distance were given (i.e. assumption that features are independent, etc). They propose to use compression based distance measures off the shelf from standard compression techniques to detect spatial feature patterns in feature space and demonstrate its effectiveness on several datasets and comparison with baselines is reported and well discussed.\\n\\nThe motivation of the proposed approach is clear, and the method seems novel. However, the experiments could have been done in a more complex setting. The out-of distribution samples pose a danger in safety critical applications such as autonomous driving, for example a car deployed in environment that it has not seen during the training might crash. So, it would be interesting to see the performance of both baselines and proposed approach in those settings where inputs are similar in nature but very different in some aspects. I do not necessarily see something wrong with the paper, but I'm not convinced of the significance (or sufficient efficiency) of the approach. There is also theoretical guarantees showing exhaustiveness of the proposed methods in detecting all possible out-of distribution examples.\"}"
]
} |
SJxjPxSYDH | Discriminative Variational Autoencoder for Continual Learning with Generative Replay | [
"Woo-Young Kang",
"Cheol-Ho Han",
"Byoung-Tak Zhang"
] | Generative replay (GR) is a method to alleviate catastrophic forgetting in continual learning (CL) by generating previous task data and learning them together with the data from new tasks. In this paper, we propose discriminative variational autoencoder (DiVA) to address the GR-based CL problem. DiVA has class-wise discriminative latent embeddings by maximizing the mutual information between classes and latent variables of VAE. Thus, DiVA is directly applicable to classification and class-conditional generation which are efficient and effective properties in the GR-based CL scenario. Furthermore, we use a novel trick based on domain translation to cover natural images which is challenging to GR-based methods. As a result, DiVA achieved the competitive or higher accuracy compared to state-of-the-art algorithms in Permuted MNIST, Split MNIST, and Split CIFAR10 settings. | [
"Continual learning",
"Generative replay",
"Variational Autoencoder"
] | Reject | https://openreview.net/pdf?id=SJxjPxSYDH | https://openreview.net/forum?id=SJxjPxSYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YWTOjO-nze",
"rylcVXLnir",
"HklW0uH3jB",
"rygR36nHjr",
"rJlpm6hSir",
"SJlTY3nrsH",
"ryl1Y2nHoH",
"H1xBk23rjB",
"SkgI3OVTYH",
"S1eIP8MpYB",
"H1l4PVq4KH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747425,
1573835570418,
1573832905136,
1573404086259,
1573403940648,
1573403781210,
1573403767232,
1573403612901,
1571797166076,
1571788381759,
1571230811902
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2373/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2373/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2373/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2373/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2373/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2373/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2373/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2373/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2373/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2373/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper presents a method for continual learning with a variant of VAE. The proposed approach is reasonable but technical contribution is quite incremental. The experimental results are limited to comparisons among methods with generative replay, and experimental results on more complex datasets (e.g., CIFAR 100, CUB, ImageNet) are missing. Overall, the contribution of the work in the current form seems insufficient for acceptance at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for your feedback. Since, the rebuttal deadline is almost end, we are not sure to report the additional comparison that you raise. Nevertheless, we are now going to start the CL experiment with the [1] based on CIFAR 10 dataset. However, we hope to say that the DGR in our paper is also based on the WGAN-GP [2] which can generate high-quality images. The experimental result on Table 2 shows that the WGAN-GP based GR algorithm also suffers from severe catastrophic forgetting on CIFAR 10 dataset.\\n\\n\\n[References]\\n[1]Wu, Chenshen, et al. \\\"Memory replay GANs: Learning to generate new categories without forgetting.\\\" Advances In Neural Information Processing Systems. 2018.\\n\\n[2] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of wasserstein gans. In Advances in neural information processing systems (pp. 5767-5777).\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response. After checking the literature more carefully, I realized that there have been generative replay models based on conditional GAN [1]. It avoids the problem of the proposed model that it needs to first generate images based on conditional VAE and then convert them into better quality images via cycle-GAN. Conditional GAN can directly generate high-quality images in one step. I do not suggest acceptance of this paper unless experimental results are provided showing that the proposed model outperforms the conditional-GAN-based model.\\n\\nReferences\\n[1] Wu, Chenshen, et al. \\\"Memory replay GANs: Learning to generate new categories without forgetting.\\\" Advances In Neural Information Processing Systems. 2018.\"}",
"{\"title\": \"Thank you for your valuable comments.\", \"comment\": \"We appreciate your constructive feedback. Specifically, your comments about our motivation and development of our idea greatly help us to improve the quality of our paper.\\n\\nIf we correctly understand reviewer 2\\u2019s concerns, the concerns can be divided into two folds: \\n\\n1. Our suggestion to mitigate the catastrophic forgetting looks a naive combination of well-known concepts. Thus, it is more system engineering than science. \\n2. Each component described in Figure 1 is not explained enough. Also, there is no description of the complete task.\\n\\n[Response for 1]\\nAs we explained at the common response, we started our research from clear open questions. Our first open question was that why other GR-based algorithms [1, 2] assume unit Gaussian priors even though they integrate classification loss into their VAE formulation. Since they do not consider the conflict between the unit Gaussian prior and discriminative loss for the latent variable z, their models generate ambiguous samples that negatively affect the performance of incremental learning, which is discussed in section 4.1 in our paper. This leads us to a more theoretical formulation for classification-regularized VAE. By introducing class conditional priors induced by the mutual information maximization, DiVA yields class-wise discriminative one mode Gaussians for latent variable z. Naturally, DiVA can conduct both class prediction and class conditional sample generation with one integrated model. \\n\\nThe second open question was that why GR-based algorithms suffer from serious catastrophic forgetting in natural image datasets, even though generated samples are not completely noisy. We assumed that this is due to the vulnerability of neural networks [3] triggered by different distributions of pixel values between real and generated images. Thus, we defined the two domains: real domain and sample domain. To narrowing the distribution gap, we needed a solution that satisfies two conditions (also described in section 5): \\n\\n1. We should translate only the style (a global pattern of a specific domain) as keeping outline patterns of given images.\\n2. We should consider an unpaired domain translation between real and generated images because the generated images are sampled randomly.\\n\\nFortunately, we were able to find an existing solution that satisfies the requirements: CycleGAN. Any other domain translators that satisfy the conditions can be used or newly studied. With the solution, we could make a breakthrough for GR-based methods. To the best of our knowledge, this is the first successful approach for a GR-based algorithm to start to resist the catastrophic forgetting problem with a natural image dataset.\\n\\n[Response for 2]\\nFigure 1 is a conceptual description of our proposed model, DiVA. Each component is explained in section 4, below Equation 2, and justified in section 4.1. Also, for an easy understanding of the whole CL process with DiVA, we added another figure in Appendix E.\\n\\n[References]\\n[1] van de Ven, Gido M., and Andreas S. Tolias. \\\"Generative replay with feedback connections as a general strategy for continual learning.\\\" arXiv preprint arXiv:1809.10635 (2018).\\n\\n[2] Mundt, Martin, et al. \\\"Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition.\\\" arXiv preprint arXiv:1905.12019 (2019).\\n\\n[3] Su, Jiawei, Danilo Vasconcellos Vargas, and Kouichi Sakurai. 
\\\"One pixel attack for fooling deep neural networks.\\\" IEEE Transactions on Evolutionary Computation (2019).\"}",
"{\"title\": \"Thank you for your valuable comments.\", \"comment\": \"We appreciate your constructive feedback. Specifically, your comments about our derivations greatly help us to improve the quality of our paper. We hope you to also consider our notable experimental results as well.\\n\\n(Bounds of KL divergence) Thank you for this good comment. We claimed that the Equation 1 can be maximized indirectly by maximizing Equation 2 which is a lower bound of Equation 1. If we understand your primary concern correctly, the concern comes from the bound of KL divergence in Equation 5. To prove correctness of our formulation, we can rewrite the pointed term in Equation 5 by using simple bayes rule as follows:\\n\\n$$\\\\displaystyle\\\\sum_{\\\\mathrm{z}}\\\\hat{q}\\\\mathrm{(z|c)}\\\\ \\\\mathrm{log}\\\\ \\\\frac{\\\\hat{q}\\\\mathrm{(z|c)}}{\\\\hat{p}\\\\mathrm{(c|z)}} = \\\\displaystyle\\\\sum_{\\\\mathrm{z}}\\\\hat{q}\\\\mathrm{(z|c)}\\\\bigg(\\\\mathrm{log}\\\\ \\\\frac{\\\\hat{q}\\\\mathrm{(z|c)}}{\\\\hat{p}\\\\mathrm{(z|c)}} + \\\\mathrm{log}\\\\ \\\\frac{\\\\hat{p}\\\\mathrm{(z)}}{\\\\hat{p}\\\\mathrm{(c)}} \\\\bigg)$$\\n\\nBecause the $\\\\hat{p}\\\\mathrm{(c)}$ is constant, and $\\\\hat{p}\\\\mathrm{(z)}$ is not included in our optimization, we just optimize $\\\\displaystyle\\\\sum_{\\\\mathrm{z}}\\\\hat{q}\\\\mathrm{(z|c)}\\\\mathrm{log}[\\\\hat{q}\\\\mathrm{(z|c)} / \\\\hat{p}\\\\mathrm{(z|c)}]$. Since the $\\\\hat{q}\\\\mathrm{(z|c)}$ and $\\\\hat{p}\\\\mathrm{(z|c)}$ are both normalized distributions, the $D_{KL}[\\\\hat{q}\\\\mathrm{(z|c)} || \\\\hat{p}\\\\mathrm{(z|c)}]$ is always positive. Then, we can conclude that Equation 2 becomes the lower bound for Equation 1. \\n\\n(lambda) Actually, Equation 7 consists of three terms. Since only the third term is proposed additional regularization, we applied weighting parameter lambda to the third term only.\\n\\n(Difference with CDVAE) To clarify the difference our DiVA with CDVAE, we write derivations for both models here.\", \"cdvae\": \"$\\\\mathbb{E}_{q_{\\\\theta}\\\\mathrm{(z|x)}}[\\\\mathrm{log}\\\\ p_{\\\\theta '}(\\\\mathrm{x|z)}] - D_{KL}[q_{\\\\theta}\\\\mathrm{(z|x)} || p\\\\mathrm{(z)]} + \\\\lambda \\\\mathbb{E}_{q_{\\\\theta}\\\\mathrm{(z|x)}}[\\\\mathrm{log}\\\\hat{p}_{\\\\phi '}\\\\mathrm{(c|z)}]$\", \"diva\": \"$\\\\mathbb{E}_{q_{\\\\theta}\\\\mathrm{(z|x)}}[\\\\mathrm{log}\\\\ p_{\\\\theta '}(\\\\mathrm{x|z)}] - D_{KL}[q_{\\\\theta}\\\\mathrm{(z|x)} || \\\\hat{q}_{\\\\phi}\\\\mathrm{(z|c)]} + \\\\lambda \\\\mathbb{E}_{q_{\\\\theta}\\\\mathrm{(z|x)}}[\\\\mathrm{log}\\\\ \\\\hat{p}_{\\\\phi '}\\\\mathrm{(c|z)}]$\\n\\nAs we discussed in section 4.1, below the table for Algorithm 1, the key difference is that we consider class-conditional Gaussian distributions as priors for variational posteriors. Since CDVAE assumes the prior as unit Gaussian for all classes and optimizes classification loss simultaneously with the KL divergence, the latent space does not follow the prior exactly. As a result, CDVAE sometimes generates ambiguous samples (Figure 2 (c)). Interestingly, RtF [1] also does not consider the class-conditional priors even though they consider a classifier integrated VAE similar to CDVAE. In contrast, we assume class-wise specific Gaussian for each class. As a result, we can stably generate more realistic samples than CDVAE.\\n\\n[Additional feedback]\\n(dt in Algorithm 1) dt means the domain translation explained at section 5. \\n\\n(Figure 1) \\n- We corrected the typo. 
\\n- The 3d plot conceptually represents class-specific one mode Gaussians. \\n- The classification loss has implicit dependency with input conditions by minimizing the KL divergence in Equation 2.\\n\\n(heavy classifier) A classifier such as resnet. We used this term to distinguish the additional classifier from our integrated encoder that has discriminative power. \\n\\n(Redundant weights) If we extend to a more complex dataset such as ImageNet, it will become highly redundant. Furthermore, if we consider fully-convolutional architecture (without fully-connected layers), redundancy becomes a serious problem. For example, a feature map that has shape of [W x H x dim] becomes [W x H x (dim + the number of classes)]. In contrast, using discriminative conditional distributions can keep the dimension of the feature map as [W x H x dim] regardless of the number of classes. \\n\\n(Notations) Thank you for commenting this. We corrected the notations of section 3 to match with later sections. \\n\\n(Complexity of encoder) We intended that the encoder network can have enough both discriminative and generative power with a powerful architecture such as a deep residual network. \\n\\n[References]\\n[1] van de Ven, Gido M., and Andreas S. Tolias. \\\"Generative replay with feedback connections as a general strategy for continual learning.\\\" arXiv preprint arXiv:1809.10635 (2018).\"}",
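A quick numerical sanity check of the Bayes-rule decomposition and KL non-negativity claimed in this rebuttal, using synthetic discrete distributions (this is illustrative only, not the authors' code; all distributions are randomly generated):

```python
import numpy as np

rng = np.random.default_rng(0)
n_z, n_c, c = 8, 4, 2

# A random joint p(z, c); all conditionals/marginals follow from it.
p_zc = rng.random((n_z, n_c)); p_zc /= p_zc.sum()
p_z, p_c = p_zc.sum(axis=1), p_zc.sum(axis=0)
p_z_given_c = p_zc / p_c           # column k holds p(z | c = k)
p_c_given_z = p_zc / p_z[:, None]  # row i holds p(c | z = i)

# An arbitrary normalized variational posterior q(z | c).
q = rng.random(n_z); q /= q.sum()

lhs = np.sum(q * np.log(q / p_c_given_z[:, c]))
kl = np.sum(q * np.log(q / p_z_given_c[:, c]))   # D_KL[q(z|c) || p(z|c)] >= 0
rhs = kl + np.sum(q * np.log(p_z / p_c[c]))
assert np.isclose(lhs, rhs) and kl >= 0          # decomposition holds, KL non-negative
```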
"{\"title\": \"[References]\", \"comment\": \"[1] Legg, Shane, and Marcus Hutter. \\\"Universal intelligence: A definition of machine intelligence.\\\" Minds and machines 17.4 (2007): 391-444.\\n\\n[2] https://sites.google.com/view/continual2018\\n\\n[3] Shin, Hanul, et al. \\\"Continual learning with deep generative replay.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n[4] Lesort, Timoth\\u00e9e, et al. \\\"Marginal Replay vs Conditional Replay for Continual Learning.\\\" International Conference on Artificial Neural Networks. Springer, Cham, 2019.\\n\\n[5] Su, Jiawei, Danilo Vasconcellos Vargas, and Kouichi Sakurai. \\\"One pixel attack for fooling deep neural networks.\\\" IEEE Transactions on Evolutionary Computation (2019).\\n\\n[6] Mundt, Martin, et al. \\\"Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition.\\\" arXiv preprint arXiv:1905.12019 (2019).\"}",
"{\"title\": \"Thank you for your valuable comments.\", \"comment\": \"We appreciate your constructive feedback. Specifically, your comments about the motivation and problem definitions greatly help us to improve the quality of our paper.\\n\\n(Importance and motivation) To step forward to artificial general intelligence, we should further consider making an agent that can learn and remember many tasks incrementally [1]. However, this is particularly challenging in real-world settings: the agent may observe different tasks sequentially, and an individual task may not recur for a long time. In this settings, a learned model might overfit to the most recently seen data, forgetting the rest, a phenomenon referred to as catastrophic forgetting, which is a core issue CL systems aim to address [2]. Recently, GR-based methods, inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain [3], have been widely studied to address the catastrophic forgetting problem. In terms of GR, we are trying to address the two open questions mentioned above. \\n\\n(Use of labels and novelty) In GR-based approaches, the quality of generated samples is crucial to keep the performance of previous tasks. If we use labels, we can construct a conditional generative model. Generally, conditioning on a generative model yields higher quality samples than unconditional one and makes it possible to generate class-balanced samples [4]; the importance of conditional generation is also described in section 6.1 in our paper. In this paper, we showed that discriminative regularization could make VAE possible to conduct both class conditional generation and classification with one integrated model. Thus, we do not need to train an additional classifier, e.g., deep CNN, which is necessary for other works, including Narayanaswamy et al. There is also classifier integrated VAE such as [6]. The difference with [6] is the use of class-conditional priors; more details are explained at the response (Difference with CDVAE) for reviewer 3 and section 4.1 in our paper.\\n\\n(Domain translation) Even though the conditional generation improves the quality of the generated samples, there is still a big difference between real and generated images. Because a deep neural network is vulnerable to even single-pixel perturbation [5], the difference can seriously affect the classification performance of GR-based algorithms. Thus, we suggested applying the domain translation to address this issue. By narrowing distribution discrepancy between real and generated images using the domain translation technique, we were able to alleviate the catastrophic forgetting problem successfully (Table 2).\\n\\n(Modeling) Good point. Since we consider finite discrete conditions, we can directly optimize $\\\\mathrm{\\\\mu_c}$ and $\\\\mathrm{\\\\sigma_c}$ for each c as parameters without the prior network. However, introducing a prior network makes our model become a more general framework that can address continuous-valued conditions. Also, in our paper, we set the prior network as a single fully-connected layer for easy handling of conditions and simple implementation. Otherwise, we should keep an additional mapping table between class conditions and its $\\\\mathrm{\\\\mu_c}$ and $\\\\mathrm{\\\\sigma_c}$.\\n\\n(Experimental settings) Generally, CL systems assume that each task comes sequentially, and an agent can not directly access previous experience [2]. We exactly follow the assumption. 
Also, we train DiVA sequentially for each task with the same model. To clarify our training process, we provide a brief summary. First, we train DiVA on task 1, which consists of real images and labels. Then, when a new task 2 arrives, DiVA generates images and their labels for task 1 and learns both task 2 and the generated task 1 simultaneously. We added an additional figure (Figure 6 in Appendix E) to aid conceptual understanding.\\n\\n(Domains of CIFAR dataset) Since current generative models are not perfect at generating complex natural images, there is always a discrepancy between generated images and real images. Thus, we can define two domains: the real image domain (realistic) and the generated image domain (blurry). We use domain translation to narrow this gap.\"}",
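For readers unfamiliar with generative replay, here is a minimal sketch of the training loop described above; the `diva.fit`/`diva.generate` interface and the `task` container are hypothetical placeholders, not the authors' actual API:

```python
def train_continually(diva, tasks, n_replay=10_000):
    """Generative replay: rehearse earlier tasks using the model's own samples."""
    seen_classes = []
    for t, task in enumerate(tasks):
        if t == 0:
            # The first task is learned from real images and labels only.
            diva.fit(task.images, task.labels)
        else:
            # Before learning the new task, generate class-conditional samples
            # of all previously seen classes; labels come from the conditioning.
            replay_x, replay_y = diva.generate(classes=seen_classes, n=n_replay)
            diva.fit(task.images + replay_x, task.labels + replay_y)
        seen_classes += task.classes
```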
"{\"title\": \"[Response for common concerns]\", \"comment\": \"We greatly appreciate all reviewers for valuable concerns and constructive feedback to improve clarity and quality of our paper. We will firstly respond common concerns about motivations, then address main points that each reviewer raised.\\n\\nIn this paper, we primarily address two issues: \\u201cwhat are current GR-based methods missing?\\u201d and \\u201chow can we extend GR-based CL approaches to a more complex dataset such as CIFAR10?\\u201d. We suggested two solutions to answer the open questions: a new type of conditional VAE that can also predict class labels (DiVA) and applying a domain translation (DT) trick. By applying DT, we significantly improved the continual learning performance of the current state of the art GR-based algorithms (Table 2). Furthermore, DiVA achieved the highest CL accuracy among the GR-based algorithms. We believe that this could be an important step that will trigger other GR-based researches trying to address more complex natural image datasets.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors focus on alleviating the catastrophic forgetting problem in continual learning. The authors propose a discriminative variational autoencoder (DiVA) to solve this problem under the generative replay framework. DiVA modifies the objective function of VAE by introducing an additional term that maximizes the mutual information between the latent variables and the class labels.\\n\\nThe authors do not thoroughly explain the motivation of this paper. The authors do not explicitly define continual learning, incremental learning, and catastrophic forgetting problem. It is also not clear to me why these problems are important. \\n\\nThe idea that introduces labels in VAE is not novel. For example, Narayanaswamy et al. [1] also propose to utilize labels to VAE. I do not understand why making use of labels is important for solving the catastrophic forgetting problem and how the labels are useful in the generative replay process. It is also not clear to me how domain translation is relevant to continual learning. \\n\\nIn terms of modeling, since the input into the prior network has finite possible discrete values, we do not need a fully connected network to generate $\\\\hat{\\\\mu}_c$ and $\\\\hat{\\\\sigma}_c$. Instead, we can directly optimize $\\\\hat{\\\\mu}_c$ and $\\\\hat{\\\\sigma}_c$ for each $c$ as parameters.\\n\\nThe paper provides some good experimental results. But the problem settings are not clear to me. I do not understand how the model is trained to solve multiple tasks. Do the same model is trained for multiple tasks? Is each of the tasks trained sequentially or simultaneously? It is also not clear to me why CIFAR datasets involve two domains and how these domains are relevant in each of the tasks.\\n\\nIn summary, since DiVA gives a good experimental performance, the proposed method might be promising. However, it looks to me that the authors need to better explain the motivation of DiVA, the differences of DiVA from existing supervised VAE, and the experimental settings, before the acceptance of this paper.\\n\\nReferences\\n[1]Narayanaswamy, Siddharth, T. Brooks Paige, Jan-Willem Van de Meent, Alban Desmaison, Noah Goodman, Pushmeet Kohli, Frank Wood, and Philip Torr. \\\"Learning disentangled representations with semi-supervised deep generative models.\\\" In Advances in Neural Information Processing Systems, pp. 5925-5935. 2017.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"-- This paper seeks to combine several ideas together to propose an approach for image classification based continual learning tasks. In this effort, the paper combines previously published approaches from generative modeling with VAEs, mutual information regularization and domain adaptation.\\n\\nI am a making a recommendation for reject for this paper with the main reason being that I believe the primary derivations for their method appear flawed. \\n\\n--In the main section describing the approach (Section 4), the authors start with a claim that Equation 1 and 2 are equal; I don\\u2019t believe 1 and 2 are equal.\\n\\n--In Section 4.1, it appears that they are instead making a claim about Equation 2 being a bound for equation 1; but even this derivation appears to have a problem. The following is the concern:\\n\\n--In the second line of Equation 5, the KL term appears to be measuring a distance between distributions on two different variables; z|c and c|z. If one were to interpret the second one as the unnormalized distribution on z defined via the likelihood for c given z; even this has an issue because then the expression for KL where we plug the unnormalized density in place of the normalized need not be positive which is something they need to derive their bound.\\n\\n--Another issue is that the regularization lambda should apply to both the terms in the bound but in Equation (7) only appears selectively for one of the two terms. \\n\\nIt is also not clear how the loss function proposed differs from that of the CDVAE, etc. If the novelty is in applying to continual learning and new datasets, it is not clear that this is sufficient.\\n\\nAdditional feedback for authors (not part of the main decision reasoning):\\n\\n- What is dt in Algorithm 1 description?\", \"figure_1\": \"-typo \\u201cimplmented\\u201d\\n-What\\u2019s the 3d plot supposed to represent?\\nDoesn't the classification loss have a dependency on the input condition?\\n\\n--What does a \\\"heavy classifier\\\" imply concretely? \\n\\n--\\u201cRedundant weights\\u201d seems like not a very strong constraint especially for a small cardinality label space (like 10, in the case of this paper).\\n\\n--The notation for the proposed parameters theta, theta\\u2019, phi, phi\\u2019 are not consistent with the notation in the intro section, where phi was used for the encoder and theta for the decoder. In later sections they use theta and theta\\u2019 for encoder/decoder resp.\\n\\n-- \\u201cWhen the encoder and decoder networks are sufficiently complex, it is enough to implement each the prior and classification network as one fully-connected layer\\u201d \\u2192 what do the authors mean \\u201cwhen \\u2026 networks are sufficiently complex\\u201d or do they actually mean when the \\u201cwhen the problem is simple enough\\u201d?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper devises a pipeline that aims to address catastrophic forgetting in continual learning (CL) by the well-known generative replay (GR) technique. The key ingredient of the pipeline is a modern variational auto-encoder (VAE) that is trained with class labels with respect to a mutual information maximization criterion.\\n\\nThe paper does not follow a smooth story line, where an open research question is presented and a solution to this problem is developed in steps. The flowchart in Fig 1 is rather a system design consisting of many components, the functionality of which is not clearly described and existence of which is not justified. This complex flowchart does not even describe the complete task. It is in the end plugged into a continual learning algorithm which also performs domain transformation. All of these pieces are very well-known methods (e.g. VAEs, conditional VAEs, CL, catastrophic forgetting, domain transformation) in the literature and this paper puts them together in a straightforward way. Hence, I kindly do not think the outcome is truly a research result. It is more system engineering than science. \\n\\nThe next submission of the paper could choose one or few of these pieces as target research problems and develop a thoroughly analyzed novel technical solution for them. If this solution can be proven to improve a valuable metric (e.g. accuracy, interpretability, theoretical understanding, or computational efficiency) of a setup, it is then worthwhile being published.\", \"minor\": \"The abstract could be improved by providing more clear pointers to the presented novelty.\"}"
]
} |
HkliveStvH | Connectivity-constrained interactive annotations for panoptic segmentation | [
"Ruobing Shen",
"Bo Tang",
"Ismail Ben Ayed",
"Andrea Lodi",
"Thomas Guthier"
] | Large-scale ground truth data sets are of crucial importance for deep learning
based segmentation models, but annotating per-pixel
masks is prohibitively time-consuming. In this paper, we investigate interactive graph-based segmentation algorithms that enforce connectivity. To be more precise, we introduce an instance-aware heuristic of a discrete Potts model, and a class-aware Integer Linear Programming (ILP) formulation that ensures a global optimum. Both algorithms can take RGB values, or the feature maps from any DCNN (whether trained on the target dataset or not), as input. We present competitive semantic (and panoptic) segmentation results on the PASCAL VOC 2012 and Cityscapes datasets given initial scribbles. We also demonstrate that our interactive approach can reach $90.6\%$ mIoU on the VOC validation set with an overhead of just $3$ correction scribbles. Our algorithms are thus suitable for interactive annotation of new or existing datasets, or can be used inside any weakly supervised learning framework on new datasets. | [
"Panoptic Segmentation",
"Semantic Segmentation",
"Interactive Segmentation",
"Integer Programming"
] | Reject | https://openreview.net/pdf?id=HkliveStvH | https://openreview.net/forum?id=HkliveStvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"zt4p3lJx-",
"Byl9phUFjr",
"ByxhFh8tjH",
"Skg_DhUKoB",
"HkgGijUFjH",
"HkgsejLYir",
"r1e_52FhcH",
"SkxvMZ5i9B",
"BJlbD83nYr",
"BygLwR_cYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747397,
1573641409779,
1573641347821,
1573641311845,
1573641114414,
1573640947281,
1572801680449,
1572737295472,
1571763800561,
1571618397676
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2372/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2372/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2372/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2372/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2372/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2372/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2372/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2372/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2372/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes two methods for interactive panoptic segmentation (a combination of semantic and instance segmentation) that leverages scribbles as supervision during inference. Reviewers had concerns about the novelty of the paper as it applies existing algorithms for this task and limited empirical comparison with other methods. Reviewers also suggested that ICLR may not be a good fit for the paper and I encourage the authors to consider submitting to a vision oriented conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We thank the reviewer for the detailed comments.\\n\\n1. Our method is not a direct apply of existing algorithms. The heuristic is greatly modified to comply with scribbles and in addition enforces connectivity of each scribbled region. Our ILP formulation extends previous MRF (only for class) to panoptic (both class and instance) by introducing dummy edge variables, and does not increase the complexity of the problem.\\n\\n2. Although not a learning algorithm, ours are most suitable for annotating ground truth dataset, which is of fundamental importance to the data hungry deep learning method. Extension of our algorithms into the weakly/scribbles supervised learning framework similar to Lin et al (2016) can be a natural next step. In addition, we have conducted extensive experiments using RGB, lower level features and probability map as input to our algorithms. Results show that traditional machine learning algorithms can also benefit from deep learning by taking the its feature layers as input, which may be direction that is worth discovering.\\n\\n3. Since the main application of our method is to annotate dataset, hence in a data annotation point of view, we argue that the artificial scribble is realistic. We tested drawing more strict (even closer to the boundary than the artificial ones) scribbles on Cityscapes, and it takes on average only 2 minutes per image, which is still a dramatic decrease in annotation time compared to 1.5 hours. On the other hand, by adopting the online available of VOC scribbles, we sort of already validate the robustness of our algorithms (84.6% mIoU and 90.6% after 3 correction scribbles). \\n\\n4. We are re-running our algorithms that take pixels as input on Cityscapes. We will report the results once it\\u2019s done.\\n\\n5. Our paper is mainly focused on the design of the algorithm and ILP formulation, hence we argue the experiments on two datasets suffice to validate the performance. In the future work when incorporating our algorithms into the weakly supervised learning framework, it is of great interest to test on more challenging datasets.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"We thank the reviewer for the detailed comments.\\n\\n1. Other scribble supervision method requires deep learning in the loop, while ours not. Since our contribution lies on the design of the algorithm/formulation that enforces connectivity, which could be served as a baseline for any weakly supervised learning method. Although we are pretty optimistic that adding the connectivity constraint would boost up the performance of Lin et al (2016), we leave the implementation and experiments of that as future work.\"}",
"{\"title\": \"Response to reviewer 4\", \"comment\": \"We thank the reviewer for the detailed comments.\\n\\n1. Our contribution is the design of a heuristic optimization algorithm and an ILP formulation that enforces connectivity (NP-hard) for interactive panoptic segmentation. Other than adopting lower level features of any deep learning basenet or its final probability map as input to our algorithms, no learning is evolved in our approach. Hence, it is not suitable to compare with other weakly supervised learning approach.\\n\\n2. Since our contribution lies on the design of the algorithm/formulation, we did not focus on selecting and fine tunning the SOTA deep learning network. We searched and the networks presented in the paper are the best public available deep nets together with checkpoint on the internet for the time being.\\n\\n3. We agree with the reviewer on this point, and that is why we added a condition on this statement, \\u201cgiven initial scribbles\\u201d.\\n\\n4. Since the main application of our method is to annotate dataset, we argue that the artificial scribble is realistic for any data annotator. We tested drawing even more strict (closer to the boundary than the artificial ones) scribbles on Cityscapes, and it only takes on average 2 minutes per image, which is still a dramatic decrease in annotation time compared to 1.5 hours. On the other hand, by adopting the online available of VOC scribbles, we sort of already validate the robustness of our algorithms (84.6% mIoU and 90.6% after 3 correction scribbles).\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"We thank the reviewer for the detailed comments.\\n\\n1. We have compared ours to MRF shown in Table 2, and ours is 0.9% and 3.8% better in mIoU.\\n\\n2. Our paper is mainly focused on the design of the algorithm and ILP formulation with connectivity constraints, hence we argue the experiments on the two datasets presented in the paper suffice to validate the performance. In the future work when incorporating our algorithms into the weakly supervised learning framework, it is of great interest to test on the more challenging datasets.\\n\\n3. Our focus is the design of the two interactive optimization algorithm/formulation that enforces connectivity, which could be served as a baseline for any weakly supervised learning method. Although we are pretty optimistic that adding the connectivity constraint would boost up the performance of Lin et al (2016), we leave the implementation and experiments of that as future work.\"}",
"{\"title\": \"Response to all the reviewers\", \"comment\": \"First of all, we would like to thank all the reviewers for spending time reading our paper.\\nWe have replaced Fig. 4 and corrected all typos mentioned by the reviewers.\", \"we_first_want_to_remind_the_4_highlights_of_our_paper\": \"1. We enforce the connectivity constraints (NP-hard) by introducing either a class-agnostic heuristic or a class-aware MRF integer programming, the later being an exact global optimization formulation.\\n\\n2. Our method is not a learning method, but instead optimization algorithms that are suitable for inference based on RGB or lower level features input, or post-processing on existing learning algorithms (using their probability maps as input). \\n\\n3. Ours does not necessarily require any available training data, i.e., it can use only the RGB, or the lower layer of any base network trained on arbitrary data set, as the input to our algorithms. \\n\\n4. Compared to weakly supervised learning approach, the connectivity of scribbled region allow more control within the annotation framework (no outliers as shown in Fig. 1), hence is particularly suitable for annotating dataset.\\n\\nFinally, the novelty of the paper comes from the re-design of the heuristic algorithm that complies with scribbles, and a novel ILP formulation that introduces dummy edge variables to deal with the multi-instance panoptic segmentation, while not increasing the complexity compared to the MRF for semantic segmentation. In addition, both methods enforce the connectivity prior.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces post-processing methods for panoptic (a combination of semantic and instance) segmentation, which are capable of using scribble annotations provided interactively by users. The proposed methods rely on (i) a discrete Potts model making use of RGB or DCNN features of the image, as well as the edge connectivity in a superpixel graph, and (ii) an integer linear program corresponding to a MRF with a pairwise data term. The proposed methods are evaluated on the Pascal VOC 2012 and Cityscapes datasets.\\n\\nThe paper is generally well written and easy to follow. The problem of panoptic segmentation is fundamental to computer vision, and as such, of relevance to the ICLR community. The proposed methods appear novel and worth pursuing.\\n\\nA first reservation about the paper is that the method is primarily one of post-processing (after, e.g., extracting primary features from a DCNN), but the most common means of post-processing, namely conditional random fields, are not even mentioned, let alone compared against.\\n\\nThe other main reservation about the paper is that there are very few comparisons to the abundant literatures on either semantic or instance segmentation, and as such it is difficult to appreciate the paper\\u2019s contributions to these areas. Of note:\\n\\n1. Evaluate on the COCO dataset, which is the current standard for segmentation ;\\n2. The scribble supervision method of Lin et al (2016) is mentioned, but not compared against.\\n\\nSeparately, the paper should compare the proposed method for semantic and instance segmentation with other methods that use weak-labels such as:\\n* Laradji, I. H., Vazquez, D., & Schmidt, M. (2019). Where are the Masks: Instance Segmentation with Image-level Supervision. arXiv preprint arXiv:1907.01430.\\n* Laradji, I. H., Rostamzadeh, N., Pinheiro, P. O., Vazquez, D., & Schmidt, M. (2019). Instance Segmentation with Point Supervision. arXiv preprint arXiv:1906.06392.\\n* Cholakkal, H., Sun, G., Khan, F. S., & Shao, L. (2019). Object counting and instance segmentation with image-level supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 12397-12405).\\n* Zhou, Y., Zhu, Y., Ye, Q., Qiu, Q., & Jiao, J. (2018). Weakly supervised instance segmentation using class peak response. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3791-3800).\\n* Zhu, Y., Zhou, Y., Ye, Q., Qiu, Q., & Jiao, J. (2017). Soft proposal networks for weakly supervised object localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1841-1850).\\n* Ahn, J., & Kwak, S. (2018). Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4981-4990).\\n\\nAs the paper currently stands, given the gaps in the experimental evaluation, it is difficult to appreciate the contributions and complementarities of the proposed methods to the panoptic segmentation problem. As such, the paper would require more work before recommending acceptance at ICLR.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes two graph-based deep network feature fusion methods with connection constraints for semantic and panoptic segmentation. By incorporating additional scribble information to DCNN features, the methods yield improved results over the original network predictions on two popular semantic and panoptic segmentation datasets. Through interactively correcting error regions with the scribbles, further performance increase can be obtained.\\n\\nI am not completely convinced by the novelty and experiments.\\n(1) First, the idea of smart annotation can be formalized as a weekly supervised segmentation problem where only part of the annotation is available. Can the authors justify how your work differs from those works solving the weekly supervised problem and what's your advantages. (Seed Expand and Constrain ... Alexander Kolesnikov; STC: A simple to Complex Framework for Weakly... Yunchao Wei; FickleNet Jungbeom Lee; etc..) Or, if possible, could you make a fair comparison with some existed weekly supervised approach on the final (semantic) result. Second, Potts model, MRF, K-nearest cut are known approaches. Thus I would like to know the deeper contribution of this work other than set constraints and solve ILP.\\n(2) The authors did not justify the use of less powerful models (DeepLabV2 and DRN) as both the inputs for l0H and ILP-P and the baseline comparison. The authors mentioned the current SOTA model (DeepLabV3+), which has achieved 79.55% mIoU on the CityScapes val set. However, they did not perform experiments using its probability map. It would be more convincing if the same performance gain can be achieved by using the SOTA model as inputs to the algorithms.\\n(3) The argument of achieving competitive results for panoptic segmentation is rather weak. To approach the panoptic segmentation problem, the authors essentially used scribbles to separate semantic region predictions into individual instances. Since the proposed algorithm requires as many scribbles to be drawn as there are regions, the baseline network only needs to predict semantic classes, and the algorithms uses the provided region IDs from the scribbles to segment individual instances. While this still has numerous applications in data annotation, it is somewhat unjust to claim that this method achieves competitive results in panoptic segmentation.\\n(4) The artificial scribbles for CityScapes experiments do not resemble human-drawn scribbles. Compared to the scribbles data for VOC12, the artificially generated scribbles for CityScapes experiments are visually idealistic. Rather than a stroke through the object, the generated is more similar to an outline of the object, which conveys a lot more information than a single line. Particularly when applied on super-pixels, it seems that super-pixels can easily be merged together by grouping any super-pixels within a scribble outline.\\nThere are some other minor suggestions. For example, it might be clearer and easier to read if section 2.2.2 is presented in an algorithm format. Some minor typos and grammatical mistakes should also be corrected.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper investigates scribble-based interactive semantic and panoptic segmentation. The algorithms described build a graph on superpixels and do not require deep features or labels but rely on \\u201cscribbles\\u201d for supervision. Given that semantic (and panoptic) annotation is very labor-intensive, advances in scribble-based annotation could significantly improve annotation time for new datasets; in applications where real-time performance is not required, scribble-based refinement of predictions could also be advantageous.\\n\\nThe experiments compare the proposed algorithms to deep baselines for VOC2012 and Cityscapes panoptic segmentation, and show impressive performance even without deep features. However they do not compare results to other scribble supervision methods to highlight the advantages of their approach over prior work. I\\u2019d like for the experiments section to have a proper comparison to prior scribble algorithms (e.g. in section 4.4, comparing to other algorithms with the SOTA approach as baseline) to clearly show the advantage of their approach.\\n\\nThe results are impressive compared to the deep learning baseline, but I think further experimental validation should exist for properly comparing to prior work.\", \"post_rebuttal\": \"I maintain my recommendation.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"Summary:\", \"key problem: efficiently leveraging scribbles as interactive supervision (at test time) for panoptic segmentation;\", \"contributions: 1) two algorithms leveraging scribbles via a superpixel connectivity constraint (one class-agnostic local diffusion heuristic, one class-aware with a MRF formulation), 2) experiments on PASCAL VOC 2012 and Cityscapes showing that both methods i) can achieve good performance without training data (using RGB values or pretrained features), ii) can improve the performance of a fully supervised baseline when using its probability maps as representation, and iii) can significantly improve performance beyond the state of the art on PASCAL when used interactively (90% mIoU with 3 rounds of corrective scribbles).\"], \"recommendation\": \"weak reject\", \"key_reason_1\": [\"unclear novelty and relevance to ICLR.\", \"The paper proposes to apply two existing algorithms (Nguyen & Brown 2015, Rempfler et 2016) to a new task (interactive panoptic segmentation): what is the claimed novelty? What is specific to panoptic segmentation vs semantic or instance segmentation? Could the difference with related work in Section 2 be discussed more precisely?\", \"Furthermore, there seems to be no learning (representation or otherwise) involved in this submission. The paper mentions potential applications to weakly-supervised learning in Section 5, but it does not provide clear insights into what would be the benefits in terms of representation learning (vs. RGB, pre-trained features, or probability maps).\", \"Overall, this paper might be more tailored for a Computer Vision venue like CVPR.\"], \"key_reason_2\": [\"lack of sensitivity / robustness analysis.\", \"The scribbles are \\\"simulated\\\" using morphological operations on the ground truth (A.2, A.3): does this lead to realistic scribbles? Figure 3 (which is unclear) shows that the \\\"scribbles\\\" might be precise outlines or contours, which are very different than the expected scribbles illustrated in Figure 2. Contours provide much stronger information for segmentation, and are much more likely to effectively leverage the connectivity prior (esp. with the diffusion heuristic), but are they really scribbles / cheap supervision?\", \"What is the importance of the superpixel coverage by scribbles or missing scribbles or the location of scribbles relative to segment boundaries? What are the impact of realistic deviations from the expected scribble policy that are likely to happen in practice? Measuring sensitivity to different types of noise (by perturbing / dropping out scribbles) seems important to assess the practical usefulness and robustness of the method.\", \"PASCAL VOC and Cityscapes are small datasets. Experiments on bigger more recent ones like Mapillary Vistas and COCO are becoming the standard protocol in the instance/semantic/panoptic segmentation community. How would this method fare on those much more challenging datasets? What are the benefits of the proposed interactive methods in terms of scalability?\"], \"additional_feedback\": \"- Fig. 
4 is too low resolution / blurry;\\n- typos: \\\"tarining set\\\", \\\"weekly supervised\\\".\\n\\n## Update following the rebuttal\\n\\nThanks to the authors for their replies. Sadly, my concerns are only answered at a high-level, and the consensus among reviewers is clear. Hence I confirm my rating to reject. I hope the feedback provided above will assist the authors in improving the work or finding a more suitable venue.\"}"
]
} |
B1lqDertwr | Regularization Matters in Policy Optimization | [
"Zhuang Liu",
"Xuanlin Li",
"Bingyi Kang",
"Trevor Darrell"
] | Deep Reinforcement Learning (Deep RL) has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks. Yet, conventional regularization techniques in training neural networks (e.g., $L_2$ regularization, dropout) have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment. In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks. Interestingly, we find conventional regularization techniques on the policy networks can often bring large improvement on the task performance, and the improvement is typically more significant when the task is more difficult. We also compare with the widely used entropy regularization and find $L_2$ regularization is generally better. Our findings are further confirmed to be robust against the choice of training hyperparameters. We also study the effects of regularizing different components and find that only regularizing the policy network is typically enough. We hope our study provides guidance for future practices in regularizing policy optimization algorithms. | [
"Regularization",
"Policy Optimization",
"Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=B1lqDertwr | https://openreview.net/forum?id=B1lqDertwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"771c8FZSgv",
"eA8ubw6DE9",
"HJlwM292sB",
"rkgs0schoS",
"SJeyVKXjor",
"HJx1P0MooS",
"rken3DMjjB",
"rkg5D1F5or",
"rkx1EyK5oH",
"SkgDRAd5jr",
"SJeZG0_9jr",
"SJeBbpOqjB",
"SJes-2O5oH",
"S1lSCid9jS",
"rkeScHLTKr",
"SJlYWS2hYB",
"rJgXCiRoFr"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1582765807583,
1576798747368,
1573854223334,
1573854162983,
1573759270934,
1573756502960,
1573754803983,
1573715809801,
1573715750833,
1573715663515,
1573715465303,
1573715196720,
1573714946664,
1573714893417,
1571804557201,
1571763457294,
1571707851465
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2370/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2370/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2370/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2370/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Thank you and new version\", \"comment\": \"We would like to thank the AC and all reviewers for the constructive feedback! We have since incorporated new metrics (scaled rewards, z-scores), and corresponding statistical significance test in our new version https://arxiv.org/abs/1910.09191v2 . We have also emphasized our novelty aspect and added analytic experiments following the rebuttal.\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes an analysis of regularization for policy optimization. While the multiple effects of regularization are well known in the statistics and optimization community, it is less the case in the RL community. This makes the novelty of the paper difficult to judge as it depends on the familiarity of RL researchers with the two aforementioned communities.\\n\\nBesides the novelty aspect, which is debatable, reviewers had doubts on the significance of the results, and in particular on the metrics chosen (based on the rank). While defining a \\\"best\\\" algorithm is notoriously difficult, and could be considered outside of the scope of this paper, the fact is that the conclusions reached are still sensitive to that difficulty.\\n\\nI thus regret to reject this paper as I feel not much more work is necessary to provide a compelling story. I encourage the authors to extend their choice of metrics to be more convincing in their conclusions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"About the ranking metric [1/2]\", \"comment\": \"Thank you for your reply!\\n\\nThere exist some prior works which use average rank, e.g., Table 4 and 5 from [1], Table 2 from [2]. We believe average rank can be a useful (but certainly not perfect) summary tool when the original performance scores on different datasets/tasks are not comparable. In such cases, taking the mean on the original scores is not meaningful and averaging rankings might be more plausible.\\n\\nWe found there is quite a lot of interesting debate/analysis of whether it is meaningful to take ordinal data\\u2019s mean (e.g., see blogpost at [3]). Standard deviation of rank may be less used and interpretable. We were trying to standard deviations to address the question on variance of algorithms. We\\u2019ll be happy to remove the standard deviation tables if needed.\\n\\nAs for interpretation of your example, in this case maybe we cannot conclude M1 is truly better than M2, but the same problem exists for any other performance score (e.g., error rate in classification) with a standard deviation: if M1 is better than M2 in terms of mean, but they have different stds (either that of M1 or M2 is larger), it is always hard to say which one is truly better. We think the best way to interpret this is to say \\u201cM1 is better than M2 in terms of average performance, and/but M1/M2 is more stable in terms of variance\\u201d. In our experiment, we found regularizations with higher average rankings (L2, L1, weight clipping) tend to have relatively smaller standard deviations (Table 3 and 4\\u2019s caption).\\n\\nFor statistical significance testing, we use the rank data of regularization methods\\u2019 performance on each environment for each algorithm, and we perform t-test for correlated samples (scipy.stats.ttest_rel). Note that we can\\u2019t use independent two-sample t-test based only on means and stds, because rank data points between two regularization methods are not independent (e.g., ranks of M1 and M2 on the same environment). \\n\\nWe calculate p-values based on the ranks on the six hard environments, to tell whether the difference between the ranks of two regularization methods is significant. We use the data from Section 5 because there are more data points than Section 4 for each algorithm. In Section 5, there are 3 hard environments and each environment has 5 hyperparameters, so for each algorithm, there are 3*5=15 ranks for each regularization method (and 4 algorithms * 15 ranks = 60 ranks for the \\u201cTOTAL\\u201d entry). For example, for SAC, the ranks of L2 are [1 2 1 1 3 3 3 7 2 6 1 6 2 1 2], and the ranks of baseline are [7 6 5 6 2 7 6 2 6 2 5 2 5 7 7], so scipy.stats.ttest_rel gives us the p-value of 0.0361. We compare all regularizations versus the baseline, and we compare conventional regularizations versus entropy regularization, which are the focus of our work. The p-values are shown in the tables below:\"}",
"{\"title\": \"About the ranking metric [2/2]\", \"comment\": \"p-values:\", \"regularization_versus_baseline\": \"------------------------------------------------------------------------------------\\n A2C TRPO PPO SAC TOTAL\\n------------------------------------------------------------------------------------\\nL2 0.0022 0.0182 0.0000 0.0361 0.0000\\nL1 0.0395 0.0104 0.0000 0.1887 0.0000\\nWeight Clip 0.0947 0.5951 0.0001 0.3923 0.0014\\nDropout 0.0001 N/A 1.0000 0.2735 0.0010\\nBatchNorm 0.0000 0.0000 0.3963 0.0413 0.0077\\nEntropy 0.0369 0.4499 0.0838 0.4332 0.0070\\n------------------------------------------------------------------------------------\", \"regularization_versus_entropy\": \"------------------------------------------------------------------------------------\\n A2C TRPO PPO SAC TOTAL\\n------------------------------------------------------------------------------------\\nL2 0.2515 0.0131 0.0001 0.0677 0.0000\\nL1 0.4441 0.0637 0.0026 0.5622 0.0020\\nWeight Clip 0.8358 1.0000 0.0382 0.8166 0.3225\\nDropout 0.0000 N/A 0.1643 0.7650 0.0019\\nBatchNorm 0.0000 0.0000 0.0326 0.2115 0.0000\\n------------------------------------------------------------------------------------\\n\\nIt can be seen that, in the TOTAL column, all regularization methods\\u2019 rankings are statistically significantly different from baseline (p<0.05), and only weight clipping is not significantly different from entropy. For each individual algorithm, the significance is lower, partially due to the fewer number of data points. We can conclude that most of the differences in rankings, when summarized in TOTAL and supported by enough data points, are statistically significant. Note that we did not claim that for each algorithm, every considered regularization is statistically significantly better than entropy/baseline.\\n\\nFinally we would like to mention that ranking is only one of our tools for summarizing and comparing regularizations. From our improvement percentage results and the training curves, we can mostly draw similar observations. If needed, we are happy to move the ranking tables to Appendix, and/or list the complete result tables for each algorithm so the full information is available.\\n\\n[1] Katrin Lasinger, Ren\\u00e9 Ranftl, Konrad Schindler, and Vladlen Koltun. Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer. arXiv:1907.01341, 2019\\n[2] A. Knapitsch, J. Park, Q.-Y. Zhou, and V. Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics, 36(4), 2017\\n[3] https://measuringu.com/mean-ordinal/\"}",
"{\"title\": \"Explanation on Appendix G\", \"comment\": \"Thanks for your quick reply! We are glad to give more explanations on experiments in Appendix G.\\n\\nFor experiments in Figure 5 and 6, we take an already trained model, and then sample multiple trajectories in the environment and evaluate each trajectory\\u2019s return. These trajectories are unseen samples during training, since the state space is continuous. The trajectory return\\u2019s distributions are plotted in the figures. For baseline, some of the trajectories yield relatively high returns, while others yield low returns, demonstrating the baseline cannot stably generalize to unseen examples; for regularized models, the returns are mostly high and have smaller variance, demonstrating they can generalize more stably to unseen samples.\\n\\nFor experiments in Figure 7, we vary the number of timesteps/samples seen during training, and present results in the barplot. We find that for regularized models to reach the same level of return as baseline, they only need much fewer samples in training. Note that the return is also on unseen samples. Since the ability to learn from fewer samples is closely related to the notion of generalization, we can conclude that regularized models have better generalization ability than baselines.\\n\\nWe have revised text in the new revision to make it more clear. Finally we would like to add that similar to supervised learning (SL), in RL the model naturally faces the problem of generalization to unseen samples. For RL, the seen samples during training is like the training set in SL, and the sampled trajectories during evaluation is like the test set in SL, and the whole state space of the environment is similar to the input data distribution in SL. The model must learn to generalize from seen examples (training set in SL) to unseen examples (test set in SL) to solve the task, since it cannot traverse the entire state space of the environment (the whole data distribution in SL) during training. Therefore, in RL we might expect regularization to help generalization just as it helps generalization in supervised learning.\"}",
"{\"title\": \"Quick Reply\", \"comment\": \"I am happy with your response in (1), (3) and (4). For (2), I am still concerned about the average and standard deviation of the ranks. Is there any existing literature using these statistics? It is unclear to me how to interpret the mean and standard deviation of ordinal (or categorical) variables. For example, M1 has an average rank=1.33 and standard deviation=0.47, and M2 has an average rank=1.7 and standard deviation=0.87. We can always compare the numbers (i.e. 1.33 < 1.7), however, can we say M1 is better than M2 (in term of the average rank)? Is M1 statistically significant better than M2? How should we interpret the standard deviation of the ranks?\"}",
"{\"title\": \"Quick Reply\", \"comment\": \"Thanks for your reply. I agree with your response in A1.\\n\\nFor A2, it is still unclear to me how the paper concludes that the baseline cannot stably generalize to unseen samples based on the experiments in Appendix G.\"}",
"{\"title\": \"Response to AnnoReviewer1 [1/3]\", \"comment\": \"Thanks for your constructive comments! We try to address your concerns below, and we have uploaded a revision reflecting the changes. For easier reading we pasted some of your comments and please bear with our response length.\\n\\nQ1. Remove repetition.\\n\\nA1. We have removed some repetition (of observations) in section 4, for example, \\u201cBN and dropout are generally not favorable for the three on-policy algorithms, but they can be useful on SAC (ranking higher than baseline). L1 and weight clipping perform similarly as L2 in TRPO and PPO, better than entropy regularization, but worse in A2C and SAC\\u201d in the paragraph of \\u201cranking all regularizations\\u201d.\\n\\nQ2. Suggestion on introduction\\n\\nA2. \\n(1) \\u201cI might also add one the main reason that the researchers in the field of DRL have spent less time on regulation...\\u201d\\nFollowing your suggestion, we added \\u201cMoreover, researchers in deep RL focus more on high-level algorithm designs, which is more closely related to the field of reinforcement learning, and focus less on network training techniques such as regularization\\u201d in the second paragraph of intro as a reason why common regularizations were not widely considered.\\n\\n(2) \\u201cIt would be useful to also emphasize the role of the questions the researchers investigate to answer.\\u201d\\nWe also added \\u201cour results also show that neural network training techniques, such as regularization, can be as important as high-level reinforcement learning algorithms in terms of boosting performance\\u201d in the last paragraph of intro (second last sentence), to emphasize our investigated questions\\u2019 role in this field.\\n\\nQ3. \\u201cDirectly regularize\\u201d parameters\\nA3. Following your suggestion, we have changed the sentence to \\u201calso, these techniques consider regularizing the output of the network, while conventional regularization methods mostly directly regularize the parameters\\u201d.\\n\\nQ4. $H_{s_i}$ not defined.\\nWe have defined the $H_{s_i}$ in the revision (\\u201c$H_{s_i} = -\\\\mathbb{E}_{a_i\\\\sim \\\\pi(a_i|s_i)} \\\\log \\\\pi(a_i|s_i)$, where $(s_i, a_i)$ is the state-action pair\\u201d). The left hand side of the equation was mistakenly omitted. Thanks for catching this.\\n\\nQ5. Repeated \\u201cthe\\u201d.\\nWe have corrected this typo. Thanks for pointing out.\\n\\nQ6. Term \\u201cNot converge\\u201d\\nYes, your guess is correct: by this term we mean \\u201cthe algorithm does not converge to a reasonable solution\\u201d. In the revision, we have changed the term \\u201cnot converge\\u201d to \\u201cnot converge to a reasonable solution\\u201d, and \\u201cconverge\\u201d to \\u201cconverge to a high level\\u201d in corresponding places. Sorry for the confusion.\\n\\nQ7. \\u201cBN and dropout hurts on-policy algorithms but can bring improvement only for the off-policy SAC algorithm.\\u201dDoes it mean that deploying BN, results in a more sensitive algorithm? or it means that the performance degrades (which is a different topic than section 5 is supposed to serve)?\\n\\nA7. We mean the performance degrades. This is to confirm the same phenomenon as in section 4 about BN/dropout still holds. 
\\n\\nWe would like to clarify that the main purpose of section 5 is to confirm our findings in section 4 still hold with multiple hyperparameters, since results in RL are sensitive to hyperparameter changes [1, 2] and thus the conclusions can also be vulnerable to them. In this section, by varying hyperparameter configurations, we found that regularizations can consistently improve the performance with different sampled hyperparameters. \\n\\nAs a side product, this also leads to our additional conclusion: proper regularization can reduce the hyperparameter sensitivity and ease the hyperparameter tuning process of RL algorithms, since they can bring up the performance of baselines with suboptimal hyperparameters to be even higher than baselines with better hyperparameters, as shown in Figure 2 and its corresponding analysis.\\n\\nQ8. Testing the hypothesis about generalization between samples. \\u201cThe author might be interested in training the models with bigger sample sizes, more training iteration, different function classes, and more fitting in order to test this hypothesis.\\u201d\\n\\nA8. We have added two sets of experiments to provide evidence for this hypothesis in Appendix G:\\n\\nFor experiments in Figure 5 and 6, we take an already trained model, and then sample multiple trajectories in the environment and evaluate each trajectory\\u2019s return. These trajectories are unseen samples during training, since the state space is continuous. The trajectory return\\u2019s distributions are plotted in the figures. For baseline, some of the trajectories yield relatively high returns, while others yield low returns, demonstrating the baseline cannot stably generalize to unseen examples; for regularized models, the returns are mostly high and have smaller variance, demonstrating they can more stably generalize to unseen samples.\"}",
"{\"title\": \"Response to AnnoReviewer1 [2/3]\", \"comment\": \"(..Continued on A8) For experiments in Figure 7, we vary the number of timesteps/samples seen during training, and present results in the barplot. In the barplot, we find that for regularized models to reach the same level of return as baseline, they only need much fewer samples in training. Note that the return is also on unseen samples. Since the ability to learn from fewer samples is closely related to the notion of generalization, we can conclude that regularized models have better generalization ability than baselines.\\n\\nQ9. \\u201cSection 7 on \\u2018Why do BN and dropout work only with off-policy algorithms?\\u2019 while I agree with the authors on their first reason which is quite commonly known, I might hesitate to make the second statement.\\u201d\\n\\nA9. Batch Normalization layers can be sensitive to input distribution shifts, since the mean and standard deviation statistics depend heavily on the input, and if the input distribution changes too quickly in training, the mapping functions of BN layers can change quickly too, and it can possibly destabilize training. One evidence for this is that in supervised learning, when transferring a ImageNet pretrained model to other vision datasets, sometimes the BN layers are fixed (e.g., see [3,4]) and only other layers are trained. For on-policy algorithms, we always use the samples generated from the latest policy; for off-policy algorithms, the sample distributions are relatively slow-changing since we always draw from the whole replay buffer which holds cumulative data. Like in supervised learning, the faster-changing input distribution for on-policy algorithms could be harmful to BN. We have revised the text to make it more clear in the revision.\\n\\nWe agree that our qualitative analysis is only one of the possible reasons for the BN & on-policy incompatible issue. The real concrete reasons are open to discussion and could be interesting future research. We are happy to remove this part of the analysis if needed.\\n\\nQ10. Significance of Contribution. \\u201cGenerally, I found this paper an interesting paper and appreciate the authors for their careful empirical study. But I found the contribution of this work to be not significant enough. Most of the statements and claims in this paper are well know in the community, especially among deep learning practitioners. While I acknowledge the scientific value of this study, its concreteness, and appreciate the contribution of this paper, due to the low acceptance rate of this conference, I might be reluctant in accepting this paper.\\u201d\\n\\nA10. We are glad to see the reviewer finds our work interesting and we thank the reviewer for the acknowledgement on the scientific value of our study. However, we slightly disagree with the statement that most of our statements and findings are already well known in the community. Our reasons are below:\\n\\n1. To our best knowledge, our work is the first to study the effects of common regularizers in policy optimization, and no prior publications have experimented with or discussed this issue in detail. Most prior works on RL regularization use or study entropy regularization [5,6], which we have shown to be generally inferior to L2 regularization in our experiments on continuous control tasks; other related works [7,8] study agents\\u2019 ability to generalize to new environments, including some on the effects of regularizations [9,10]. 
Our work is the first to study the agent\\u2019s performance in the same environment and to find common regularizers to be effective, often better than entropy regularization. \\n\\n2. Popular works in this field, such as DQN, TRPO, PPO, and SAC, did not consider using common regularizers and did not mention regularization\\u2019s effect in their papers. Regularization in RL is an important yet largely ignored issue, as the reviewer also pointed out that most RL works focus on high-level reinforcement learning algorithms.\\n\\n3. Popular RL codebases (such as OpenAI Baselines [11], Stable Baselines [12], Ray [13]) do not support our investigated regularization methods other than entropy regularization. \\n\\n4. To the best of our knowledge, our work is the first to discuss whether to regularize the policy and/or value network, and the behavior discrepancy between on/off-policy algorithms in terms of BN and dropout, which are very practical problems to consider.\\n\\nTherefore, we believe our work brings new (and possibly surprising) findings to the community. Although some practitioners may have experience with trying common regularizers for RL, our work is the first systematic and comprehensive study that brings the problem and findings to the community, which we believe could guide future research/practices and be a good contribution to ICLR.\"}",
"{\"title\": \"Response to AnnoReviewer1 [3/3]\", \"comment\": \"\", \"references\": \"[1] Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger.Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelli-gence, 2018.\\n[2] Islam, R., Henderson, P., Gomrokchi, M., and Precup, D. (2017). Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. In ICML 2017 Reproducibility in Machine Learning Workshop.\\n[3] https://github.com/jwyang/faster-rcnn.pytorch/blob/master/lib/model/faster_rcnn/resnet.py#L288\\n[4] https://github.com/torch/nn/issues/873\\n[5] Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, and Dale Schuurmans. Understanding the impact of entropy in policy learning. arXiv:1811.11214, 2018.\\n[6] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, TimHarley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pp. 1928\\u20131937, 2016.\\n[7] Chenyang Zhao, Olivier Sigaud, Freek Stulp, and Timothy M. Hospedales. Investigating generalisation in continuous deep reinforcement learning. arXiv:1902.07015, 2019.\\n[8] Chiyuan Zhang, Oriol Vinyals, Remi Munos, and Samy Bengio. A study on overfitting in deep reinforcement learning. arXiv:1804.06893, 2018.\\n[9] Jesse Farebrother, Marlos C Machado, and Michael Bowling. Generalization and regularization indqn.arXiv preprint arXiv:1810.00123, 2018.\\n[10] Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning. arXiv:1812.02341, 2018.\\n[11] https://github.com/openai/baselines\\n[12] https://github.com/hill-a/stable-baselines\\n[13] https://github.com/ray-project/ray\"}",
"{\"title\": \"Response to AnnoReviewer2 [1/2]\", \"comment\": \"Thank you for your constructive comments! We have uploaded a revision and we answer your questions below. For easier reading we pasted some of your comments and please bear with our response length.\\n\\nQ1. \\u201cHowever, the improvement can simply due to better hyperparameter optimization. When we introduce more hyperparameters and computation compared to baselines, it\\u2019s not surprising to see a better performance, especially in deep RL where using a different seed or using a different implementation can have significant difference in performance [1].\\u201d\\n\\nA1. \\n(1). New Hyperparameter/Hyperparameter Optimization\\nWe would like to draw the reviewer\\u2019s attention to Appendix H. In this section, we restrict the regularization methods to a single strength for each algorithm, across different environments. We show that our main findings in Sections 4 and 5 still hold, and that a shared single strength is already enough for a regularization method to yield better performance in most environments. Regularizations\\u2019 effectiveness doesn\\u2019t depend on heavy hyperparameter tuning for each environment.\\n\\nAlso, to demonstrate that our results and findings are stable/reproducible, we reran the baseline and L2 regularization in Section 4\\u2019s experiments, with the same selected hyperparameters on the 6 MuJoCo environments and all 4 algorithms. We ran the experiments for five new different seeds and obtain $\\\\mu_{env, r}$ (mean of final return over five seeds) and $\\\\sigma_{env,r}$ (standard deviation of final return over five seeds). Out of the 24 (algorithm, environment) pairs, compared with the original experiments, we find that there is only one instance that changes from \\u201cimproving\\u201d to \\u201cnot improving\\u201d (SAC Walker), and another instance changing from \\u201cnot improving\\u201d to \\u201cimproving\\u201d (TRPO Ant). Therefore, the percentage of \\u201cimprovement\\u201d stays the same, and regularization can consistently improve the performance.\\n\\n(2). New computation\\nNegligible computation overhead is induced when a regularizer is applied. Specifically, the increase in training time for BN is ~10%, dropout ~5%, while the other more effective regularizers (L2, L1, Weight clipping, entropy) are all <1%.\\n\\n(3). Different seeds and implementations\\nWe would like to emphasize that we tried our best to validate our findings and statements with different configurations, including seeds and training hyperparameters, given the known reproducibility issue [1,2] in RL research: 1) throughout the work, we run each experiment with 5 random seeds, and define \\u201cimprovement\\u201d to be at least at one-std level; 2) in section 5, we vary training hyperparameters to ensure our findings are not specific to the default configuration from the implementation we use; 3) we experiment with single shared hyperparameter in Appendix H; 4) We detailed our hyperparameter search range in Appendix B, open-sourced our code, and checked our results are reproducible in the point #1 above.\\n\\nQ2. \\u201cMoreover, it is unclear that inability to generalize to unseen samples is a problem in the continuous control tasks evaluated in the paper. I think the paper should demonstrate that this is indeed a problem. If it is not a problem, why would you expect regularization to help?\\u201d\\n\\nA2. 
We have added two sets of experiments to provide evidence for this problem in Appendix G:\\n\\nFirst, we investigate the agent\\u2019s obtained rewards on a set of sampled trajectories. We run PPO Humanoid and TRPO Ant, then summarize the rewards on 100 sampled trajectories and plot the reward distribution in Figures 5 and 6. We find that the trajectories generated by the baselines have very large variance: some of the rewards are high, but others are low. This indicates that the baseline cannot stably and reliably generalize to samples unseen during training. On the other hand, L2, L1 and weight clipping are able to reduce the variance between trajectories, with most trajectories having high rewards. This suggests that conventional regularization can improve the model\\u2019s generalization to a larger portion of unseen samples.\\n\\nNext, we present the results of varying the number of training samples/timesteps in Figure 7. We find that the baseline needs to train on more samples (typically 2x more in Figure 7) to reach the same level of performance as those with certain regularizations. In addition, regularization\\u2019s gain over the baseline can be larger when the samples are fewer (SAC Ant, TRPO Ant). This demonstrates that the agent\\u2019s generalization ability improves with the help of regularization, since it can learn better with relatively fewer samples.\"}",
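For concreteness, the per-seed statistics referenced in A1 can be computed as below. This is an illustrative sketch of our own; in particular, the exact inequality used for the one-std improvement check is our reading of the response, not necessarily the paper's verbatim definition.

```python
import numpy as np

def seed_stats(final_returns):
    """mu_{env,r} and sigma_{env,r} over the (here, five) seed returns."""
    r = np.asarray(final_returns, dtype=float)
    return r.mean(), r.std()  # population standard deviation (ddof=0)

def improves(reg_returns, base_returns):
    """A one-std-level improvement check (an assumption on our part):
    the regularized mean minus its std must exceed the baseline mean
    plus its std."""
    mu_r, sd_r = seed_stats(reg_returns)
    mu_b, sd_b = seed_stats(base_returns)
    return (mu_r - sd_r) > (mu_b + sd_b)
```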
"{\"title\": \"Response to AnnoReviewer2 [2/2]\", \"comment\": \"Q3. Missing Details\\n\\n(1). How was $\\\\sigma_{env,r}$ calculated.\\n\\n$\\\\sigma_{env, r}$ is the standard deviation of the 5 returns obtained by 5 runs of random seeds. It is calculated as $\\\\sqrt{\\\\frac {\\\\sum_{i=1}^{n} {(r_i - \\\\mu_{env,r})^2}} {n}}$, where $n=5$ and $r_i$ is the return with $i$th seed. In the revision, we have made it clear in the second sentence of the paragraph, that we use standard deviation (not standard error of mean return).\\n\\n(2). \\u201cWhat does the average rank mean (in Table 2 and 3)? the average ranking over 5 seeds and all environments? If so, does it make sense to compare these numbers? e.g. Algorithm A with rank 1, 1, 7, 7 and Algorithm B with rank 4, 4, 4, 4 have the same average rank, but totally different performance.\\u201d\\n\\nThe ranks of mean return ($\\\\mu_{env, r}$), are collected for each environment. Then the average is calculated. In other words, the average ranks are over environments, but not over different random seeds, since we only rank $\\\\mu_{env,r}$s which is already averaged over random seeds. We have made how we calculated average rank more clear in the paragraph \\u201cRanking all regularizations\\u201d in section 4 of the revision.\\n\\nWe agree that average rank alone is not the best metric to reflect detailed algorithm behaviors, especially the stability/variance of the algorithm. To better measure the variation, we added three tables (Table 3, 5, and 15) in the paper presenting the standard deviation of ranks. If we look closely, we find L2, L1 and weight clipping actually have relatively smaller stds in most times, whereas baseline, entropy, dropout and BN have larger stds. This means the methods that rank higher also rank (slightly) more stably.\\n\\nThe average rank and rank standard deviations serve as summary statistics of each method. In addition to that, we provided the average percentage of \\u201cimproving\\u201d and \\u201churting\\u201d using our definition. We hope those summarized information could serve as a fair comparison among different regularizers, as fully analyzing the detailed results for each (algorithm, regularizer, environment) tuple would be overwhelming and use too much space. For the detailed behaviors/training curves of each (algorithm, regularizor, environment) tuple, we refer our readers to Figure 1, Appendix C, K and L.\\n\\n(3). Conclusion of Figure 3.\\n\\nWe had a brief analysis of Figure 3 in the paragraph below it, and we have added more analysis in the revision. There are several observations we can draw: 1) The baseline performance can be either increasing, decreasing or staying roughly the same when the network depth/width increases. 2) Certain regularizations can help with various widths or depths, demonstrating their robustness against these hyperparameter s and ability to ease hyperparameter tuning. 3) Regularizations do not necessarily help more when the network sizes are bigger, contrary to what we might expect: larger networks may suffer more from overfitting and thus regularization can help more. As an example, L2 sometimes helps more with thinner network (TRPO Ant), and sometimes more with wider network (PPO HumanoidStandup).\\n\\n(4). 
\\\"Why do you use difference hyperparameter ranges (lambda for L2, L1 and entropy regularization) for different algorithms in appendix A?\\\"\\n\\nFor the three on-policy algorithms (A2C, TRPO, PPO) we use the same tuning range, and the only exception is the off-policy SAC. The reason why SAC\\u2019s tuning range is different is that SAC uses a hyperparameter that controls the scaling of the reward signal, while A2C, TRPO, and PPO don\\u2019t. In the original implementation of SAC, the reward signals are pre-tuned to be scaled up by a factor ranging from 5 to 100, according to specific environments. Also, unlike A2C, TRPO, and PPO, SAC uses unnormalized reward because if the reward magnitude is small, then, according to the paper, the policy becomes almost uniform. Due to the above reasons, the reward magnitude of SAC is much higher than the magnitude of rewards used by A2C, TRPO, and PPO. Thus, the policy network loss and the value network loss have larger magnitude than those of A2C, TRPO, and PPO, so the appropriate regularization strengths become higher. Considering the SAC\\u2019s much larger reward magnitude, in our preliminary experiments, we selected a different range of hyperparameters for SAC before we run the whole experiments.\\n\\nQ4. Minor comment\\nA4. In the revision, we have added brief descriptions of each algorithm in Appendix A.\\n\\nThank you again for your review! If you have any further questions we are happy to answer.\\n\\n[1] Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger.Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelli-gence, 2018.\\n[2] Islam, R., Henderson, P., Gomrokchi, M., and Precup, D. (2017). Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. In ICML 2017 Reproducibility in Machine Learning Workshop.\"}",
"{\"title\": \"Response to AnnoReviewer3 [1/2]\", \"comment\": \"Thank you for your positive feedback! We address your concerns below and we have uploaded a revision reflecting the changes. For easier reading we pasted some of your comments and please bear with our response length.\\n\\nQ1a. \\u201c\\u2018rewards are already normalized using running mean filters.\\u2019 I thought that rewards are also normalized for SAC, so I'm not sure how this could explain the difference between the on-policy algorithms and the off-policy ones.\\u201d\\n\\nA1a. In the official implementation of SAC [1] we are using, the reward is not normalized. Here we were trying to understand why regularizing value networks does not help both on- and off-policy methods, instead of explaining their differences. Our intuition is that observation/reward normalization (for on-policy algorithms) or \\u201cclipped double Q learning\\u201d (in SAC) can mitigate value overestimation bias, and overestimation bias is related to the need for regularizing critic.\\n\\nQ1b. \\u201c\\u2018mitigates the overestimation bias...further regularization is unnecessary.\\u2019 Could you clarify the connection between regularization and overestimation bias? Related to this point, in section 2 of the paper, it is written that \\\"L2 regularization is applied to the critic Q network because it tends to have overestimation bias (Fujimoto et al., 2018)\\\" but I was not able to find such an explanation in the cited paper though I may have missed it.\\u201d\\n\\nA1b. We would like to thank the reviewer for carefully checking the referenced work (Fujimoto et al., 2018). After double check, we found that this work does not explicitly mention the connection between overestimation bias and critic regularization. As a result, we decided to remove explanation in our manuscript, and replace it with empirical analysis on results. We provide our original reasoning for the analysis below: \\n\\nDDPG (Lillicrap et al., 2015) originally uses L2 regularization in the critic. Fujimoto et al. (2018) found that there is a significant overestimation bias of the Q-network in DDPG. To alleviate this overestimation bias, they proposed TD3 by introducing clipped double Q-learning (SAC inherits this). At the same time, they also removed the critic L2 regularizer. This might imply that if overestimation bias is not a problem, regularization is not needed on critic. However, after double check, we found there is no direct evidence for the relation between overestimation bias and regularizing critic. Thank you for bringing this up and this helped improve our analysis.\\n\\nQ2. In section 7, in the paragraph on BN/Dropout, could you clarify the point starting from \\\"1) For both BN and dropout,...\\\"? In particular, which discrepancy between the sampling policy and the optimization policy is being referred to here? \\n\\nA2. The theory behind on-policy policy gradient methods (A2C, TRPO, and PPO) necessitates that the same policy should be used for sampling trajectories (collecting data) and performing policy update (forward/backward of NN), otherwise off-policy issues can harm performance.\\n\\nFor BN and dropout layers, there is a distinction between training and testing mode. For BN, during the test mode, moving average of batch statistics are used as normalization; during the training mode, current batch statistics are used as normalization. 
For dropout, during testing mode, all neurons in the neural network are kept; during training mode, only a random subset of the neurons is kept.\\n\\nIn our experiments, when BN or dropout is applied, testing mode is used to sample trajectories (collecting data) while training mode is used for policy updates (forward/backward of the NN). Therefore, the policy $\\\\pi(a|s)$ parameterized by the network is different between trajectory sampling and policy updates, which violates the condition for on-policy algorithms and causes severe off-policy issues. \\n\\nThis discrepancy still exists even if we use training mode for both trajectory sampling and policy updates, because for BN, the batch statistics are different for different batches, and thus the policy network mapping function will be different between sampling and update; for dropout, different subsets of neurons will be dropped out at each iteration, so the policy will again be different between sampling and update. Finally, it is infeasible to train if we use testing mode for both sampling and updates. Thus, no matter how we set the training/test mode of BN/dropout, the discrepancy (off-policy issue) still exists.\"}",
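A minimal PyTorch sketch of the mode mismatch described above; the network and the loss are placeholders of our own, not the paper's architecture.

```python
import torch

policy_net = torch.nn.Sequential(
    torch.nn.Linear(8, 64),
    torch.nn.BatchNorm1d(64),  # swap in torch.nn.Dropout(0.1) for dropout
    torch.nn.Tanh(),
    torch.nn.Linear(64, 2),
)

# Sampling trajectories: test mode, i.e. moving-average BN statistics
# and all dropout units kept.
policy_net.eval()
with torch.no_grad():
    action = policy_net(torch.randn(1, 8))

# Policy update: train mode, i.e. per-batch BN statistics and random
# dropout masks. The network now parameterizes a different pi(a|s) than
# the one that collected the data -- the off-policy discrepancy above.
policy_net.train()
loss = policy_net(torch.randn(32, 8)).pow(2).mean()  # placeholder loss
loss.backward()
```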
"{\"title\": \"Response to AnonReviewer3 [2/2]\", \"comment\": \"Q3. Weight Decay (Loschilov et al. 2018) and L2 regularization.\\n\\nA3. Following your suggestion, we implemented \\u201cfixed weight decay\\u201d (AdamW in the paper) following Loschilov et al. 2018 and compared it with baseline and L2 regularization. We evaluated them with PPO on Humanoid and HumanoidStandup. Similar to L2, we briefly tune the strength of weight decay in AdamW and the optimal one is used. The results are shown in Appendix J (Figure 9). \\n\\nInterestingly, we found that while both L2 regularization and AdamW can significantly improve the performance over baseline, the performance of AdamW tends to be slightly lower than the performance of L2 regularization.\\n\\nQ4. Step size changes in hyperparameters sensitivity plots.\\n\\nA4. We have added experiments on step size (learning rate) variation in Figure 2, Section 5. We find that L2, L1 and weight clipping can consistently improve baseline and make the algorithm less sensitive to learning rate changes. We would like to mention that learning rate is an important hyperparameter we vary in our original experiments in section 5 (Table 4 and 5). (More hyperparameter sampling details in Table 11, Appendix E)\\n\\n\\nQ4. Minor Comments and typos\\nThanks for your detailed comments! We have revised the draft in the revision.\\n\\n(1). Looser definitions of \\u201churting\\u201d.\\nWe have added the resulting percentages with definition of hurting being $\\\\mu_r < \\\\mu_b$ in the same paragraph. The results are 11.1% for L2, 16.7% for L1, 22.2% for weight clipping, 55.6% for dropout, 72.2% for BN, and 16.7% for entropy. For reference, if we define hurting $\\\\mu_r - \\\\sigma_r < \\\\mu_b - \\\\sigma_b$, the results are 5.6% for L2, 16.7% for L1, 19.4% for weight clipping, 55.6% for dropout, 69.4% for BN, and 13.9% for entropy. We observe similar trends among different methods with different definitions, and we still observe that regularization rarely hurts, except for BN and dropout (for off-policy algorithms).\\n\\n(2). Rephrase of \\u201conly regularizing policy network\\u201d.\\nWe have replaced \\u201cis typically enough\\u201d to \\u201cis typically the best option\\u201d to reflect that it is better than regularizing both policy and value network.\\n\\n(3)-(4). We have corrected the typos accordingly.\\n\\n(5). Weight clipping.\\nWe have corrected the typo, and changed the sentence to \\u201cThis plays an important role in stabilizing the training of GANs\\u201d to indicate it is not the sole factor.\\n\\n(6)-(11). We have corrected the typos accordingly.\\n\\nThanks again for your review! We hope our response addresses your concerns. Any further questions or suggestions are welcome.\\n\\n[1] https://github.com/haarnoja/sac\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper investigates the use of conventional regularizers for neural networks in the reinforcement learning setting. Contrary to the standard practice of foregoing regularizers in deep RL, the paper finds that their addition can improve the performance of policy gradient algorithms on a standard suite of continuous control tasks. Various regularizers are tried, including l2/l1 regularization, entropy regularization and dropout in a combination with a few standard deep RL algorithms such as TRPO, PPO and SAC. Other experiments also verify the impact of these regularizers on the sensitivity of other hyperparameters and whether regularization should be applied to the value or policy networks.\\n\\nOverall, I find this paper to be a solid empirical study of regularization in deep reinforcement learning. The experiments are thorough, with various aspects being examined in more detail. I find several of the findings interesting, such as the importance of regularizing solely the policy network and that batch norm/dropout are effective for off-policy methods but not on-policy ones. There were certain points which warranted some further clarification. \\n\\nI would be willing to increase my score based on the authors' response to the following points:\\n1) In section 6, the last two sentences (\\\"For A2C, TRPO, and PPO ... so further regularization is unnecessary.\\\") are unclear to me.\\n\\t- \\\"rewards are already normalized using running mean filters.\\\" I thought that rewards are also normalized for SAC, so I'm not sure how this could explain the difference between the on-policy algorithms and the off-policy ones.\\n\\t- \\\"mitigates the overestimation bias...further regularization is unncessary.\\\" Could you clarify the connection between regularization and overestimation bias? Related to this point, in section 2 of the paper, it is written that \\\"L2 regularization is applied to the critic Q network because it tends to have overestimation bias (Fujimoto et al., 2018)\\\" but I was not able to find such an explanation in the cited paper though I may have missed it.\\n\\n2) In section 7, in the paragraph on BN/Dropout, could you clarify the point starting from \\\"1) For both BN and dropout,...\\\"? In particular, which discrepancy between the sampling policy and the optimization policy is being referred to here? \\n\\n3) Did you consider trying weight decay (\\\"Fixing Weight Decay Regularization in Adam\\\", Loschilov et al. 2018) as a regularizer? Given the success of L2 regularization, it could be possible that weight decay is even more effective.\\n\\n4) For the hyperparameter sensitivity plots, where one hyperparameter is varied at a time, why are the step sizes for the policy and value networks not included in these experiments? They are usually a critical hyperparameter.\", \"minor_comments_and_typos\": [\"On p.5, when defining \\\"hurting\\\", perhaps it could be better to choose a looser definition such as \\\"\\\\mu_r < \\\\mu_b\\\" or \\\"\\\\mu_r - \\\\sigma_r < \\\\mu_b - \\\\sigma_b\\\". This way, there could be a larger distinction between the most effective methods. 
Currently, both l2 and entropy regularization achieve 0.0% and the next best two regularizers are also under 10%.\", \"In abstract: \\\"regularizing the policy network is typically enough.\\\" Rephrase perhaps? The experiments seem to show that applying a regularizer to only the policy network is better than on both.\", \"In abstract: \\\"large improvement\\\" -> \\\"large improvements\\\"\", \"p.2, par. 2: \\\"those regularizations\\\" -> \\\"those regularizers\\\"\", \"p.3, Weight Clipping: \\\"This greatly stablizes\\\" -> \\\"This greatly stabilizes\\\". This sentence could be rephrased since \\\"This\\\" seems to refer to only weight clipping, but is not the only change in WGANs.\", \"p.3, Dropout: \\\"regularization technique\\\" -> \\\"regularization techniques\\\"\", \"p.4, par. 1: \\\"due to more stochasticity\\\" -> \\\"due to increased stochasticity\\\"\", \"p.4, 2nd to last par.: \\\"during policy update\\\" -> \\\"during policy updates\\\"\", \"p.5, 2nd to last par.: \\\"sometimes help\\\" -> \\\"sometimes helps\\\", \\\"easier ones baseline is\\\" -> \\\"easier ones the baseline is\\\"\", \"p.8, 2nd to last par.: \\\"it naturally accepts\\\" -> \\\"they naturally accept\\\", \\\"been shown effective\\\" -> \\\"been shown to be effective\\\"\", \"p.8, last sentence: \\\"policy network without the value network.\\\" -> \\\"policy network but not the value network.\\\"\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper provides an empirical study of regularization in policy optimization methods in multiple continuous control tasks. The paper focuses on the effect of conventional regularization on performance in training environments, not generalization ability to different (but similar) testing environments. Their findings suggest that L2 and entropy regularization can improve the performance, be robust to hyperparameters on the tasks studied in the paper.\\n\\nOverall, the paper is well written. However, I am leaning to reject this paper because (1) the experimental finding is not well justified (2) the experiments are missing some details and do not provide convincing evidence. \\n\\nFirst, the paper does not well justify why regularization methods improve performance in training environments. One potential reason is discussed in Section 7: regularization can improve generalization to unseen samples. However, the improvement can simply due to better hyperparemer optimization. When we introduce more hyperparemers and computation compared to baselines, it\\u2019s not surprising to see a better performance, especially in deep RL where using a different seed or using a different implementation can have significant difference in performance [1]. Moreover, it is unclear that inability to generalize to unseen samples is a problem in the continuous control tasks evaluated in the paper. I think the paper should demonstrate that this is indeed a problem. If it is not a problem, why would you expect regularization to help?\", \"there_are_some_missing_details_which_makes_it_difficult_to_draw_conclusion\": \"1. How was \\\\sigma_{env,r} computed? Is it the standard error of the mean return, or the standard deviation of the return? \\n2. What does the average rank mean (in Table 2 and 3)? the average ranking over 5 seeds and all environments? If so, does it make sense to compare these numbers? e.g. Algorithm A with rank 1, 1, 7, 7 and Algorithm B with rank 4, 4, 4, 4 have the same average rank, but totally different performance. \\n3. The experiment in Figure 3 seems very interesting, however, what\\u2019s the conclusion here? \\n4. Why do you use difference hyperparamer ranges (lambda for L2, L1 and entropy regularization) for different algorithms in appendix A?\", \"minor_comment_which_does_not_impact_the_score\": \"1. It would have been better if there\\u2019s a brief description of each algorithm (before section 4 or in appendix). \\n\\n[1] Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"An interesting paper on the role of regularization in policy optimization\\n\\n\\nIn this paper, the authors study a set of existing direct policy optimization methods in the field of reinforcement learning. The authors provide a detailed investigation of the effect of regulations on the performance and behavior of agents following these methods.\\n\\nThe authors present that regularization methods mostly help to improve the agents' performance in terms of final scores. Specifically, they show that direct regularizations on model parameters, such as the standard case of L2 or L1 regularization, generally improve the agent performance. They also show that these regularizations, in their study, is more proper than entropy regularization. The authors also show that, in the presence of such regularizations, the learning algorithms become less sensitive to the hyperparameters.\", \"few_comments\": \"1) The paper is well written and easy to follow. I appreciate it. I found the writing of the paper has a bit of repetition. The authors might find it slightly more proper to remove some of the repetitions (e.g. section 4.2)\\n\\n2) While I appreciate the clear writing and reasoning in this paper, I might suggest a slight change in the second paraphrase of the intro. I agree with the authors' reason on the first three lines, but I think it would be useful to also emphasize the role of the questions the researchers investigate to answer. I might also add one the main reason that the researchers in the field of DRL have spent less time on regulation or architecture search was their focus on more high-level algorithm design which is in the more immediate step of relevance and specialty to the field of reinforcement learning. \\n\\n3) I would suggest rephrasing the last two sentences of the second paragraph in related work: \\\"Also, these techniques consider ...\\\". Regularizing the output also regularizes the parameters, I think the authors' point was \\\"directly regularize\\\" the parameters. \\n\\n4) In the \\\"Entropy Regularization\\\" part of section 3, I guess the Hs has not been defined. \\n\\n5) Repeated \\\"the\\\" in the last paragraph of section 4.1 (despite it already incorporates the the maximization of)\\n\\n6) The authors used the term \\\"not converge\\\" multiple times. While it is hard from the plots to see whether the series converges or not, I have a strong feeling that by this term the authors mean the algorithm does not converge to a resealable solution rather than being divergent up to a bandwidth. Maybe clarifying would be helpful.\\n\\n7) In section 5, the authors study the sensitivity to the hyperparameters. In this section, I had a hard time to understand the role of term 3\\n\\\"BN and dropout hurts on-policy algorithms but can bring improvement only for the off-policy SAC algorithm.\\\" Does it mean that deploying BN, results in a more sensitive algorithm? 
or does it mean that the performance degrades (which is a different topic from the one section 5 is supposed to address)?\\n\\n8) In section 7, the authors put out a hypothesis \\\"\\nHowever, there is still generalization between samples: the agents are only trained on the limited\\\" but the provided empirical study might not be fully designed to test this hypothesis. The author might be interested in training the models with bigger sample sizes, more training iterations, different function classes, and more fitting in order to test this hypothesis.\\n\\n\\n9) Section 7 on \\\"Why do BN and dropout work only with off-policy algorithms?\\\" while I agree with the authors on their first reason which is quite commonly known, I might hesitate to make the second statement (2)\\n\\n\\n\\nGenerally, I found this an interesting paper and appreciate the authors for their careful empirical study. But I found the contribution of this work to be not significant enough. Most of the statements and claims in this paper are well known in the community, especially among deep learning practitioners. While I acknowledge the scientific value of this study, its concreteness, and appreciate the contribution of this paper, due to the low acceptance rate of this conference, I might be reluctant to accept this paper.\"}"
]
} |
HkgFDgSYPH | Adaptive Online Planning for Continual Lifelong Learning | [
"Kevin Lu",
"Igor Mordatch",
"Pieter Abbeel"
] | We study learning control in an online lifelong learning scenario, where mistakes can compound catastrophically into the future and the underlying dynamics of the environment may change. Traditional model-free policy learning methods have achieved successes in difficult tasks due to their broad flexibility, and capably condense broad experiences into compact networks, but struggle in this setting, as they can activate failure modes early in their lifetimes which are difficult to recover from and face performance degradation as dynamics change. On the other hand, model-based planning methods learn and adapt quickly, but require prohibitive levels of computational resources. Under constrained computation limits, the agent must allocate its resources wisely, which requires the agent to understand both its own performance and the current state of the environment: knowing that its mastery over control in the current dynamics is poor, the agent should dedicate more time to planning. We present a new algorithm, Adaptive Online Planning (AOP), that achieves strong performance in this setting by combining model-based planning with model-free learning. By measuring the performance of the planner and the uncertainty of the model-free components, AOP is able to call upon more extensive planning only when necessary, leading to reduced computation times. We show that AOP gracefully deals with novel situations, adapting behaviors and policies effectively in the face of unpredictable changes in the world -- challenges that a continual learning agent naturally faces over an extended lifetime -- even when traditional reinforcement learning methods fail. | [
"reinforcement learning",
"model predictive control",
"planning",
"model based",
"model free",
"uncertainty",
"computation"
] | Reject | https://openreview.net/pdf?id=HkgFDgSYPH | https://openreview.net/forum?id=HkgFDgSYPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"qdjEOPz41T",
"rkgwdyz3oS",
"BygGoCnLsB",
"H1lYy0nUiH",
"BkeaSa2Iir",
"rJl9W3nIsB",
"BJlIWgx6Fr",
"rkgTvCJ2tB",
"Hygm-g2oYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747340,
1573818222741,
1573469849632,
1573469665100,
1573469509111,
1573469186175,
1571778558186,
1571712612744,
1571696634655
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2369/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2369/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2369/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2369/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2369/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2369/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2369/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2369/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"A new setting for lifelong learning is analyzed and a new method, AOP, is introduced, which combines a model-free with a model-based approach to deal with this setting.\\n\\nWhile the idea is interesting, the main claims are insufficiently demonstrated. A theoretical justification is missing, and the experiments alone are not rigorous enough to draw strong conclusions. The three environments are rather simplistic and there are concerns about the statistical significance, for at least some of the experiments.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Final Summary of Changes\", \"comment\": [\"Our replies to specific concerns were left in the comments below. We show a summary of our total changes since the paper was first submitted (> indicates change after 11/11 (after last summary), - indicates change before 11/11 (in last summary)):\", \"> There was an issue with the Ant environment causing learning to be unstable for all algorithms, which is now fixed, and experiments were updated\", \"All experiments are now run with 5+ seeds (from old number of 3 seeds)\", \"Added new experiments in Section 4.4, highlighting policy degradation and backwards transfer effects in a simple episodic context\", \"Added a hyperparameter grid search in Appendix C.1.1, showing robustness of AOP to choice of thresholds\", \"Further discussion of results in Section 4.3, Challenges in Continual Lifelong Learning Setting to clarify results/takeaways from experiments\", \"Moved main experimental graphs to Appendix A, and instead summarized them compactly in Tables 1 & 2\", \"Minor typos fixed, wording changes in various sections\"], \"to_summarize_some_of_our_past_responses\": \"we introduced a novel reinforcement learning setting closer to real world usage, showed that existing approaches can fail even with access to a ground truth dynamics model, and proposed a new algorithm for success in this setting. We only utilize the dynamics model locally, which represents strong learning of a model around recent data collected by a policy; even when we assume this model is perfect, TD3 and PPO still fail. Our algorithm uses around one-tenth of the planning of MPC, and a third of POLO -- both strong ground truth baselines -- and achieves comparable performance in most settings. The environments, though not complex in the standard offline RL setting, become extremely difficult in our continual lifelong learning setting.\\n\\nWe thank the reviewers and area chair for the time spent reviewing our work, and would appreciate if the reviews could be updated if our responses have been satisfactory.\"}",
"{\"title\": \"Summary of Changes\", \"comment\": [\"We would like to thank all of the reviewers for their responses; we have left specific comments in individual responses. We summarize here changes made in our updated version of the paper (11/11/19):\", \"All experiments are now run with 5+ seeds (from old number of 3 seeds)\", \"Changed main experimental results in Section 4 to be presented in table form, and moved the per-timestep graphs into Appendix A\", \"Added new experiments in Section 4.4, highlighting policy degradation and backwards transfer effects in a simpler, standard episodic context\", \"Added a new hyperparameter grid search in Appendix C.1.1, showing the robustness of AOP to choices of thresholds\", \"Further discussion of results in Section 4.3, Challenges in Continual Lifelong Learning Setting: notably the difficulty of Ant and the additional learning of AOP in sparse maze\", \"Restructured appendix, added some new details\", \"Minor typos fixed, small wording changes in various sections\", \"We hope that these address most of the concerns. From a big picture, our work is broadly a study into a new continual lifelong learning setting, and additionally the proposal of an algorithm that performs well in this setting -- we would kindly like to ask that our paper be evaluated in this context. Please let us know if there are any remaining concerns or topics that you would like us to address.\", \"(We are planning to release an additional update before the end of the review period).\"]}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for taking the time to read our paper and for providing feedback! We have added some details to the paper and hope to address some of your concerns:\\n\\n1) Significance of results/seeds:\\nWe have updated all experiments to now include five seeds, as done in Henderson et al. 2018. In general, it is difficult for us to include more seeds due to computational constraints, but we hope this is satisfactory. In general, most of the AOP experiments are low-variance, as the standard deviations presented suggest.\\n\\n2) Dynamics model:\\nIt is true that an accurate model is not a given in real-world robotics settings. However, we think the problem is still interesting. First, we would like to clarify that we compare AOP to other algorithms that also have access to an updated and correct dynamics model -- none of the algorithms discussed lack this access. Second, we believe that there are still many unsolved challenges and interesting ideas to consider, even with a perfect model. Control is still difficult, and learning control in a way that does not impact the agent\\u2019s future ability to learn is highly nontrivial. We observe that all algorithms struggle with this, even MPC, and notably PPO. If we cannot first do well in this setting with access to a model, then it would be extremely difficult to do so without one. Additionally, the idea of an agent that is not only knowledgeable about how to act, but also of when to plan, is not something that has been previously explored. Finally, some of the insights from our setting extend to other settings that are not obviously directly related: for example, in multi-agent settings, policies must be learned in a continually nonstationary environment, which we observe can be difficult with traditional methods, but can possibly be improved by strong exploration techniques -- multi-agent RL in some settings is deeply concerned with control, and not as much with learning the dynamics (in some settings they may even give agents access to other agents). We have further clarified some of the insights towards policy learning in a new Section 4.4.\\n\\n3) Backwards transfer:\", \"we_have_now_added_an_experiment_demonstrating_backwards_transfer_in_the_changing_worlds_hopper_environment\": \"we take a trained policy from the end of AOP, and then train it in standard TD3 fashion in the initial setting (which it has not seen since the initial time). We compare this to a new policy (what is was when it first saw the world), and show that it adapts much more quickly, demonstrating backward transfer. Furthermore, we add some more analysis on the policies in general in Section 4.4.\\n\\n4) Online learning:\", \"we_define_online_learning_as\": \"for a particular timestep, the agent first trains on its own, and then is forced to make an action, before repeating for the next timestep. We have now further clarified this in our background section.\\n\\n5) Threshold parameters:\\nIt is reasonable that any choice for \\\\sigma_{thres} and \\\\epsilon_{thres} is somewhat arbitrary; for our experiments, we picked a reasonable value around levels that the ensemble typically takes on throughout training. We have added an experiment to Appendix C.1.1 consisting of a grid search over a reasonable range of choices for these hyperparameters, and show that AOP is overall not particularly sensitive to them, so it is not especially important what we pick. 
In general, these parameters correspond to the algorithm\\u2019s inclination to cut planning.\\n\\n6) Clarification on comparable performance/Ant environment:\\nThis statement refers to the fact that, across most environments, AOP generally performs well most of the time. We have amended this statement to clarify this. We would also like to discuss the Ant environment results in particular (old Figure 4 d & e): the Ant environment is particularly difficult, as most of the time the agent never gets up/takes a long time to get up after falling over, which showcases the sharp challenge of exploration in continual lifelong learning. Safe exploration is a well-studied topic, which we do not directly tackle, but it is certainly an interesting problem to consider for future work in this setting. We have added more discussion on this in the \\u201cVast Worlds\\u201d commentary in Section 4.3. *We do believe it is possible to improve this performance, and will likely post an update on it later this week.\\n\\n7) Minor concerns:\\nThese have been corrected; in particular, we changed \\u201cdeep exploration\\u201d to \\u201ctemporally extended exploration\\u201d.\\n\\nAgain, thank you for your feedback! Please let us know if you have other concerns, or topics you would like us to address/clarify further.\"}",
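To illustrate how thresholds of this kind typically enter the control loop, a schematic sketch of an adaptive planning trigger follows. The rule, defaults, and names (`sigma_thres`, `eps_thres`, `bellman_error`) reflect our reading of the paper, not the authors' actual implementation.

```python
import numpy as np

def select_action(state, value_ensemble, policy, planner, bellman_error,
                  sigma_thres=1.0, eps_thres=0.1):
    """Invoke expensive model-based planning only when the model-free
    components look unreliable at the current state. Thresholds and the
    trigger rule here are assumptions for illustration."""
    values = np.array([v(state) for v in value_ensemble])
    uncertainty = values.std()          # disagreement of the value ensemble
    if uncertainty > sigma_thres or bellman_error(state) > eps_thres:
        return planner.plan(state)      # extensive MPC-style planning
    return policy(state)                # cheap model-free action
```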
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for taking the time to read our paper and for providing feedback! We have added some details to the paper and hope to address some of your concerns:\\n\\n1) Novelty of work:\\nOur setting of continual lifelong learning has not been studied in the past, and is a new setup for which we analyze a new algorithm and adaptations of existing algorithms on. Additionally, past work into reducing the computation of a planner has been limited. We kindly ask that you consider our work in the broader context of our setting.\\n\\n2) Catastrophic forgetting:\", \"we_have_now_added_an_experiment_demonstrating_backwards_transfer_in_the_changing_worlds_hopper_environment\": \"we take a trained policy from the end of AOP, and then train it in standard TD3 fashion in the initial setting (which it has not seen since the initial time). We compare this to a new policy (what is was when it first saw the world), and show that it adapts much more quickly, demonstrating backward transfer. Furthermore, we add some more analysis on the policies in general in Section 4.4.\\n\\n3) Increase in performance for any RL algorithm:\\nThere are many algorithms that can be fit into the AOP framework; however, we think that it is important to note that the goal of AOP is not directly to increase performance, but rather primarily to reduce computation, and in some cases improve exploration.\\n\\nAgain, thank you for your feedback! Please let us know if you have other concerns, or topics you would like us to address/clarify further.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for taking the time to read our paper and for providing feedback! We have added some details to the paper and hope to address some of your concerns:\\n\\n1) Theoretical justification of algorithm and motivation:\\nWhile we do not provide theoretical justification for AOP, our main contribution is the introduction of a new problem setting, and the proposal of an initial idea to tackle it. This problem has close ties to nonstationary environments, which are broadly relevant in many settings, ex. multi-agent settings, policy learning in learned dynamics, real-world robotics where resets are costly, etc. Furthermore, we identify several challenges in such a setting, and show where previous methods fail, which can lead to insights on how to improve methods more generally. We hope you will consider our contribution as whole.\\n\\n2) Takeaways from Figures 3/4 [now Figures A.1 and A.2] (computation/rewards):\\nWe agree that the graphs are difficult to see information from; we have now summarized the information compactly, moved the detailed graphs into Appendix A, and added some clarifications on takeaways from the experiments. For the particularly interesting takeaway of policy degradation, we have kept the relevant graphs and added further discussion in Section 4.4. We hope this is now more clearly showing the reduction in computation and the strength of performance of the model-based/model-free algorithms.\\n\\n3) Comparisons in Figure 6 [now Figure 5] (behavior of AOP):\\nWe dedicate Section 4.5 to discussing the specific components of the AOP algorithm, namely individual statistics (Bellman error and standard deviation of the value ensemble, planning horizon length, planning iterations, policy usage) that help to give a clearer picture of what the algorithm is doing at each stage of training. Therefore, we do not plot other algorithms on the same graph. Notably, uncertainty and planning decrease as the agent progresses farther in each world.\\n\\n4) Complexity of environments:\\nIt is true that the environments themselves are not particularly complex control environments, and have been solved adequately in the past in the offline setting. However, we show that these environments become problematic for state-of-the-art algorithms (TD3, PPO, POLO) when tackled in continual lifelong learning, due to the lack of ability to reset, nonstationary dynamics, etc. Therefore, we believe that they are complex enough for our investigations, and are capable of crisply showing where existing work struggles.\\n\\nAgain, thank you for your feedback! Please let us know if you have other concerns, or topics you would like us to address/clarify further.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The work is heuristically motivated by the goal of reducing the high computation of model-based learning while achieving high performance. For achieving that, the authors propose an algorithm, Adaptive Online Planning (AOP) combining a model-free policy learning method and a model-based planner. In terms of the empirical study, they test the algorithm in 3 environments, Hopper, Ant, and Maze. They compare their algorithms with several model-based methods.\\n\\nFrom my perspective, the paper has several weaknesses for which I give a weak rejection. \\n\\nThe motivation is interesting to me, but the authors do not provide enough justification. The authors claim that the proposed method is able to reduce high computation. However, seemingly they only intuitively illustrate how it saves energy without strong proofs, which weakens the claim. What\\u2019s more, the experiment is not clear to me. What are the take-aways of Figure 3 and Figure 4 while I cannot see an improvement from them? There is no comparison in Figure 6; not clear how the plots of other models look like. The last comment is about the 3 environments that are not complex enough.\", \"minor_comments\": [\"Some typos and grammar mistakes, e.g., \\u2018planing\\u2019 and \\u2018(d)by\\u2019 in the third last line (p.4); the second sentence in Sec. conclusion (p.8).\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents an adaptive online planning(AOP) strategy in a model-free policy setting, a reinforcement learning method aimed to solve catastrophic forgetting problem by combining model-based planning and model-free policy learning. AOP is able extensive plan only when necessary, leading to over all average reduced computation times. AOP can be easily integrated into other reinforcement learning frameworks such as to any offline-planning reinforcement learning algorithms. The experiments demonstrate that AOP is computationally efficient compared to traditional baselines MPC-8 and MPC-3 while maintaining the performances.\\n\\nThe algorithm is developed based on heuristic solutions to address some of the fundamental problems in reinforcement learning, and although the proposed strategies definitely seem to provide some benefits in terms of computation complexity, the solution is not very elagant or noval. It is hard to justify the computational efficiency and performance in dynamically changing environments just based on the presented results. While the improvement in computation is there, what I find lacking is the experiments demonstrating clear evidence of overcoming catastrophic forgetting problem. The paper gives off a feeling that AOP as an add-on that can increase the performance of any RL algorithm.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": [\"The authors study continual, lifelong learning. They suggest a new algorithm, named Adaptive Online Planning (AOP) that combines model-based planning with model-free learning. AOP decides how much additional planning is needed based on the uncertainty of the model-free learner and the performance of the planner. Experiments are carried out on three tasks, i.e. Hopper, Ant and a Maze.\", \"This paper should be rejected. The main reason is that the experiments were only performed for 3 different seeds and are therefore not statistically relevant (see Henderson et al. \\\"Deep reinforcement learning that matters.\\\" Thirty-Second AAAI Conference on Artificial Intelligence. 2018.).\", \"Besides the issue of significance of the results section, there are other concerns. Some of them are:\", \"Page 2: 'The dynamics model is updated immediately at world changes.' - Is this a reasonable assumption? Where does an accurate model come from? Given a perfect model, it is not surprising that a learner that is combined with such a model achieves a superior performance.\", \"Although the authors state that the 'ability to perform well in old tasks (backward transfer)' is important, they don't explicitly show their algorithm to achieve this goal. Backwards transfer might be included into the experimental section, but I could not find a statement that addresses this explicitly.\", \"I would like the authors to crisply define their use of the word 'online learning'. Does online learning simply mean to process each sample as it is available or does the term include real-time?\", \"How is \\\\sigma_{thres} chosen? What is the influence of this parameter?\", \"The statement that 'AOP uses only 0 - 25% of the number of timesteps as MPC-8, but achieves generally comparable or stronger performance.' is wrong (see Fig 4, d and e). This statement is especially difficult, as results are only averaged over 3 runs.\"], \"there_are_furthermore_a_few_minor_concerns\": [\"the interval for \\\\gamma should exclude 1 in this setting, as the return would otherwise be unbounded.\", \"In the background section, the authors confuse the definition of the return with reward.\", \"the term 'deep exploration' is used but not defined\", \"There are two figures between the subsection header for 4.4 and the text - this is highly confusing\"]}"
]
} |
B1lKDlHtwS | Measuring causal influence with back-to-back regression: the linear case | [
"Jean-Remi King",
"Francois Charton",
"Maxime Oquab",
"David Lopez-Paz"
] | Identifying causes from observations can be particularly challenging when i) potential factors are difficult to manipulate individually and ii) observations are complex and multi-dimensional. To address this issue, we introduce “Back-to-Back” regression (B2B), a method designed to efficiently measure, from a set of co-varying factors, the causal influences that most plausibly account for multidimensional observations. After proving the consistency of B2B and its links to other linear approaches, we show that our method outperforms least-squares regression and cross-decomposition techniques (e.g. canonical correlation analysis and partial least squares) on causal identification. Finally, we apply B2B to neuroimaging recordings of 102 subjects reading word sequences. The results show that the early and late brain representations, caused by low- and high-level word features respectively, are more reliably detected with B2B than with other standard techniques. | [
"regression",
"causal influence",
"linear case",
"observations",
"causes",
"potential factors",
"difficult",
"complex",
"issue"
] | Reject | https://openreview.net/pdf?id=B1lKDlHtwS | https://openreview.net/forum?id=B1lKDlHtwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ZiH5Xy3UTq",
"H1eor6rosr",
"S1x3LhHijH",
"HkxGB2rjoB",
"HygiRjHsoH",
"B1xmsjBsoS",
"rJg4DjHijr",
"Bkgo2vCzcr",
"rJl4BEwAYH",
"HyxooSwIYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747312,
1573768514978,
1573768275823,
1573768249546,
1573768147090,
1573768090753,
1573768028443,
1572165555047,
1571873851645,
1571349922716
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2368/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2368/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2368/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2368/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2368/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2368/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2368/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2368/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2368/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors introduce a method for disentangling effects of correlated predictors in the context of high dimensional outcomes. While the paper contains interesting ideas and has been substantially improved from its original form, the paper still does not meet the quality bar of ICLR due to its limitations in terms of limited applicability and experiments. The paper will benefit from a revision and resubmission to another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response summary\", \"comment\": \"We thank our three reviewers for their careful reviews and helpful comments. The main changes of our manuscript are:\\nA clarification and substantial expansion of the proof of our theorem\\nA successful comparison of B2B against l2-regularized CCA for both synthetic and MEG experiments, and a clarification of its link with B2B\\nA simplification of the feature importance algorithm\\nAdditional tests of B2B and other baseline models when the MEG recordings are modeled with a larger set of potential factors derived from word-embeddings.\\n\\nOverall, these additional analyses and clarification confirm that B2B is an efficient method to disentangle the causal influence of linearly correlated predictors ($X$) onto noisy multivariate observations ($Y$).\"}",
"{\"title\": \"Reply to R1: technical comments\", \"comment\": \"# (1) Why is $Diag(\\\\hat{H})$ guaranteed to have binary elements?\\n\\nIn the linear case, $diag(H)$ is binary only in the absence of noise $N$ (more precisely, of noise on the active causal features). \\n\\nWithout loss of generality, we suppose that the non-zero features of $E$ are the $k$ first.\\n\\nSince $EH = H$ (Theorem), the last $n-k$ rows of $H$ are zero, and so are the last $n-k$ elements of $diag(H)$.\\n\\nWe denote $X_1$ and $N_1$ as the first k features of $X$ and $N$. Therefore, the top left submatrix of $H$ is $Cov(X_1,X_1) (Cov(X_1, X_1)+Cov(N_1,N_1)$ (Eq 8). The diagonal elements of this submatrix will be in $[0,1]$. In the absence of noise on the k first features, this submatrix is identity. \\n\\nTherefore, $diag(H)$ is binary.\\n\\nIn the presence of noise, the n-k last terms in $diag(H)$ will remain zero, but the average of the k first will be equal to $Var(X_1) / (Var(X_1)+Var(N_1))$\\n\\nWe now added clarifications at the end of Section 2.2 and in the appendix.\\n\\n\\n# (2) The Theorem 1 needs more explanation about why it proves consistency, which is currently isolated from other parts of the paper. Why E\\\\hat{H} = \\\\hat{H} guarantees the consistency of \\\\hat{E} = Diag(\\\\hat{H})? For example, \\\\hat{H} can have more all-zero rows than E which still satisfies E\\\\hat{H} = \\\\hat{H} but \\\\hat{E} is not equals to E. In an extreme case, \\\\hat{H} = 0 will have E\\\\hat{H} = \\\\hat{H} but is clearly not consistent. \\n\\nThe possibility that $\\\\hat{H}$ has more all-zero rows than E violates one of our assumptions, namely that \\u201c$X$ and $F$ are full rank on $Img(E)$\\u201d (the subspace spanned by the columns of E). \\n\\nIndeed, full rank of $X$ over $Img(E)$ implies that $Cov(X_1,X_1)$ has full rank. Similarly $Cov(X_1,X_1) (Cov(X_1,X_1) Cov(N_1,N_1))^{-1}$ is full rank too. Consequently, from Eq (9), none of the first k diagonal elements can be equal to zero.\\n\\nIn layman terms, our hypothesis implies that $E$ can only be recovered if all of its active elements lead to a change in some dimension(s) of $Y$. Otherwise, these elements will be estimated to be non-causal, as expected.\\n\\n# (3) How does the Eq (4),(5) give an estimation \\\\hat{E}?\\n\\nThe estimation of E derives from Eq (2) and (3). In contrast, Eq (4) and (5) implement the optimization technique described by Rifkin and Lippert 2007 in order to efficiently estimate optimal L2-regularization parameters through leave-one-sample-out cross-validation over the training set. We clarify this passage in the updated manuscript, in Section 2.1.\\n\\n# (4) By Eq. (10,11), H and G seem to be determined give X,Y. Then what are the maximization of Eq. (12,13) over? \\n\\nWe apologize for our confused notation. Equation (10) describes the calculation of a forward model.\\nEquation (11) describes the calculation of a backward model.\\nEquation (12) describes CCA.\\nEquation (13) describes PLS\\nIn other words, G and H represent different matrices in each equation. We used similar notations to highlight the functional similarities of these matrices in the different models; but in order to avoid any confusion, we now add indices (e.g. $H_{cca}$, $G_{pls}$) to emphasize their differences (see equations in Section 3.2).\"}",
"{\"title\": \"Reply to R1: general comments\", \"comment\": \"# (1) How does the problem in Eq.(1) differ from variable selection in linear regression where a plenty algorithms exist such as LASSO, spike-slab prior, SCAD, etc. ?\\n\\nVariable selection methods are essentially unidimensional in Y. For example, linear regression selects a subset of causes of X for each dimension of Y independently. This is what the Forward models does : the model would be equivalent if each dimension of Y were considered independently.\\n\\nIn contrast, B2B and other cross-decomposition models such as CCA and PLS work with multidimensional X and Y. \\n\\nB2B, unlike CCA and PLS, efficiently provides interpretable coefficients for causal discovery.\\n\\n\\n# (2) The experiments are a bit weak with a simple synthetic experiment and a real dataset with just four features. Can experiment directly demonstrate the correctness of the theorem?\\n\\nWe generally understand your concern, and in the following we hope to convince you that our experimental setup is sound. On a high level, our reasoning about this method and is the following:\\n\\nWe state and prove a consistency theorem in the linear case (where using theory is possible with our current mathematical knowledge). We believe that the proof is correct (we hope you can help us find mistakes if any), and we believe that the theorem is correct as a consequence. This theoretical study allows us to understand the behavior of this method better.\\n\\nWe then compare our method against baselines on controlled synthetic experiments varying parameters of the problem over a large grid of 25,000 combinations, in order to check that our approach holds, where it succeeds and where it can fail. We believe that our study on synthetic data is thorough. The simplicity of Figure 2, showing merely an aggregate of our results, contrasts with the actual complexity of the study.\\n\\nWe finally apply our method on a real case with non-linear data, using actual brain recordings of human subjects (brain response is non-linear). There, we focus only on a few features, not because our method does not allow using more, but because those features were thoroughly studied in neuroscience: our goal is to compare our conclusions with verified reference points in existing literature, and obtain results assessing the validity of our method and baselines. \\n\\nMoreover (and we may be to blame for not emphasizing it enough in the manuscript), our experiment is one of the largest ever run : most neuroimaging studies investigate approximately 20 subjects (we have ~100), and use a single analytical method (we study several), generally applied to a single factor of interest.\"}",
"{\"title\": \"Reply to R3: part 2\", \"comment\": \"# (4) The authors do no comparisons against any regularized form of CCA\\n\\nWe thank R3 for this comment. We now revised the manuscript (see Section 3, both synthetic and MEG experiments) to change CCA into an l2-regularized CCA, as implemented by the Pyrcca package provided by Bilenko and Gallant (2016). L2-regularization CCA is now optimized similarly to B2B, i.e. over a nested-grid search optimization of the training set over 20 values logarithmically distributed between 1e-4 and 1e4. Overall, our updated results do not change the conclusion of our paper. However, we do observe one experimental case where regularized CCA outperforms B2B: the feature importance of the word function effect in MEG is higher with regularized CCA than with B2B. This unexpected superiority of CCA over B2B disappears when more than 4 features are tested.\\n\\n# (5) The real data case might be better conditioned than the simulated case\\n\\nWe agree with R1 that the optimal use-case for B2B appears to be when a large number of covarying factors are investigated, as demonstrated in the synthetic experiments.\\n\\nIn this first method paper, we aimed to verify that B2B yields to plausible results. We thus intentionally investigated well-described phenomena (the neural correlates of word length and word frequency). B2B successfully matched our expectations. First, word length and word frequency revealed early and late brain effects respectively. Second, B2B did not reveal any spurious effect before stimulus onset. Third B2B appeared reliably better than other baseline methods.\\n\\nTo address R1 comments, we now added an additional MEG analysis (see Figure 7 in the appendix) in which introduce additional features: the word-embedding vectors of each word provided by the Spacy package. Our results show that B2B remains robust to the introduction of additional factors.\\n\\n\\n# (6) The word \\u201ccausal\\u201d can mean many things, and here it refers specifically to disentangling correlated predictors, rather than confounding in observations or direction of effect. It would improve the paper to add some discussion of this point. \\n\\nWe thank R3 for this remark. We now clarify the definition in the manuscript:\\n\\n\\u201c\\nThe present paper focuses on the restricted issue of disentangling the causal influence of linearly correlated predictors ($X$) onto multivariate observations ($Y$). The present approach thus differs from other causal discovery algorithms based on temporal-delays and/or nonlinear interaction in systems where the directionality of causation (from X to Y or vice versa) is unknown (e.g. \\\\citep{peters2017elements, granger1969investigating, janzing2013quantifying, scholkopf2016modeling}.\\n\\u201c\\n\\n\\n# (7) The comparisons in the experiments are done between E estimated from B2B and E=sum_j H_j^2 for other methods that do not directly estimate E. However, a more natural comparison might be against EF as this also includes estimates of the strength of influence of each observation, which is implicit in the sum above.\\n\\nWe agree that EF is likely to be closer to Sum_j H_j^2. However, the precise purpose of B2B, unlike other methods, is to retrieve E when F is unknown. The introduction of Sum_j H_j^2 is solely designed to provide a fair chance to previous baseline.\"}",
"{\"title\": \"Reply to R3: part 1\", \"comment\": \"# (1) This paper appears to be technically sound, but it should be rejected based on 1) the relatively limited applicability of the model and 2) a lack of thorough experimentation indicating that this is an appropriate method under more general circumstances.\\n\\nWe agree with R3 that the present paper focuses on a specific issue, originally motivated by a precise empirical problem: i.e. finding, among multiple competing factors, those that impact a noisy system measured with multiple channels.\\n\\nHowever, we partially disagree on \\u201cthe limited applicability\\u201d of our method. Disentangling causes under noisy multidimensional observations is pervasive in observational studies (e.g. Detecting gravitational waves necessitate analytical solutions to disentangle confounding heterogeneities) as well as experimental studies, where cause disentanglement is often restricted to factorial designs (e.g. collecting brain responses to words carefully selected such that they are matched in length across different frequencies, an approach that does not scale to increasingly numerous parameters).\\n\\nHere, we provide, with theoretical guarantees, a solution to this general issue in the linear case.\\n\\nWe demonstrate the usability of our method both in a variety of synthetic experiments and confirm that B2B can (1) systematically outperform baseline methods and (2) be robust to numerous factors. Furthermore, we demonstrate the usability of our method by analyzing an exceptionally large dataset (100 MEG subjects, where most neuroimaging studies are recording 15-20 subjects) and we now provide additional analyses to assess the robustness of B2B when additional word-embedding features are included in the analysis (Appendix: Robustness to increasing number of factors). \\n\\nWhile we agree with R3 that that it would be best to show that B2B effectively address disentanglement in an additional experimental setups, we believe that providing our solution to the community is the first step to achieve this objective.\\n\\n\\n# (2) It is odd that the model assumes the outcomes are measured without error. A more appropriate model may be: Y=(XE+N)F+M\\n\\nThis is a good point which we insufficiently detailed in our original submission. We agree with R3 that measurement noise M is likely: i.e.\\n\\n$Y = (XE + N) F + M$\\n\\nwhere F represents the unknown response function of the measurement apparatus, and N corresponds to noise before measurement (e.g. background brain activity, eye movements) and M corresponds to measurement noise (e.g. bad sensor, electronic irregularities etc).\\n\\nHowever, we can rewrite $M$ as $M\\u2019F$ over $Img(F)$, the subspace spanned by the columns of $F$. Note that by hypothesis, $Img(E)$ is included in $Img(F)$. Furthermore, by definition, $F$ is full rank over $Img(F)$. Therefore, $M\\u2019 = M F^{-1}$, which yields:\\n\\n$Y = (XE + N) F + M = (XE + N) F + M\\u2019 F = (XE + N + M\\u2019) F = (XE + N\\u2019) F$\\n\\nConsequently, measurement noise $M$ is absorbed by $N F$ when these two matrices are unknown.\\n\\nWe added this clarification to the manuscript (see Appendix: Modeling measurement noise). \\n\\n# (3) The model starts to look a lot like canonical correlation analysis. \\n\\nWe agree with R1, that B2B and CCA share a common ground. 
Specifically, in the basic non-regularized case, both B2B and CCA compute :\\n\\n$O = (X\\u2019 X)^{-1} XY (Y\\u2019Y)^{-1} YX$\\n\\nCCA additionally perform an eigen decomposition of O in order to identify the orthogonal components where X and Y are maximally correlated (a.k.a canonical components).\\n\\nB2B additionally extracts the diagonal of O, to recover E (the non-invertible component of the X to Y mapping).\\n\\nTherefore, B2B and CCA have different objectives. \\n\\nCCA could be adapted to perform causal estimation. However, this must be performed in canonical space, and thus across all canonical components. In contrast, B2B performs causal estimation in feature space. As a result (i) B2B is more interpretable and (ii) does not dilute causal estimation over several dimensions.\\n\\nFinally, B2B allows several technical and computational improvements such as (i) the necessity to use bagging between the two regressions, (ii) the possibility to vary regularization parameters for each of the two regressions, and (iii) the use of using computationally efficient grid search (Eq. 4-5).\\n\\nWe now updated the discussion to clarify the similarities and differences between B2B and CCA.\"}",
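As a side note, the shared quantity described in this reply is easy to exhibit numerically. The toy numpy sketch below (ours, with made-up data) forms the matrix O and contrasts the two readings of it: B2B extracts its diagonal in feature space, while CCA eigendecomposes it, the eigenvalues corresponding to squared canonical correlations. The unregularized closed form is used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
# only the first two features of X drive Y
Y = X[:, :2] @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(1000, 8))

O = (np.linalg.inv(X.T @ X) @ X.T @ Y) @ (np.linalg.inv(Y.T @ Y) @ Y.T @ X)

e_hat = np.diag(O)               # B2B: per-feature causal estimate, in feature space
canon = np.linalg.eigvals(O)     # CCA: eigenvalues = squared canonical correlations
print(np.round(e_hat, 2))                       # first two entries near 1, rest near 0
print(np.round(np.sort(canon.real)[::-1], 2))   # two strong canonical components
```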
"{\"title\": \"Reply to R2\", \"comment\": \"# (1) clarify $\\\\hat{E}=Diag(H)$\\n\\nWe agree with #R2 that the original manuscript insufficiently detailed the relationship between diag(H) and the causal estimates.\\n\\nWe have now expanded the proof and added a section in the Appendix to clarify how E can be recovered from $diag(\\\\hat H)$. \\n\\nIn addition, we describe in the appendix three methods to binarize diag(H) into causal and non causal features, when (1) signal to noise ratio is known (2) independent experiments are repeated or (3) we know that X contains both causal and non-causal features.\\n\\n\\n# (2) Can E use a sparse prior?\\n\\nE should tend towards a sparse diagonal vector when a small proportion of factors causally influence Y.\\n\\nIn our implementation, B2B uses L2-regularization in both the forward regression (H) and the backward (G) regressions. However, any regularization can be used. \\n\\nNote that a distinct regularization can be implemented and optimized for each regression separately (e.g. L2 for G and L1 for H). In this regard, we did pilot with L1-regularization on the H regression, to induce sparsity in E. However, we did not observe any clear improvement on our synthetic or MEG experiments, and this approach was significantly less efficient computationally. Indeed, the efficient leave-one-out optimization of l2-regularization parameters detailed by Rifkin and Lippert 2007 only applies to l2 regularization.\\n\\nFinally, sparsity can be a posteriori enforced onto $\\\\hat E$ via a thresholding method. As mentioned above, we describe three thresholding methods in the appendix, together with their respective assumptions.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes \\\"Back-to-Back\\\" regression for estimating the causal influence between X and Y in the linear model Y=(XE+N)F, where the E denotes a diagonal matrix of causal influences. Furthermore, this work theoretically shows the consistency of B2B and the experiments also verify the effectiveness of this approach.\", \"the_writing_is_well_and_clear_and_there_are_some_minors_issues\": \"- Further analysis and explanation for using \\\\hat{E}=Diag(H)to estimate the causal influence might be needed.\\n- The model defined in Fig. 1 seems the influence E should have a sparse diagonal vector. It is possible to introduce an L1 regulation in E?\\n\\n##############\\nAfter reading the author's feedback and the comments from other reviewers, I keep the current rating but tend to a borderline score and it is ok if it must be rejected because of the concerns of limited applicability and the experimental.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors introduce a method (B2B) for disentangling effects of correlated predictors in the context of high dimensional outcomes. Their model assumes the outcomes are constructed by a linear transformation of a set of true causes plus measurement noise. Specifically, they fit the model Y=(XE+N)F, where X are the predictors, E is a binary matrix indicating the true causes, N is a noise term, and F is a mixing term. They provide a closed form solution the model fit based on a pair of l2-regularized regressions. They simulate from the given model and provide comparisons against least-squares regression, canonical correlation analysis, and partial least squares. They also apply the method to brain imaging data containing 102 individuals reading 120 sentences plus scrambled sentences, with the goal of inferring which features of the words have an effect on imaging results.\\n\\nThis paper appears to be technically sound, but it should be rejected based on 1) the relatively limited applicability of the model and 2) a lack of thorough experimentation indicating that this is an appropriate method under more general circumstances. It is odd that the model assumes the outcomes are measured without error. Instead, it is assumed that the causes are measured with error, and mixed via F. A more appropriate model may be: Y=(XE+N)F+M, where an additional noise term M allows for Y to be measured imprecisely. Viewed in this light, the model starts to look a lot like canonical correlation analysis. Consider a model Y=ZF+M, X=ZG+N, if dim(Z) = dim(X) and G is invertible, this can be re-written as Y=(X inv(G)-N)A+M, and we arrive at a similar model with specific restrictions about the structure of inv(G) (E will in general not be invertible, so they are not the same). It is particularly odd, then, that the authors do no comparisons against any regularized form of CCA, which would seem to be the most natural method to use in this circumstance. Moreover, in the simulations where they show B2B outperforms CCA, they use 1000 training samples with 10-100 possible causes. In their experiments on real data, where it seems the CCA results are much closer to the results that B2B gives, they have 2700 samples and 4 possible causes. This seems to imply the real data case might be better conditioned than the simulated case, so that regularization would have less of an impact.\\n\\nIn conclusion, the authors present a sound method for disentangling correlated possible causes when the outcome is high-dimensional. However, the authors do not provide enough evidence that this method is generally useful and better than established methods to merit acceptance to ICLR. A comparison to regularized CCA, application to more datasets and simulations under violations of the model would greatly improve the paper. I also have two minor points. 1) the word \\u201ccausal\\u201d can mean many things, and here it refers specifically to disentangling correlated predictors, rather than confounding in observations or direction of effect. It would improve the paper to add some discussion of this point. 2) The comparisons in the experiments are done between E estimated from B2B and E=sum_j H_j^2 for other methods that do not directly estimate E. 
However, a more natural comparison might be against EF as this also includes estimates of the strength of influence of each observation, which is implicit in the sum above.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper provides an iterative linear method to identify causal influences of putative cause matrix to signal matrix. The idea is a natural extension of previous forward and backward such as CCA and PLS. The paper has provided consistency guarantee and several synthetic and real data experiments as support.\", \"technical_questions\": \"1) The estimation of binary causal influence matrix E is set as \\\\hat{E} = Diag(\\\\hat{H}). Why is Diag(\\\\hat{H}) guaranteed to have binary elements?\\n\\n2) The Theorem 1 needs more explanation about why it proves consistency, which is currently isolated from other parts of the paper. Why E\\\\hat{H} = \\\\hat{H} guarantees the consistency of \\\\hat{E} = Diag(\\\\hat{H})? For example, \\\\hat{H} can have more all-zero rows than E which still satisfies E\\\\hat{H} = \\\\hat{H} but \\\\hat{E} is not equals to E. In an extreme case, \\\\hat{H} = 0 will have E\\\\hat{H} = \\\\hat{H} but is clearly not consistent. \\n\\n3) How does the Eq (4),(5) give an estimation \\\\hat{E}?\\n\\n4) By Eq. (10,11), H and G seem to be determined give X,Y. Then what are the maximization of Eq. (12,13) over?\", \"general_comments\": \"1) How does the problem in Eq.(1) differ from variable selection in linear regression where a plenty algorithms exist such as LASSO, spike-slab prior, SCAD, etc. ?\\n\\n2) The experiments are a bit weak with a simple synthetic experiment and a real dataset with just four features. Can the experiment directly demonstrate the correctness of the theorem?\", \"typo\": \"Page 1, \\u201care are based on\\u201d\\n\\nIn general, I think B2B is an algorithm that has improvement over CCA and PLS. I am looking forward to the author response to address my above concerns.\\n\\n##############\\nI have read the author's feedback which addresses some of the confusion parts in the paper. I maintain the current rating mainly because of the experimental strength.\"}"
]
} |
ByxODxHYwB | Multi-source Multi-view Transfer Learning in Neural Topic Modeling with Pretrained Topic and Word Embeddings | [
"Pankaj Gupta",
"Yatin Chaudhary",
"Hinrich Schütze"
] | Though word embeddings and topics are complementary representations, several past works have only used pretrained word embeddings in (neural) topic modeling to address the data sparsity problem in short texts or small collections of documents. However, no prior work has employed (pretrained latent) topics in a transfer learning paradigm. In this paper, we propose a framework to perform transfer learning in neural topic modeling using (1) pretrained (latent) topics obtained from a large source corpus, and (2) pretrained word and topic embeddings jointly (i.e., multi-view), in order to improve topic quality and better deal with polysemy and data sparsity issues in a target corpus. In doing so, we first accumulate topics and word representations from one or many source corpora to build respective pools of pretrained topic (i.e., TopicPool) and word embeddings (i.e., WordPool). Then, we identify one or multiple relevant source domain(s) and take advantage of the corresponding topics and word features via the respective pools to guide meaningful learning in the sparse target domain. We quantify the quality of topic and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from news and medical domains. We have demonstrated state-of-the-art results on topic modeling with the proposed transfer learning approaches. | [
"Neural Topic Modeling",
"Transfer Learning",
"Unsupervised learning",
"Natural Language Processing"
] | Reject | https://openreview.net/pdf?id=ByxODxHYwB | https://openreview.net/forum?id=ByxODxHYwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"sGgjWrMaSl",
"S1xyosr2iH",
"HylGDaWujB",
"BJlcKuzbjr",
"HJeZYDGWir",
"HyenXIMZoH",
"BJl1wBf-iH",
"SygTrGoa9r",
"HyeGN5C85B",
"SkgKHY_aKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747282,
1573833623056,
1573555545643,
1573099649920,
1573099384880,
1573099044048,
1573098838562,
1572872772900,
1572428329668,
1571813696657
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2366/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2366/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2366/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2366/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2366/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2366/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2366/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2366/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2366/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a transfer learning framework in neural topic modeling. Authors claim and reviewers agree that this view of transfer learning in the realm of topic modeling is novel.\\n\\nHowever, after much deliberation and discussion among the reviewers, we conclude that this paper does not contribute sufficient novelty in terms of the method. Also, reviewers find the experiments and results not sufficiently convincing.\\n\\nI sincerely thank the authors for submitting to ICLR and hope to see a revised paper in a future venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"\\\"Enough contribution of transfer learning\\\"\", \"comment\": \"Thanks for increasing your rating and leaning towards accept!\\n\\nThanks for acknowledging contribution of our proposed transfer learning approaches in topic modeling.\"}",
"{\"title\": \"Rejection without a single constructive/negative comment? Please justify negative scores!\", \"comment\": \"Dear Reviewers,\\n\\nThanks again for reviewing our paper! We have responded to your queries and we are looking forward to discuss further. \\n\\nEven though there is NO negative/critical criticism, the ratings are NOT positive. We mostly found clarification queries that we have addressed in our response. We have also highlighted our contributions and SIGNIFICANT GAINS that our proposed methods achieved. \\n\\nWe would appreciate if the reviewers could participate in the rebuttal and raise further questions, if any. \\n\\nAlso, we would acknowledge if you could justify your negative ratings or update them accordingly based on our response below.\\n\\nThanks!\"}",
"{\"title\": \"About \\\"combining existing methods and can be incremental\\\": First Work to perform Multi-view and Multi-source transfer learning in Neural topic modeling\", \"comment\": \"Thanks for your reviews, positive comments about \\\"novelty\\\" and acknowledging gains obtained by our proposed modeling.\\n\\nAs far as we know, this is the first/novel work that introduces:\\n(1) single-source pre-trained topic embeddings,\\n(2) single-source joint pre-trained word and topic embeddings, and \\n(3) multi-source transfers using pre-trained topics and word embeddings jointly in neural topic modeling under transfer learning paradigm.\\n\\nThis work DOES NOT focus on introducing a new topic model; however, we focus on introducing a novel transfer learning mechanism in neural topic modeling using complementary representations. Therefore, we have used the existing neural topic model, i.e., DocNADE to address data sparsity issues. \\n\\nThe experimental results have clearly shown noticeable gains in topic modeling due to the proposed transfer learning methodology using 7 target datasets from several domains (e.g., news, medical, etc.), evaluated using perplexity, topic coherence and information retrieval task. \\n\\nThanks for the minor comment. We will correct it and will update the gain% as well. :)\"}",
"{\"title\": \"About the \\\"regulariser and clear contribution of each component\\\"\", \"comment\": \"As far we we know, we have covered all the experimental settings, where we have clearly/individually shown contributions of each of the components.\\n\\nSee Table 5, 6 and 7, where the scores are reported due to: \\n(1) only single-source word embedding transfer, i.e., LVT, \\n(2) multi-source word embedding transfer, i.e., MST+LVT, \\n(3) only single-source topic embedding transfer, i.e., GVT, \\n(4) multi-source topic embedding transfer, i.e., MST+GVT, \\n(5) single-source joint word and topic embeddings transfers, i.e., MVT=LVT+GVT, and \\n(6) multi-source joint word and topic embeddings transfers, i.e., MST+ MVT. \\n\\nNotice that the topic-embedding transfer is performed via the regulalrization term. Also, mentioned in algorithm #1.\\n\\nWe are happy to answer if something is still not clear. Please point out precisely.\"}",
"{\"title\": \"\\\"20 Evidences of significant improvements using 7 datasets (small/large) across 3 evaluation measures\\\"\", \"comment\": \"Thanks for your reviews and positive comments, e.g., \\\"well written\\\".\\n\\nThe extensive experimental results (Table 5, 6 and 7) have shown significant improvements in terms of perplexity (PPL), topic coherence (COH) and IR scores using 7 datasets. The improvements are EXPLICITLY mentioned in Tables 5, 6 and 7 (see \\\"Gain%\\\"). Also, see plots 2 (a,b,c,d,e), where our proposed model outperforms all the baselines at all the fractions in terms of retrieval precision.\\n\\nBeyond perplexity, we have also shown large gains in topic coherence scores due to improved topic quality and noticeable gains in precision for IR task. \\n\\nFollowing are the 20 (some) EVIDENCES of significant improvements:\\n\\n\\\"Gain% vs DocNADE baseline\\\" (Table 5):\", \"on_20nsshort\": \"9.95% (COH), 8.84% (IR)\", \"on_tmntitle\": \"4.60% (COH) and 7.04% (IR)\", \"on_20nssmall\": \"39.3% (COH).\", \"on_ohsumedtitle\": \"17.3% (PPL) and 4.0% (IR) (Table 7)\", \"on_ohsumed\": \"8.5% (PPL) and 4.91% (IR) (Table 7)\\n\\nAdditionally, #R4 and #R2 have acknowledged the noticeable gains achieved in this paper.\"}",
"{\"title\": \"About \\\"Word Embedding Alignment\\\": Yes, we do align Word Embeddings (mentioned 3 times in the paper)\", \"comment\": \"Thanks for your (emergency) reviews.\\n\\nThanks for your positive comments on experimental setup and acknowledging that our transfer learning approaches introduced in neural topic modeling clearly outperform several baselines.\\n\\n>> \\\"Word Embedding Alignment\\\"\\nYes, we do.\\nPlease see section 3, page 6 in \\\"Reproducibility\\\" paragraph (line 3). Also, mentioned in caption of figure 6 as well as in Appendix C.4 (the last paragraph).\\n\\nWe perform the word embeddings alignment in all the \\\"+Glove\\\" settings (Table 6) to \\n(1) overcome the DocNADEe (baseline topic model) limitation (word-embedding size must be same as the number of topics), and \\n(2) align vector spaces of word-embeddings obtained from several sources as well as from several different training processes, e.g., from Glove, FastText and word embeddings from topic models. \\n\\nThe focus of our work is to demonstrate the joint word and topic embeddings transfer in neural topic models from one or many sources.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This is an emergency review.\\n\\nThis work proposes a novel method to use pre-trained topic embeddings and pre-trained word embeddings obtained from various corpora in the transfer learning framework. \\n\\nTheir model architecture is based on DocNADE, unsupervised neural-network based topic model, and the authors propose two strategies to use pre-trained topic embeddings and pre-trained word vectors.\\n1) Addition of a weighted sum of pre-trained word embeddings and the hidden vector of DocNADE.\\n2) L2-Regularization term between topic embedding of DocNADE and pre-trained topic embeddings. They propose to align these two embeddings by multiplying align matrix \\\"A\\\" to the topic embedding of DocNADE.\\n\\nThey show the transfer learning performance of their model on various source/target domain datasets, including medical target corpora, and verify that their model outperforms on a short text and small document collection.\\n\\nStrengths.\\n1. Comparison with the data augmentation baseline shows the performance gain is not only from bigger training data. Even though comparison with the naive baseline (data augmentation) seems too obvious, I think the results clearly show their claim about the importance of using transfer learning in neural topic modeling domain.\\n2. As the first approach that introduces a novel transfer learning framework with pre-trained topic embeddings, they show tons of experimental results with various datasets and metrics to show the specification of their method. Their experimental setting is well designed.\", \"weaknesses_and_comments\": \"Their method to combine pre-trained word embeddings and pre-trained topic embeddings is too simple. Since this is the first approach to use topic embedding in the transfer learning field, the simplicity of the proposed method is somewhat necessary. However, a weighted sum of pre-trained topic/word vectors seems not enough to transfer multisource knowledge. For instance, word vectors obtained from individual training processes do not share embedding vector space. As you apply the alignment method to topic embeddings from various sources, you should align word embeddings too.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a multi-source and multi-view transfer learning for neural topic modelling with the pre-trained topic and word embedding. The method is based on NEURAL AUTOREGRESSIVE TOPIC MODELs --- DocNADE (Larochelle&Lauly,2012). DocNADE learns topics using language modelling framework. DocNADEe (Gupta et al., 2019) extended DocNADE by incorporating word embeddings, the approach the authors described as a single source extension of the existing method.\\n\\nIn this paper, the proposed method adds a regularizer term to the DocNADE loss function to minimize the overall loss whereas keeping the existing single-source extension. The authors claimed that incorporating the regularizer will facilitate learning the (latent) topic features in the trainable parameters simultaneously and inherit relevant topical features from each of the source domains and generate meaningful representations for the target domain. The analysis and evaluation were presented to show the effectiveness of the proposed method. However, the results are not significantly improved than the based line model DocNADE. \\n\\nOverall, the paper is written well. However, it is not clear to me that the improved results are resulted due to multi-source multi-view transfer learning or for the better leaning of the single-source model due to the incorporation of the regularizer.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"On the basis of existing topic modelling approaches, the authors apply a transfer learning approach to incorporate additional knowledge to topic models, using both word embeddings and topic models. The underlying idea is that topic models contain a global view that differs on a thematic level, while word embeddings contain a local, immediate contextual view. The combination of both local and global view transfer to enhance a topic model is the main contribution of this paper, especially when using multiple sources (therefore the title: multi-source multi-view transfer).\\nGiven a document collection, DocNADE is used to generate the topic-word matrix. In the local view transfer step, the pre-trained WordPool is used, from which knowledge is transferred on the target document. The global view transfer is done by transferring knowledge from the pre-trained TopicPool to the target. As described in Algorithm 1 in the paper, both Word- and TopicPool are jointly used in the transfer learning process. \\nFor evaluation, three different measures are taken into account: Perplexity, Topic Coherence and Precision (Information Retrieval). In comparison to a DocNADE only approach, all values are better in the settings that use the transfer learning approach. Compared to DocNADE + word embeddings, the results are competitive as well. In both experiments, the multi-source setting evaluates best overall.\\n\\nIn conclusion, the paper shows that exploiting multiple sources and views in transfer learning leads to an overall improvement in the given tasks. The main contribution is the usage topic models in a transfer learning framework. Additionally the use of multi-source word embeddings is novel too, especially in the joint setting with the topic model transfer. The paper shows how the DocNADE approach is enhanced to make use of both local and global view transfer and how this enhancement leads to improved performance on various related tasks. \\nStill, the overall contribution is mostly in combining existing methods and can be judged as rather incremental.\", \"minor_note\": \"A small mistake has been found in Table 5. The best perplexity value in the first column is not the bold 638, but the 630 in the local-view transfer setting.\", \"edit_after_rebuttal\": \"In my review I did not value the contribution of the transfer learning approach enough. So, when also considering the extensive evaluation I am now leaning towards accept.\"}"
]
} |
Bke_DertPB | Adversarial Lipschitz Regularization | [
"Dávid Terjék"
] | Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality. However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques. Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations. Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training. | [
"generative adversarial networks",
"wasserstein generative adversarial networks",
"lipschitz regularization",
"adversarial training"
] | Accept (Poster) | https://openreview.net/pdf?id=Bke_DertPB | https://openreview.net/forum?id=Bke_DertPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"E3CJgBqpS4",
"r1epF44hsB",
"Skx1dpejsS",
"BJl7kngssr",
"S1leVwxijH",
"H1lqnfeior",
"HkeAUFKOsB",
"B1gDLnGOsH",
"rJgbWoMOjH",
"HkllGwGusr",
"H1xTs6z6qH",
"Hkl6hcj55H",
"SJlkWrqe5S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747242,
1573827716833,
1573748071157,
1573747674967,
1573746472245,
1573745330220,
1573587286276,
1573559375245,
1573559033372,
1573558024353,
1572838820658,
1572678325174,
1572017398702
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2365/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2365/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2365/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2365/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2365/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2365/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2365/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2365/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2365/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2365/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2365/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2365/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper introduces an adversarial approach to enforcing a Lipschitz constraint on neural networks. The idea is intuitively appealing, and the paper is clear and well written. It's not clear from the experiments if this method outperforms competing approaches, but it is at least comparable, which means this is at the very least another useful tool in the toolbox. There was a lot of back-and-forth with the reviewers, mostly over the experiments and some other minor points. The reviewers feel like their concerns have all been addressed, and now agree on acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Additional revision\", \"comment\": \"*** Following the line of thought shared by AnonReviewer5, we have added a paragraph to Section 4 discussing the relationship of ALR and two-sided penalties.\"}",
"{\"title\": \"Revision\", \"comment\": \"Dear Reviewer,\\n\\nWe have uploaded a revision which incorporates the feedback you have given. Specifically,\\n\\n1. We have added a discussion about Table 1. Only the results of WGAN-ALP are ours, the others were taken from the cited papers.\\n\\n2. We have explored the usage of BN a bit more, showing that adding BN to the critic in WGAN works with ALR, while it does not work very well with GP.\\n\\n3. We believe there is no apparent weakness of ALR in the high dimensional setting, and hypothesize that ALR performing worse than GP is caused by the fact that the official Progressive GAN implementation was fine-tuned using GP, and we did not change anything other than the regularization method. We shortly discuss this in the paper as well.\\n\\nThank you again for the review, it has been very helpful in improving our paper.\"}",
"{\"title\": \"Revision\", \"comment\": \"Dear Reviewer,\\n\\nWe have uploaded a revision which incorporates the feedback you have given. Specifically,\", \"claimed_contributions_and_their_significance\": \"We do not consider \\\"VAT as Lipschitz Regularization\\\" as a contribution, and even moved the section describing it to the Appendix as it does not contribute much to the paper.\", \"related_work\": \"We cited both Khrulkov et al. and Gemici et al. which are relevant work.\", \"questions_and_points_of_improvement\": \"1. We have described the properties of Inception Score and FID. Unfortunately we did not have the time to implement other metrics.\\n\\n2. We sadly did not have the time to explore the important semi-supervised direction further. We see Lipschitz regularization of neural networks other than WGAN an important research area, and we hope ALR contributes to it.\\n\\n3. We did compute the gradient norms on a 2D grid and visualized them as heatmaps. We also described in Section 4 \\\"WGAN-ALP\\\" that by monitoring the gradient norms of the critic it is visible that ALR indeed effectively restricts the Lipschitz constant.\\n\\n4. Defining ALR with k=0 steps of power iteration effectively results in random perturbations, which were evaluated by Petzka et al. They have found that it makes one unable to train WGANs on CIFAR-10 using the explicit penalty. Doing at least 1 step of power iteration produces better perturbations and in turn makes the training work.\\n\\n5. We have revised the discussion of SN in the toy example.\\n\\nAll minor comments have been incorporated as well.\\n\\nThank you again for the review, it has been very helpful in improving our paper.\"}",
"{\"title\": \"Revision\", \"comment\": \"Dear Reviewer,\\n\\nWe have uploaded a revision which incorporates the feedback you have given. Specifically,\\n\\n1) We added a citation to (Arjovsky et al., 2017) to support the calim, and mentioned that \\\"recent GAN variants do not always use this objective\\\", citing (Brock et al., 2019).\\n\\n2)a) We cited the older papers noting that they also connected low Lipschitz constants with good generalization.\\n\\n2)b) We removed the paragraph containing Banach WGAN.\\n\\n3)a) VAT was defined with an arbitrary divergence D, so we restricted our discussion to divergences that are also metrics.\\n\\n3)b) We worked out another perspective of VAT as Lipschitz regularization where the metric is not the trivial one but the Euclidean distance. We moved a shorter version of this section to Appendix A.2 \\\"Virtual Adversarial Training as Lipschitz regularization\\\".\\n\\n4) We replaced P_\\\\epsilon with the uniform distribution over [10^{-6}, 10^{-5}], and moved the whole toy example to Appendix A.4 \\\"Toy example\\\". Section 3.3 \\\"Comparison with other Lipschitz regularization techniques\\\" now contains only the key takeaways from the toy example.\\n\\n4)a) We clarified this discussion. \\n\\n5)a) We have described how we arrived at the results, and also that results for the other models are from the cited articles and were computed differently than ours.\\n\\n5)b) We have added a discussion of using the regularization term as it is or its square (or both) to Section 3.2 \\\"Hyperparameters\\\".\\n\\n5)c) We have added guidance in choosing the right hyperparameter values in Section 3.2 \\\"Hyperparameters\\\".\\n\\nAll minor comments have been incorporated as well.\\n\\nThank you again for the review, it has been very helpful in improving our paper.\"}",
"{\"title\": \"Revision\", \"comment\": \"We have uploaded a revision incorporating the feedback given by the reviewers. Key changes are the following:\\n\\n*** Section 3 \\\"Virtual Adversarial Training as Lipschitz Regularization\\\" has been removed. Part of it was incorporated into Section 4 (now Section 3) \\\"Adversarial Lipschitz Regularization\\\", specifically the part that takes a mapping f between metric spaces and arrives at the notion of adversarial perturbation wrt. the Lipschitz continuity and the ALP loss term. The rest has been reworked to use the more sensible Euclidean metric as d_X instead of the trivial 0-1 metric, and for d_Y we now only consider divergences that are metrics as well, to avoid the difficulties arising when one tries to define Lipschitzness with premetrics. This part has been moved to Appendix A.2 \\\"Virtual Adversarial Training as Lipschitz Regularization\\\".\\n\\n*** Section 3.2 \\\"Hyperparameters\\\" has been added which describes the hyperparameters of ALR and gives some guidance towards tuning them.\\n\\n*** Section 3.3 \\\"Comparison with other Lipschitz regularization techniques\\\" now contains only the key takeaways from the toy example, which has been moved Appendix A.4 \\\"Toy example\\\".\\n\\n*** In Section 4 \\\"WGAN-ALP\\\" the metrics (Inception Score and FID) used in the evaluation are now described in more detail, as well as the reported numbers in Table 1 and how they were calculated. We also discuss how WGAN-LP fares in our implementation, and that monitoring the gradient norms during training shows that ALR in fact restricts the Lipschitz constant of the network.\\n\\n*** Also in Section 4, motivated by the toy example, an additional experiment is described in which we add Batch Normalization to the critic in WGAN, and show that while it degrades performance when trained with GP, training with ALR is still successful. Details of the high-dimensional Progressive GAN example were also moved to this section from the Appendix, except the generated CelebA-HQ images which can be seen in Appendix A.5 \\\"Images generated by Progressive GAN trained on CelebA-HQ\\\".\"}",
"{\"title\": \"Responding to author comments\", \"comment\": \"Thank you for your response.\\n\\nYes, those are the references I pointed at in my review. \\n\\nWhile I believe a more diverse set of evaluation metrics, combined with some discussion on the type of qualities they evaluate would improve the paper, I understand that this is difficult to complete before the rebuttal deadline and will take this into consideration when forming my final thoughts.\"}",
"{\"title\": \"Clarification regarding evaluation\", \"comment\": \"Dear Reviewer,\\n\\nFirst of all, thank you for taking the time and reviewing our paper.\\n\\nWhile we are working on a revision, we would like to clarify the issues regarding the evaluation. Table 1 is not complete because the cited papers did not always reported best Inception Score or Frechet Inception Distance. We did train WGAN-LP in our implementation with \\\\lambda = 0.1 and 10 (both 2 times). The best final IS is 8.009, while the best final FID is 15.42. During training, the best observed IS was 8.127, while the best FID was 18.49. We'd like to include these in the revision as well.\"}",
"{\"title\": \"Question regarding citations and clarification regarding evaluation\", \"comment\": \"Dear Reviewer,\\n\\nFirst of all, thank you for taking the time and reviewing our paper.\\n\\nWhile we are working on a revision, we would like to make sure that we got the correct papers for the following citations:\\nKhrulkov et al (2017) - \\\"Art of singular vectors and universal adversarial perturbations\\\"\\nGemici et. al. - \\\"Primal-Dual Wasserstein GAN\\\"\\nXu et. al. 2018 - \\\"An empirical study on evaluation metrics of generative adversarial networks\\\"\\nJust to make sure, are these the papers you were referring to?\\n\\nRegarding evaluation, we see that it would be an important improvement to evaluate the WGAN models with metrics other than IS and FID, such as MMD for example. Unfortunately this is currently a stretch goal for us as it is not sure that we can do this until the rebuttal deadline. But, in the paper \\\"An empirical study on evaluation metrics of generative adversarial networks\\\" by Xu et al. (2018), it is shown that FID successfully captures mode collapse (see Figure 3), as well as other model issues. They conclude by saying \\\"Fr\\u00e9chet Inception Distance performs well in terms of discriminability, robustness and efficiency. It serves as a good metric for GANs, despite only modeling the first two moments of the distributions in feature space.\\\"\"}",
"{\"title\": \"Clarification regarding evaluation\", \"comment\": \"Dear Reviewer,\\n\\nFirst of all, thank you for taking the time and reviewing our paper.\\n\\nWhile we are working on a revision, we would like to clarify the issues regarding the evaluation metrics. The method we used to compute the values was the following:\\n\\nDuring training for 100000 iterations, after every 1000 iteration we generated 10000 images to compute the Inception Score and the Frechet Inception Distance. After training, we generated 50000 images to compute the final IS and FID. We did this 5 times. The best IS reported was computed during 1 of the 5 trainings from 10000 samples. To get the average IS and FID, we calculated the mean and std of the 5 final IS and FID values. This means that the best reported value was not included in calculating the average and std values, because it was computed during training from 10000 samples, and not after training from 50000 samples.\\n\\nWe checked the cited papers to see how they computed the reported results, and found the following:\\n\\nWGAN-GP (Gulrajani et al., 2017): the method used to compute the IS is not described, the FID reported in our paper is from (Zhou et al., 2019a), see below how it was calculated\\n\\nWGAN-LP (Petzka et al., 2018): \\\"The maximal scores reached in 100000 training iterations with different regularization parameters are reported in Table 1.\\\" \\\"Table 1: Inception Score on CIFAR-10. Reported are the maximal mean values reached during training. Means are computed over 10 image sets, variances given in parenthesis.\\\" We included the maximal value from that table.\\n\\nLGAN (Zhou et al., 2019a): \\\"We use 200,000 iterations for better convergence and use 500k samples to evaluate IS and FID for preferable stability.\\\" Again we included the best values.\\n\\nCT-GAN (Wei et al., 2018): \\\"For model selection, we use the first 50,000 samples to compute the inception scores (Salimans et al., 2016), then choose the best model, and finally report the \\u201ctest\\u201d score on another 50,000 samples.\\\"\\n\\nSN-GAN (Miyato et al., 2018): \\\"Following the procedure in Salimans et al. (2016); Warde-Farley & Bengio (2017), we calculated the score for randomly generated 5000 examples from each trained generator to evaluate its ability to generate natural images. We repeated each experiment 10 times and reported the average and the standard deviation of the inception scores.\\\" \\\"We computed the Fr\\u00e9chet inception distance between the true distribution and the generated distribution empirically over 10000 and 5000 samples.\\\"\\n\\nBWGAN (Adler and Lunz, 2018): \\\"For evaluation, we report Fr\\u00e9chet Inception Distance (FID)[8] and Inception scores, both computed from 50K samples.\\\" \\n\\nProgressive GAN (Karras et al., 2018): \\\"We report our scores in two different ways: 1) the highest score observed during training runs (here \\u00b1 refers to the standard deviation returned by the inception score calculator) and 2) the mean and standard deviation computed from the highest scores seen during training, starting from ten random initializations.\\\"\\n\\nThe closest to our method is that of BWGAN. We have since completed another 5 trainings with the same setup to have 10 in total, and the new values are the following:\", \"is\": \"8.41598 +- 0.067931580284872\", \"fid\": \"16.9014 +- 0.3634391833581\\nWhile IS is better because of the hand-picking, FID is worse. 
This is because these values are based on FIDs that are computed from 10000 samples instead of 50000 samples. See Figure 6 in \\\"An empirical study on evaluation metrics of generative adversarial networks\\\" by Xu et al. (2018), where it is visible that FID decreases as the sample size used to evaluate it grows, which explains why FIDs computed from larger samples are better. Also see the documentation of the official TensorFlow implementation of FID (https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/contrib/gan/python/eval/python/classifier_metrics_impl.py, lines 466-471): \\\"Note that when computed using sample means and sample covariance matrices, Frechet distance is biased. It is more biased for small sample sizes. (e.g. even if the two distributions are the same, for a small sample size, the expected Frechet distance is large). It is important to use the same sample size to compute frechet classifier distance when comparing two generative models.\\\"\\n\\nBased on this, we believe the method we originally used is reasonable, but we are open to suggestions regarding other methods.\\n\\nWe also trained WGAN-LP in our implementation with \\\\lambda = 0.1 and 10 (each twice). The best final IS is 8.009, while the best final FID is 15.42. During training, the best observed IS was 8.127, while the best FID was 18.49. We'd like to include these in the revision as well.\"}",
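The small-sample bias quoted above is easy to reproduce. A hypothetical sketch follows, applying the `fid` helper from the earlier sketch to raw Gaussian vectors rather than Inception features: both samples come from the same distribution, so the true distance is zero, yet the estimate is strictly positive and shrinks as the sample size grows.

```python
import numpy as np  # assumes fid() from the sketch above is in scope

rng = np.random.default_rng(0)
dim = 64
for n in (100, 1000, 10000):
    a = rng.standard_normal((n, dim))  # both samples drawn from N(0, I)
    b = rng.standard_normal((n, dim))
    # True Frechet distance is 0; the estimate is biased upward,
    # more so for small n, and decreases toward 0 as n grows.
    print(n, fid(a, b))
```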
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"Final Edit:\\n\\nI have reviewer the final version of the paper and have decided to increase my score to a weak accept. I maintain some concerns around the empirical evaluation in the paper (collating results from multiple sources with different experimental procedures). But my major concerns have been addressed by the authors in their response and changes to the paper.\\n\\n----------------------------------\", \"post_rebuttal_edit\": \"Following updates to the paper manuscript, which address my concerns around the correctness of the empirical results, I have updated my review score from 1 to 3.\\n----------------------------------\", \"summary\": \"This paper draws on connections between virtual adversarial training (VAT) and Lipschitz regularization to utilize VAT techniques in the training of WGAN architectures. While this work may be touching on something quite interesting, I felt that the theoretical exposition of the ideas was lacking. The empirical evidence seemed promising in some direction though lacking in others.\\n\\nI was requested as an emergency reviewer for this paper.\", \"overview\": \"This paper is 9 pages length in total. Unfortunately, I felt that the use of an additional page was unwarranted and that the paper contained unnecessary content.\\n\\nDue to a concern over correctness of some empirical results and issues with the presented derivations I have opted to reject this paper. I hope that these issues can be addressed by the authors in which case I will reassess my score.\\n\\n1) Under equation 5, \\\"with substantially more stable training behaviour and improved sample quality\\\". A citation should be included for this claim. In fact, recent advances in GAN methods have not required the Wasserstein distance objective [1].\\n\\n2) I found some issues with Section 2.2.\\n\\na) First a comment on related work. There is older work studying the generalization properties of Lipschitz neural networks which is not mentioned in this section. For example, [2]. You also write that learning under Lipschitz constraints became prevalent with the introduction of WGAN. While this is probably true, I think it is fair to point out that many older papers also utilized similar bounds in the vein of improving generalization. For example [3], which also aimed to limit the gradient norm of deep neural networks.\\n\\nb) I felt that this subsection was a little bloated and the content did not fit entirely under the heading. A large chunk of this section is dedicated to discussing potential issues arising with the gradient penalty formulation of WGAN and alternative approaches such as the Banach WGAN. While these are useful additions they did not feel critical to this work and in my opinion did not deserve an extension beyond the 8 page recommendation.\\n\\n3) I found the discussion in Section~3 a little difficult to follow. I will summarize my key concerns below.\\n\\na) The authors assume that generalizing Lipschitz continuity to a premetric space is trivial. While the more important results seems believable I am not convinced by the presentation of these results and would prefer to have seen this given more careful treatment. 
For example, premetrics need not obey symmetry or the triangle inequality (assuming this is the definition used by the authors; including the definition would be valuable). It is written (paraphrasing) that a mapping $f$ is $K$-Lipschitz iff for any $x$ the supremum over $r$ is bounded by $K$. However, $r$ only appears on the right-hand side of a potentially asymmetric distance function. Moreover, many properties of Lipschitz continuous functions depend on the triangle inequality holding in the metric space, and these would fail to hold here.\\n\\nb) When connecting ALP to VAT, some of the differences in the formulations are hand-waved away in unconvincing ways. Under the trivial metric, the Lipschitz constant is given by the maximum distance in the output space. With this observation, it seems trivial that the VAT formulation will perform a form of Lipschitz regularization. However, the Lipschitz constant does not take into account distance in the input space in a meaningful way, and so I am unsure to what extent the connection is really meaningful. Further, the $r < \\epsilon$ constraint in the VAT model is treated as an inconvenient implementation detail, but I am not convinced this is sufficient. Indeed, this $\\epsilon$ could be used to bound the input deviations and thus could be seen as affecting the Lipschitz constant under a more reasonable metric.\\n\\n4) I felt that Section 4.1 presented an important discussion coupled with an interesting toy problem to highlight benefits and shortcomings of the proposed method. However, this section was moderately long and contributed only a little towards understanding the practical settings users of ALR would care about. I did not gain much intuition into how ALR might generalize to high dimensional settings and was concerned by the fact that the distribution $P_\\epsilon$ used was heavily hand-engineered and did not match up with the ones used in later experiments.\\n\\na) I did not understand the comments that constraining the Lipschitz constant globally may be undesirable in WGANs. The dual optimization problem requires that a Lipschitz constraint be enforced over the support of the distributions, and we should not care outside of this region in any case (except in cases where the generator might move the support to a currently under-regularized region of the critic domain, in which case a global constraint may be advantageous).\\n\\n5) I am not particularly up to date with the evaluation of GAN models, but to me the presented results in the main paper looked mostly reasonable. Some major concerns remain, which I would appreciate the authors addressing.\\n\\na) I have one question on the reported \\\"Best\\\" inception score for the WGAN-ALP and Progressive-GAN models. You stated that each model was trained 5 times and reported the mean, standard deviation and best results. However, the difference between the best and average scores alone would constitute a higher standard deviation: $\\sqrt{(8.80 - 8.56)^2 / 4} = 0.12$. Can you please clarify how exactly each of these values was computed?\\n\\nb) In the main paper the authors write that ALP is able to work in high dimensional settings (though it is not competitive with the state of the art). In the appendix, however, the authors point out that they must make significant modifications to the training objective by including a squared Lipschitz constraint violation term (further straining the comparison to VAT). 
I do not consider this a huge issue, but it should be discussed in the main paper.\\n\\nc) Finally, the authors employed a range of different hyperparameter settings throughout their experiments but gave little guidance on how to choose these settings in practice or how sensitive their proposed method is to changes in these hyperparameters. I believe that this would be a highly valuable addition to the paper and would help distinguish this method from other training stabilization proposals.\", \"minor_comments\": [\"In paragraph 1, you write that WGAN requires the critic to consist only of 1-Lipschitz functions. This is true of the Wasserstein distance estimation problem, but the WGAN only requires the correct gradient direction (scaling of the critic is fine).\", \"In the summary points in the intro, \\\"ALR\\\" is used before the acronym is defined.\", \"Equations (12) and (13) are twice normalized (||r_k||^2=1 by definition). Similar issue in (22) and (23).\", \"In the first para of Section 3, you write \\\"on the space of labels\\\". Do you mean on the probability simplex?\"], \"references\": \"[1] \\\"Large scale GAN training for high fidelity natural image synthesis\\\", Brock, Donahue, and Simonyan\\n[2] \\\"The sample complexity of pattern classification with neural networks\\\", Bartlett\\n[3] \\\"Double backpropagation increasing generalization performance\\\", Drucker and LeCun\"}",
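For reference, the definition at issue in point 3a above can be written out explicitly. This is a reconstruction from the review's paraphrase, not the paper's exact statement; $d_X$ and $d_Y$ denote the (pre)metrics on the input and output spaces.

```latex
% Classical form: f is K-Lipschitz w.r.t. (d_X, d_Y) iff
%   d_Y(f(x), f(y)) <= K d_X(x, y)  for all x, y.
% The paraphrased formulation replaces y by a perturbation x + r:
\[
  \sup_{x}\,\sup_{r \neq 0}\;
    \frac{d_Y\big(f(x),\, f(x+r)\big)}{d_X\big(x,\, x+r\big)}
  \;\le\; K .
\]
% The reviewer's objection: if d_X is only a premetric (neither symmetry
% nor the triangle inequality is guaranteed), the two arguments of d_X are
% treated asymmetrically, and standard consequences of Lipschitz
% continuity need not carry over.
```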
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary: Virtual Adversarial Training (Miyato et al., 2017) can be viewed as a form of Lipschitz regularization. Inspired by this, the paper proposes a Lipschitz regularization technique that tries to ensure that the function being regularized doesn\\u2019t change a lot in virtual adversarial directions. This method is shown to be effective in training Wasserstein GANs.\", \"motivation_and_placement_in_literature\": \"Being able to effectively enforce the Lipschitz constraint on neural networks has wide ranging applications. Even though the paper predominantly considers the WGAN setting, the topic at hand is within the scope of NeurIPS and will of interest to the machine learning community at large.\", \"claimed_contributions_and_their_significance\": \"1. Practical method with good performance: The proposed method can be used to train WGANs with high (subjective) sample quality. Although better, quantitative evaluation methods are needed to make stronger claims about the efficacy of this approach for GAN training in general (see below), the method described here will likely be useful for practitioners and GAN community. I\\u2019m also convinced that this method has the potential to work for higher dimensions. \\n2. VAT as Lipschitz regularization: There is a relatively straightforward connection between the Lipschitz constraint and adversarial robustness - both imply that small changes in the inputs should lead to small changes in the outputs, in their respective space. There are also a number of papers that make strong connections between adversarial training and Lipschitz regularization (Parseval Networks (Cisse et. al, 2017) for example). Therefore, it is perhaps not too surprising that the LDS term from Miyato et. al. can be rephrased as a Lipschitz regularization term by picking suitable input-output (pre-)metrics (in Section 3). I currently don\\u2019t see this as a major contribution of this paper, although I\\u2019m open to changing my mind if this involves a subtlety that I\\u2019m missing.\", \"related_work\": \"Khrulkov et al (2017) looks like a related work - especially related to how the way the adversarial perturbation is computed and backpropagation is performed. Also Gemici et. al. also discuss the limitations of the original gradient penalty paper (for Section 2.2)\\n\\nQuestions and Points of Improvement\\n1. Better evaluation of GANs: Could you further convince us that this method alleviates common pitfalls of GAN training, such as mode collapse? There are a number of papers that give quantitative metrics for this purpose (such as Xu et. al. 2018). Since the quality of the WGANs presented is one of the biggest strengths of this paper, further evidence in this direction will make the paper stronger. \\n\\n2. Different tasks: \\nThe method described looks flexible enough to be applied on domains other than Wasserstein distance estimation. Did you try other tasks where a Lipschitz penalty might help, such as adversarial robustness? The semi-supervised setting mentioned in the appendix look promising yet perhaps under-explored. \\n\\n3. 
Resultant Lipschitz constant: \\nSince this paper is about enforcing the Lipschitz constraint through regularization, more experiments on how well the Lipschitz constraint is enforced in practice would be helpful. For example, how much do your WGAN critics violate the 1-Lipschitz constraint? Once this is quantified, how does ALR compare to other Lipschitz regularization techniques? The function approximation task in Section 4.2 seems simple enough that you can probably compute gradient norms on a 2D grid and draw a histogram. How would the histograms look if you did this, for different methods?\\n\\n4. Sample efficiency: \\nSection 4.2 claims that using the explicit Lipschitz penalty is inefficient because violations of the Lipschitz constraint on samples from P_real, P_generated or P_interpolated are likely to be non-maximal. Could you make a theoretical or empirical case that the additional time spent finding adversarial directions is actually worth it? If you have a way of quantifying how well the Lipschitz constraint is satisfied (as described above), then doing this empirically should be possible. \\n\\n5. Problematic baseline for spectral normalization: \\nThe way spectral normalization (SN) was used/described in Section 4.1 seems to have some issues. First of all, batch normalization is incompatible with methods that achieve the Lipschitz constraint via architectural constraints, such as spectral normalization. Also, this statement looks problematic: \u201cIt can be seen that SN has a very strong regularization effect, which is because SN works by approximating the spectral norm of each layer and then normalizing the layers by dividing their weight matrices by the corresponding spectral norms, thereby resulting in overregularization if the approximation is greater than the actual spectral norm.\u201d In most practical cases, the power iteration used in spectral normalization can get a very close approximation of the spectral norm of the weight matrices with a reasonable number (<20 is a conservative guess) of iterations. The over-regularization effect, however, does exist and is more connected to the loss of gradient norm as described in Anil et al. than to bad approximations of the spectral norms of weight matrices.\", \"writing\": \"The paper is well-written and easy to understand.\", \"decision\": \"Weak Accept.\\n\\nOther, less important points of improvement:\\n1. The argmax expression in (18) looks problematic: r doesn\u2019t seem bounded and hence can be chosen arbitrarily large. \\n2. Equation (25) describes the optimal approximation. According to which metric is this optimal? \\n3. Use \\\\leq for \u201cless than or equal to\u201d in (25). \\n4. Consider adding a colormap to Figure 1. \\n\\n________\", \"post_rebuttal_edit\": \"The revisions made to the paper address some of the points of improvement listed above. I maintain my initial assessment of weak accept (leaning more towards accept), as I believe the methods discussed in this paper will be of interest to the research community.\"}",
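The gradient-norm histogram asked for in point 3 above is straightforward to produce. The following is a hypothetical sketch using central finite differences on a toy 2D critic; the critic here is a placeholder stand-in, not the paper's trained model.

```python
import numpy as np

def grad_norms_on_grid(f, lo=-2.0, hi=2.0, steps=100, eps=1e-4):
    """Estimate ||grad f|| at each point of a 2D grid via central differences."""
    xs = np.linspace(lo, hi, steps)
    norms = []
    for x in xs:
        for y in xs:
            gx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
            gy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
            norms.append(np.hypot(gx, gy))
    return np.asarray(norms)

# Toy critic standing in for a trained network; its true Lipschitz constant
# is sqrt(1 + 0.25) ~ 1.12, so the histogram mass should sit near or below 1.
critic = lambda x, y: np.tanh(x) + 0.5 * np.sin(y)
hist, edges = np.histogram(grad_norms_on_grid(critic), bins=50)
```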
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"It is an interesting idea about how to enforce the Lipsthitz constrain in WGAN by using virtual adversarial training. The connection between virtual adversarial and this paper method - ALR is quite simple and clear. In the experiments, the FID score in the table is not complete which can not clearly compare the ability of the Lipschitz regularization to other regularization methods. The paper addresses that the approximation of r_{adv} will affect the performance of ALR. How to balance the quality and computation complexity is quite important. This paper did not provide the reason about why this method can not work better than GP method in high-dimensional setting.\\nIn general, this paper provides an interesting direction for regularization.\", \"pros\": \"1. This paper derived as a generalization of VAT (Virtual Adversarial Training) which provided the new way to think of the regularization.\\n2. ALR (Adversarial Lipchitz Regularization) is an new method for learning Lipschitz constrained rather than weight clipping or gradient penalty.\\n3. This method provides the connection between Lipschitz regularization and adversarial training.\", \"cons\": \"1. The comparison of the experiments was not complete. Some of the Inception Scores and FID were blank in the table.\\n2. The results of adding BN were not clearly explained. These included LP and ALR method. Might have some inference about the effect of BN in regularization term.\\n3. In high-dimensional setting, the authors did not clearly describe the weakness of ALR method.\"}"
]
} |
rJePwgSYwB | SGD Learns One-Layer Networks in WGANs | [
"Qi Lei",
"Jason D. Lee",
"Alexandros G. Dimakis",
"Constantinos Daskalakis"
] | Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax problem to global optimality, but in practice, are successfully trained with stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution in polynomial time and sample complexity. | [
"Wasserstein GAN",
"global min-max",
"one-layer network"
] | Reject | https://openreview.net/pdf?id=rJePwgSYwB | https://openreview.net/forum?id=rJePwgSYwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"OyHf0auxT",
"SyxWQ30Ysr",
"HJgRps0Ysr",
"r1gt9c0YjS",
"B1xjCMHAtr",
"HJgk6-WCFr",
"Byl5lg8TKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747212,
1573674009091,
1573673925840,
1573673616986,
1571865298614,
1571848631451,
1571803121899
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2363/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2363/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2363/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2363/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2363/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2363/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This article studies convergence of WGAN training using SGD and generators of the form $\\\\phi(Ax)$, with results on convergence with polynomial time and sample complexity under the assumption that the target distribution can be expressed by this type of generator. This expands previous work that considered linear generators. An important point of discussion was the choice of the discriminator as a linear or quadratic function. The authors' responses clarified some of the initial criticism, and the scores improved slightly. Following the discussion, the reviewers agreed that the problem being studied is a difficult one and that the paper makes some important contributions. However, they still found that the considered settings are very restrictive, maintaining that quadratic discriminators would work only for the very simple type of generators and targets under consideration. Although the article makes important advances towards understanding convergence of WGAN training with nonlinear models, the relevance of the contribution could be greatly enhanced by addressing / discussing the plausibility or implications of the analysis in a practical setting, in the best case scenario addressing a more practical type of neural networks.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your reviews. Unfortunately there is a significant misunderstanding of our contributions. We will try to clarify some concerns here and hope it will justify our contributions more clearly.\\n\\nWe want to emphasize our contributions first. \\n1. To begin with, the global convergence of gradient descent-ascent in the GAN setting has not been extensively studied. We provide the $\\\\textbf{first result}$ to show $\\\\textbf{convergence to global equilibrium points}$ for $\\\\textbf{non-linear generators}$. The difficulty in analyzing gradient descent-ascent is twofold: the generator dynamics and discriminator dynamics. On the discriminator side, our choice of quadratic discriminator not only simplifies the dynamics but also has sufficient discriminating power (we will justify it below). On the generator side, its minimization problem is non-convex, and therefore our convergence result to global equilibria is highly non-trivial. Our primary contribution in gradient descent-ascent analysis is to choose a proper discriminator set and to understand the generator dynamics. \\n\\n2. For the generator class we are considering, we proved the quadratic discriminator both $\\\\textbf{simplifies the gradient ascent dynamics}$ and $\\\\textbf{attains a nearly optimal sample complexity}$ (see point 3 below). Had we chosen to use a more complex discriminator, even if the maximization step were tractable, this would increase the sample complexity, potentially to a non-parametric rate (Feizi et al., 2017; Bai et al., 2018). \\n\\n3. Our sample analysis also matches the upper bound of $O(1/\\\\epsilon^2)$ on dependence of the error $\\\\epsilon$ provided in (Wu et al., 2019). This is also a side proof that with WGAN we could $\\\\textbf{learn one-layer generator}$ via $\\\\textbf{appropriate discriminator class at a parametric rate}$.\\n\\nNext we justify our choice of discriminator class. \\nWe want to emphasize that our goal is to show that SGD learns the ground truth generating distribution, with minimal requirements for the discriminator class.\\n\\nOur choice of discriminator class, quadratic discriminators, already $\\\\textbf{has sufficient distinguishing power}$ to learn the family of distributions parametrized by our generator class. As shown in Theorem 3, the quadratic discriminator class is sufficient to learn the optimal generator. In fact using a larger discriminator family will only make the learning harder by increasing the sample complexity; see (Feizi et al., 2017; Bai et al., 2018) for a discussion of the importance of appropriately constraining the discriminator class to attain parametric sample complexity. Our choice of small discriminator class is a strength, not a weakness.\\n\\nWhen more complex discriminators are necessary (on studying more complicated generators for future work), we believe the discriminator dynamics can be analyzed using recent developments in the training of neural networks for classification problems (e.g. NTK results). 
However, this is not the focus of our paper since we are learning to recover one-layer generator, which does not need a complex discriminator.\\n\\nFinally we clarify some other points you\\u2019ve raised.\", \"q\": \"\\u201cmore complex discriminator will cause train error propagation\\u201d\", \"a\": \"As we have shown in Theorem 1, it is unnecessary to have a complex discriminator for our generator architecture.\\n\\nFor more complex discriminator architectures, we believe it is possible to apply NTK results on the discriminator to analyze the discriminator dynamics. However, this is not the focus of our paper since we are learning to recover one-layer generator, which does not need a complex discriminator.\", \"reference\": \"(Feizi et al. 2017) Feizi, S., Farnia, F., Ginart, T., & Tse, D. (2017). Understanding GANs: the LQG setting. arXiv preprint arXiv:1710.10793.\\n(Bai et al. 2018) Bai, Y., Ma, T., & Risteski, A. (2018). Approximability of discriminators implies diversity in GANs. arXiv preprint arXiv:1806.10586.\\n(Wu et al. 2019) Wu, S., Dimakis, A. G., & Sanghavi, S, \\u201cLearning Distributions Generated by One-Layer ReLU Networks\\u201d, NeurIPS 2019\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for your reviews. Unfortunately there is a significant misunderstanding of our contributions. We will try to clarify some concerns here and hope it will justify our contributions more clearly.\\n\\nWe want to emphasize our contributions first. \\n1. To begin with, the global convergence of gradient descent-ascent in the GAN setting has not been extensively studied. We provide the $\\\\textbf{first result}$ to show $\\\\textbf{convergence to global equilibrium points}$ for $\\\\textbf{non-linear generators}$. The difficulty in analyzing gradient descent-ascent is twofold: the generator dynamics and discriminator dynamics. On the discriminator side, our choice of quadratic discriminator not only simplifies the dynamics but also has sufficient discriminating power (we will justify it below). On the generator side, its minimization problem is non-convex, and therefore our convergence result to global equilibria is highly non-trivial. Our primary contribution in gradient descent-ascent analysis is to choose a proper discriminator set and to understand the generator dynamics. \\n\\n2. For the generator class we are considering, we proved the quadratic discriminator both $\\\\textbf{simplifies the gradient ascent dynamics}$ and $\\\\textbf{attains a nearly optimal sample complexity}$ (see point 3 below). Had we chosen to use a more complex discriminator, even if the maximization step were tractable, this would increase the sample complexity, potentially to a non-parametric rate (Feizi et al., 2017; Bai et al., 2018). \\n\\n3. Our sample analysis also matches the upper bound of $O(1/\\\\epsilon^2)$ on dependence of the error $\\\\epsilon$ provided in (Wu et al., 2019). This is also a side proof that with WGAN we could $\\\\textbf{learn one-layer generator}$ via $\\\\textbf{appropriate discriminator class at a parametric rate}$.\\n\\nNext we justify our choice of discriminator class. \\nWe want to emphasize that our goal is to show that SGD learns the ground truth generating distribution, with minimal requirements for the discriminator class.\\n\\nOur choice of discriminator class, quadratic discriminators, already $\\\\textbf{has sufficient distinguishing power}$ to learn the family of distributions parametrized by our generator class. As shown in Theorem 3, the quadratic discriminator class is sufficient to learn the optimal generator. In fact using a larger discriminator family will only make the learning harder by increasing the sample complexity; see (Feizi et al., 2017; Bai et al., 2018) for a discussion of the importance of appropriately constraining the discriminator class to attain parametric sample complexity. Our choice of small discriminator class is a strength, not a weakness.\\n\\nWhen more complex discriminators are necessary (on studying more complicated generators for future work), we believe the discriminator dynamics can be analyzed using recent developments in the training of neural networks for classification problems (e.g. NTK results). However, this is not the focus of our paper since we are learning to recover one-layer generator, which does not need a complex discriminator.\\n\\nFinally we clarify some other points you\\u2019ve raised.\", \"q\": \"\\u201cwhat is one-layer generator\\u201d & \\u201cit can be learned easily\\u201d\", \"a\": \"By one-layer generator we mean the second case as you have suggested. This terminology is also used in some prior work, for instance in (Wu et al., 2019). 
As we have emphasized, our goal is not just to learn the one-layer generator by any method, but to understand the dynamics of gradient descent-ascent with WGAN on learning the distribution. We also demonstrate the near optimal sample complexity when learning with WGAN. Even though the generator is a simple formulation, this work still provides the first result on successful learning a non-linear generator with WGAN setting.\", \"reference\": \"(Feizi et al. 2017) Feizi, S., Farnia, F., Ginart, T., & Tse, D. (2017). Understanding GANs: the LQG setting. arXiv preprint arXiv:1710.10793.\\n(Bai et al. 2018) Bai, Y., Ma, T., & Risteski, A. (2018). Approximability of discriminators implies diversity in GANs. arXiv preprint arXiv:1806.10586.\\n(Wu et al. 2019) Wu, S., Dimakis, A. G., & Sanghavi, S, \\u201cLearning Distributions Generated by One-Layer ReLU Networks\\u201d, NeurIPS 2019\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your reviews. Unfortunately there is a significant misunderstanding of our contributions. We will try to clarify some concerns here and hope it will justify our contributions more clearly.\\n\\nWe want to emphasize our contributions first. \\n1. To begin with, the global convergence of gradient descent-ascent in the GAN setting has not been extensively studied. We provide the $\\\\textbf{first result}$ to show $\\\\textbf{convergence to global equilibrium points}$ for $\\\\textbf{non-linear generators}$. The difficulty in analyzing gradient descent-ascent is twofold: the generator dynamics and discriminator dynamics. On the discriminator side, our choice of quadratic discriminator not only simplifies the dynamics but also has sufficient discriminating power (we will justify it below). On the generator side, its minimization problem is non-convex, and therefore our convergence result to global equilibria is highly non-trivial. Our primary contribution in gradient descent-ascent analysis is to choose a proper discriminator set and to understand the generator dynamics. \\n\\n2. For the generator class we are considering, we proved the quadratic discriminator both $\\\\textbf{simplifies the gradient ascent dynamics}$ and $\\\\textbf{attains a nearly optimal sample complexity}$ (see point 3 below). Had we chosen to use a more complex discriminator, even if the maximization step were tractable, this would increase the sample complexity, potentially to a non-parametric rate (Feizi et al., 2017; Bai et al., 2018). \\n\\n3. Our sample analysis also matches the upper bound of $O(1/\\\\epsilon^2)$ on dependence of the error $\\\\epsilon$ provided in (Wu et al., 2019). This is also a side proof that with WGAN we could $\\\\textbf{learn one-layer generator}$ via $\\\\textbf{appropriate discriminator class at a parametric rate}$.\\n\\nNext we justify our choice of discriminator class. \\nWe want to emphasize that our goal is to show that SGD learns the ground truth generating distribution, with minimal requirements for the discriminator class.\\n\\nOur choice of discriminator class, quadratic discriminators, already $\\\\textbf{has sufficient distinguishing power}$ to learn the family of distributions parametrized by our generator class. As shown in Theorem 3, the quadratic discriminator class is sufficient to learn the optimal generator. In fact using a larger discriminator family will only make the learning harder by increasing the sample complexity; see (Feizi et al., 2017; Bai et al., 2018) for a discussion of the importance of appropriately constraining the discriminator class to attain parametric sample complexity. Our choice of small discriminator class is a strength, not a weakness.\\n\\nWhen more complex discriminators are necessary (on studying more complicated generators for future work), we believe the discriminator dynamics can be analyzed using recent developments in the training of neural networks for classification problems (e.g. NTK results). However, this is not the focus of our paper since we are learning to recover one-layer generator, which does not need a complex discriminator.\\n\\nFinally we clarify some other points you\\u2019ve raised.\", \"q\": \"\\u201cwhy not study the two layer network discriminator\\u201d\", \"a\": \"As we explained above, the choice of discriminator is designed in tandem with the choice of generator. 
If we use a standard two layer ReLU network as discriminator, this would hurt the sample complexity.\", \"reference\": \"(Feizi et al. 2017) Feizi, S., Farnia, F., Ginart, T., & Tse, D. (2017). Understanding GANs: the LQG setting. arXiv preprint arXiv:1710.10793.\\n(Bai et al. 2018) Bai, Y., Ma, T., & Risteski, A. (2018). Approximability of discriminators implies diversity in GANs. arXiv preprint arXiv:1806.10586.\\n(Wu et al. 2019) Wu, S., Dimakis, A. G., & Sanghavi, S, \\u201cLearning Distributions Generated by One-Layer ReLU Networks\\u201d, NeurIPS 2019\"}",
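A one-line computation makes the sufficiency argument in these responses concrete. Assuming a quadratic discriminator of the form $f(x) = x^\top W x + v^\top x$ (illustrative notation, not necessarily the paper's), the inner maximization separates the two distributions exactly through their first and second moments:

```latex
\[
  \mathbb{E}_{x \sim D_{\mathrm{real}}}[f(x)] - \mathbb{E}_{x \sim D_{\mathrm{gen}}}[f(x)]
  = \big\langle W,\; \mathbb{E}_{\mathrm{real}}[x x^{\top}] - \mathbb{E}_{\mathrm{gen}}[x x^{\top}] \big\rangle
  + v^{\top}\big( \mathbb{E}_{\mathrm{real}}[x] - \mathbb{E}_{\mathrm{gen}}[x] \big),
\]
% using E[x^T W x] = <W, E[x x^T]>. The responses argue this is enough
% because distributions of the form x = \phi(Az) are pinned down by these
% moments, so a richer discriminator would only inflate sample complexity.
```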
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors provide a long text to justify their contributions and I have read it thoroughly. Unfortunately, I find the responses don't really address my concerns.\\n\\nMy major concern is that I cannot understand how quadratic discriminator can be treated as WGAN. The authors replied that the regularization considered in the paper might be treated as Lipschitz constraint for bounded data sets. However, the data sets can\\u2019t be bounded because in the paper, the authors consider a special case where the data sets generated from a teacher network where the input is Gaussian noise. Moreover, the authors said that they would add an explanation of this important point in the revision but I haven\\u2019t found any revision yet.\\n\\nMy another concern is that why the authors don\\u2019t study the two layer network discriminator. The authors replied that the choice of discriminator is designed in tandem with the choice of generator. If they use a standard two layer ReLU network as discriminator, this would hurt the sample complexity. I partly agree with that it will be nice if we can design a better discriminator according to the different choice of generator. However, it will be more convincing to show the convergence of WGAN if the authors consider NN discriminator rather than quadratic discriminator which hardly be used in GAN.\\n\\n==================================================================================================\\nI found this paper over claims its contribution a lot, which is quite misleading. The title of this work is SGD LEARNS ONE-LAYER NETWORKS IN WGANS. And the authors claim that they analyze the convergence of stochastic gradient descent ascent for Wasserstein GAN on learning a single layer generator network. But actually this paper only considers two kinds of simplified discriminators: a (rectified) linear discriminator and quadratic discriminator, which are very different from WGAN used in practice. The analysis of two special cases are hard to be extended to the analysis of WGAN and thus can hardly help to explain why WGAN is successfully trained by SGD in practice.\\n\\nIn section 3, the authors consider the rectified linear discriminator, which is quite similar to the standard two layer network with relu activation but the first layer is fixed. The authors prove that the generator can learn the marginal distribution but may not learn the joint distribution. In the beginning of section 4, the authors explain that this is because there is no interaction between different coordinates of the random vector. To learn joint distribution, the authors extend the linear discriminator to the quadratic discriminator and think of it as a natural idea.\\n\\nFor the rectified linear discriminator, the regularization of the discriminator is the norm the output layer of discriminator which can be related to the Lipschitz constraint in WGAN. But for quadratic discriminator, I cannot understand how this setting can be treated as WGAN without further explanation from the authors.\\n\\nI wonder why this work doesn\\u2019t consider the standard two layer network discriminator which also has the interaction between different coordinates in the first layer.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I have read the authors response. In the response the authors clarified the contributions of this paper. I agree with the authors that the analysis of gradient descent-ascent is a difficult problem, and the optimization results given in this paper is a contribution of importance. Because of this I have improved my score.\\n\\nHowever, I do not agree with the authors that studying quadratic discriminators instead of more complicated ones should be considered as a contribution instead of drawback. In my opinion, as long as the focus is on WGAN, results involving standard neural networks are still more desired compared with the results in this submission. For example, similar results for a neural network discriminator might be even more impactful, because the optimization problem is even more difficult. Therefore I still consider the simple discriminator and generator as a weak point of this paper.\\n\\n\\n======================================================================================================\\n\\nThis paper studies the training of WGANs with stochastic gradient descent. The authors show that for one-layer generator network and quadratic discriminator, if the target distribution is modeled by a teacher network same as the generator, then stochastic gradient descent-ascent can learn this target distribution in polynomial time. The authors also provide sample complexity results.\\n\\nThe paper is well-written and the theoretical analysis seems to be valid and complete. However, I think the WGANs studied in this paper are simplified too much that the analysis can no longer capture the true nature of WGAN training. \\n\\nFirst, the paper only studies linear and quadratic discriminators. This is not very consistent with the original intuition of WGAN, which is to use the worst Lipschitz continuous neural network to approximate the worst function in the set of all Lipschitz continuous functions in the definition of Wasserstein distance. When the discriminator is as simple as linear or quadratic functions, there is pretty much no \\u201cWasserstein\\u201d in the optimization problem.\\n\\nMoreover, the claim that SGD learns one-layer networks can be very misleading. In fact what is a \\u201cone-layer\\u201d neural network?\\n- if the authors meant \\u201ctwo-layer network\\u201d or \\u201csingle hidden layer network\\u201d, then this is not true. Because as far as I can tell, the model $x = B \\\\phi(A z)$ is much more difficult than the model $x = \\\\phi(A z)$. The former is a standard single hidden layer network which is non-convex, while the latter is essentially a linear model especially when \\\\phi is known.\\n- if the authors meant \\u201ca linear model with elementwise monotonic transform\\u201d, then I would like to suggest that a more appropriate name should be used to avoid unnecessary confusion.\\n\\nAs previously mentioned, the discriminators are too simple to approximate the Wasserstein distance, and therefore in general it should not be possible to guarantee recovery of the true data distribution. However, in this paper it is still shown that certain true distributions can be learned. This is due to the extremely simplified true model. 
In fact, even if the activation function $\\\\phi$ is unknown, it seems that one can still learn $A^* (A^*)^\\\\top$ well (for example, by Kendall\\u2019s tau).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors attempt to prove that the Stochastic Gradient Descent-Ascent could converge to a global solution to the min-max problem of WGAN, in the setting of a one-layer generator and simple discriminator. They also show that the linear discriminator could be used to learn the marginal distributions of each coordinate, while a quadratic one could obtain joint distributions of every two coordinates. Since the linear discriminator and the quadratic one could be solved in one step Gradient Ascent, the author applied the standard analysis method to reveal the property of the Gradient Descent method. Experiments are also carried out to justify their theory that the WGAN could recover the distribution.\\n\\nHowever, the most significant drawback of this paper is that the settings for the discriminator are too simple, which leads to the following two problems: 1) Revealing the joint distributions of two coordinates is still much weaker than the desired result of recovering the true distribution of the data. 2) The analysis of this paper could not be extended to a complex discriminator since it would be suffered from the training error propagation in the Gradient Ascent step, instead of getting an accurate solution for the Gradient Ascent step. \\n\\nTherefore, more explanations are desired to be given to bound the error propagation and what will the complimentary discriminator learn from the data distribution.\"}"
]
} |
r1gIwgSYwr | Localized Meta-Learning: A PAC-Bayes Analysis for Meta-Learning Beyond Global Prior | [
"Chenghao Liu",
"Tao Lu",
"Doyen Sahoo",
"Yuan Fang",
"Steven C.H. Hoi."
] | Meta-learning methods learn the meta-knowledge among various training tasks and aim to promote the learning of new tasks under the task similarity assumption. However, such meta-knowledge is often represented as a fixed distribution, which is too restrictive to capture various specific task information. In this work, we present a localized meta-learning framework based on PAC-Bayes theory. In particular, we propose an LCC-based prior predictor that allows the meta-learner to adaptively generate local meta-knowledge for a specific task. We further develop a practical algorithm with a deep neural network based on the bound. Empirical results on real-world datasets demonstrate the efficacy of the proposed method. | [
"localized meta-learning",
"PAC-Bayes",
"meta-learning"
] | Reject | https://openreview.net/pdf?id=r1gIwgSYwr | https://openreview.net/forum?id=r1gIwgSYwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vuw3-o8vjE",
"S1lQWNwhir",
"HJlljbvhoB",
"B1gMMeD3oS",
"SJeZu6U2iS",
"HJxKbpLhiH",
"ByxKbnL3ir",
"Bye1xeI2ir",
"Syli4yL3or",
"HygYpKr2sr",
"SyeChbnu5S",
"rkeqdjJOcS",
"S1gb7pTV5B",
"BJelocwZ5H",
"HJljKWOXdB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798747181,
1573839866540,
1573839256385,
1573838858473,
1573838185330,
1573838081279,
1573837825300,
1573834727037,
1573834547317,
1573833152593,
1572549046272,
1572498289903,
1572293912948,
1572072087586,
1570107778693
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2362/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2362/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2362/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2362/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2362/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2362/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2362/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2362/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2362/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2362/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2362/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2362/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2362/AnonReviewer1"
],
[
"~Tomer_Galanti1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes PAC-Bayes bounds for meta-learning. The reviewers who are most knowledgeable about the subject and who read the paper most closely brought up several concerns regarding novelty (especially a description of how the proposed bounds relate to those in prior works (Pentina el al. (2014), Galanti et al. (2016) and Amit and Meir (2018))) and regarding clarity. The reviewers found theoretical analysis and proofs hard to follow. For these reasons, the paper isn't ready for publication at this time. See the reviewer's comments for details.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We sincerely appreciate your comments, but we think there is a misunderstanding of our work. We respond to your main concerns below:\", \"q\": \"Comparison with MAML and MatchingNet\", \"a\": \"In the experiments, we add two methods, Matching Network and MAML, which are popular in meta-learning and few-shot learning area for comparison. Our method and other PAC-Bayes baselines outperform these two methods. This is because MAML and MatchingNet adopt the episodic training paradigm to solve the few-shot learning problem. These two methods requires hundreds of thousands of tasks and each task contains limited samples, which is not the case in our experiment. (To make a fair comparison with PAC-Bayes baselines, we follow the same joint optimization method as Amit et al. 2018, to ensure that the benefit of the proposed LML is not from using an improved optimization method). We also follow the similar meta-learning environment setting as Amit et al. 2018. That is, meta-learner observes limited number of tasks (from 1 to 11) and each task has sufficient samples.) Scarce tasks in meta-training leads to severely meta-overfitting. Moreover, MAML aims to learn a good initialization for base model that can achieve good performance with a few gradient updates. Taking many gradient steps at each task diminishes the effect of the initialization. Therefore, MatchingNet and MAML is especially suited for few-shot learning with sufficient tasks for meta-training which is not the case in our experiment. In our method, the learned prior serves both as an initialization of base model and as a regularizer which restricts the solution space while allowing variation based on specific task data. It yields a model with smaller error than its unbiased counterpart when applied to a similar task.\"}",
"{\"title\": \"Revision\", \"comment\": \"Thank you for helping us improve the paper! We have uploaded it.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We greatly appreciate your comments and thoughtful suggestions. We respond to each comment as follows.\", \"q\": \"typos and related work\", \"a\": \"Thanks for pointing out the spelling mistakes and the related work. [2] proposes a multimodal MAML to handle diverse task distribution for few-shot problem. Our work is motivated by the PAC-Bayes meta-learning framework. We revised our wording and added the suggested reference in related work.\"}",
"{\"title\": \"Response to Reviewer #2 part 2\", \"comment\": \"Q: unsatisfying to use pretrain model as initialization.\", \"a\": \"Thanks for pointing it out. We revise the related wording. We aim to design different meta-learning environment settings (i.e., with or without pre-trained model) to verify the efficacy of localized meta-learning. In Figure 3, it shows that LML consistently outperforms other PAC-Bayes baselines in both settings.\\n\\nIn the meta learning framework, meta model extracts meta-knowledge as a prior to improve the learning of base model for new task. We consider the pre-trained model as a data-dependent hyperprior for meta-training. In our framework, if the distance between the hyperprior and hyperposterior (the learned meta model) is small, it will improve the generalization performance (reduce the environment complexity). This has been verified in our experiment that the methods with pre-train model consistently outperform those without pre-train model. We added this explanation in Section 5.\"}",
"{\"title\": \"Response to Reviewer #2 part1\", \"comment\": \"We deeply appreciate the reviewer for the positive remarks, constructive suggestions and the interest. We\\u2019ve revised our paper following the suggestions and will explain your concerns in the following.\", \"q\": \"use generalization error of prior predictor to characterize samples per task, number of anchor points.\", \"a\": \"We agree that Thm 3 (Thm 2 in original version) shows the relationship between generalization error and the number of tasks n, samples per task m in the task-complexity term $\\\\|E_v w^Q- \\\\bar{\\\\Phi}_v(S)\\\\|$ for localized meta-learning and $\\\\|Ew^Q-W^\\\\mathcal{Q}\\\\|$ for regular meta-learning.\\nAs shown in Thm 3, the derived generalization error converges at the rate of (1/nm). It indicates that even if each task contains very few samples, the generalization error is small if the meta-training set contains sufficient number of tasks. \\nThe number of anchor points is related to the LCC-based prior predictor. Lemma 1 demonstrates that the approximation error of LCC depends on the intrinsic dimension of the manifold instead of the dimension of input. As [2] claimed below thm 2.1, \\u201cLCC does not require the data to lie precisely on a manifold, and it does not require knowing the intrinsic dimension of the manifold. In fact, similar results hold even when the data only approximately lie on a manifold.\\u201d In practice, a small |C| is often sufficient.\\n[2] Yu, Kai et al. Nonlinear learning using local coordinate coding 2009.\"}",
"{\"title\": \"Revision\", \"comment\": \"Could you please upload the revised version, so I can give it a second look?\\n\\nThanks.\"}",
"{\"title\": \"Thanks for your interest\", \"comment\": \"A: Thanks for pointing it out. We added it in our related work. It would be interesting for considering non i.i.d setting for localized meta-learning from the transfer learning perspective.\"}",
"{\"title\": \"Response to Reviewer #4 Part 2\", \"comment\": \"Q: how to estimate $O_{\\\\alpha,\\\\beta}(\\\\gamma,C)$\", \"a\": \"Thanks for pointing it out. We added it in our related work. It would be interesting for considering non i.i.d setting for localized meta-learning from the transfer learning perspective.\", \"q\": \"typo, the work of Galanti et al. (2016)\"}",
"{\"title\": \"Response to Reviewer #4 Part 1\", \"comment\": \"Thanks for your constructive and valuable comments. We\\u2019ve revised our paper following the suggestions and will explain your concerns in the following.\", \"q\": \"constant term \\u201cE_i[const(n,mi,delta)]\\u201d is unclear, analyze about it, definition of m_{ik}\", \"a\": \"Thanks for your suggestion. We move the explicit quantity into the main content in Thm 3 in Eq (12). This quantity contains two parts, as follows. First, distance between \\\\bar{w}^P (LCC-based prior predictor) defined in Eq. (7) and \\\\hat{w}^P (empirical prior predictor) defined in Eq. (6). We analyzed it below Lemma1 in line173-178. Second, distance between \\\\hat{w}^P(empirical prior predictor) and w^p (expected prior predictor) defined in Eq. (5), we analyzed it below Lemma 2 in line 183-187.\\n\\nThe definition of m_{ik} is below Eq. (5), it means the number of samples for category k in task i.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"### Summary\\n\\u200b\\nThis paper proposes a tight bound in generalization to new tasks in meta-learning framework, by controlling the task prior with Local Coordinate Coding (LCC) prediction. In classification tasks, the algorithm using this bound demonstrates superior performance over other meta-learning methods which are based on PAC-Bayes bounds, but without the proposed prior prediction. \\n\\u200b\\n\\u200b\\n### Strengths\\n- The paper is well written, and maintains a logical flow with the proofs and inference from them.\\n- The idea and intuition for using a learned prior is sound, and is backed by PAC-Bayes theory.\\n- Proposing a tighter generalization bound O(1/m) as opposed to existing bounds of O(1/sqrt(m)) is a meaningful contribution and its efficacy is well shown in the results.\\n\\u200b\\n### Weaknesses\\n- Could the authors comment on how their LCC-basedd prior prediction can be extended to other meta learning setups like regression and reinforcement learning?\\n- The baselines compared with are other PAC-Bayes bounds and successfully justifies the contribution. Could the authors provide a comparison with other meta-learning methods (like [1]) to have a holistic view of where this proposed bound gets this line of work?\\n\\u200b\\n\\u200b\\n#### Minor:\\n- Spellings: \\\"pratical\\\" -> \\\"practical\\\" (pg1, abstract); \\\"varible\\\" -> \\\"variable\\\" (pg 3); \\\"simplifies\\\" -> \\\"simplify\\\" (pg6, optimization of LLC)\\n- [2] seems to be a related work, as instead of using the global prior, they identify the task first (similar to localized prior), and then utilize it for better performance.\\n\\u200b\\n### References\\n[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\\n\\u200b\\n[2] Vuorio, R., Sun, S. H., Hu, H., & Lim, J. J. (2018). Toward multimodal model-agnostic meta-learning. arXiv preprint arXiv:1812.07172.\\n\\u200b\\n\\u200b\\n### Score\\n6 - Weak Accept\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper discusses a theoretical analysis of localized meta-learning. The authors are motivated from a PAC-Bayes perspective of meta-learning wherein the system learns a hyper-prior on the hypothesis space using the posteriors of the weak-learners learnt on their individual tasks. The first contribution of the paper is to construct a PAC-Bayes generalization bound for the case when the hyper-prior and the prior on the new task are both isotropic Gaussian. The second contribution is to develop a localized meta-learning algorithm which predicts a better prior for the weak-learner using the data distribution of the new task. The authors use ideas from metric learning to learn the predictor for the prior using Local Coordinate Coding.\\n\\nI would like to accept this paper. It has very clear contributions to meta-learning. I expect the core idea of splitting a single hyper-prior into anchor-points to have good impact on real problems because the task diversity in most realistic meta-learning scenarios makes it difficult to learn a meaningful hyper-prior.\\n\\nI have a few minor comments which I would like the authors to address.\\n\\n1. Choosing the prior w^P using data from the new task will work well if one has access to a few support samples. How do your experiments reflect this? I think the numerical results in Figure 3-4 are high because the shot is abnormally high (it is 50). It is possible to simply fine-tine a pre-trained network to near perfect accuracy with this much data. This is a major drawback of the predictor for the prior.\\n2. Can you show results for more standard settings, e.g., 5-way, 5-shot? Can you show results on mini-ImageNet?\\n3. One way to build upon Theorem 2 would be to use the generalization error of the predictor Phi_{v^Q}(S) and characterize the number of support samples one necessary for a given number of anchor points.\\n4. The authors note in Section 5.1 that the meta-learning objective is difficult to optimize. It is unsatisfying to use the pre-trained model as an initialization for the meta-training. Can you elaborate more on this?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this work, the authors introduce PAC-Bayesian generalization bounds for Meta-Learning. In their framework, they have a hyper-prior distribution, a class of hyper-posteriors and an algorithm A that takes a sample set Si from task Di and a prior P and returns a posterior Q.\", \"pros\": \"-- Overall, the main text of the paper is finely written.\\n-- The motivation is well articulated.\\n-- The relationship with local coordinate coding seems like an interesting direction.\\n-- The experiments seem sensible. \\n\\nThe novelty of the bound in Thm. 1:\\n-- It seems that several aspects of the high-level derivation methodology of the bound are not new. Instead of applying McAllester\\u2019s bound twice (as done by Galanti et al. 2016, Amit and Meir 2018), the authors employ Catoni\\u2019s bound (a different variant of PAC-Bayes) twice. In addition, the authors apply the standard Gaussian randomization which leads to L2 regularization terms as the capacity terms -- also well known in the literature (see for example Hazan et al. 2013, etc\\u2019). \\n-- I would be happy if the authors point out which parts of their derivations are novel, for instance, the application of local coordinate coding, etc'.\\n\\nI think the authors' claim that their bound (Thm. 1) is tighter than previously existing bounds is a bit questionable:\\n-- There are already PAC-Bayesian bounds with the proposed orders of magnitudes with the exact same coefficients by Catoni 2007 and Pascal Germain et al. 2009. In fact, the authors employ Catoni\\u2019s bound within the appendix. In my opinion, it should be addressed in the main text. \\n\\n-- In addition, their bound has two coefficients that are being ignored in their analysis of its magnitude: c/(1-exp(-c)) (here, c stands for c1 or c2) for the error term and 1/(1-exp(-c)) for the capacity term and an additional constant that depends on n,mi and \\\\delta. In the previous bounds, the coefficients are 1 for the error term and 1 for the capacity terms. I think the paper would greatly benefit from a direct comparison between the two.\\n\\nFor instance, as a direct comparison between the two bounds, I would select c, such that, 1/(1-exp(-c)) is close to 1. However, in this case, the coefficient 1/(1-exp(-c)) of the capacity term is huge and makes the overall capacity term very large, which is not favorable. Therefore, it seems to me that the proposed bound is favorable only when the training error is very small (realizable case) and c can be selected to be large. However, when the training error is assumed to be small, it is well known that the gap between the generalization risk and the empirical risk is of order O(capacity/m) (see Thm. 6.8 (bullet 3) in S. Shalev-Schwarz and S. Ben-David 2014). Again, this is also a property of the original bound by Catoni.\\n\\nFinally, the authors mention that the parameters c1 and c2 control the tradeoff between the empirical error and the capacity terms. I think this is a bit inaccurate, since, c1 and c2 are selected a-priori to the estimation of the error term. Therefore, should be independent of the error term. \\n\\n-- The presented bound has an additional constant \\u201cE_i[const(n,mi,delta)]\\u201d. 
\\nIt is unclear to me what is the magnitude of this quantity. I think it might improve the paper if the authors explicitly analyze this term.\\nFrom the proof of Thm. 2 (Eq. 36) it seems to be of order >= ( O_{\\\\alpha,\\\\beta}(\\\\gamma,C) + (1/m_{ik})^{1/2} )^2. \\nWhat is the definition of m_{ik} (couldn\\u2019t find it, maybe it was defined somewhere -- I think it should have been recalled)? I\\u2019m assuming it is mi or something similar. \\nSecond, I\\u2019m not sure how to estimate the size of O_{\\\\alpha,\\\\beta}(\\\\gamma,C). From Lem. 1 it depends on some arbitrarily selected quantity \\\\epsilon > 0 and |C| is a function of \\\\epsilon, so I\\u2019m not sure how to measure it. \\n------------------------------------\", \"soundness\": \"There are a few things that I'd be happy if clarified regarding Thms. 1 and 2. \\n-- I\\u2019m not sure what is w^Q_i (couldn\\u2019t find its definition anywhere). I guess w^Q_i is the center of Q_i = A(Si,P), where P ~ \\\\mathcal{Q}. How can w^Q_i be free within the inequality without being bound to an expectation over P? Especially since you take an expectation over P within the training error. I guess the correct formulation of the bound is one that has E_{P ~ \\\\mathcal{Q}}[||w^Q_i - w^{\\\\mathcal{Q}}||^2] instead of ||w^Q_i - w^{\\\\mathcal{Q}}||^2. \\nMaybe the issue is somewhere in between Eq 39 and 40, where the authors take an expectation over Pi (which is not clearly defined as well), but not over Qi?\\n-- In Eq. 27 last equality: we have a term of the form: E_v ||w^Q - w^P||^2. Where is v in the distance squared?\\n-- I\\u2019m not sure what is the motivation of applying Eq. 28, the bound should be tighter with the LHS term instead of the RHS.\\n-- In the top half of page 15, P stands for a prior independent of \\\\mathcal{Q}, while in the main text, P~\\\\mathcal{Q}. Therefore, the relationships between different arguments are unclear to me. \\n-- Finally, I think the paper could be improved if the authors would organize the appendix.\", \"experiments\": \"In your experiments you compare the derived algorithm to other PAC-Bayesian baselines. Can you show that your algorithm outperforms other existing baselines in the literature of Meta-Learning (in terms of generalization, etc')?\", \"typos\": \"-- \\u201cCatino\\u201d ---> \\u201cCatoni\\u201d.\\n\\nFinally, I think the authors should address the public comment by Tomer Galanti. It seems that Thm. 9 in Galanti et al. (2016) introduces a PAC-Bayesian theorem for Meta-Learning (they call it transfer learning), similar in its nature to Thm. 1 in the current paper. In Thm. 9 they have a hyperprior, denoted by P, learn a hyper posterior Q, for selecting posteriors qi for many different tasks di. In their framework, for each task di, their method returns a posterior qi that minimizes the bound for a specific task. This distribution, qi, is a function of a prior B selected by Q (P in your notation) and the i\\u2019th task\\u2019s dataset si (Si in your notation). Therefore, instead of denoting it by A(Si,P) as done by the authors, they simply call it a minimizer (but it is a function of the same argument that the authors address). Overall, it seems to me that Galanti et al. had a very similar setting, with different notations and focus on a specific algorithm A. \\n\\n\\nI think that overall, the paper has interesting insights and the relationship with local coordinate coding seems like an interesting direction. 
However, I think the paper should not be published in its current form.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Post-rebuttal:\\n===========\\nThank you to the authors for responding to my review, and for adding the comparison to other meta-learning methods besides Amit et al. (2018), which makes it clearer in which settings this technique outperforms purely discriminative approaches (in particular with few tasks & many samples from each task). However, I would assume that not using support-query partitioning for MAML and Matching Nets is likely to reduce their performance.\\n\\n> \\\"Second, from the algorithm perspective, we use the whole sample set S as the input for LCC-based prior predictor \\\\bar{w}^P = \\\\bar{\\\\Phi}_v(S) .\\\"\\nThanks to the authors for this clarification--it is now clear to me that the experimental setup is not the episodic training setup of Vinyals et al. (2016) that partitions episodes into support and query sets, thus the difficulty of comparison to other few-shot learning methods that use this setup.\\n\\nHowever, this exposes a potential problem with the formulation: As the authors state in the submission, \\\"...the prior P_i in each task has to be chosen independently of the sample set S_i\\\", in alignment with prior works in data-dependent PAC-Bayes bounds [Catoni, 2007; Parrado-Hernandez et al., 2012; Dziugaite & Roy, 2018]. However, even though the authors begin section 4.1 by stating \\\"Fortunately, the PAC-Bayes theorem allows us to choose prior upon the data distribution D_i. Therefore, we propose a prior predictor...which receives task data distribution Dm and outputs the mean of prior wP\\\"; the prior predictor employed is the \\\"empirical prior predictor\\\" that operates directly on the sample set S_i.\\n\\nThis appears to be a contradiction that is not sufificiently addressed in the text (nor in the response to Reviewer #4). To fix this, the authors would have to more clearly explain why their theoretical results do not require the separation of task-specific data into a subset of data used to produce the prior and a subset used in the computation of the bound, or adapt the experimental setting to meet this theoretical requirement (in which case the setup is very similar to the support-query partitioning commonly used to evaluate few-shot learning methods, therefore bringing into question the necessity of using an alternate evaluation protocol to the one that is standard in few-shot learning).\", \"before_rebuttal\": \"=============\\nThe submission makes use of a data-dependent PAC-Bayes bound on the generalization error of a classifier estimated in a few-shot learning setup. The episodic few-shot learning setup from Vinyals et al. (2016) provides a small dataset for each task, partitioned into a support and a query set; at test time, only the labels for the support set are provided. 
The submission takes advantage of this setup by leveraging the support set in the construction of a data-dependent prior, an idea referred to as \\\"locality\\\"; this is in contrast to prior work in PAC-Bayes for hierarchical models (e.g., Pentina & Lampert, 2014; Amit & Meir, 2018) in which the data-dependency enters only across tasks, and not within a task.\", \"strengths\": [\"Coherent formulation of data-dependent PAC-Bayes for a meta-learning setting that partitions episodic data into a support set (used to compute the data-dependent prior) and a query set (used to produce the posterior).\", \"The method outperforms prior approaches constructed from PAC-Bayes generalization bounds (LML; ML-A; ML-AM; ML-PL) on the Caltech-256 and CIFAR-100 datasets.\"], \"weaknesses\": [\"The framing as \\\"localized meta-learning\\\" obscures the lack of difference from the standard partitioning of few-shot episodes into a support and query set.\", \"The proposed method makes heavy use of prior machinery (LCC, prototype-based prior predictor), and as such, the algorithmic novelty is limited.\", \"No comparison is made to approaches that are not constructed using PAC-Bayes generalization bounds (Vinyals et al. 2016; Finn et al. 2017), even though they are readily applied in such settings.\"]}",
"{\"comment\": \"We like your work. Please consider citing \\\"A Theoretical Framework for Deep Transfer Learning\\\" by Galanti et al. 2016, which introduces generalization bounds for transfer learning and PAC-Bayesian bounds in particular. This does not impact the overall novelty of your work.\", \"title\": \"A Theoretical Framework for Deep Transfer Learning\"}"
]
} |
SJxSDxrKDr | Adversarial Training and Provable Defenses: Bridging the Gap | [
"Mislav Balunovic",
"Martin Vechev"
] | We present COLT, a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model neural network training as a procedure which includes both the verifier and the adversary. In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds -- it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation. This significantly improves over the best concurrent results of 54.0% certified robustness and 71.5% accuracy.
| [
"adversarial examples",
"adversarial training",
"provable defense",
"convex relaxations",
"deep learning"
] | Accept (Talk) | https://openreview.net/pdf?id=SJxSDxrKDr | https://openreview.net/forum?id=SJxSDxrKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_O9Ea5Bee",
"B1g7RvSniH",
"HkgbVDojiB",
"SJlJOp15sH",
"rJli321ciH",
"ryesunycsr",
"B1gfN2J9sB",
"rkeFKsk5or",
"S1xH9B9Gor",
"BJgNNQyF5r",
"BJecVs_6KS",
"B1xSsNgtKS",
"ByxXM0cuKH",
"rJe1s3zruS",
"SkxwmsGB_H",
"HJlVRSuADr",
"HJeKpAyTvH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1576798747152,
1573832651490,
1573791529423,
1573678439168,
1573678259041,
1573678194869,
1573678121583,
1573677953216,
1573197196701,
1572561708466,
1571814194187,
1571517597100,
1571495435331,
1570217111088,
1570216734879,
1569781196372,
1569681088992
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2360/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2360/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2360/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2360/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2360/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2360/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2360/Authors"
],
[
"~Greg_Yang1"
],
[
"ICLR.cc/2020/Conference/Paper2360/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2360/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper2360/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2360/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2360/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2360/Authors"
],
[
"~Jeremy_Cohen1"
],
[
"~Anthony_Wittmer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"The reviewers develop a novel technique for training neural networks that are provably robust to adversarial attacks, by combining provable defenses using convex relaxations with latent adversarial attacks that lie in the gap between the convex relaxation and the true realizable set of activations at a layer of the network. The authors show that the resulting procedure is computationally efficient and able to train neural networks to attain SOTA provable robustness to adversarial attacks.\\n\\nThe paper is well written and clearly explains an interesting idea, backed by thorough experiments. The reviewers were in consensus on acceptance and relatively minor concerns were clearly addressed in the rebuttal phase.\\n\\nHence, I strongly recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for the response.\", \"comment\": \"It's great to hear that you have managed to scale your approach to larger networks now. This certainly further strengthens your contribution.\\n\\nI'm also satisfied with the clarification regarding the relatively weak performance of previous provable defences.\"}",
"{\"title\": \"Response to author response (reviewer #4)\", \"comment\": \"Thank you for your response. That answered my questions. I would encourage the information about hyper-parameters and computing power, to be included in the paper. Other than that, I think this is a solid contribution to the literature, and am excited to see future work by the authors.\"}",
"{\"title\": \"Response to the concern\", \"comment\": \"Thank you for your feedback. We believe it is enough to modify our claim that we achieve a \\u201cmodel with state-of-the-art accuracy and certified robustness\\u201d to \\u201cstate-of-the-art neural network\\u201d. A smoothed classifier is not a neural network (e.g. this is explained in related work in [1] and the comment by Jeremy below), so we believe this clarification should help address your comment. We updated the abstract in the newly updated PDF to reflect this.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your feedback. Below we answer the main concerns.\", \"q\": \"The authors' explanation that they couldn't achieve state-of-the-art certified robustness because of smaller network capacity makes sense, however, it also highlights that their protocol doesn't scale as well as previous approaches.\\n\\n\\u2192 We have now managed to scale our approach to larger networks using the approach of Wong et al. (2018) to statistically estimate bounds during training. Please see our main points above for a more detailed description.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your feedback. Below we answer the main concerns.\", \"q\": \"Could there be significant variability in results due to the fact that only 1000 images from the test set were certified?\\n\\n\\u2192 To check the amount of variability, we certified another random subset of 1000 images, with little difference in the results. Please see the main response for the results of this experiment.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for your feedback. Below we answer the main concerns.\", \"q\": \"How do the comparison methods compare in terms of training time/machines used? E.g. do all the methods reported in Table 1 use similar amounts of computing power?\\n\\n\\u2192 Methods in Table 1 are very different in terms of computing power. While it is hard to directly compare them, Gowal et al. report their method takes 3.5 seconds per epoch and Wong et al. takes 2 minutes per epoch on the MNIST dataset. On MNIST, our method takes 2.5 minutes per epoch while training the first layer, 5 minutes per epoch for the second layer and 10 minutes per epoch for the third layer. Our CIFAR-10 networks take roughly 1 day to train on 1 GeForce RTX 2080 Ti GPU.\"}",
"{\"title\": \"Improved results and answers to common questions\", \"comment\": \"We thank the reviewers for their comments. We first explain the improvements we introduced since the time of submission and then proceed to answer the questions raised by the reviewers.\\n\\n## Improved results\\n\\nSince the time of submission we scaled our approach to larger networks and improved the results. On CIFAR-10, these improvements led to training a network with 78.8% standard accuracy (4% improvement) and 58.1% (2.2% improvement) certified robustness for 2/255 perturbations. For 8/255 perturbation, we trained a network with 49.7% (3.5% improvement) standard accuracy and 26.0% (1.6% improvement) certified robustness. We updated the paper with new network and results.\", \"we_list_the_changes_we_incorporated_to_achieve_these_improvements\": \"- We applied random projections from Wong et al. (2018) to statistically estimate the region C_l instead of computing it exactly. We note that the general method still stays the same, so sections 3, 4 and 5 were not changed (except one clarification sentence in section 5), but this instantiation allows it to scale to larger networks. In Appendix B, we provide full derivation of the bounds using random projections. During certification, we also still use the same procedure as before (with exact guarantees, without estimating the bounds).\\n\\n- In the projection operator, we avoid the computation of full matrix A_l and instead only compute matrix-vector product A_l * e by changing the order of computation, which we clarified now in Section 5.\\n \\n- Similarly as Xiao et al., we incorporated additional regularizer to introduce ReLU stability. Our regularizer is slightly different as it is tailored to minimize the volume for the particular linear convex relaxation we are using. The regularizer is explained in Appendix D.\\n\\n## Common questions:\\n\\nR3, R4: Could there be significant variability in results due to the fact that only 1000 images from the test set were certified?\\n\\n\\u2192 First, we clarify that we always evaluate standard accuracy on the full test set. To check the amount of variability in the certification results, we evaluated our CIFAR-10 network with 2/255 perturbation on another random subset consisting of 1000 images. On this subset we can certify 56.7% images, compared to 58.1% on the first 1000 images. In the next revision of our paper, we will evaluate all 10 000 images to match the evaluation setting used by prior work. Given the repetition experiment, we believe the results will not change significantly.\\n\\nR3, R4: Could you add more evaluation to see the performance on a variety of datasets and perturbations?\\n\\n\\u2192Yes. We performed additional experiments on SVHN and MNIST datasets and provided the results in Appendix C. \\nFor SVHN and perturbation 0.01 we also achieve state-of-the-art standard accuracy and certified robustness.\\nFor MNIST and perturbation 0.1 our results are comparable to state-of-the-art, while for perturbation 0.3 our certified accuracy is lower than the one achieved by approaches based on interval propagation. We believe that, because of large perturbation of 0.3, random projections are imprecise and one would need to use the exact bounds which introduces much higher cost at runtime. This is also reflected in the poor performance of Wong et al. (2018) who use the same random projections. 
We believe that instantiating our method with a convex relaxation that is more memory friendly than what we used would likely yield better results.\"}",
"{\"title\": \"A Suggestion for the Abstract\", \"comment\": \"Dear Authors,\\n\\nThanks for an interesting paper. While not comparing to randomized smoothing approaches in the paper is fine, perhaps it is misleading to claim, unconditionally, state-of-the-art accuracies in the abstract. For one, I believe a typical reader would assume the randomized smoothing is compared in the paper based on the abstract. Another reason is that the numbers reported is very far from the unconditional state-of-the-art, which achieves 68% provable accuracy and 87.2% clean accuracy (via SmoothAdv + ImageNet pretraining) [1].\\n\\nI think some simple conditionals here would suffice, such as \\\"nonprobabilistic certificates with no extra data\\\" or something of the sort.\\n\\nI apologize in advance for such an \\\"annoying\\\" comment, but I hope this would only improve the presentation of your paper from the get-go (in the abstract)!\\n\\n[1] https://arxiv.org/abs/1906.04584\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper was very clearly written and easy to follow. Kudos to the authors. In particular, the experimental evaluation section was exceptionally clear. Thanks to the authors for making the paper so easy to review. The \\u201cMain Contributions\\u201d section was excellent as well as it allows the reader to quickly understand what the paper is claiming.\\n\\nThe introduction & related work section was very clear, and seemed to quickly get the reader up to speed.\", \"minor_critiques\": [\"It\\u2019s not clear to me that the network size is actually as impressive an improvement as is implied. Barring an extensive hyper-parameter search that demonstrated that this network architecture is the smallest possible that could achieve the presented results, I strongly suspect that applying techniques from papers like EfficientNet [1] or MobileNet would allow the authors of Mirman et. al (2018) to reduce the number of parameters required to achieve their results. I don\\u2019t think this takes away from the paper, though- the results are strong despite that. I would encourage the authors to weaken the claims that the only better network is 5 times larger.\", \"In general, I would have liked to see more evaluation- e.g. I would have liked to see more results with a variety of perturbations (2 through 8, not just 2 & 8), and on a variety of datasets.\"], \"questions_to_the_authors\": [\"How robust is the algorithm to architecture choice?\", \"What happens if you change the test set? E.g. instead of evaluating on the first 1000 images, what if you evaluate on another random subset? Does that make a difference? I\\u2019m concerned that the subset of the test set the authors are using for evaluation isn\\u2019t representative of the entire test set.\", \"Is the current architecture the largest network that can be run? I would be interested in seeing how network size affects the performance of your technique.\", \"What hyper-parameter tuning did you do? What other network architectures did you try?\", \"How do the comparison methods compare in terms of training time/machines used? E.g. do all the methods reported in Table 1 use similar amounts of computing power?\", \"Overall, this is a great paper, with some interesting results presented in a tight, clear manner. While I would like to see more experiments on larger datasets- e.g. ImageNet- the results seem solid and absolutely worthy of publication.\", \"[1]: https://arxiv.org/abs/1905.11946v2\", \"[2]: https://arxiv.org/abs/1704.04861\"]}",
"{\"comment\": \"I would also add that randomized smoothing has another disadvantage on inference time - since at inference time a randomized smoothing classifier has to average predictions of several random perturbations of the input data, it significantly slows down inference, which can be a deal-breaker in latency critical applications (like for models that power web search, Google translate etc.)\", \"title\": \"Follow-on\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"\", \"summary\": \"This paper provides a promising new general training methodology to obtain provably robust neural networks (towards adversarial input perturbations). The paper provides promising experimental results on CIFAR-10 by obtaining state-of-the-art certified accuracy while also simultaneously improving clean accuracy. The paper is overall well-written and the algorithm is clearly described.\", \"important_questions_to_be_answered\": \"I find the need to clarify my understanding and request for more information in order to make a decision. \\n\\n--Methodology/motivation for the method: I am trying to understand abstractly what the proposed layerwise training is trying to optimize. To be concrete, let's compare to the relaxation of Wong and Kolter (which this paper uses in the instantiation of layerwise adversarial training). What's the exact difference?\", \"one_way_to_view_this_is_the_following\": \"The same training objective, but a different way to optimize. The new proposal to train involves freezing weights until one layer iteratively starting from the input layer. It is possible that this kind of training provides some inductive bias in finding better solutions. Is this an appropriate understanding?\\nHowever, the paper\\u2019s experimental results unfortunately change the certification procedure. In other words, they haven\\u2019t evaluated the same training objective as that of Wong and Kolter. Hence, it\\u2019s not clear if the gains are from the better networks, or better certification method, or network being better suited for certification by the method used. The phrase \\u201csame relaxation\\u201d is not appropriately used. Their certification procedure uses a different (and tighter) relaxation. \\n\\n\\n--Effect on latent adversarial examples: I am unable to understand why this training procedure would reduce the number of latent adversarial examples. The definition of latent adversarial examples seems to suggest that it\\u2019s the gap between the actual set of activations corresponding to the input perturbations and the convex hull. However, the proposed layerwise adversarial training procedure involves replacing the actual set S_i with the convex hull C_i when freezing things till below i-1. I do not follow how the proposed method tries to make C_i = S_i. Implicitly the optimization objective does try to make C_i small because the bounds being optimized are tighter when C_i is small. But this is true even for normal certified training, and not sure what changes in the new training procedure.\", \"specific_experimental_results_that_would_help\": \"--Certified accuracy on using the same LP based certification procedure used in Wong and Kolter with the new layerwise trained networks\\n--The paper\\u2019s own certification procedure (a combination of previous methods) on the network from Wong and Kolter or a note on why that doesn\\u2019t apply (if it doesn\\u2019t)\\n--The paper currently provides only one data point to suggest this training method is superior. Would be good to try SVHN or MNIST. MNIST is perhaps \\u201cessentially\\u201d solved for small \\\\eps. But would be good to see if the training method offers gains at larger \\\\eps. In general, would be good to see more consistent gains. 
\\n--The paper reports results on first 1000 examples of CIFAR10 test set. From my personal experience, there is a lot of variability in the robustness of test examples when evaluated on 1000 random test instances. Especially since the paper doesn't take a random subset, it might be good to make sure the gains are consistent on some other subset. The Wong et al. baseline is evaluated on the entire test set for example, and hence might not be a fair comparison? What's the Wong et al. accuracy on just the first 1000 test exampes\\n\\nOverall, I am leaning towards accept but need some conceptual and empirical clarification from the authors (detailed above).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: the paper introduces a novel protocol for training neural networks that aims at leveraging the empirical benefits of adversarial training while allowing to certify the robustness of the network using the convex relation approach introduced by Wong & Kolter. The key ingredient is a novel algorithm for layer-wise adversarial (re-)training via convex relaxations. On CIFAR-10, the proposed protocol yields new state-of-the-art performance for certifying robustness against L_inf perturbations less than 2/255, and comparable performance over existing methods for perturbations less than 8/255 (where the comparison excludes randomized-smoothing based approaches as proposed by Cohen et al.).\\n\\nThe proposed methodology seems original and novel. The concept of latent adversarial examples, the layer-wise provable optimization techniques and the sparse representation trick are interesting in their own regard and could be valuable ingredients for future work in this direction. The improvement over the state-of-the-art on CIFAR-10 for perturbations less than 2/255 is significant (although I wouldn't call it substantial). For perturbations less than 8/255 the picture is less clear. The authors' explanation that they couldn't achieve state-of-the-art certified robustness because of smaller network capacity makes sense, however, it also highlights that their protocol doesn't scale as well as previous approaches.\\n\\nI am not concerned about the missing comparison with randomized smoothing-based approaches (I find the rationale provided in Section 2 convincing).\\n\\nThe discussion of the relatively weak performance of previous provable defenses on page 3 is a bit vague, e.g. the statement that \\\"the way these methods construct the loss makes the relationship between the loss and the network parameters significantly more complex than in standard training\\\", thus causing the \\\"resulting optimization problem to be more difficult\\\". To me, these are one and the same thing, and a bit more rigour in the argumentation would be advisable here, in my opinion.\\n\\n-------------\\n\\nI acknowledge I have read the authors' response and also the other reviews/comments which confirm my opinion that this paper is worthy to be published at ICLR.\"}",
"{\"comment\": \"Dear Anthony,\\n\\nThanks for your interest in our work. Below we respond to your main concerns:\", \"q\": \"\\u201cThe idea of combining the adversarial training and the provable defense is not vey novel. Previous work [2] has combimed the adversarial training and the provable defense (randomized smoothing) to boost the provable robustness.\\u201d\\n\\n\\u2192 In this work, we propose the combination of adversarial training and certification of neural networks with exact guarantees. To the best of our knowledge, such a combination was not considered in prior work. The work you mention uses adversarial training to improve smoothed classifier, which ultimately provides stronger *probabilistic* guarantees for the same smoothed classifier. As mentioned in the previous question, in light of this, we believe our combination is novel. We will further clarify the differences with the smoothing based approach.\\n\\nThe authors\", \"title\": \"Response to main concerns\"}",
"{\"comment\": \"Dear Jeremy, thank you for your helpful clarifications of the limitations of randomized smoothing and differences with certification of neural network classifiers. We certainly agree that both research directions are worth pursuing.\\n\\nThe authors\", \"title\": \"Thank you for helpful clarifications\"}",
"{\"comment\": \"Hi Anthony,\\n\\n(I don't know the authors of this submission, I just came across your comment.) There are two aspects of randomized smoothing that are unsatisfying:\\n (1) We don't certify the robustness of a neural network, we certify the robustness of a smoothed neural network g, which is a (deterministic) classifier whose predictions cannot be evaluated exactly, only approximated to arbitrarily high confidence. Alternatively, you could view the Monte Carlo approximation to the smoothed neural network (i.e. a classifier which returns the majority vote of the base classifier over 1000 randomly corrupted inputs) as a randomized classifier g_hat, and you could say that we \\\"probabilistically\\\" certify the robustness of this classifier g_hat, in that we give guarantees of the form: for every input x+delta in a ball around x of radius R, 99% of the time when you evaluate g_hat at (x+delta) you would see g_hat(x+delta) .= cA.\\n\\n(2) Our certification procedure for certifying the robustness of g around x is also probabilistic, in the sense that there is always some probability that it will \\\"fail\\\", by returning a radius larger than the radius in which g is truly robust. In our paper, we set this failure probability so low that there is absolutely no doubt that the true certified accuracy of g is more than a hair away from the \\\"approximate\\\" certified accuracies that we reported in the paper. But it is still sort of unsatisfying to not be able to deterministically certify the robustness of the smoothed classifier.\\n\\nGiven these disadvantages of randomized smoothing, I certainly think that research on the problem of certifying neural network classifiers is worthwhile in its own right, even though the numbers don't currently match those of randomized smoothing.\", \"title\": \"different setttings\"}",
"{\"comment\": \"Hi, it is a great work but I have some questions about some claim in this paper.\\n\\nIn this paper, the claim about \\\"We do not compare to smoothing-based approaches Cohen et al. (2019)[1], as these provide probabilistic instead of exact guarantees.\\\" may not be true. \\n\\nFor an input image, the model of Randomized Smoothing can give a robustness radius R, where for any perturbations $\\\\left \\\\| \\\\delta \\\\right \\\\|_2 \\\\leq R$, the model can provide the robustness guarantee. So I do not know why the authors make the above claim. \\n\\nMoreover, Cohen et al. have compared their smoothing method with the method of Wong et al. (2018), which is a baseline in this paper. So I think it necessary to compare to smoothing-based approaches, .Cohen et al. (2019) and [2]. In addition, randomized smoothing is currently the only approach that can provide provable robustness guarantees on ImageNet-scale problems.\\n\\nThe idea of combining the adversarial training and the provable defense is not vey novel. Previous work [2] has combimed the adversarial training and the provable defense (randomized smoothing) to boost the provable robustness.\\n\\n\\n[1] Certified Adversarial Robustness via Randomized Smoothing. ICML 2019\\n[2] Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers. upcoming NeurIPS 2019\", \"title\": \"Questions about the claim\"}"
]
} |
SyeHPgHFDr | Finding Deep Local Optima Using Network Pruning | [
"Yangzi Guo",
"Yiyuan She",
"Ying Nian Wu",
"Adrian Barbu"
] | Artificial neural networks (ANNs) are very popular nowadays and offer reliable solutions to many classification problems. However, training deep neural networks (DNNs) is time-consuming due to the large number of parameters. Recent research indicates that these DNNs might be over-parameterized, and different solutions have been proposed to reduce the complexity both in the number of parameters and in the training time of the neural networks. Furthermore, some researchers argue that after reducing the neural network complexity via connection pruning, the remaining weights are irrelevant and retraining the sub-network would obtain comparable accuracy to the original one.
This may hold true in most vision problems, where we always enjoy a large number of training samples and research indicates that most local optima of convolutional neural networks may be equivalent. However, in non-vision sparse datasets, especially with many irrelevant features where a standard neural network would overfit, this might not be the case and there might be many non-equivalent local optima. This paper presents empirical evidence for these statements and an empirical study of the learnability of neural networks (NNs) on some challenging non-linear real and simulated data with irrelevant variables.
Our simulation experiments indicate that the cross-entropy loss function on XOR-like data has many local optima, and the number of local optima grows exponentially with the number of irrelevant variables.
We also introduce a connection pruning method to improve the capability of NNs to find a deep local minimum even when there are irrelevant variables.
Furthermore, the performance of the discovered sparse sub-network degrades considerably when retrained either from scratch or from the corresponding original initialization, due to the existence of many bad optima nearby.
Finally, we show that the performance of neural networks for real-world experiments on sparse datasets can be recovered or even improved by discovering a good sub-network architecture via connection pruning. | [
"network pruning",
"non-convex optimization"
] | Reject | https://openreview.net/pdf?id=SyeHPgHFDr | https://openreview.net/forum?id=SyeHPgHFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"UF9ZlhlBAl",
"H1lOu8zkcS",
"H1exfgYCYB",
"SkgJL1KqKr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747121,
1571919471633,
1571880968098,
1571618631294
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2359/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2359/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2359/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper provides empirical evidence on synthetic examples with a focus on understanding the relationship between the number of \\u201cgood\\u201d local minima and number of irrelevant features. The reviewers find the problem discussed to be important. One of the reviewers has pointed out that the paper does not present deep insights and is more suitable for workshops. The authors did not provide a rebuttal, and it appears that the reviewers opinion has not changed.\\n\\nThe current score is clearly not sufficient to accept this paper in its current form. Due to this reason, I recommend to reject this paper.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper addresses the very important topic of local minima in Deep Learning. This is one of the central questions in the theory of Deep Learning for the last years, and despite many interesting results the main questions remain wide open.\\nThe reviewer really likes the approach proposed in the paper, to use a simple model and an artificially generated data to study a certain phenomenon. The reviewer represents the opinion that more focus on such setups would greatly benefit the community in terms of progressing the theoretical understanding.\\nThe claim made in the paper that there is a relationship between the number/suboptimality of local minima and the scarcity of the data is both convincing and interesting. The result is well motivated and explained.\\nWhat the reviewer thinks the paper would greatly benefit from would be improving the Related Work section. There was a lot of valuable work in the field done in the past years that ids very relevant to the results presented that is not mentioned.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This submission studies losses at local minima of a set of neural networks trained on an XOR-like synthetic dataset, finds that local minima are of varying quality, and proposes a network pruning method to find better local minima. The pruning method is evaluated on XOR-like datasets as well as real-world datasets.\\n\\nThe use of an XOR-like dataset to study loss landscapes is interesting, making for a controlled and analyzable setting to carry out the study. The way the authors set it up, the XOR-like problem involves nuisance variables that naturally introduce suboptimal local minima into the loss landscape (this is my observation as a reviewer -- I am not sure if the authors were aware of this). I am unsure if Section 2 of the paper was intended as a core contribution or as a motivation for the pruning algorithm proposed in Section 3. Given the set-up\\u2019s simplicity, a short theoretical argument (maybe even a theorem) about the quality and number of local minima one would expect to find could have been more concise and compelling than the empirical analysis from the paper. The findings from Section 2 may not be surprising enough to warrant two full pages. \\n\\nSection 3 proposes a network pruning method to find better local minima. The authors cite a paper by Adrian Barbu as the inspiration for their pruning algorithm with annealing, and use it \\u201cto improve the capability of NNs to find a deep local minimum even when there are irrelevant variables\\u201d. The cited paper by Barbu as well as https://arxiv.org/pdf/1805.01930.pdf (also by Adrian Barbu, not cited, maybe because it appeared) explore feature selection and regularization with (nearly) the same annealed pruning algorithm in some detail. I would be grateful if the authors could highlight the differences between their work and Barbu\\u2019s. \\n\\nI vote to \\u201cweak reject\\u201d this paper. The paper discusses interesting ideas, but other ICLR submissions present deeper and more novel material, and there appears to be some (unintentional, I believe) overlap with already-published work. I recommend that the authors cite and discuss https://arxiv.org/pdf/1805.01930.pdf , and possibly submit the paper at a less competitive conference. \\n\\n\\nFurther comments / questions / advice\\n=================================\\n\\n- It would be helpful if the authors made more clear what they consider the key contributions of their paper. If contributions build directly on earlier work, it\\u2019s helpful to highlight the differences. \\n\\n- Section 4.2 states that datasets were \\u201ccarefully selected\\u201d in what sounds like a case-by-case basis, probably with the goal of finding data sets on which CPNA outperforms networks trained with vanilla gradient descent methods. This process would have selection bias and surface data sets on which CPNA outperforms. I could be grateful if the authors could clarify if this was indeed the process, or if a less biased criterion was used. For example, one could have chosen data sets on which a 1-layer fully connected neural network achieves between 50% and 90% F-1. 
\\n\\n- A reader of the paper might wonder for what data sets they should use CPNA in order to train network that achieves low out-of-sample loss. I could be grateful if the authors could comment on this. Following up on the previous point: it would be great the authors could include data sets where CPNA does not outperform. \\n\\n- Could the authors include information on how long training takes for the experiments from Table 3? \\n\\n- https://openreview.net/pdf?id=HkghWScuoQ should probably be cited\\n\\n- https://arxiv.org/pdf/1805.01930.pdf should definitely be cited\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a few sets of experiments related to deep local optima. The first set of experiments are to study the local minima of two layer neural networks with ReLU activation for the hidden layer and specialized for XOR problem. The paper claims that it is quite easy to find a deep local minimum with good generalization when the number of irrelevant features is small, and it becomes harder to find a deep minimum with good generation as the number of irrelevant features increases. It also claims there is a large difference between the test AUC of the best local minimum and the worse one if the training data is difficult.\\n\\nThe second experiment set is about pruning fully connected neural networks to find deeper and better optima. The proposed pruning method employs a annealing schedule and iteratively pruning connections to reduce features and nodes. For XOR datasets, the pruning seems be effective. For several real datasets, pruned models are better than original or equivalent models. \\n\\nOne thing concerns me is that there are a lot of experiment settings seem to be arbitrary. For instance, why use 500 hidden nodes, p is 4, 16, then 100, ...It will be better to explain why those setting are representative so the statements derived from those are valid. \\n\\nFor Figure 1 and 2, why switch sequence? The top 3 subfigures in Figure 1 is AUC, but the top 3 subfigures of Figure 2 is Loss. It is a bit confusing. \\n\\nThe paper is interesting and the experiments are comprehensive. I think the results and conclusion are specific for FC networks. It will be more interesting to study on CNN, etc. Overall, I am a bit concerned with the significance of this paper.\"}"
]
} |
S1ervgHFwS | Adversarial Training Generalizes Data-dependent Spectral Norm Regularization | [
"Kevin Roth",
"Yannic Kilcher",
"Thomas Hofmann"
] | We establish a theoretical link between adversarial training and operator norm regularization for deep neural networks. Specifically, we present a data-dependent variant of spectral norm regularization and prove that it is equivalent to adversarial training based on a specific $\ell_2$-norm constrained projected gradient ascent attack. This fundamental connection confirms the long-standing argument that a network's sensitivity to adversarial examples is tied to its spectral properties and hints at novel ways to robustify and defend against adversarial attacks. We provide extensive empirical evidence to support our theoretical results. | [
"Adversarial Robustness",
"Adversarial Training",
"Spectral Norm Regularization"
] | Reject | https://openreview.net/pdf?id=S1ervgHFwS | https://openreview.net/forum?id=S1ervgHFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"H8JxNMBdVe",
"r1eOQ7j2iB",
"BylmbysnjS",
"BklZ9jKKsS",
"rJetHsYKoH",
"S1lb22MUjS",
"HJgdP1MIoS",
"H1lL7yzIir",
"H1g6ACW8jS",
"HJxvL6Z8iB",
"r1lbZ6bUir",
"B1eJ6jbIjS",
"B1xIuisysH",
"SyeD7eqHcH",
"B1laVbdjYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747091,
1573856032031,
1573854971319,
1573653385047,
1573653313391,
1573428392548,
1573424991929,
1573424925691,
1573424852544,
1573424462676,
1573424377255,
1573424055010,
1573006189878,
1572343838566,
1571680564688
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2358/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2358/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2358/AnonReviewer4"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper shows an theoretical equivalence between the L2 PGD adversarial training and operator norm regularization. It gives an interesting observation and support it from both theoretical arguments and practical experiments. There has been a significant discussion between the reviewers and authors. Although the authors made efforts in rebuttal, it still leaves many places to improve and clarify, especially in improving the mathematical rigor of the proof and experiments using state-of-the-art networks.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Amended plot for large alpha\", \"comment\": \"Dear Reviewer\\n\\nIn response to your comment wondering what happens in the practical (PGA) algorithm when \\\\alpha goes to infinity, we empirically tested that the effect of Adversarial Training remains constant when provided with consecutively larger \\\\alpha-values. Please refer to the last plot in the Appendix in the revised version of our paper.\\n\\nPlease also have a look at our response to your review below.\\nThank you for your time.\"}",
"{\"title\": \"Additional Results\", \"comment\": \"Dear Reviewer\\nIn response to your comments - and in parallel to formulating our response - we had started replicating our experiments on a WResNet architecture. In clean training, this reaches 96.3% accuracy on CIFAR10 after 200 epochs of training. We hoped that this platform would satisfy the reviewer's request for evaluation on state-of-the-art models.\\nHowever, due to the high computational requirements of this model, our evaluations did not finish in time for this rebuttal period. Specifically, for both AT and ddSNR we only managed to perform 95 epochs, leaving the model training in an unfinished state.\\nWe do not want to include half-done results in our paper, but we feel it contributes to the discussion if we state here that as of now, there appears to be no significant difference in adversarial robustness between the adversarially trained variant and a model trained using ddSNR (that has an equivalent drop in clean accuracy as the one trained using AT). This is further evidence in favor of our main claim.\", \"accuracy_on_clean_samples_after_95_epochs\": \"\", \"non_regularized_model\": \"0.310\", \"with_adversarial_training\": \"0.625\\nWith d.d. Spectral Norm Regularization: 0.633\\n\\nWe would like to stress again that the purpose of this is to provide evidence for the reasonable assumption that our experimental findings do not change qualitatively when moving to this much more complex architecture, not to attempt to outcompete any other training method. We hope this is satisfactory. We are happy to include the final results in the camera-ready version of the paper.\\nPlease also see our response to your response below.\", \"accuracy_on_adversarial_samples_after_95_epochs\": \"\"}",
"{\"title\": \"Author Response to Comment (Part 1 of 2)\", \"comment\": \"*This is part 1 of 2 of our response*\\n\\nThank you very much for engaging with our comments, we highly appreciate this.\\nPlease find our comments below.\\n\\n\\n>> Regarding the theoretical analysis:\\nThe claim of theorem 1 is ambiguous, I mean, it\\u2019s not precise. Can you give some precise condition for epsilon and alpha, like the upper bound of epsilon and lower bound of alpha for your theorem to hold even under some idealized case like two-layer ReLU network? The current way of presentation is NOT a theorem, but a proposition or intuition.\\n\\nWe respectfully disagree. The conditions in Theorem 1 *are* precise. Namely, Theorem 1 states that there is an exact correspondence between AT and ddSNR if (i) B\\u03b5(x) \\u2282 X(\\u03c6_x), i.e. if the epsilon-ball is contained in the ReLU cell X(\\u03c6_x) around x and (ii) if \\\\alpha \\\\to \\\\infty, i.e. if in the update equation for x_k all the weight is placed on the current gradient direction v_k whereas no weight is put on the previous iterate x_{k\\u22121}, as was clearly stated in the paper.\\nAlso our Theorem does not hold only in some idealized case, it holds in *any* ReLU network.\\nAnd for this statement, we provide a formal proof. How is this not a theorem?\\n\\nWhat we believe the reviewer might have in mind instead is an \\u201cextended proof\\u201d for the \\u201capproximate correspondence\\u201d between AT and ddSNR. \\n\\nSuch an \\u201cextended proof\\u201d would ultimately boil down to introducing a tolerance parameter, say \\\\delta, deriving a radius r such that for all x* with || x* - x || < r, the Jacobian J_f(x*) is \\u201c\\\\delta-close\\u201d to J_f(x), and then proving that the correspondence between AT and ddSNR holds \\\\delta\\u2019-approximately (where \\\\delta\\u2019 depends on \\\\delta). \\n\\nProving such an extension is highly non-trivial (one would have to take into account how much \\u201cnearby\\u201d Jacobians can change based on the crossing of ReLU boundaries) and thus out-of-scope of the current paper. We instead opted to verify this \\u201capproximate correspondence\\u201d experimentally, showing that in practice, the correspondence between AT and d.d. SNR holds approximately in a region much larger than proved in the Theorem, as already discussed in our previous reply.\\n\\n\\n>> Regarding the core contribution and the experiments:\\nIn my opinion, I haven\\u2019t found the information that this paper can take to the whole community. Like, for example, for practitioner, there is no motivations to use the current methods instead of the adversarial training. For theorists, this paper brought no idea on how to inspire and improve the theory on adversarial training and robust generalization. I expected the authors to achieve either of it, but the theories in this paper DO NOT mean to solve the theoretical problem I mentioned, and I think the proposed methods DO NOT outperform the existing baseline methods on both the efficiency and the accuracy.\", \"here_is_what_a_practitioner_could_ask_themselves\": \"Should I go through the effort to add data-dependent spectral norm regularization in addition to adversarial training on my network to make it more robust? 
Thanks to us, they now know that this won\\u2019t be very fruitful because the two methods do the same thing.\\n\\nFor theoreticians, the case is even more clear: There have been numerous papers that have shown an empirical correlation between spectral norm regularization and adversarial robustness, yet none of them managed to make a clear formal connection between the two. We establish the first direct proof of this connection.\\n\\nLastly, again, we never claim that our methods outperform AT, or that they are in any way preferrable. We also do not claim to solve the learning theory problem of deriving adversarially robust generalization bounds, although we do believe that the correspondence we establish opens the door for such bounds via generalizations of existing global spectral norm based ones [e.g. Bartlett et al. \\u201cSpectrally-normalized margin bounds for neural networks\\u201d or Neyshabur et al. \\u201cNorm-based capacity control in neural networks\\u201d] to our new notion of data-dependent spectral norm regularization.\"}",
"{\"title\": \"Author Response to Comment (Part 2 of 2)\", \"comment\": \"*This is part 2 of 2 of our response*\\n\\n>> For the claim on the experiments, as the concerns I mentioned, if this methods do not consistently show the improvement of the current methods on the existing state-of-the-art methods, I don\\u2019t find the meaning of using such of the methods. \\nFor state-of-the-art methods, I would like to see some of the top methods in https://paperswithcode.com/sota/image-classification-on-cifar-10. \\n93.5% clean accuracy does not mean state-of-the-art performance to me. \\nAlso, most of the experiments aims to show that the condition to show the equivalence between the proposed methods and adversarial training can be satisfied in some real world applications and give some interesting empirical observations.\\n\\nWe would like to stress again that our goal is not to outperform any existing methods. In fact, it would be contrary to our main claim if ddSNR would outperform AT. Instead, our goal is to verify the relevance of our Theorem in a practical setting, by empirically confirming the conditions necessary for the correspondence between AT and ddSNR to hold. By establishing this correspondence in theory and practice, our paper contributes significantly to the understanding of adversarial robustness.\\n\\nMore experiments can of course always be requested, but we believe that confirming our main claims in practice is more important and valuable than including further architectures.\\nIs there a reasonable expectation why there should be a qualitative difference in experimental results between our 93.5%-accurate model and one that is, say, 96% accurate? Our goal is not to improve the state-of-the-art in adversarial robustness, but to compare all methods fairly on a platform that performs competitively, even if it's not at the very top of the leaderboard. \\nThe CNN architecture we used is similar to the one used in the prominent works of Carlini & Wagner\\u2019s \\u201cTowards evaluating the robustness of neural networks\\u201d, In Security and Privacy (SP), IEEE, 2017 and Papernot et al.\\u2019s \\u201cDistillation as a defense to adversarial perturbations against deep neural networks\\u201d, In Security and Privacy (SP), IEEE, 2016, both reporting a clean test accuracy of 80.9% for standard training without data-augmentation.\\n\\nOur accuracies (on clean test) after AT / SNR (~83%) match the ones in related papers that use the architectures requested by the reviewer, e.g. 79% (AlexNet), 83% (ResNet) in Farzan et al. [1] (which the reviewer referred to multiple times), or 79% (ResNet), 87% (WideResNet) in Madry et al. \\u201cTowards Deep Learning Models Resistant to Adversarial Attacks\\u201d.\\n\\n\\n>> For the detailed feedback on the experiments:\\nI miss some of the claims in Section 5.2, so I misunderstood the purpose of Appendix A.5. Sorry for that.\\n\\nThank you for the feedback. We are glad to hear that we could clarify these misunderstandings.\\n\\n\\n>> For the fairness claim, the authors claimed to tune the hyper-parameters to make sure the regularization methods have the similar test accuracy. Is this shown means the result in Table 1? I tend to have as little hyper-parameters as possible. The current hyper-paremeter setting seems weird to me. Also, if 1 iteration and 10 iterations performs the same, I prefer to use 10 iterations to eliminate the potential question.\\n\\nThe purpose of Table 1 is to facilitate reproducibility of our results. 
Table 1 summarizes the hyperparameters we have found following our protocol to choose the regularization constants such that the models achieve roughly the same test set accuracy on clean examples as the adversarially trained model does. \\n\\nFor each training method we did a sweep over a relatively broad range of hyper-parameters and the numbers we report represent the best configurations for each of the training methods. For your convenience, we have amended the appendix to include a table of searched values for each method.\\n\\nFor global SNR \\u00e0 la Yoshida & Miayto, our aim was to stay as close as possible to the authors' suggestions, as indicated already in our previous reply. For your convenience, we have repeated the main experiments with global SNR using 10 iterations instead of one. Please find the plots in the updated appendix. There is no difference between the 1- and 10-iteration versions.\"}",
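For readers who want the 1-iteration protocol above made concrete: because global (data-independent) SNR regularizes the weight matrices directly, the power-iteration vector can be cached per layer and warm-started across parameter updates, which is why a single iteration per update suffices in the Yoshida & Miyato scheme. A minimal sketch, assuming PyTorch; the helper name and details are our illustration, not code from the paper:

```python
import torch

def global_snr_penalty(model, n_iters=1):
    """Sum of squared spectral norms of the weight matrices (data-INdependent SNR),
    estimated by power iteration. In practice u would be cached per layer and
    warm-started across updates, so n_iters=1 is adequate."""
    penalty = 0.0
    for p in model.parameters():
        if p.dim() < 2:
            continue  # skip biases and other vector-shaped parameters
        W = p.reshape(p.shape[0], -1)
        u = torch.randn(W.shape[0], device=W.device, dtype=W.dtype)
        u = u / u.norm()
        with torch.no_grad():  # u, v are treated as constants (stop-gradient)
            for _ in range(max(n_iters, 1)):
                v = W.t() @ u
                v = v / (v.norm() + 1e-12)
                u = W @ v
                u = u / (u.norm() + 1e-12)
        sigma = u @ (W @ v)  # approximates sigma_max(W); gradient flows through W only
        penalty = penalty + sigma ** 2
    return penalty
```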
"{\"title\": \"Thanks for your response. Below are my ideas.\", \"comment\": \"Regarding the theoretical analysis:\\nBy saying that the claim of theorem 1 is ambiguous, I mean, it\\u2019s not precise. Can you give some precise condition for epsilon and alpha, like the upper bound of epsilon and lower bound of alpha for your theorem to hold even under some idealized case like two-layer ReLU network? The current way of presentation is NOT a theorem, but a proposition or intuition.\", \"regarding_the_core_contribution_and_the_experiments\": \"In my opinion, I haven\\u2019t found the information that this paper can take to the whole community. Like, for example, for practitioner, there is no motivations to use the current methods instead of the adversarial training, as there are several existing efficient implementations of adversarial training that the authors claimed not worse than the proposed methods. For theorists, this paper brought no idea on how to inspire and improve the theory on adversarial training and robust generalization. I expected the authors to achieve either of it, but the theories in this paper DO NOT mean to solve the theoretical problem I mentioned, and I think the proposed methods DO NOT outperform the existing baseline methods on both the efficiency and the accuracy. So I tend to reject this paper.\\n\\nFor the claim on the experiments, as the concerns I mentioned, if this methods do not consistently show the improvement of the current methods on the existing state-of-the-art methods, I don\\u2019t find the meaning of using such of the methods. For state-of-the-art methods, I would like to see some of the top methods in https://paperswithcode.com/sota/image-classification-on-cifar-10. 93.5% clean accuracy does not mean state-of-the-art performance to me. Also, most of the experiments aims to show that the condition to show the equivalence between the proposed methods and adversarial training can be satisfied in some real world applications and give some interesting empirical observations.\", \"for_the_detailed_feedback_on_the_experiments\": \"I miss some of the claims in Section 5.2, so I misunderstood the purpose of Appendix A.5. Sorry for that.\\n\\nFor the fairness claim, the authors claimed to tune the hyper-parameters to make sure the regularization methods have the similar test accuracy. Is this shown means the result in Table 1? I tend to have as little hyper-parameters as possible. The current hyper-paremeter setting seems weird to me. Also, if 1 iteration and 10 iterations performs the same, I prefer to use 10 iterations to eliminate the potential question.\\n\\nIn summary, due to the reason I mentioned, I tend to reject this paper. But if all of the other reviewers think this should be accepted, I will follow their ideas.\"}",
"{\"title\": \"Author Response to Review (Part 1 of 3)\", \"comment\": \"*This is part 1 of 3 of our response*\\n\\nBefore we begin to address the individual comments, we would like to emphasize that the majority of them are already addressed in our paper, including a citation to Farzan et al. [1], which the reviewer refers to multiple times. Most importantly, Farzan et al. [1] (among many others) study \\u201cspectral normalization of the DNN\\u2019s weight matrices\\u201d, i.e. data-INdependent SNR (similar to Yoshida & Miyato - which they also cite and which we have implemented in our paper). \\n\\nWe would like to stress that this is fundamentally different from what we do, which is data-dependent SNR. Notably, data-INdependent SNR can only establish and minimize a loose upper bound on the data-dependent spectral norm. In particular, it cannot account for / does not explain adversarial robustness (see Section 5.4, Figures 3 (left) and 5 (left)). \\n\\nOur data-dependent variant, on the other hand, is a much stronger regularizer. In fact, we show equivalence with adversarial training, which none of the other previous works can establish. Hence, the data-dependent SNR introduced in our paper is not simply a competing method, but a significant generalization.\\n\\nFrom reading the reviewer's comments, we believe that we may not have put enough emphasis in our paper to make this distinction clear. We hope that through this discussion, we can resolve these concerns. We will improve the writing in our paper accordingly.\\n\\n>> The authors only give a data-dependent version of SNR based on the Jacobian of the neural network, which I think is somewhat weak. \\n\\nNote that our presented d.d. SNR is a stronger notion of SNR than any of the previous works, including [1], which the reviewer refers to. \\nWe show this in theory, as our method can be proven to be equivalent to AT under certain conditions, while previous work cannot do that. And we explicitly show in practice that spectral normalization of the DNN\\u2019s weight matrices, as studied by [1], cannot account for adversarial robustness (see Figures 3 (left) and 5 (left)), whereas our data-dependent SNR variant does.\\n\\n>> The fast computational of maximum singular value with power methods have been proposed in [1]. \\n\\nThe power-method based computation of dominant singular values has been known for almost a century now (von Mises 1929). And even in the context of adversarial robustness, it has been studied before [1], see e.g. Yoshida & Miato (which is also cited by [1]).\\n\\nAs such, we do not claim power method based regularization of singular values to be a novel contribution. However, we are the first to provide a power method based formulation of AT and establish a theoretical equivalence between AT and d.d. SNR.\\n\\n>> The experiments are limited with specific settings that are not generally used in practice. Also, the experiment section contain several not so important information.\\n\\nOur model architecture and hyperparameter settings come from publically available and widely used settings and reach comparable performance to state-of-the-art models, while still being feasible to do research on without huge resource requirements.\\n\\nNote that our goal is not to outperform adversarial training, but to show its equivalence to d.d. SNR. We believe our experimental section does show this very thoroughly. 
Further, what the reviewer calls \\\"not so important information\\\" is actually extremely vital, since it shows that the conditions of our Theorem are well fulfilled in practice. If we wanted to suggest a new method for robustifying networks and improve over adversarial training, the reviewer would be entirely correct and the experimental section should look very different, but that would be contrary to our main Theorem. Our experimental section is aimed at showing that d.d. SNR corresponds to AT in a practical setting and that it does so in accordance with our theory, as we empirically confirm the conditions necessary for our theory to hold.\\n\\nWe have received and continue to receive praise for the thorough experimental evaluation in this paper from other researchers, precisely because it achieves what it is supposed to achieve. Hopefully, given this new perspective, the choice of our experiments makes more sense.\"}",
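To make the looseness of the weight-matrix bound concrete: for a ReLU network, the input-output Jacobian at a point factorizes through the 0/1 activation pattern, so the data-dependent spectral norm is pointwise bounded by the product of the layer norms. Written out (our notation, kept deliberately generic):

```latex
% For f(x) = W_L \phi(W_{L-1} \cdots \phi(W_1 x)) with ReLU \phi and x in the
% interior of a ReLU cell,
\[
  J_f(x) = W_L\, D_{L-1}(x)\, W_{L-1} \cdots D_1(x)\, W_1,
  \qquad D_l(x) = \mathrm{diag}\bigl(\mathbf{1}[\text{unit $i$ of layer $l$ active at } x]\bigr),
\]
% and since each \|D_l(x)\|_2 \le 1,
\[
  \sigma_{\max}\bigl(J_f(x)\bigr) \;\le\; \prod_{l=1}^{L} \sigma_{\max}(W_l).
\]
% Data-independent SNR can only shrink the right-hand side, uniformly over all x;
% data-dependent SNR targets the left-hand side at the data points themselves.
```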
"{\"title\": \"Author Response to Review (Part 2 of 3)\", \"comment\": \"*This is part 2 of 3 of our response*\\n\\n>> Detailed comments:\\n\\n>> 1. I think the claim of theorem 1 is somewhat ambiguous. How to guarantee there exists such epsilon satisfies this condition? What will happen if \\\\alpha is not sufficiently large? If we don\\u2019t use logits pairing and \\\\ell_2 norm constraint, will the claim hold? I think the correlation behinds the spectral norm and adversarial training is well investigated and use this correlation as the intuition behinds work is enough. \\n\\nFirst, we disagree strongly that this is well investigated and that using the correlation as an intuition is enough. Just because previous work has measured a correlation between two things does not mean further research is unnecessary. Establishing the reason behind such a correlation in a formal manner is a definite step forward in our understanding of this phenomenon. None of the previous work has been able to make this theoretical link - or even the necessary insistence that data-dependence is needed to establish this connection.\\n\\nTo address the specific questions. \\nSince every datapoint exhibits some configuration of activations of the nonlinearities in the network, the existence of the required epsilon-ball is guaranteed by definition. \\nThe requirement of alpha to be large is a weak assumption and one that is well-defined because of the projection operator. We took account of this in the paragraph following the theorem. That being said, for finite alpha, the theorem will not hold exactly, but approximately. This is why our experimental section is geared towards showing empirically (with alpha < infinity) what the theorem claims theoretically (for alpha -> infinity).\\nExtending this work to other settings, such as other \\\\ell_p norms is a non-trivial extension to this work, which we are currently investigating. \\n\\n>> 2. The global SNR only needs to calculate the spectral norm of each layer\\u2019s weight matrix, whose computational cost is acceptable. However, to calculate the Jacobian and use the power methods, we will additionally do several forward pass and backward pass just as AT. As a regularization technique, is this calculation tolerable? If this is some variant of AT, I don\\u2019t find the experiment results support the claim that it will outperform AT consistantly.\\n\\nWe stress again that we never aimed at, nor claimed that, our method outperforms AT. In fact, our experiments show that they are on par, supporting our Theorem that there is a correspondence between the two.\\n\\nThe reviewer is correct that data-dependent SNR is computationally more costly than global SNR, which we state in our paper. In detail, one power-method iteration of d.d. SNR is as costly as one power-method iteration of global SNR, as they involve the same number of matrix-vector products. The reason why d.d. SNR is ultimately a constant factor more costly compared to global SNR is because in global SNR the computation of the regularizer is data-independent (it decouples from the empirical loss), hence the power-method iterations can be amortized across data-points. \\nThat said, data-dependent SNR is equally costly as PGD-based AT. This again supports our claim that the two correspond to each other and as such, yes, the calculation is tolerable.\"}",
"{\"title\": \"Author Response to Review (Part 3 of 3)\", \"comment\": \"*This is part 3 of 3 of our response*\\n\\n>> 3. Why don\\u2019t use some standard neural network architecture like ResNet? Also, are the comparisons fair? For example, the regularization coefficient of global SNR and d.d. SNR are different. And the authors use only 1 iteration to calculate the singular value in global spectral norm regularization, why to do that? \\n\\n\\u201cThe regularization constants were chosen such that the models achieve roughly the same test set accuracy on clean examples as the adversarially trained model does.\\u201d as was clearly stated in our paper. Hence, yes, the comparisons are fair.\\n\\nFor global SNR, we try to stay as close as possible to the original authors' suggestions. Yoshida & Miato write \\u201cOne [power method] iteration [per parameter update] was adequate in our experiments\\u201d and \\u201cwe performed only one [power method] iteration [per parameter update] because it was adequate for obtaining a sufficiently good approximation\\u201d. Note, the computation of the data-independent regularizer decouples from the empirical loss, hence the power-method iterations can be amortized across data-points.\\n\\nAs stated above, our network architecture is standard, is publically available and is used throughout research.\\n\\n\\n>> 4. The evaluation of some assumption on the network is better moved to appendix, as this is only some sanity check, not the core contribution. More experiments with ResNet, WideResNet, MobileNet etc. on CIFAR100 and ImageNet are more convincing.\\n\\nFirstly, it is unclear what evaluation of assumptions the reviewer is referring to. Secondly, we sincerely do not expect to see any difference regarding the correspondence \\u201cAT <-> d.d. SNR\\u201d on other architectures / data sets. Our theorem proves that \\\\ell_2-norm constrained PGA-based AT and d.d. SNR are equivalent for small enough epsilon, while our extensive experiments show that in practice, the correspondence between AT and d.d. SNR holds approximately in a region much larger than proved in the Theorem, the region being roughly the size of the epsilon*-ball used during adversarial training (epsilon* = 1.75 >> epsilon in Theorem), see Figure 2 (left) and discussion in Section 5.3 \\u201cValidity of linear approximation\\u201d. \\n\\nSure, more experiments can always be requested, but we believe that confirming our main claims in practice is more important and valuable than including one further architecture or dataset. Please also consider our comments from the \\\"general comments\\\" section at the beginning of this review on this topic.\\n\\n\\n>> 5. What\\u2019s the attack method in the main context?\\n\\nWe evaluated against \\\\ell_2-norm constrained PGA in the main text, as stated in Section 5.1 and Table 1. Additional results for \\\\ell_\\\\infty PGA attack are provided in the Appendix.\\n\\n\\n>> 6. The discussion in Appendix A.5 is somewhat confusing. If the authors want to argue that the network is locally linear so that we can approximate with linear regression, why should we use the power methods?\\n\\nWe use the power method during training, since we only need access to the dominant singular vector. In the experiment section, we more generally study the spectral properties of the Jacobian, requiring us to compute the full spectrum and not just the dominant singular value / vector pair. 
The full spectral decomposition requires much more computation and is only viable when evaluating / investigating certain properties, not during training. We very clearly stated this in the first paragraph of Section 5.1.\\n\\n\\n>> \\\"Still, I feel the contribution of this paper is somewhat weak. I don\\u2019t see any improvements of the proposed algorithms compared with the standard adversarial training, as well as the theoretical contribution like adversarial generalization. The experiments are not convincing, as the setting is different from the general setting the community used in adversarial training. I\\u2019m not familiar with the results in global spectral normalization and it\\u2019s possible that the global spectral normalization may have little gain in adversarial robustness, but in my opinion, the main contribution [1] is the generalization analysis of spectral normalized adversarial trained neural networks, which this paper lacks. On the empirical side, the computation efficiency and performance of the proposed algorithms don\\u2019t outperform adversarial training much. So I tend to reject this paper.\\\"\\n\\nAgain, we 1. do not claim to outperform AT, we claim to show its correspondence to d.d. SNR and 2. we show this correspondence in a theoretical way that no previous work has managed to establish. Our experimental section reflects and supports these points very well. Also, we are not in a competition with [1].\"}",
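For reference, the l2-norm constrained PGA attack mentioned in point 5 above could be sketched as follows (a single-example sketch under our naming; the paper's exact step size, iteration count, and projection convention are in its Section 5.1):

```python
import torch

def l2_pga_attack(model, loss_fn, x, y, eps=1.75, step=0.5, n_iters=10):
    """Projected gradient ascent on the loss, constrained to the eps-ball around x."""
    delta = torch.zeros_like(x)
    for _ in range(n_iters):
        delta = delta.detach().requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        (g,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + step * g / (g.norm() + 1e-12)  # normalized ascent step
            if delta.norm() > eps:                         # project back onto the eps-ball
                delta = delta * (eps / delta.norm())
    return (x + delta).detach()
```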
"{\"title\": \"Author Response to Review (Part 1 of 2)\", \"comment\": \"*This is part 1 of 2 of our response*\\n\\nWe would like to thank the reviewer for these comments. We hope that with our detailed answers below we can initiate a fruitful discussion that resolves all concerns.\", \"general_concerns\": \"(i) limited theoretical results \\n\\nNote that while our Theorem establishes an exact equivalence for sufficiently small epsilon, as noted by the reviewer, we verified in extensive experiments that in practice the correspondence between AT and d.d. SNR holds up very well in a region much larger than proven in our Theorem, specifically large enough to cover common use cases. Therefore, the conclusions of our theory are directly relevant to practitioners.\\n\\n(ii) No significant improvement for practical algorithm\\n\\nWe would like to stress that we never claimed that d.d. SNR outperforms AT. The main point of our Theorem (indeed our paper) is that there is a correspondence between the two. In fact, it would be contrary to our main claim if one would outperform the other, be that in terms of final adversarial robustness or computational complexity. We believe that our paper shows this equivalence in theory and practice and thereby increases the understanding of adversarial robustness.\", \"specific_concerns\": \">> Main theorem is only valid for small perturbations, unclear how this assumption relates to practice.\\n\\nThe condition on epsilon in the Theorem guarantees that the Jacobian is fixed for all x* with ||x* - x|| <= epsilon, in which case the correspondence between \\\\ell_2 norm constrained PGA-based AT and data-dependent SNR was proven to be an exact equivalence. \\n\\nIn practice, however, the correspondence between AT and d.d. SNR holds approximately in a region much larger than proved in the Theorem: As shown in Figure 2 (left) and discussed in Section 5.3 \\u201cValidity of linear approximation\\u201d, we verified that the Jacobian is almost constant in a region that is roughly the size of the epsilon*-ball used during adversarial training (epsilon* = 1.75 >> epsilon in Theorem).\\n\\nThe correspondence is in fact consistently supported by all our experiments. In Section 5.4 Adversarial Robustness, for instance, we show that a network trained with d.d. SNR is equally robust to adversarial perturbations with varying magnitude as the PGA trained network is.\\n\\nIn other words, the Theorem is applicable in practice as long as the Jacobian of the network remains approximately constant in the epsilon-ball under consideration. We will add a paragraph below the Theorem to make this clear.\\n\\n\\n>> It is unclear how the assumption \\\\alpha \\\\to \\\\infty influences the practical algorithm.\\n\\nWe elaborate on the condition on \\\\alpha in the paragraph below Theorem 1: \\u201cin the update equation for x_k all the weight [if \\\\alpha -> \\\\infty] is placed on the current gradient direction v_k whereas no weight is put on the previous iterate x_{k\\u22121}\\u201d. 
Note that because of the projection operator, the limit case is well defined.\\n\\nMathematically, lim_{\\\\alpha \\\\to \\\\infty} \\\\Proj ( x_{k-1} + \\\\alpha*v_k ) is equivalent to lim_{\\\\alpha \\\\to \\\\infty} \\\\Proj ( 1/alpha*x_{k-1} + v_k ).\\nTherefore, in the practical algorithm, instead of letting the prefactor \\\\alpha in front of v_k go to \\\\infty, we can equivalently let the prefactor 1/alpha in front of x_{k-1} go to zero, see Equations (18)-(24) in Appendix 7.2 \\u201cProof of Main Theorem\\u201d.\\n\\nThe key insight of our experiments Section is that there is no significant difference between adversarial training with small \\\\alpha and data-dependent spectral norm regularization (corresponding to AT with \\\\alpha -> \\\\inty). Both have a similar regularizing effect on the spectrum, similar local linearity, similar adversarial robustness. This supports our claim that the effect of AT is captured by d.d. SNR.\\n\\n\\n>> Generalize theorem to other \\\\ell_p norms.\\n\\nThis is a non-trivial generalization that we are currently investigating for a future publication.\"}",
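The limit above rests only on the fact that the Euclidean projection onto the epsilon-ball is invariant to positive rescaling of points outside the ball; spelled out (our notation, with the ball taken around the origin for simplicity):

```latex
% For \|z\|_2 > \varepsilon, \;\Pi_\varepsilon(z) = \varepsilon\, z / \|z\|_2,
% hence \Pi_\varepsilon(\alpha z) = \Pi_\varepsilon(z) for all \alpha > 0, and
\[
  \Pi_\varepsilon\bigl(x_{k-1} + \alpha\, v_k\bigr)
  \;=\; \Pi_\varepsilon\Bigl(\tfrac{1}{\alpha}\, x_{k-1} + v_k\Bigr)
  \;\xrightarrow{\,\alpha \to \infty\,}\; \Pi_\varepsilon(v_k),
\]
% so in the limit the update is determined by the gradient direction v_k alone
% (assuming the iterate lands outside the ball, the regime of interest here).
```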
"{\"title\": \"Author Response to Review (Part 2 of 2)\", \"comment\": \"*This is part 2 of 2 of our response*\\n\\n>> Discussion of computational complexity of the proposed regularization method compared to PGD.\\n\\nIn the last sentence of Section 4.1 we compare the computational complexity of d.d. SNR with that of global (data-independent) SNR. \\n\\nCompared to PGD, d.d. SNR is equally computationally expensive. As stated in Table 1, both PGD and d.d. SNR were implemented with 10 iterations. Since our main claim is to show equivalence of the two methods, this is a valuable piece of information. We will add a sentence to the paper to emphasize this point.\\n\\n\\n>> Many other types of regularization decreases the (local) Lipschitz constant. Could you further give result that distinguishes the proposed norm?\\n\\nIndeed there are many works that aim at regularizing the Lipschitz constant. However, these works mostly focus on decreasing the global Lipschitz constant, which corresponds to data-independent SNR and gives only a loose bound on adversarial robustness. We would like to stress that this is different from (and weaker than) our presented data-dependent SNR.\\n\\nOne of the main points in our paper, especially our experiments, is that the Lipschitz regularization of previous work cannot account for / does not explain adversarial robustness (see Section 5.4, Figures 3 (left) and 5 (left)). The data-dependent SNR variant introduced in our paper is a novel and significant generalization and the first type of SNR that is equivalent to AT.\\n\\n\\n>> It would be better to give improved algorithm for adversarial training based on the current result. The current contribution for further theoretical is too weak and I don\\u2019t see significant contribution to empirical algorithm.\\n\\nAs we stated above, it is not our goal to improve the practical algorithm of adversarial training, but to show its correspondence to data-dependent SNR. In fact, it would be contrary to our main result to try to improve the practical algorithm.\\n\\nOther than that, we do not understand what the reviewer means by his or her request. Please elaborate.\\n\\n\\n>> Equ (10) seems not the typical one used and seems not the one studied later.\\n\\nWe do study this equation for p=2. See also Equations (33) and (34) in the Appendix, where we show that Equation (10) reduces to Equation (7) under the conditions of our Theorem.\\n\\nPerhaps the reviewer refers to the setting in which the network is trained to only minimize the adversarially perturbed empirical loss. It is however customary to train the network to minimize a convex combination of a clean empirical loss and an adversarially perturbed empirical loss, see the equation on page 5 in Goodfellow et al. \\u201cExplaining and harnessing adversarial examples\\u201d.\\n\\n\\nIn conclusion, we agree that our Theorem makes strong assumptions, but we believe that 1. it is valuable to be on record and theoretically confirm this long-standing hypothesis and 2. we show extensively that the claim of the Theorem holds well beyond its assumptions in practice. As for improving over AT in the practical sense, we never claim to do so, and it would actually run contrary to our claim.\\n\\nWe hope that these comments provide clarification and we look forward to continuing the discussion.\"}",
"{\"title\": \"Author Response to Review\", \"comment\": \"Dear Reviewer\\nWe would like to thank you for your comments.\\nYour feedback is highly appreciated.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Adversarial training generalizes data-dependent spectral norm regularization\\n\\nThis paper shows that, projected gradient descent based adversarial training is similar to the data-dependent spectral norm regularization, and under very restrictive condition, the authors show that this two methods are the same. Some experiments are conducted to support the theory.\\n\\nOverall, I think this paper is marginal, while the experiments are not convincing. First, the relation between spectral normalization and adversarial training have been investigated by [1], while the fast computational of maximum singular value with power methods have also been proposed in [1]. The authors only give a data-dependent version of the spectral normalization based on the Jacobian of the neural networks, which I think is somewhat weak. The experiments are limited with specific settings that are not generally used in practice, which alleviate my confidence on this paper\\u2019s results. Also, the experiment section contain several not so important information. I think the authors should do far more experiments to support the main claim, while move these additional justification to the appendix.\", \"detailed_comments\": \"1. I think the claim of theorem 1 is somewhat ambiguous. How to guarantee there exists such epsilon satisfies this condition? Is this the case we face in the real world? What will happen if \\\\alpha is not sufficiently large? If we don\\u2019t use logits pairing and \\\\ell_2 norm constraint, will the claim hold? I think the correlation behinds the spectral norm and adversarial training is well investigated and use this correlation as the intuition behinds work is enough. This theorem cannot convince me that the proposed methods have a strong theoretical basis.\\n2. Generally, the neural networks have a large number of parameters (~ millions) for image classification task. The global spectral norm regularization only needs to calculate the spectral norm of each layer\\u2019s weight matrix, whose computational cost is acceptable. However, to calculate the Jacobian and use the power methods, we will additionally do several forward pass and backward pass just as adversarial training. As a regularization technique, is this calculation tolerable? If this is some variant of the adversarial training, I don\\u2019t find the experiment results support the claim that it will outperform the adversarial training consistantly.\\n3. Why don\\u2019t use some standard neural network architecture like ResNet? As this results is not comparable to other existing work, I\\u2019m not sure if this result is meaningful. Also, are the comparisons fair? For example, the regularization coefficient of global spectral norm regularization and data-dependent spectral norm regularization are far more different. And the authors use only 1 iteration to calculate the singular value in global spectral norm regularization, why to do that? Also, what\\u2019s the result compared with \\\\ell_p norm constraint adversarial training?\\n4. The evaluation of some assumption on the network is better moved to appendix, as this is only some sanity check, not the core contribution. More experiments with ResNet, WideResNet, MobileNet etc. 
on CIFAR100 and ImageNet are more convincing.\\n5. What\\u2019s the attack method in the main context?\\n6. I think the discussion in Appendix A.5 is somewhat confusing. If the authors want to argue that the network is locally linear so that we can approximate with linear regression, why should we use the power methods?\\n\\nStill, I feel the contribution of this paper is somewhat weak. I don\\u2019t see any improvements of the proposed algorithms compared with the standard adversarial training, as well as the theoretical contribution like adversarial generalization. The experiments are not convincing, as the setting is different from the general setting the community used in adversarial training. I\\u2019m not familiar with the results in global spectral normalization and it\\u2019s possible that the global spectral normalization may have little gain in adversarial robustness, but in my opinion, the main contribution [1] is the generalization analysis of spectral normalized adversarial trained neural networks, which this paper lacks. On the empirical side, the computation efficiency and performance of the proposed algorithms don\\u2019t outperform adversarial training much. So I tend to reject this paper.\\n\\n\\n[1] Farnia, Farzan, Jesse Zhang, and David Tse. \\\"Generalizable Adversarial Training via Spectral Normalization.\\\" International Conference on Learning Representations, 2019.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This largely theretical paper establishes a theoretical link between adversarial training and operator norm regularization\\nfor DNNs. It is well written and structured, and it falls squarely within the the remit of the conference. The experimental apparatus is thorough and the derivations, proofs and the maths at large seem sound to me, even if I have not checked them in full detail. The study delivers a data-dependent variant of spectral norm regularization affecting large singular values of the DNN. It is proved to be equivalent to adversarial training based on a type of norm-constrained projected gradient ascent attack.\\nResults are novel and relevant and, in my opinion, they merit acceptance.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies the link between adversarial training and the proposed data-dependent operator norm regularization for ReLU network. Under specific conditions, in theory, the authors show the equivalence between the l2 PGD training and the regularization method. Empirical experiments are conducted to support the theory.\\n\\nWhile this paper gives interesting observation on both theory and empirical study, I think this paper is not qualified for publishing in ICLR due to the following reasons: (1) limited theoretical results; (2) No significant improvement for practical algorithm;\", \"main_argument\": \"The main theorem 1 seems to be weak as it is only valid for small perturbation region \\\\epsilon and it is unclear how this assumption is consistent to the practice. It is also unclear how the assumption that \\\\alpha \\\\to \\\\infty influences the practical algorithm.\\n\\nIt would be better to generalize the theorem to other \\\\ell_p attack, instead of just \\\\ell_2. \\n\\nDiscussion of computational complexity of the proposed regularization method compared with PGD is missed.\\n\\nThe adversarial robustness is related to the (local) Lipschitz continuity and many other types of regularization decreases the (local) Lipschitz constant. Could you further give result that distinguishes the proposed norm?\\n\\nIt would be better to give improved algorithm for adversarial training based on the current result. The current contribution for further theoretical is too weak as the main theorem requires strong assumption. And I don\\u2019t see significant contribution to empirical algorithm.\\n\\n\\nMinor\\nEqu (10) seems not the typical one used and seems not the one studied later.\"}"
]
} |
H1lVvgHKDr | Knowledge Transfer via Student-Teacher Collaboration | [
"Tianxiao Gao",
"Ruiqin Xiong",
"Zhenhua Liu",
"Siwei ma",
"Feng Wu",
"Tiejun Huang",
"Wen Gao"
] | Despite their flourishing development across various fields, deep neural networks still face the plight of high computational and storage costs. One way to compress these heavy models is knowledge transfer (KT), in which a light student network is trained by absorbing knowledge from a powerful teacher network. In this paper, we propose a novel knowledge transfer method that employs a Student-Teacher Collaboration (STC) network during the knowledge transfer process. This is done by connecting the front part of the student network to the back part of the teacher network to form the STC network. The back part of the teacher network takes the intermediate representation from the front part of the student network as input to make the prediction. The difference between the prediction from the collaboration network and the output tensor from the teacher network is included in the loss during the training process. Through back-propagation, the teacher network provides guidance to the student network in the form of gradient signals. In this way, our method takes advantage of the knowledge from the entire teacher network, which instructs the student network throughout the learning process. Extensive experiments show that our STC method outperforms other KT methods that follow the conventional strategy. | [
"Network Compression and Acceleration",
"Knowledge Transfer",
"Student-Teacher Collaboration",
"Deep Learning."
] | Reject | https://openreview.net/pdf?id=H1lVvgHKDr | https://openreview.net/forum?id=H1lVvgHKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Qt4B4GPcQU",
"Wk6jd2UkAV",
"HJlEDovqjS",
"rJgj05DqjS",
"H1xKvcvqor",
"B1ezUtD5sr",
"HyxdAeIpFS",
"rkxVAH9LtB",
"rkezle-NtH",
"HylYU_cZKB",
"Bkeh1VJbFr"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1596890372151,
1576798747040,
1573710684282,
1573710546854,
1573710432609,
1573710154125,
1571803343956,
1571362251831,
1571192809914,
1571035217196,
1570989028375
],
"note_signatures": [
[
"~Erika_Taylor1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2357/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2357/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2357/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2357/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2357/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2357/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2357/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2357/Authors"
],
[
"~Grigory_V._Sapunov1"
]
],
"structured_content_str": [
"{\"title\": \"Interdisciplinary addition?\", \"comment\": \"I don't know if it's just me, but I think this paper would benefit from a small excursus adding a sociocultural element to it. As I've read on https://www.myhubintranet.com/knowledge-transfer/, any kind of KT bridging two generations (which is absolutely the case between students and teachers) comes along with certain specifics and characteristics.\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper has been assessed by three reviewers scoring it as follows: 6, 3, 8. The submission however attracted some criticism post-rebuttal from the reviewers e.g., why concatenating teacher to student is better than the use l2 loss or how the choice of transf. layers has been made (ad-hoc). Similarly, other major criticism includes lack of proper referencing to parts of work that have been in fact developed earlier in preceding papers. On balance, this paper falls short of the expectations of ICLR 2020, thus it cannot be accepted at this time. The authors are encouraged to work through major comments and resolve them for a future submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of updates in the paper\", \"comment\": \"We would like to extend our sincere thanks to three reviewers for their constructive feedback and helpful comments. We have made the following major modifications to our manuscript:\\n\\n1.\\tWe have improved our literature survey. More related works are discussed in our paper on page 3.\\n\\n2.\\tWe have modified our claims on contributions and conclusion on page 2 and 8.\\n\\n3.\\tThe definition of the corresponding layers is mentioned in the footnote on page 4.\\n\\n4.\\t\\u201cFor cases that the intermediate representation from the student sub-network has different number of channels with the teacher sub-network, a simple conv layer is employed to transform the dimension.\\u201d This is pointed out in the first paragraph of Sec. 4.1.\\n\\n5.\\tWe have corrected all the typos in the updated version of our paper.\"}",
"{\"title\": \"Responses to Review #3\", \"comment\": \"Dear Reviewer #3:\\n\\nWe would like to extend our sincere thanks to you for your positive comments and constructive feedback. We also want to thank you for taking the time to patiently check the grammar and expression errors in our paper. We have corrected all the typos based on your suggestions in the updated version of our paper. \\n\\nFor the improvement of our paper which is about the choice of the intermediate layer selections, we believe this is a very meaningful work. An instruction on how to choose the intermediate layer from where to teach the representation allows student network to improve performance more effectively during knowledge transfer process. Thank you for your constructive suggestions and we will study it in our future work.\\n\\nBest regards,\"}",
"{\"title\": \"Responses to Review #1\", \"comment\": \"Dear reviewer #1:\\n\\nWe would like to extend our sincere thanks to you for your constructive feedback. \\nWe feel sorry for your confusion. In our method, the student sub-network is all the front part of a selected layer of the student network and teacher sub-network is all the subsequent layers of teacher network corresponding to the selected layer. The definition of the corresponding layers is mentioned in the updated version of our paper and the description of our proposed method have been modified, which can be seen on page 4. For cases that the intermediate representation from the student sub-network has different number of channels with the teacher sub-network, a simple convolutional layer is employed to transform the dimension. We have pointed this out in the first paragraph of Sec. 4.1. in the updated version of our paper. \\n\\nFor your other concerns, here are our responses:\", \"q\": \"Several related works are not discussed\", \"a\": \"The related works you mentioned have been included in our updated version. Due to the page limitation mentioned in ICLR submission instructions, these work can only be described briefly in our article. We hope to get your understanding.\\n\\nBest regards,\"}",
"{\"title\": \"Responses to Review #2\", \"comment\": \"Dear reviewer #2,\\n\\nWe would like to extend our sincere thanks to you for your constructive feedback. We have modified our claims and improved our literature survey based on your comments. Here are our responses to your major concerns.\", \"q\": \"Leveraging back part of teacher model's guidance to improve student performance has been investigated by other researchers on OCR tasks in Ding H, Chen K, Huo Q. Compressing CNN-DBLSTM models for OCR with teacher-student learning and Tucker decomposition[J]. Pattern Recognition, 2019, 96: 106957. They combine student's CNN with teacher's DBLSTM to learn better representations.\", \"a\": \"We are sorry for missing this paper in our literature survey. However, our work started half a year ago, while this paper is published on July 7, 2019. We have cited this paper in the updated version of our paper.\\nIn paper \\u201cCompressing CNN-DBLSTM models for OCR with teacher-student learning and Tucker decomposition\\u201d, authors employed a knowledge distillation method with DarkNet-DBLSTM as student network and VGG-DBLSTM as teacher network. The DBLSTM modules of the student and teacher networks in this paper have same topology, so the student\\u2019s BLSTM and inner product layers can borrow parameters from the teacher\\u2019s counterparts during training and inference. In contrast, the student networks in our proposed method do not take any part of teacher network for inference and the back part of the teacher network is only employed during the training process. Therefore, our method can be generalized to situations where the student network and the teacher network have different structure. Besides, our method can also be applied to different tasks, which has been confirmed through our experiments.\\n\\n\\nBest regards,\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Overall the method proposed in this paper is simple but effective, and adequate experimental results are given to show its performance improvements. However, the literature survey of this paper is not satisfactory.\\n\\n1. To reduce model size, there are several different ways including efficient architecture design, parameter pruning, quantization, tensor decomposition and knowledge distillation. The authors forgot to mention tensor decomposition and mixed it with efficient architecture design. As for parameter pruning and quantization, many important papers are missing.\\n\\n2. Utilizing the \\\"soft targets\\\" to transfer knowledge from teacher to student model is not first proposed by Hinton et al. (2015). To the best of my knowledge, it is first proposed in \\nJ. Li, R. Zhao, J.-T. Huang, Y. Gong, \\u201cLearning small-size DNN with output-distribution-based criteria,\\u201d Proc. Interspeech-2014, pp.1910-1914.\\n\\n3. Leveraging back part of teacher model's guidance to improve student performance has been investigated by other researchers on OCR tasks in \\nDing H, Chen K, Huo Q. Compressing CNN-DBLSTM models for OCR with teacher-student learning and Tucker decomposition[J]. Pattern Recognition, 2019, 96: 106957.\\nThey combine student's CNN with teacher's DBLSTM to learn better representations.\\n\\nIn conclusion, I will give a weak reject currently, unless the authors improve their literature survey and modify their claims.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a new method for knowledge distillation, which transfers knowledge from a large teacher network to a small student network in training to help network compression and acceleration. The proposed method concatenate the first a few layers of the student network with the last a few layers of the teacher network, and claims the gradient directly flows from teacher to student, instead of through a KL or L2 similarity loss between teacher and student logits. \\n\\nThe experimental results look good, and extensive experiments have been done on CIFAR, ImageNet and PASCAL VOC. \\n\\nHowever, the description of the proposed method looks rather unclear. First, the \\u2018front\\u2019 and \\u2018back\\u2019 part of networks are very vague. I have to guess that is the first a few layers of student and last a few layers of teacher. And it is still unclear how many layers in student and teacher are concatenated to form the \\u2018collaboration network\\u2019. How could the authors connect the two subnetwork with different structures?\\n\\nIt is unclear to me why proposed method is better than AT, FT or FitNets. It looks to me the proposed method use an ad-hoc selected layer to transfer knowledge from teacher to student, and the transfer is indirect because it has to go through the pre-trained subnetwork in teacher.\\n\\nMinor issue, FT and AT are not defined when they first appear in page 1. \\n\\nCould the authors show the student and teacher accurayunder standard supervised training in the result tables?\\n\\nSeveral related works are not discussed, such as\\nXu et al. 2018 https://arxiv.org/abs/1709.00513\\nBelagiannis et al. 2018 https://arxiv.org/abs/1803.10750\\nWang et al. 2018 https://papers.nips.cc/paper/7358-kdgan-knowledge-distillation-with-generative-adversarial-networks\\n\\n\\n============ after rebuttal ================\\nI updated my rate to weak accept. Though it is a borderline or below paper to me. The paper has really good empirical results. However, I cannot understand the intuition behind the paper why concatenating teacher to student is better than use l2 for intermediate layers. The choice of the transferring layer seems to be rather ad-hoc, and it is hard to say how much tuning needed to get the empirical benefits.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper suggests a new method for knowledge transfer from teacher neural network to student: student-teacher collaboration (STC). The idea is that teacher not only provides student with the correct answer or correct intermediate representation, but also instructs student network on how to get the right answer. The paper suggests to merge the front part of the student network and back part of the teacher network into the collaboration network. In this way, the weights in the student subnetwork are learned from the loss backpropagated from the teacher subnetwork. The target labels for collaboration network are outputs of teacher network. The method is adapted for different tasks, classification and bounding box regression are presented in the experiments. It outperforms previous methods on various datasets. Furthermore, the method shows good results when integrated with traditional knowledge distillation.\\n\\nOverall, the paper is a significant algorithmic contribution. The related work section provides thorough review of different methods to decrease computational costs, including not only knowledge distillation, but also pruning, compressing and decomposition approaches. The idea is elegant and, to the best of my knowledge, has never been suggested in other works. Considering the theoretical part, it is clearly shown how the gradient signal from the teacher sub-network guides the student network on which part of the weights should be paid attention on. All derivations are easy to follow. The paper also considers how the suggested idea is aimed to solve the problems of previous knowledge transfer methods. The experimental section is consistent and clearly shows the advantage of the suggested method. Teacher and student networks used are different sizes of ResNet, Wide ResNet and VGG. The paper presents classification experiments on CIFAR-10, CIFAR-100, ImageNet and object detection experiment on PASCAL VOC 2007 dataset. STC outperforms previous methods, both with KD integration and without. The performance is always better than pure student training (which was not always the case for previous methods) and sometimes the results are even better than teacher performance. Finally, the choice of teacher output as target over soft target and ground truth, which was previously motivated in the theoretical section, is shown to be superior in the experiment.\\n\\nPossible improvement of the paper is the instruction on how to choose the intermediate layer from where to teach the representation, i.e. where the student sub-network ends and teacher sub-network begins. For object detection experiment the choice of the border is naturally defined by the architecture of the network in Faster-RCNN approach. Could the choice be different? May be somewhere inside the BackBone part of the networks? For classification, it could be interesting to study how this choice influences the results. 
However, this question didn\\u2019t affect my score of the paper, and, as far as I know, it is also not considered in the previous works on knowledge distillation.\\n\\nMinor comments\\n1.\\tIn the context of accelerating the models using decomposition, Lebedev et al., ICLR 2015 could be cited.\\n2.\\tPage 2: difference tasks -> different tasks\\n3.\\tPage 2 the first bullet point: additionally utilizes -> additionally utilizing/which additionally utilizes\\n4.\\tPage 2 the third bullet point: brings good generalizability -> which brings good generalizability\\n5.\\tPage 5: \\u201ctraining strategy is more accelerate than\\u201d \\u2013 odd phrase\\n6.\\tPage 6: while KT has conflicts with KD in some cases -> while FT has conflicts with KD in some cases.\"}",
"{\"comment\": \"Hi,\\nThank you for your comment! For the experiments on ImageNet dataset, we employ the same hyper-parameters as ResNet (He et al., 2016) for all methods and all data from ImageNet dataset are fully used. It can be seen from the manuscript of Factor Transfer (Kim et al., 2018), the conclusion of KD method on ImageNet dataset is same as ours.\", \"title\": \"Reply\"}",
"{\"comment\": \"You mentioned, \\\"KD method suffers from the gap of depths between teacher and student network, leads to an even worse performance than training the student from scratch\\\".\\n\\nAs I understand, you took pretrained teacher and student networks from PyTorch model zoo, in this sense these networks are heavily optimized and well trained. \\n\\nThen you performed KD, but it's unclear, how does your procedure (initialization, size of datasets, length of training and so on) compare to that one used for pretrained models. \\n\\nIs it possible that KD works worse just because you trained the models on less data or for a shorter time? Are your methods comparable?\", \"title\": \"Unclear details of comparison\"}"
]
} |
H1lNPxHKDH | A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case | [
"Greg Ongie",
"Rebecca Willett",
"Daniel Soudry",
"Nathan Srebro"
] | We give a tight characterization of the (vectorized Euclidean) norm of weights required to realize a function $f:\mathbb{R}^d\rightarrow \mathbb{R}$ as a single hidden-layer ReLU network with an unbounded number of units (infinite width), extending the univariate characterization of Savarese et al. (2019) to the multivariate case. | [
"inductive bias",
"regularization",
"infinite-width networks",
"ReLU networks"
] | Accept (Poster) | https://openreview.net/pdf?id=H1lNPxHKDH | https://openreview.net/forum?id=H1lNPxHKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oUCpuH5QEc",
"H1gbPLWqoH",
"BkxnrM-5oH",
"rJxPr1-9jr",
"ryx--vwTYr",
"Hkx2sBPpKH",
"BkgBuY4sFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798747008,
1573684825135,
1573683780146,
1573683006606,
1571809017439,
1571808675978,
1571666285053
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2356/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2356/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2356/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2356/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2356/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2356/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The article studies the set of functions expressed by a network with bounded parameters in the limit of large width, relating the required norm to the norm of a transform of the target function, and extending previous work that addressed the univariate case. The article contains a number of observations and consequences. The reviewers were quite positive about this article.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"summary of changes in revision\", \"comment\": \"We thank all the reviewers for their careful reading of the manuscript, and have uploaded a revision based on their feedback. Changes addressing the reviewers comments are indicated by blue text. The main change is an expanded discussion in Section 5.1 regarding the order of smoothness in our Sobolev norm bounds, as requested by Reviewer 1.\\n\\nAdditionally, we have made some small changes to Section 5.3. Previously, we claimed in Proposition 5 that *all* continuous piecewise linear functions with compact support have infinite R-norm. However, there was a flaw in our proof, and we needed to weaken the result slightly. Namely, for the result to hold, we need to make some extra conditions on the boundary sets separating the regions on which the function is linear. We discuss these conditions in detail in Appendix I with an updated proof of the result. These conditions are met for a broad class of piecewise linear functions, including the pyramid function in Example 4, and so the change to Proposition 5 does not affect our depth separation result.\"}",
"{\"title\": \"Comparison with Bach 2017 and other comments\", \"comment\": [\"Thank you for the positive review and the constructive feedback.\", \"is it possible to obtain precise characterizations of interpolating solutions in this setting (other than a mere representer theorem with ReLUs), as done in Savarese et al (2019, Theorem 3.3) for the univariate case?\", \"We did pursue this question some, but unfortunately did not come up with a satisfying answer. In the univariate case, it is straightforward to show that a minimum norm solution is given by a linear spline with knots at the sample locations, since the function space norm is essentially the second-order total-variation penalty. In the multivariate case, we know that a minimum norm solution is given by an interpolating piecewise linear function, but due to the more complicated function space description involving a Radon transform, we found it difficult to give a more concise description than this.\", \"perhaps the results of Section 5.1 should be contrasted with those of Bach (2017, e.g. Prop. 5), which only require ~ d/2 derivatives instead of ~ d here, albeit with stronger requirements, for essentially the same functional space (though the approximation result is obtained from an associated RKHS, which is smaller).\", \"We thank the reviewer for pointing this out. We have added discussion in Section 5.1 comparing our result with Bach 2017, and explaining why ~d order derivatives are necessary in our setting. The difference in derivative order comes from the fact that we consider an L^1-type Sobolev norm (i.e., sum of the L^1 norms of derivatives) and not an L^2-type Sobolev norm, as considered in Bach 2017. For an L^1-type Sobolev norm, the scaling ~d is optimal in the sense that this is the scaling required for a sequence of functions approaching a \\u201cpoint evaluation\\u201d (i.e., f(x) = 1 if x=x_0 and 0 otherwise) to have unbounded norm. Whereas, for an L^2-type Sobolev norm, the required scaling for this to occur is ~d/2.\", \"are the results on radial bump functions intended to provide insight on approximation or depth separation? what was the motivation behind this section?\", \"These results were meant to build intuition for dealing with the R-norm where we can obtain more explicit expressions, and to illustrate how the R-norm scales with dimension for a certain class of bump function, which could be important for future approximation and generalization results. We have added a sentence addressing this at the beginning of Section 5.2.\", \"Other minor comments/typos:\", \"after Prop. 1: \\\"intertwining\\\" appears twice\", \"eq. (22): missing f in l.h.s.\", \"eq. (23): is the first minus sign needed?\", \"before Thm. 1: point to which Appendix\", \"Section 4.1, \\\"In particular, this is what would happen ... d+1\\\": this should be further explained\", \"Section 4.1, final paragraph, \\\"in order R-norm to be\\\": rephrase\", \"Section 5.4, \\\"required norm with three layers is finite\\\": which norm? maybe point to a reference? Also, Example 5 could be explained in further detail\", \"Section 5.5: what is an RKHS semi-norm? you'd always have ||f|| = 0 => f = 0 in an RKHS, by the reproducing property\", \"Thank you for your careful reading. We have addressed all these issues in the revision.\", \"In particular, the d+1 scaling of derivatives is described in more detail in Sec. 5.1. 
And the issue of our claim about RKHS norms versus semi-norms is addressed with a footnote on page 10.\", \"Finally, you are correct that in Eq. (23) the first minus sign is not needed, but we prefer to leave it there to make subsequent derivations tidier, such as Example 1, and many proofs in the Appendix.\"]}",
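As a sanity check on the ~d versus ~d/2 threshold discussed in the reply above (a standard dilation computation, not taken from the paper itself): for a fixed smooth bump $g$ and the shrinking family $f_\varepsilon(x) = g(x/\varepsilon)$ on $\mathbb{R}^d$,

```latex
\left\| D^k f_\varepsilon \right\|_{L^p(\mathbb{R}^d)}
  = \varepsilon^{-k} \left\| (D^k g)(\cdot/\varepsilon) \right\|_{L^p}
  = \varepsilon^{\,d/p - k}\, \left\| D^k g \right\|_{L^p},
```

which diverges as $\varepsilon \to 0$ exactly when $k > d/p$: derivatives of order about $d$ are needed for the $L^1$-type Sobolev norm ($p=1$) to blow up on point-like functions, but only about $d/2$ for the $L^2$-type norm ($p=2$).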
"{\"title\": \"strict derivation of equation (19)\", \"comment\": \"Thank you for your comments. While we use the Dirac delta somewhat informally in equations (19) and (20), this calculation is done rigorously in the proof Lemma 9 in Appendix D. In the revised draft, we now indicate this with a footnote on page 6.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the author analysis the (approximate) function class generated by an infinite-width network when the Euclidean norm is bounded. They extend the work of Savarese et al. on the univariable function by introducing the Randon Transform and R-norm to this problem. The authors finally prove that any function in Sobolev space could be (approximately) obtained by a bounded network. The results achieved implies some generalization performance analysis and the induction error. Also, according to the authors, the difference between R-norm and RKHS norm might lead to the distinct from neural networks and kernel methods.\\n\\nI would recommend accepting this paper since it might give a good insight into understanding the performance of the network beyond the traditional method.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper gives characterization of the norm required to approximate a given multivariate function by an infinite-width two-layer neural network. An important result is the relation between Radon-transform and the $\\\\mathcal{R}$-norm. This paper also shows application of the norm on some special case.\\n\\nI suggest this paper being accepted because it provides new insights into the approximation theory for neural networks. The perspective of norm constraint is different from the traditional approximation theory and may serve as a good contribution to the community.\", \"one_question_is_that\": \"in section 4, the equation (19) is differentiated twice to get the equation (20) containing Dirac delta. Although this is intuitively correct, this seems not a strict derivation to my mathematical background. It would be great if the authors can show the strict definition and derivation presented here.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper studies the function space regularization behavior of learning with an infinite-width ReLU network with a bound on the l2 norm of weights, in arbitrary dimension, extending the univariate study of Savarese et al. (2019).\\n\\nThe authors show that the corresponding regularization function is more or less an L1 norm of the (weak) (d+1)st derivatives of the function, and provide a rigorous formal characterization in terms of the \\\"R-norm\\\", which is expressed via duality through the Radon transform and powers of the Laplacian.\\n\\nIn addition, the paper provides a number of implications of this study, such as approximation results through Sobolev spaces, an analysis of the norm of radial bump functions, and a new type of depth separation result in terms of norm as opposed to width.\\n\\nOverall, this is a strong paper making several interesting and important contributions for our understanding of the inductive bias of ReLU networks. I thus recommend acceptance.\", \"a_few_comments\": [\"is it possible to obtain precise characterizations of interpolating solutions in this setting (other than a mere representer theorem with ReLUs), as done in Savarese et al (2019, Theorem 3.3) for the univariate case?\", \"perhaps the results of Section 5.1 should be contrasted with those of Bach (2017, e.g. Prop. 5), which only require ~ d/2 derivatives instead of ~ d here, albeit with stronger requirements, for essentially the same functional space (though the approximation result is obtained from an associated RKHS, which is smaller).\", \"are the results on radial bump functions intended to provide insight on approximation or depth separation? what was the motivation behind this section?\", \"Other minor comments/typos:\", \"after Prop. 1: \\\"intertwining\\\" appears twice\", \"eq. (22): missing f in l.h.s.\", \"eq. (23): is the first minus sign needed?\", \"before Thm. 1: point to which Appendix\", \"Section 4.1, \\\"In particular, this is what would happen ... d+1\\\": this should be further explained\", \"Section 4.1, final paragraph, \\\"in order R-norm to be\\\": rephrase\", \"Section 5.4, \\\"required norm with three layers is finite\\\": which norm? maybe point to a reference? Also, Example 5 could be explained in further detail\", \"Section 5.5: what is an RKHS semi-norm? you'd always have ||f|| = 0 => f = 0 in an RKHS, by the reproducing property\"]}"
]
} |
rkxmPgrKwB | Weight-space symmetry in neural network loss landscapes revisited | [
"Berfin Simsek",
"Johanni Brea",
"Bernd Illing",
"Wulfram Gerstner"
] | Neural network training depends on the structure of the underlying loss landscape, i.e. local minima, saddle points, flat plateaus, and loss barriers. In relation to the structure of the landscape, we study the permutation symmetry of neurons in each layer of a deep neural network, which gives rise not only to multiple equivalent global minima of the loss function but also to critical points in between partner minima. In a network of $d-1$ hidden layers with $n_k$ neurons in layers $k = 1, \ldots, d$, we construct continuous paths between equivalent global minima that lead through a `permutation point' where the input and output weight vectors of two neurons in the same hidden layer $k$ collide and interchange. We show that such permutation points are critical points which lie inside high-dimensional subspaces of equal loss, contributing to the global flatness of the landscape. We also find that a permutation point for the exchange of neurons $i$ and $j$ transits into a flat high-dimensional plateau that enables all $n_k!$ permutations of neurons in a given layer $k$ at the same loss value. Moreover, we introduce higher-order permutation points by exploiting the hierarchical structure in the loss landscapes of neural networks, and find that the number of $K$-th order permutation points is much larger than the (already huge) number of equivalent global minima -- at least by a polynomial factor of order $K$. In two tasks, we demonstrate numerically with our path finding method that continuous paths between partner minima exist: first, in a toy network with a single hidden layer on a function approximation task and, second, in a multilayer network on the MNIST task. Our geometric approach yields a lower bound on the number of critical points generated by weight-space symmetries and provides a simple intuitive link between previous theoretical results and numerical observations. | [
"Weight-space symmetry",
"neural network landscapes"
] | Reject | https://openreview.net/pdf?id=rkxmPgrKwB | https://openreview.net/forum?id=rkxmPgrKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"A-Rv6ur7lh",
"Skg_YFaqiS",
"Sylo4Kp9ir",
"HkewfK69sS",
"ryxT3dTqor",
"SylvFuSmjH",
"HklMZ14z5H",
"BJxkDMC6YS",
"BkluC-v3FS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746981,
1573734784064,
1573734706559,
1573734670546,
1573734581020,
1573243007280,
1572122362332,
1571836502865,
1571742159881
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2355/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2355/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2355/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2355/Authors"
],
[
"~Micah_Goldblum1"
],
[
"ICLR.cc/2020/Conference/Paper2355/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2355/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2355/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"After communicating with each reviewer about the rebuttal, there seems to be a consensus that the paper contains a number of interesting ideas, but the motivation for the paper and the relationship to the literature needs to be expanded. The reviewers have not changed their scores, and so there is not currently enough support to accept this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for pointing to your interesting work. We will cite your work where appropriate in the next version.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your feedback and ideas.\\n\\nThe symmetries you describe for linear networks are well-known in the field (see e.g. Baldi and Hornik 1989 https://doi.org/10.1016/0893-6080(89)90014-2 ) and generalizations to non-linear activation functions are possible thanks to local linear approximations. That is, it may be that (local) minima are embedded in small flat subspaces that can be characterized with such symmetries. It is, however, for networks with non-linear activation functions in general not possible to move along an equal-loss path from one permutation of the hidden neurons to another permutation (because the linear approximations hold only locally). Our work shows one way to find nevertheless low-loss paths between such permutations in non-linear neural networks.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your thorough feedback and for pointing to some interesting references.\\n\\nConnecting \\u2018partner\\u2019 minima vs. connecting independent minima: Indeed, our pathfinding method can only link \\u2018partner\\u2019 minima whereas other methods can find paths between independent minima (representing different functions). Distinctively, our method enables observing how individual weight vectors move from a minimum to a permutation point (see Fig 2) and equivalently how saddles emerge in between minima, therefore provides a geometric perspective on the landscape.\\n\\nRelation to Fort et al. 2019: Their work introduces a method that enables connecting m minima through an (m+1)-dim manifold. In our work, we prove that permutation points lie in high-dimensional plateaus of critical points (Lemma 3). In other words, one can pick any m critical points (or infinitely many) in this hyperplane and these points are connected in an n_k-dim hyperplane (n_k is the number of neurons in the next layer, this is usually bigger than 10, thus suggesting high-dimensional manifolds compared to the empirical findings in Fort 2019 for m=10). There is also an interesting connection between these two works: in their phenomenological model, the wedges intersect, suggesting connectivity of minima (see their Fig1) whereas we prove that the permutation points are connected at equal-loss (see our Fig 4 in the appendix).\", \"a_lower_bound_on_the_number_of_critical_points\": \"We also wanted to point out to an interesting part in our paper: in addition to studying landscape connectivity, we find a lower bound for the number of permutation points corresponding to the midpoints in our pathfinding algorithm. We prove that there are at least polynomially more permutation points than the global minima (Proposition 3).\\n\\nRelation to Kuditipudi et al. 2019: Their method seems quite powerful while being quite general. However, their requirement of robustness against a 50% dropout, although practically no problem for big networks, is a major assumption and could actually be difficult to meet in layers with only a few units (e.g. 32 filters in an early convolutional layer).\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your thorough feedback.\", \"related_to_proposition_3\": \"We have not encountered other work in the literature that gives a lower bound on the number of critical points (that have higher loss than the global minimum). We see our contribution in proving that there are polynomially more permutation points than global minima in the landscapes of multi-layer neural networks (Proposition 3). Furthermore, we prove that these permutation points lie in high-dimensional plateaus of critical points (Lemma 3).\\n\\nA new insight on \\u201cneuron splitting\\u201d: In addition to reviewing the results from Fukumizu and Amari, we prove that ALL permutation points are connected through continuous paths of the same loss.\\n\\nRelation to Garipov et al.: In Garipov et al the authors search through certain parametrized paths to connect independent minima (specifically, polygonal chains and bezier curves) and find optimal parameters that yield the lowest barrier (for a chosen parametrization). In our pathfinding method, we do not restrict our path method to a certain geometrical family. This enables finding paths of arbitrary shape and thus may lead to lower-barrier paths at the cost of connecting only the \\u2018partner\\u2019 minima.\", \"saddles_between_global_minima\": \"We agree that the weight parameters (points on the path) created by our Algorithm 1 may not be a critical point of the loss function. However there must be at least one saddle point on the path as can be seen as follows: Either the permutation point is already a saddle or it is a local minimum of the loss. In the second case, the path must cross a point of higher loss at some point theta(d_s). By construction, at every point on the path, the derivative is zero in all directions but in d-direction, i.e. in the direction of the path. Since the loss values on the path before and after the point theta(d_s) are lower, the derivative at theta(d_s) in d-direction (along the path) must be zero as well. Thus theta(d_s) is a critical point; the saddle we were looking for.\\n\\nConnecting the \\u201cglobal minima\\u201d: In the paper we use a teacher-student setup to ensure being in a global minimum (of the student loss) before applying Algorithm 1. However we expect to find similar results for paths connecting local minima as encountered in classical network training and experiments on this are currently running.\"}",
"{\"title\": \"An Interesting Connection\", \"comment\": \"Hi Authors,\\nThank you for your interesting paper. I noticed that your work concerning minima of the loss landscape is related to our paper, which yields both theoretical and empirical results concerning the suboptimal local minima.[1] Please consider mentioning the relationship with our work in your next version.\\n\\n[1] https://arxiv.org/abs/1910.00359\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies a special type of weight symmetry in neural networks. I think studying the geometry of neural-nets is an interesting and important direction for understanding neural-nets, and along this direction, weight-space symmetry is an important subject. However, it seems to me this paper does not make enough contributions. Details are given below.\", \"list_of_contributions_of_the_paper\": \"1.\\tPropose an algorithm to find a (low-loss) path connecting arbitrary two partner local minima and passing through a permutation point, where a permutation point is defined as a weight setting where a pair of neurons (in the same layer) have the same fan-in and fan-out weights. \\n\\n2.\\tTheoretically prove that some permutation points are connected via paths with equal loss. (Proposition 1 and 2)\\n\\n3.\\tProvide a lower bound for the number of permutation points and high-order permutation points (Proposition 3).\", \"cons\": \"1.\\tThe theory of this paper is a bit weak. There are three propositions. Proposition 1 and Proposition 2 are about the equal-loss surface and theoretical existence of an equal-value path. They are kind of straightforward to prove. Prop. 3 is about counting the number of permutation points. It is a rather simple combinatorial problem, and the lower bound of the expression (an exponential bound) seems standard. \\n\\n2.\\tSimple proofs can sometimes provide nice insight, but it is not clear how the study of partner global minima can help improve the understanding of DNN. \\n --First, partner global minima are just a special case of critical points created by \\\"neuron splitting\\\", which has been comprehensively studied in [FA2000] (Fukumizu and Amari, 2000). Note that Theorem 1 of this paper is also directly borrowed from [FA2000]. To me, this paper does not provide much additional theoretical insight on neuron splitting. \\n --Second, what is the significance of the simulation results? Prior works have shown the existence of low-cost path; this paper shows the existence of a low-cost path containing a permutation point. The major difference is that the new low-cost path is more special. Why is this finding interesting and useful? (noting that the proof of the existence of such a path seems to be much easier than proving the existence of such a path for two general global minima).\\n \\n\\n3.\\tIt is not clear how the path-finding algorithm helps in practice.\\na)\\tThe algorithm is computationally expensive. It is a double-loop algorithm: in the outer loop, d is reduced by a tiny amount at each time; in the inner loop, for a fixed d, gradient descent is run for the entire DNN (with only one parameter excluded) until convergence. The total time is (# of d) * (original running time). Here, # of d\\u2019s depends on the grid: if the initial d is 10 and the grid size is 0.1, then (# of d) = 100. \\n Compared to the path-finding algorithm in Garipov et al., which only runs GD for one time, this algorithm is much more expensive, yet the benefit is unclear. \\nb)\\tThe motivation of the algorithm is not clear. 
Why choose to monotonically decrease the difference between the fan-in weights (of the chosen pair of neurons), but let all other weights freely optimized during the pathfinding algorithm? The paper does not provide a detailed explanation of this. \\n\\n4.\\tMinor issues\\n -- The weight parameters created by Algorithm 1 may not be a critical point of the loss function, and thus not necessarily a saddle point. I think the claim \\u201csince the path connects two partner minima, there must be at least one saddle point on the path\\u201d is incorrect without extra assumptions. \\n --The paper mentioned \\\"global minima\\\" in the introduction; but in practical training, one does not always find global minima. Is \\\"global minima\\\" crucial for the theory and for the algorithm?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the permutation symmetry of deep neural networks. It was known that by reordering neurons and their connections in each layer, the input -> output map the neural network represents can be preserved. This corresponded to a set of unconnected equivalent points in the weight space. The authors study the weight-space connectivity of these points through midpoints they call \\u201cpermutation points\\u201d. They demonstrate that such points are members of high-dimensional manifolds of equivalent points. After that, they look at empirical experiments and explicitly construct a path between two equivalent weight-space points on a toy task and MNIST.\\n\\nI generally like the paper and its geometrical lens on the problem. I find the figures very helpful in understanding what is going on. However, there are a few points that I wasn\\u2019t entirely clear on. I will detail them below.\\n\\n-- Point 1 --\\nConnecting equivalent minima vs connecting SGD-found minima. \\n\\nIf I understood the paper correctly, the derivation connects two weight space points A and B whose weights and biases, once loaded to the neural network, would have the exact same answers on all inputs X i.e. f_A(X) == f_B(X), i.e. they are a pair of equivalent points. I understand that those are the ones we obtain by using the permutation symmetries.\\n\\nHowever, some of the papers cited look at the low-loss paths between pairs of optima found by training with SGD from independent initializations, which in turn represent different functions. I.e. for two such optima C and D, the predictions on the val/test set are (sometimes) different, showing that the functions are not the same. I found the initial evidence in:\\n\\nLarge Scale Structure of Neural Network Loss Landscapes. Stanislav Fort and Stanislaw Jastrzebski. NeurIPS 2019. (https://arxiv.org/abs/1906.04724)\", \"and_also_in_much_more_detail_in_another_openreview_submission\": \"\", \"deep_ensembles\": \"A Loss Landscape Perspective. (https://openreview.net/forum?id=r1xZAkrFPr)\\n\\nI found your results very compelling, however, the two problems seem to be quite different -- on one hand you are connecting a pair of minima that are in fact *identical* by construction. On the other hand the empirical work in literature (especially in the two papers I provided above) deals with pairs of minima that in fact do differ on the test set (at least).\\n\\nWould you mind commenting on how the two approaches relate to each other? \\n\\n-- Point 2 --\\nHigher order connectivity\\n\\nIn Large Scale Structure of Neural Network Loss Landscapes. Stanislav Fort and Stanislaw Jastrzebski. NeurIPS 2019. (https://arxiv.org/abs/1906.04724), the authors look at higher-order connectivity between SGD-found optima (e.g. connecting 3 optima on a 2-manifold, 4 optima on a 3-manifold etc.). They also have a particularly simple path-finding algorithm. This seems relevant to the approach you are presenting, although the points in Point 1 still stand.\\n\\n-- Point 3 --\\nPrevious work on connecting two optima using layer-wise weight merging\\n\\nExplaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets. 
Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, Rong Ge. (https://arxiv.org/abs/1906.06247)\\n\\nThey prove that a low-loss path between 2 optima exists provided you can apply a p_keep = 0.5 dropout on each of the optima without incurring a significant loss punishment for it. This paper seems very related to your approach. Would you mind commenting on the differences?\\n\\n-- Conclusion -- \\nI like the paper and the idea in general. I appreciate the geometrical lens the authors took. My main point of confusion relates to the connection between this work and the low-loss connectivity between inequivalent optima found in literature, which (at least to me) seems to be the more interesting of the two connectivities.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presented a method for studying the landscape of the loss function w.r.t. parameters in a neural network from the perspective of weight-space symmetry. The detailed method includes constructing/optimising a low-loss path in between two parameter vectors (incoming connections from the previous layer) of two neurons respectively, and then set the output weight vectors (outgoing connections to the next layer) to be the same without changing the output of the current layer.\\n\\nThe empirical results show that the proposed optimisation scheme for finding the path is indeed low-loss, which implies that there exists numerous critical points in between two equivalent local minima (which are global minima in over-parametrised models).\\n\\nMy major concern is that scope of the study is very limited, since only permutation is considered here. I am not an expert in this field, however, I could come up with examples which could easily generalise the method of study to rotations since rotation matrices include permutation matrices. \\n\\nWe can look into the loss landscape of 2D matrix factorisation in this form UV^T=W, and it is obvious that any rotation of U on the RHS of it is an optimal solution as long as the same rotation is applied to V. The perspective from this optimisation problem is that it gives us a continuous plateau and it includes permutation of the dimensions. \\n\\nFor neural networks with only one hidden layer. For example, consider a neural network in this form y=Uf(Vx) where U and V are parameter matrices and f() is a monotonic squashing function. It is easy to show that, when U is timed with a matrix R, where RR^T= I, as in U'= UR, there exist a V' and \\\\alpha that gives R^Tf(Vx) = \\\\alpha f(V'x) so that Uf(Vx) = URR^Tf(Vx) = U'f(V'x) for the hyperbolic tangent function. In the case where ReLU activation function is used as f(), then as long as the rotation doesn't produce negative entries, V' exists. In addition, when f is ReLU, isotropic scaling of the outputs from the first layer also gives rise to equivalent optima. \\n\\nCompared to the proposed study, the aforementioned way of studying the neural networks naturally gives continuous plateaus w.r.t. U in the loss landscape, and, by studying the discontinuity of the landscape w.r.t. V, more understanding could be unveiled.\"}"
]
} |
rJx7wlSYvB | Differentiable Bayesian Neural Network Inference for Data Streams | [
"Namuk Park",
"Taekyu Lee",
"Songkuk Kim"
] | While deep neural networks (NNs) do not provide the confidence of their predictions, a Bayesian neural network (BNN) can estimate the uncertainty of its predictions. However, BNNs have not been widely used in practice due to the computational cost of predictive inference. This prohibitive computational cost is a hindrance especially when processing stream data with low latency. To address this problem, we propose a novel model which approximates BNNs for data streams. Instead of generating a separate prediction for each data sample independently, this model estimates the increments of the prediction for a new data sample from the previous predictions. The computational cost of this model is almost the same as that of non-Bayesian deep NNs. Experiments including semantic segmentation on real-world data show that this model performs significantly faster than BNNs while estimating uncertainty comparable to that of BNNs. | [
"Bayesian neural network",
"approximate predictive inference",
"data stream",
"histogram"
] | Reject | https://openreview.net/pdf?id=rJx7wlSYvB | https://openreview.net/forum?id=rJx7wlSYvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"649iNz_YB",
"SJgiWVZPoH",
"H1x8fZ-Dsr",
"Syx_8h1wiB",
"rkgsfR0Lir",
"Hkg457uTtS",
"HJeDxzgnKS",
"SyeZuXesFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746951,
1573487619361,
1573486862039,
1573481551784,
1573477907224,
1571812236210,
1571713519356,
1571648361488
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2354/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2354/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2354/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2354/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2354/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2354/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2354/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The main contribution is a Bayesian neural net algorithm which saves computation at test time using a vector quantization approximation. The reviewers are on the fence about the paper. I find the exposition somewhat hard to follow. In terms of evaluation, they demonstrate similar performance to various BNN architectures which require Monte Carlo sampling. But there have been lots of BNN algorithms that don't require sampling (e.g. PBP, Bayesian dark knowledge, MacKay's delta approximation), so it seems important to compare to these. I think there may be promising ideas here, but the paper needs a bit more work before it is to be published at a venue such as ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for detailed and constructive comments.\", \"comment\": \"We really appreciate your comments. We have revised the paper according to the suggestions and would like to clarify several things:\\n\\n1. How is the posterior distribution of the weights computed?\\n\\nIn this paper, as mentioned in the introduction and Section 4.2, we only consider predictive inference. We assume that we have access to a posterior. In the experiments, the posterior is given in Section 5.1. Section 5.2 trained the fully-bayesian NN using variational inference and Section 5.3 used BNN with MC-dropout.\\n\\n\\n2. Figure 2 is not clear. What do you mean by degenerated in Section 5.1.?\\n\\n\\\"Degenerated\\\" means the random variable has only one value, i.e., the distribution is a Dirac delta function. We removed the term \\\"degenerated\\\" from this revision to avoid confusion. Instead, we added detailed descriptions of Figure 2. Could you refer to reply to revewer 2's comment?\\n\\n\\n3. The description of the baselines the authors compare with is not clear.\\n\\nWe compare four methods in the experiments. The first two methods are deterministic NN (DNN) and Bayesian NN (shown as MU). DU and DBNN are the methods that OCH is added to input and output of DNN and BNN, respectively. In short, DU = OCH + DNN + OCH and DBNN = OCH + BNN + OCH = posterior + OCH + DNN + OCH. We also added a description to this revision to clarify the details of the baselines.\\n\\n\\n4. The paper is missing a related work section describing state of the art methods to address stream data.\\n\\nWe agree with you. In this revision, we cited some related work on vector quantization and data streams in Section 3.\\n\\n\\n5. In table 2 the benefits of the proposed approach are not very significant.\\n\\nTable 2 shows the computational and predictive performance of DBNN using modern NNs in semantic segmentation. In Table 2, we argue that DBNN is significantly faster than MU, and uncertainty is comparable in practical problem on real-world dataset. According to this table, the throughput of DBNN is significantly higher than that of MU. The uncertainty (measured by Acc-90, Unc-90, IoU-90, and NLL) of DBNN is comparable with that of MU, and is better estimated than that of DNN.\\n\\n\\n6. The experiments are missing error bars.\\n\\nFigure 2 has error bars. As you pointed out, table 1 and table 2 have no error. Only the error of throughput is written in the Computational Performance paragraph. We didn't include the error numbers in table 1 and table 2, because we thought the table with errors is too wide and unreadable. For example, table 2 with errors is as follows:\\n\\n Method | Thr (fps) | Acc | Acc-90 | Unc-90 | IoU | IoU-90 | NLL | Cov-90 \\n \\u2014\\n DNN | 6.14\\u00b10.43 | 85.8\\u00b10.0 | 89.1\\u00b10.0 | 30.4\\u00b10.0 | 58.5\\u00b10.0 | 62.5\\u00b10.0 | 1.22\\u00b10.00 | 93.1\\u00b10.0\\n MU | 0.189\\u00b10.002 | 86.4\\u00b10.0 | 93.0\\u00b10.0 | 60.1\\u00b10.1 | 61.0\\u00b10.1 | 69.9\\u00b10.0 | 0.728\\u00b10.001 | 84.2\\u00b10.0\\n DU | 5.33\\u00b11.36 | 85.4\\u00b10.1 | 91.5\\u00b10.2 | 51.3\\u00b10.7 | 57.3\\u00b10.2 | 63.3\\u00b10.3 | 0.980\\u00b10.010 | 81.9\\u00b10.4\\n DBNN | 5.22\\u00b11.40 | 85.8\\u00b10.2 | 92.3\\u00b10.4 | 63.0\\u00b12.7 | 58.9\\u00b10.6 | 68.6\\u00b10.9 | 0.826\\u00b10.016 | 80.4\\u00b10.9\\n\\nWe may add this expanded table to Appendix if it would be more informative.\\n\\n\\n7. It seems the real experiments of Section 5.1. 
only consider one baseline MU.\\n\\nSection 5.1 contains the results of DNN, MU, DU, and DBNN. If you are referring to table 1 in Section 5.2, I would like to answer on the premise. A classification experiment in Section 5.2 is designed to compare performance changes when converting BNNs to DBNNs using shallow and narrow NNs in various situations. However, according to [Guo, 2017], shallow DNN is well-calibrated and predicts uncertainty well, and predictive results of DNN and BNN (not DBNN) are more dependent on the characteristics of data, not on the type of NN. Therefore, we excluded the results of DNN because they do not clearly show the purpose of this experiment.\", \"reference\": \"- Guo, Chuan, et al. \\\"On calibration of modern neural networks.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\\n\\n\\n8. The paper needs to improve the writing. For example, the sentence \\\"it has been unable to estimate the uncertainty of the predictions until recently\\\" sounds awkward. The authors have to better explain the features and challenges of stream data. There are several steps of the proposed method that are not well described. The notation x_0 and x_1 for the test point and the NN weights is confusing.\\n\\nThank you for your detailed comments. We added a more detailed explanation of the proposed method in this revision. In addition, we released the implementation of DBNN and experiments as open source. Readers will find more details about DBNN in the code. According to your suggestion, we changed the notation as follows to avoid confusion: data $\\\\textbf{x}_0$ \\u2192 $\\\\textbf{x}$, weight $\\\\textbf{x}_1$ \\u2192 $\\\\textbf{w}$, and $\\\\textbf{x} = (\\\\textbf{x}_0, \\\\textbf{x}_1)$ \\u2192 $\\\\textbf{z} = (\\\\textbf{x}, \\\\textbf{w})$.\"}",
"{\"title\": \"We appreciate your insightful comments (2/2)\", \"comment\": \"As you suggested, to address the fairness of the evaluation, we added more data in the Appendix E.\\n\\nIn this paper, all experiments use sampling-based BNNs, requiring multiple forward passes. Section 5.2 uses a fully-Bayesian NN, i.e., all weights are random variables, and Section 5.3 which is semantic segmentation experiment uses BNN with MC-dropout layers. (We mentioned only in Section 5.3 that the BNN contains the MC-dropout layers, but this can be confused with the description of the baseline. So, in this revision, we added the description to the baseline.)\\n\\nIn the semantic segmentation experiment, \\\"overhead of sampling weights is negligible\\\" means \\\"overhead of dropout layer is negligible in single forward pass\\\" (we updated it in this revision). Thus, even though the BNN has additional dropout layers on the DNN, the execution time of BNN with 30 forward pass is just 30 times that of the DNN.\\n\\nDBNN also includes MC-dropout layer like BNN. However, while BNN executes repetitive NN on one data, DBNN executes only one forward NN pass. DBNN uses OCH to accumulate recent predictions, and it uses the distribution of these memorized predictions as well as the most recent prediction to predict the result and the uncertainty. In conclusion, past predictions are used to reinforce predictive uncertainty.\", \"the_following_questions_may_arise\": \"\\u201cCan BNN with a small number of forward pass get uncertainty equivalent to DBNN?\\u201d Table 4 added in this revision shows computational and predictive performance for the number of forward pass of BNN (shown as MU). According to this table, to achieve the same uncertainty measures (Acc-90, Unc-90, IoU-90, NLL) as DBNN, MU requires 10 forward passes. In particular, Unc-90 \\u2014 probability that the pixel is not 90% confident pixel when prediction is incorrect, i.e., $p(\\\\textbf{unconfident} | \\\\textbf{inaccurate})$ \\u2014 of DBNN has always outperformed that of MU. For the improvement rate from Acc and IoU to Acc-90 and IoU-90, the uncertainty of DBNN is the same as that of MU with 30 forward passes.\\n\\n\\n\\n\\nAlso, according to your suggestions, we added some detailed descriptions to introduction and changed the notation as follows to avoid confusion: data $\\\\textbf{x}_0$ \\u2192 $\\\\textbf{x}$, weight $\\\\textbf{x}_1$ \\u2192 $\\\\textbf{w}$, and $\\\\textbf{x} = (\\\\textbf{x}_0, \\\\textbf{x}_1)$ \\u2192 $\\\\textbf{z} = (\\\\textbf{x}, \\\\textbf{w})$.\"}",
"{\"title\": \"We appreciate your insightful comments (1/2)\", \"comment\": \"Thank you for insightful and constructive comments.\\n\\nAs you pointed out, online codevector is not new. The online codevector histogram (OCH) is a combination of the vector quantization histogram [Kotani, 2002] and stochastic variant of the algorithm which adds and deletes prototypical vectors, e.g. ILVQ [Xu, 2012] and Growing Neural Gas [Frezza-Buet, 2014], depending on the data stream. Thus, we agree that the novelty of the OCH algorithm is incremental.\\n\\nIn this paper, however, we want to emphasize how to use this rather than the algorithm of OCH itself. As pointed out in [Gal, 2016; Mohamed, 2019], three methods have been widely used to calculate the gradient of MC estimation: re-parametrization trick, score function, and measure-valued gradient estimation. However, as described in Section 2.2 using the re-parametrization trick as an example, these methods are computationally inefficient for DBNN. In short, OCH does not change the positions of codevectors. Instead, it only adds and removes codevectors and changes its weights. Thus, OCH calculates the difference of MC estimation by calculating the difference of the weights of the samples; this method is computationally efficient.\", \"references\": [\"Hern\\u00e1ndez-Lobato, Jos\\u00e9 Miguel, and Ryan Adams. \\\"Probabilistic backpropagation for scalable learning of bayesian neural networks.\\\" International Conference on Machine Learning. 2015.\", \"Wang, Hao, S. H. I. Xingjian, and Dit-Yan Yeung. \\\"Natural-parameter networks: A class of probabilistic neural networks.\\\" Advances in Neural Information Processing Systems. 2016.\", \"Hwang, Seong Jae et al. \\u201cSampling-free Uncertainty Estimation in Gated Recurrent Units with Applications to Normative Modeling in Neuroimaging.\\u201d UAI (2019).\", \"Wu, Anqi, et al. \\\"Deterministic variational inference for robust bayesian neural networks.\\\" (2018).\"]}",
"{\"title\": \"Thank you for thoughtful and comments.\", \"comment\": \"Thank you for your comments and suggestions.\\n\\n1. According to you suggestions, we included [Frezza-Buet, 2014], in Section 3, which is closely related to our online codevector histogram and cited other related work.\\n\\n2. We agree to your point that our explanation about codevector was not enough. We added the following to this revision: Initially, OCH is empty and has no codevector. OCH adds input data as a codevector with high probability if the number of codevectors is small. After a period of time, OCH contains codevectors, which are the recent data from data stream.\\n\\n3. Thank you for the constructive feedback.\\nIn this revision, we added more explanation for Figure 2 in Section 5.1. (We changed the notation as follows to avoid confusion: data $\\\\textbf{x}_0$ \\u2192 $\\\\textbf{x}$, weight $\\\\textbf{x}_1$ \\u2192 $\\\\textbf{w}$, and $\\\\textbf{x} = (\\\\textbf{x}_0, \\\\textbf{x}_1)$ \\u2192 $\\\\textbf{z} = (\\\\textbf{x}, \\\\textbf{w})$)\\n\\nSimple linear regression experiment in Section 5.1 shows the difference between DNN, MU, DU and DBNN. The top of (a)-(d) in Figure 2 are approximated distributions of data and weight samples $p(x, w)$, and bottom are approximated distributions of output samples (with data) $p(x, y)$. To predict result, DNN uses point-estimated weight and data $x = 0$, which is the data at $t = 0$. MU uses weight distribution, instead of point-estimated weight, and data $x = 0$. DU uses point-estimated weight, which is the same as DNN. However, since DU contains OCHs, it uses not only the most recent data ($x = 0$ at $t = 0$) but also past data ($x < 0$ when $t < 0$). As a result, DU predicts the distribution of y. DBNN predicts y using both weight distribution and data from the past to now.\\n1) The simple linear regression experiment in this submission is slightly different from the typical regression: x is time-variant and is given by $p(x | t) = \\\\delta(x - v t)$, so $p(x, w) = p(x)p(w)$ is time-variant ($p(w)$ is time-invariant). At the same time, y is also time-variant. The input probability $p(x, w)$ and output probability $p(y)$, i.e., marginalized $p(x, y)$ over x, are time-variant and (a)-(d) are snapshots at $t = 0$.\\n2) The data vector x is from the input data stream. NN parameter w is sampled from the posterior. OCH_Z, which represents the distribution of the input vector z, picks the x and the w with probability.\\n3) In this experiment, as you mentioned, the distribution of y obtained by DU and DBNN is biased. This is because OCH_Z and OCH_Y do not forget past codevectors fast enough because the data stream changes so fast; we set an extremely high data stream change speed to show the difference between each method.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a differentiable Bayesian neural network. Traditional BNN can model the model uncertainty and data uncertainty via adding a prior to the weight and assuming a Gaussian likelihood for the output y. However, it's slow in practice since evaluating the loss function of BNN involves multiple runs over the entire network. Also when the input data is non-stationary, the output function can not be differentiated with respect to the input data. The paper proposes to use an online code vector histogram (OCH) method attached to the input and output of a classical DNN. Input OCH captures the distribution of both input data and network weights, and the output OCH captures the distribution of the predictions. Since these OCHs are differentiable, the proposed DBNN model can be used for streaming input data with time-variant distributions.\\n\\nI think the idea is interesting and novel. It explores a different way of modeling distributions with DNN. Instead of adding priors, DBNN relies on histograms, which is usually used to describe distributions for discrete observed input data. So the paper is well-motivated. \\n\\n1. The paper needs more literature review in the area of data streaming. Papers, such as [1], have proposed to use a vector quantization process that can be applied online to a stream of inputs. This paper introduces the vector quantization but doesn't mention the use of it in streaming data in related work, which kind of blurs the contribution a bit. Moreover, it would be helpful for readers to learn about useful techniques for streaming data from this paper. \\n\\n[1] Herv\\u00e9 Frezza-Buet. Online computing of non-stationary distributions velocity fields by an accuracy controlled growing neural gas\\n\\n2. I think the paper might need a bit more explanation about codevector, since it's not a very well-acknowledged concept in this field. The main issue for me to understand it is how to get these codevectors. When DBNN deals with streaming data and starts from no input, is the set of codevector empty at the beginning? The input data points are accumulated as codevectors? I hope the authors could clarify this process a bit more. \\n\\n3. Given the insufficient understanding of codevector, figure 2 is a bit hard to read. 1) (a)-(d) are figures for x0 at t=0, which is not time-variant. 2) what are these codevectors picked. 3) It seems that the codevectors are out of the regime of the distribution of y. But according to algorithm 2, y_*<-T(c_*), would that be a problem? I think (a)-(d) are informative but not straightforward to read. The authors need to put more text to explain these figures, since this simulated example can help readers to understand what is codevector and how it helps for uncertainty estimation. \\n\\nOverall I think the paper is well-written. The idea is novel and practical in the scenario of DNN. I would vote for accept.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposed differentiable Bayesian neural networks (DBNN) to speed up inference in an online setting. The idea is to combine online vector quantization with Bayesian neural networks (BNN), so that only incremental computation is needed for a new data point.\\n\\nIt seems the idea of online codevector histogram, or online vector histogram, is not new, which is not surprising since the original codevector histogram work is from decades ago (1982). A quick search shows several related work in this domain, e.g., \\u2018An Online Incremental Learning Vector Quantization\\u2019. It would be better if the authors could clarify the differences if they want to claim the contribution.\\n\\nOne of my major concerns is the fairness in terms of comparison to BNN approaches. For example, in Table 2, BNN (shown as MU) is significantly slower than DU/DBNN and DNN. The authors mentioned that this is because MU predicts results for 10 batches of size 3, and therefore 30 times slower. Since DU and DBNN also uses MC-dropout, why is this not an issue for DU/DBNN. Such large overhead is also inconsistent with the description in Section 5.3, saying that the \\u2018overhead of sampling weights is negligible\\u2019. Could the authors elaborate on this?\", \"a_related_problem_for_clarification\": \"it is mentioned before Section 5.1 that 30 samples are drawn from the BNN\\u2019s posterior. Do you mean 30 feedforward passes of MC-dropout, as done in the MC-dropout paper? Also, the authors should have made it clear that they are using MC-dropout as a BNN.\\n\\nIn the introduction, the authors motivate the proposed DBNN by saying that BNN needs dozens of samples from weight distributions and therefore is rather inefficient. However, there are a lot of modern BNN that are both sampling-free and differentiable. For example, natural-parameter networks (NPN) parameterize both the weight and neuron distributions using natural parameters. Even the earlier work, probabilistic BP (PBP), as cited in the current paper, also counts as sampling-free. The claim of \\u2018being differentiable\\u2019 without acknowledging prior work is rather misleading.\\n\\nThe point above is also related to the MU baseline in Table 2. The issue of needing 30 passes can be readily resolved if modern BNN such as NPN (or PBP), which takes only one pass, is used.\\n\\nThe organization could be improved to make the paper more readable. It would be better if the problem setting of online inference is introduced at the beginning, followed by the overview of DBNN and then the OCH details. Otherwise, it is not clear what the focus of DBNN is, until Section 4.\", \"minor\": \"It might be better to denote the weight as \\u2018w\\u2019 rather than \\u2018x\\u2019, to avoid confusion.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary of the Paper:\\n \\nThis paper describes a method for training Bayesian neural networks in the context of stream data. The method proposed is based on using a quantization approach with some techniques to estimate the change in probability distributions. The proposed approach is compared on some tasks, including real and synthetic datasets.\", \"detailed_comments\": \"The paper needs to improve the writing. For example, the sentence \\\"it has been unable to estimate the uncertainty of the predictions until recently\\\" sounds awkward.\\n\\nThe authors have to better explain the features and challenges of stream data.\\n\\nThe paper is unclear. There are several steps of the proposed method that are not well described.\\n\\nHow is the posterior distribution of the weights computed?\\n\\nThe notation x_0 and x_1 for the test point and the NN weights is confusing.\\n\\nFigure 2 is not clear.\\n\\nThe description of the baselines the authors compare with is not clear.\\n\\nWhat do you mean by degenerated in section 5.1.?\\n\\nThe paper is missing a related work section describing state of the art methods to address stream data.\\n\\nIt seems the real experiments of section 5.1. only consider one baseline MU. I believe this is insufficient.\\n\\nIn table 2 the benefits of the proposed approach are not very significant.\\n\\nThe experiments are missing error bars. It is not possible to extract conclusions of significance without them.\\n\\nMy overall impression is that the paper needs to better explain the approach followed and improve the notation to facilitate the reading. I believe that this paper needs for work and is not yet suitable for acceptance.\"}"
]
} |
ByeMPlHKPH | Lite Transformer with Long-Short Range Attention | [
"Zhanghao Wu*",
"Zhijian Liu*",
"Ji Lin",
"Yujun Lin",
"Song Han"
] | Transformer has become ubiquitous in natural language processing (e.g., machine translation, question answering); however, it requires an enormous amount of computation to achieve high performance, which makes it unsuitable for mobile applications that are tightly constrained by hardware resources and battery. In this paper, we present an efficient mobile NLP architecture, Lite Transformer, to facilitate deploying mobile NLP applications on edge devices. The key primitive is the Long-Short Range Attention (LSRA), where one group of heads specializes in local context modeling (by convolution) while another group specializes in long-distance relationship modeling (by attention). Such specialization brings consistent improvement over the vanilla transformer on three well-established language tasks: machine translation, abstractive summarization, and language modeling. Under constrained resources (500M/100M MACs), Lite Transformer outperforms the transformer on WMT'14 English-French by 1.2/1.7 BLEU, respectively. Lite Transformer reduces the computation of the transformer base model by 2.5x with 0.3 BLEU score degradation. Combined with pruning and quantization, we further compressed the model size of Lite Transformer by 18.2x. For language modeling, Lite Transformer achieves 1.8 lower perplexity than the transformer at around 500M MACs. Notably, Lite Transformer outperforms the AutoML-based Evolved Transformer by 0.5 BLEU in the mobile NLP setting without the costly architecture search that requires more than 250 GPU years. Code has been made available at https://github.com/mit-han-lab/lite-transformer. | [
"efficient model",
"transformer"
] | Accept (Poster) | https://openreview.net/pdf?id=ByeMPlHKPH | https://openreview.net/forum?id=ByeMPlHKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JTjzaDPu5k",
"SJgrvtiYjH",
"HylA7KiFjH",
"SJezRdjtoB",
"rJgULPjYjS",
"rkebus9xqB",
"rkxtlq35FB",
"ryl7mTePFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746921,
1573661021262,
1573660965603,
1573660874113,
1573660493803,
1572019048573,
1571633648756,
1571388698746
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2353/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2353/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2353/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2353/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2353/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2353/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2353/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents an efficient architecture of Transformer to facilitate implementations on mobile settings. The core idea is to decompose the self-attention layers to focus on local and global information separately. In the experiments on machine translation, it is shown to outperform baseline Transformer as well as the Evolved Transformer obtained by a costly architecture search.\\nWhile all reviewers admitted the practical impact of the results in terms of engineering, the main concerns in the initial paper were the clarification of the mobile settings and scientific contributions. Through the discussion, reviewers are fairly satisfied with the authors\\u2019 response and are now all positive to the acceptance. Although we are still curious how it works on other tasks (as the title says \\u201cmobile applications\\u201d), I think the paper provides enough insights valuable to the community, so I\\u2019d like to recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Our General Response\", \"comment\": \"We thank all reviewers for their comments. In addition to the specific response below, here we list the answers for some common questions, some additional experiments and the changes we made in the revision.\\n\\n1. Clarifications of mobile setting\\nIn this paper, we defined the mobile setting for NLP tasks, which is under 500M Mult-Adds and 10M parameters. These two constraints are based on the real-world requirements for mobile devices and the conventions in the computer vision community.\\n(a) Mult-Add constraints: The floating-point performance of the ARM Cortex-A72 mobile CPU is about 48G FLOPS (4 cores @1.5GHz). To achieve the peak performance of 50 sentences per second, the model should be less than 960M FLOPs (480M Mult-Adds). This is a common constraint in the computer vision community. For example, MobileNet [1, 2] is less than 500M Mult-Adds; PNAS [3] uses 500M Mult-Adds as the constraint of its mobile setting.\\n(b) Parameter constraints: The constraint for the parameters is based on the download limitation. When an application is larger than 100MB, it cannot be downloaded with 4G LTE (only with WIFI) in the App Store. Therefore, the number of parameters for a mobile model should be limited. As MobileNet V2 (1.4) contains around 7M parameters, we round it to the nearest magnitude, 10M parameters, as our constraint. \\n\\n2. Additional experiments\\n(a) We conducted experiments on one additional language pair, WMT\\u201914 English to French. Our Mobile Transformer outperforms the vanilla Transformer by more than 1 BLEU score.\\n(b) We evaluated our Mobile Transformer on the standard setting. It outperforms the vanilla Transformer by 0.6 BLEU score on WMT\\u201914 English to French.\\n(c) We conducted some additional ablation studies including different combinations of two branches and different numbers of heads in LSRA.\\n\\n3. Other changes\\n(a) We included an additional comparison with the Evolved Transformer under the 360M Mult-Adds constraints, where the Evolved Transformer achieves 25.4 BLEU @ 364M Mult-Adds, and our Mobile Transformer achieves 25.8 BLEU @ 360M Mult-Adds.\\n(b) We removed the extremely efficient computation constraint for clarity.\\n(c) We added more discussions with some related papers.\", \"references\": \"[1] Andrew Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, \\\"MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications\\\", arXiv 2017.\\n[2] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen, \\\"MobileNetV2: Inverted Residuals and Linear Bottlenecks\\\", CVPR 2018.\\n[3] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy, \\\"Progressive Neural Architecture Search\\\", ECCV 2018.\"}",
"{\"title\": \"Our Response to Reviewer 1\", \"comment\": \"Thank you very much for your constructive comments.\\n\\n1. Specialization for mobile setting\\nAs mentioned in Section 4, the original multi-head self-attention captures \\u2018global\\u2019 and \\u2018local\\u2019 information in a single module. This unified design is not efficient especially when the computation budget is very limited (i.e., under the mobile setting). We need to make the module more specialized so that it can capture the context in a more efficient way. To this end, we proposed LSRA that captures the short-range information by convolution and the long-range information by attention. As shown in Figures 4 and 5, our LSRA achieves more improvements when the computation constraint is tighter. This is because the redundancy of the unified design is more severe under the mobile setting.\\n\\nWe also conducted experiments under the standard setting on the WMT En-Fr dataset. Our Mobile Transformer still outperforms the vanilla Transformer under the standard setting:\", \"base_transformer\": \"40.0 BLEU @ 1336M Mult-Adds\\nMobile Transformer (Ours): 40.6 BLEU @ 1328M Mult-Adds\\n\\n2. Clarification of mobile settings\\nWe defined our mobile setting based on the real-world requirements for mobile applications and the conventions in the computer vision community. Please refer to our general response for more information. We revised our paper accordingly in Section 5.\\n\\nWe have also listed all other changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper.\"}",
"{\"title\": \"Our Response to Reviewer 3\", \"comment\": \"Thank you very much for your encouraging and constructive comments.\\n\\n1. Clarification of mobile settings\\nWe defined our mobile setting based on the real-world requirements for mobile applications and the conventions in the computer vision community. Please refer to our general response for more information.\\n\\nInstead of the memory footprint and inference time, we set our mobile setting based on the number of Mult-Adds and parameters due to the following reasons:\\n(a) These two metrics are good indicators of the hardware resources required by the model: the number of Mult-Adds is correlated with energy consumption and inference time; the number of parameters indicates the storage on the mobile device.\\n(b) The number of Mult-Adds and parameters are independent of the specific hardware (e.g., CPU, GPU), deep learning framework (e.g., PyTorch, TensorFlow), and backend acceleration (e.g., cuDNN, MKL-DNN).\\n\\n2. Measured latency\\nWe measured the latency of our model and baselines on Raspberry Pi 4 (4 [email protected]):\\n---------------------------------------------------------------------------------\\n|\\t\\t\\t\\t\\t\\t\\t|\\tLatency (ms/word)\\t|\\n---------------------------------------------------------------------------------\\n| Transformer\\t\\t\\t\\t|\\t95.0\\t\\t\\t\\t\\t|\\n| LightConv\\t\\t\\t\\t\\t|\\t73.4\\t\\t\\t\\t\\t|\\n| Mobile Transformer (Ours)\\t|\\t65.2\\t\\t\\t\\t\\t|\\n---------------------------------------------------------------------------------\\nAll of these models have the same BLEU score (34.1) on the IWSLT dataset. Our Mobile Transformer is 1.5x faster than the vanilla Transformer in terms of the measured latency.\\n\\n3. Design cost\\nWe have run 5 experiments in total (on WMT En-De) to explore different implementations of convolution (i.e., vanilla convolution, depthwise convolution). However, we have not tuned our model architecture heavily: the two branches have the same embedding dimension and the same number of heads. Therefore, our total training cost (including tuning the model architecture) is around 16 * 5 = 80 GPU days, which is of the same magnitude as the training cost of the vanilla Transformer and is much lower than the search cost of the Evolved Transformer (about 250 GPU years).\\n\\n4. Experiments on other language pair\\nWe also conducted experiments on one more language pair, WMT\\u201914 English to French. 
Our Mobile Transformer consistently outperforms the Transformer by more than 1 BLEU score:\\n----------------------------------------------------------------------------------------------------------------------------------\\n|\\t\\t\\t\\t\\t\\t\\t|\\t#Params\\t|\\t#Mult-Adds\\t|\\tBLEU\\t|\\t\\u0394BLEU\\t|\\n----------------------------------------------------------------------------------------------------------------------------------\\n| Transformer\\t\\t\\t\\t|\\t2.8M\\t\\t|\\t87M\\t\\t|\\t33.6\\t\\t|\\t--\\t\\t|\\n| Mobile Transformer (Ours)\\t|\\t2.9M\\t\\t|\\t90M\\t\\t|\\t34.9\\t\\t|\\t+1.3\\t\\t|\\n| Transformer\\t\\t\\t\\t|\\t11.1M\\t\\t|\\t338M\\t\\t|\\t37.6\\t\\t|\\t--\\t\\t|\\n| Mobile Transformer (Ours)\\t|\\t11.7M\\t\\t|\\t360M\\t\\t|\\t38.7\\t\\t|\\t+1.1\\t\\t|\\n| Transformer\\t\\t\\t\\t|\\t17.3M\\t\\t|\\t527M\\t\\t|\\t38.4\\t\\t|\\t--\\t\\t|\\n| Mobile Transformer (Ours)\\t|\\t17.3M\\t\\t|\\t527M\\t\\t|\\t39.6\\t\\t|\\t+1.2\\t\\t|\\n----------------------------------------------------------------------------------------------------------------------------------\\nPlease refer to Table 3 and Figure 5 in the revised PDF for more information.\\n\\n5. Attention maps\\nIn Figure 3, the visualization is based on the average attention maps of all heads in the same layer.\\n\\n6. <EOS> token\\nAccording to Clark et al. [1], each head of attention implicitly learns a function over the input sequence. As some head of attention only focuses on a subset of the sequence (e.g., nouns), the attention weights in the rows of all the other tokens (e.g., non-nouns) will be aggregated into some special entries (e.g., <EOS>). Thus, the attention weights on these special tokens will have a high value after taking the average.\\n\\n7. Trade-offs between encoder and decoder\\nWe only experimented with the same number of layers in the encoder and decoder to have a fair comparison with the vanilla Transformer.\\n\\n8. Other tasks\\nOur LSRA is a general module that can, in principle, be plugged into the models for other tasks, including language modeling and abstractive summarization. This remains future work.\\n\\nWe have also listed all other changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper.\", \"references\": \"[1] Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher Manning, \\\"What Does BERT Look At? An Analysis of BERT\\u2019s Attention\\\", BlackBoxNLP 2019.\"}",
"{\"title\": \"Our Response to Reviewer 2\", \"comment\": \"Thank you very much for your constructive comments.\\n\\n1. Evaluation results\\nWe evaluated our Mobile Transformer on multiple machine translation datasets, including IWSLT De-En and WMT En-De. In our revision, we also added experimental results on WMT En-Fr (see Table 3 and Figure 4). Across all datasets, our Mobile Transformer consistently outperforms the vanilla Transformer and the AutoML-based Evolved Transformer. These are two strong baselines. Compared to the vanilla Transformer, we achieved twice the BLEU score improvement as the Evolved Transformer under similar constraints, which should be considered significant.\\n\\nOn the other hand, our Mobile Transformer can reduce the computation by 2.5x with only 0.3 BLEU degradation and 15x with 5 BLEU degradation on the WMT En-Fr dataset. To achieve further compression, some general techniques (e.g., pruning, quantization) can be applied. In principle, we can safely quantize the model to 8 bits (4x) and prune the model by 50% (2x) without much loss of accuracy, which will give us around 120x reduction in model size.\\n\\n2. Comparison with Evolved Transformer\\nOur manual design is not the one presented in the ET\\u2019s paper even though it lies within its search space. This, we believe, is because NAS is limited by the sample efficiency and therefore cannot fully explore all the samples within its design space. We hope that our paper can raise people\\u2019s awareness about the importance of design insights: e.g., flatten the transformer to enlarge the attention\\u2019s computation, incorporate parallel branches that specialize in extracting features from different ranges.\\n\\n3. Previous work\\nWe included the reference for adaptive span [1] and the all-attention [2] in our revision. Both methods are designed for the character-level language modeling, where the sequence is typically very long (>1000 tokens). This is drastically different from the machine translation, where the sequence is much shorter (<30 tokens). The adaptive span applies masks for the long-range relations, which will induce significant information loss when the sequence is relatively short. Also, these methods are orthogonal to our LSRA and can potentially be applied together.\\n\\n4. Ablation study\\nWe included several ablation studies on the IWSLT dataset. We explored different combinations of two branches (attention+attention, convolution+convolution), and we also validated the effectiveness of flattening the FFN.\\n---------------------------------------------------------------------------------------------------\\n| Model\\t\\t\\t\\t\\t\\t\\t\\t|\\t#Mult-Adds\\t|\\tBLEU\\t|\\n---------------------------------------------------------------------------------------------------\\n| Mobile Transformer (Ours)\\t\\t\\t|\\t209M\\t\\t|\\t34.5\\t\\t|\\n| - with two branches of attention\\t\\t|\\t232M\\t\\t|\\t33.6\\t\\t|\\n| - with two branches of convolution\\t|\\t217M\\t\\t|\\t33.8\\t\\t|\\n| - without flattening the FFN\\t\\t\\t|\\t207M\\t\\t|\\t34.0\\t\\t|\\n---------------------------------------------------------------------------------------------------\\n\\nWe have also listed all other changes in our general response above. 
Please don\\u2019t hesitate to let us know for any additional comments on the paper.\", \"references\": \"[1] Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin, \\\"Adaptive Attention Span in Transformers\\\", ACL 2019.\\n[2] Anonymous, \\\"Augmenting Self-attention with Persistent Memory\\\".\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper claims to propose an extension of the Transformer architecture specialized for the mobile environment (under 500M Mult-Adds).\\nThe authors propose their method called \\\"Long-Short Range Attention (LSRA),\\\" which separates the self-attention layers into two different purposes, where some heads focus on the local context modeling while the others capture the long-distance relationship. \\nThey also demonstrate consistent improvement over the transformer on multiple datasets under the mobile setting. \\nIt also surpasses the recently developed comparative method called \\\"Evolved Transformer\\\" that requires a far costly architecture search under the mobile setting.\\n \\nThis paper is basically well written and easy to follow what they have done.\\nThe experimental results look good.\\n \\nHowever, I have several concerns that I listed as follows.\\n \\n1,\\nI am not sure whether my understanding is correct or not, but it seems that the proposed method, LSRA, is not a method specialized for mobile computation.\\nIn fact, in the paper, they say, \\\"To tackle the problem, instead of having one module for \\\"general\\\" information, we propose a more specialized architecture, Long-Short Range Attention (LSRA), that captures the global and local information separately.\\\" \\n \\nThere is no explicit discussion that LSRA is somehow tackling the mobile setting.\\nThere is a large mismatch (gap) between the main claim and what they have done.\\nIn other words, LSRA can be simply applied to standard-setting (non-mobile setting). Is there any reason that the proposed method cannot be applied to the standard-setting?\\nIf my understanding is correct, the paper must be revised and appropriately reorganized to clear this gap.\\n \\n2,\\nI am not convinced of the condition of the so-called \\\"mobile setting (and also extremely efficient constraint).\\\" \\nPlease provide a clear justification for it.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes Mobile Transformer, an efficient machine translation model, which achieves state-of-the-art results on IWSLT and WMT. The Mobile Transformer is base on long-short range attention (LSRA) modules that combine a depthwise convolution branch to encode the local information and a self-attention branch to capture the long-range information.\\n\\nThe main contribution of this paper includes\\n1. bottlenecks are not beneficial to 1D attention models\\n2. having both convolution and attention modules in parallel performs better and more efficient than having one of them alone. While LSRA is included in the search space of Evolved Transformer, surprisingly, their searching algorithm doesn't discover it. Evolved Transformer has either two convolution branches or two attention branches in parallel.\\n\\nThe paper is well written and easy to follow. The experiments are quite solid; however, it would be if the authors can report how Mobile Transformer performs on other language pairs or other NLP tasks.\", \"questions\": \"1. Do the attention maps in Figure 3 come from the average of multiple heads or just one of them? \\n2. The constraint for the mobile setting is set to 10M parameters. Can you justify why you choose it? In my opinion, memory footprint or inference time on mobile devices can be more realistic. \\n3. Regarding the design cost shown in Figure (b). Does the number for Mobile Transformer include the cost of all the experiments you ran to search for your Mobile Transformer? \\n4. I wonder if LSRA can also be applied to other tasks such as language modeling or reading comprehension.\\n5. In terms of inference latency, how much faster Mobile Transformer is compared to Transformer and LightConv?\\n6. Have you considered having the trade-off between having more parameters in the encoder or decoder?\\n7. Have you done any analysis on why all tokens in Figure (c) assign high weights to the <EOS> token?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a new technique (LSRA) improving Transformer for constrained scenarios (e.g., mobile settings). It combines two attention modules to provide both global and local information separately for a translation task. In this manner, the authors place the attention and the convolutional module side by side, thus having different perspectives (globally and locally) of the sentence. They test their approach to 2 common translation benchmarks.\\n\\nEnhancing deep learning model efficiency is very important, and the authors succeeded in reducing the computation costs and consequently, the CO2 emissions. But the evaluation results are not so impressive and go in line with other previous efficient deep learning approaches for different domains. I\\u2019m not an expert in NLP, but overall results of 10-1000x or more wall clock time reduction with <1-5% loss in accuracy are usually obtained for domains that have seen more optimization for mobile deployment (especially mobile-optimized CNNs like MobileNet). LSRA-based appraoch is slightly better than the original version of Transformer and its evolved version. From the latter (ET) authors seem to take the idea of parallel branches for their architecture. Also, adaptive attention span in Transformer models and all-attention layers have already been investigated to make networks more efficient and simpler for longer sentences. Include clearer ablation studies would be also interesting to support their findings and superior performance.\\n\\nTo summarize, the paper is addressing an important and interesting idea. It is, in general terms a nice engineering paper, but I am not sure about whether the developments and results are relevant/novel enough yet at this point to publish at ICLR. \\n\\n-------------------------------------------------\\n\\nLooking at the other comments and the feedback provided by the authors, I have a more positive feeling about the contributions of the paper which are now sufficiently demonstrated. Therefore, I increase my original recommendation to \\\"Weak Accept\\\".\"}"
]
} |
r1gfweBFPB | Learning by shaking: Computing policy gradients by physical forward-propagation | [
"Arash Mehrjou",
"Ashkan Soleymani",
"Stefan Bauer",
"Bernhard Schölkopf"
] | Model-free and model-based reinforcement learning are two ends of a spectrum. Learning a good policy without a dynamic model can be prohibitively expensive. Learning the dynamic model of a system can reduce the cost of learning the policy, but it can also introduce bias if it is not accurate. We propose a middle ground where instead of the transition model, the sensitivity of the trajectories with respect to the perturbation (shaking) of the parameters is learned. This allows us to predict the local behavior of the physical system around a set of nominal policies without knowing the actual model. We assay our method on a custom-built physical robot in extensive experiments and show the feasibility of the approach in practice. We investigate potential challenges when applying our method to physical systems and propose solutions to each of them. | [
"Reinforcement Learning",
"Control Theory"
] | Reject | https://openreview.net/pdf?id=r1gfweBFPB | https://openreview.net/forum?id=r1gfweBFPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"dgedvlIOm",
"rJe-f_H3jB",
"rJe5wjhIsB",
"BJlps92LsH",
"HJxb-c28jS",
"S1eQvcorsr",
"S1eCOIEOqS",
"H1xKZJ6f9B",
"HJeEH7Lf9B",
"B1ggQEC6tS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746892,
1573832712955,
1573469025866,
1573468836918,
1573468664582,
1573399131311,
1572517494172,
1572159232905,
1572131643729,
1571836952246
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2352/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2352/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2352/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2352/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2352/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2352/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2352/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2352/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2352/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"While the reviewers generally appreciated the idea behind the method in the paper, there was considerable concern about the experimental evaluation, which did not provide a convincing demonstration that the method works in interesting and relevant problem settings, and did not compare adequately to alternative approach. As such, I believe this paper is not quite ready for publication in its current form.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response. Ultimately I think this is an interesting direction, but perhaps the core idea (estimating the gradient of trajectories w.r.t. the policy parameters by fitting a GP to a set of noisy trajectories executing the same controller) is just innately difficult because of very high-variance gradient estimates and therefore it is a difficult way to do policy learning. This is what my intuition says, and what the results in the paper suggest, as they are on a simple problem - robot control in free space. I believe all of the reviewers mention simulators instead of physical systems because the paper would benefit from more thorough experiments on more difficult tasks, where the core hypothesis of the paper can be tested by injecting noise into the simulator. A real-world physical system may be more convincing in situations where the noise would be difficult to specify or understand, such as in a problem involving contacts.\"}",
"{\"title\": \"Response to Anonymous Reviewer 2\", \"comment\": \"We thank the respected reviewer for taking time and closely reading our work. We'd like to bring up some points regarding the raised concerns in the following.\", \"regarding_the_variation_in_the_optimal_trajectory\": \"We don\\u2019t consider the reflected variation in the \\u201coptimal trajectory\\u201d. We consider the reflected variation in the \\\"resulting trajectory\\\".\", \"regarding_the_analogy_of_our_work_with_conventional_system_identification_and_the_difficulty_of_closed_loop_system_identification\": \"We are not identifying the system. We were aware of the difficulty of system identification and the fact that it requires a sufficiently rich signal in the input. However, our main argument was exactly proposing an alternative approach that learns something from the system without the need to recover true dynamical systems. We did not claim that we can recover all aspects of the dynamical system just from small perturbations.\"}",
"{\"title\": \"Response to Anonymous Reviewer 3\", \"comment\": \"We thank the respected reviewer and try to answer the comments in the following.\", \"regarding_the_use_of_simulation\": \"Even though working with a simulator was much easier than building a physical robot and experimenting on it, we intentionally went for a real robot since the results of the simulator were not reliable. For example, the separation between spatial and temporal noise was not known for us prior to running experiments on the real robot. Real challenges of the problem are not visible before doing real experiments on real robots.\", \"regarding_the_empirical_distribution_on_page_2\": \"It the sum of delta functions located on some trajectories in the dataset.\"}",
"{\"title\": \"Response to Anonymous Reviewer 1\", \"comment\": \"We thank the respected reviewer for the technical comments and detailed reviews. Here, we try to answer the raised points.\", \"regarding_the_simplicity_of_the_task\": \"Our main goal is to show that learning physical derivatives is feasible in practice despite all noises and challenges of real-world systems. We custom built the robot only to be able to safely run perturbation experiments with low cost. For the purpose of this paper, in total, we collected around 10k trajectories which are not possible with more complex robots. Therefore, going to more complex robots and more complex controllers is definitely interesting, but was not the purpose of this paper. Here we showed the feasibility of computing the physical derivatives such that they generalize well and a proof of concept via a downstream reaching task.\\n\\nRegarding the experiment of section 4.4:\\nThe purpose of this experiment is to show the usefulness of the physical derivative in a downstream task to showcase one of its uses. There might be methods that carry out this certain experiment better and we did not claim that we do it better than alternative methods in every aspect. In this paper, we introduce the physical derivative quantity as a middle ground between model-free and model-based approaches and address its challenges in a real physical system and that experiment only serves as a complementary example to show that the learned physical derivatives generalize to unseen perturbations. \\nAlso, we would like to emphasize that the presence of physical derivatives allowed us to compute the desired perturbation in the parameters of the PD controller as simple as evaluating equation (21). This is a much cheaper computation than planning two paths from x_0 to x*_t and from x*_t to x*. This cheap computation is a payoff we gain for the initial extensive experiments to compute physical derivatives and learn the regressors from parameter perturbations to trajectory deviations.\", \"regarding_the_difficulty_of_computing_physical_derivatives_compared_with_a_forward_model\": \"Yes, we agree that it is difficult to learn the physical derivatives and the main purpose of this paper and ideas that robustify the method against temporal and spatial noise is to combat these difficulties as an initial step towards this unexplored approach. We showed in the paper that using the proposed ideas, the physical derivatives can be learned from a real \\u201cphysical system\\u201d and it can generalize well. The main reason that we carry out our experiments directly on the physical system rather than on a simulator was to face these challenges and propose solutions to them. We think this side of our work has fairly unnoticed in the reviews.\", \"regarding_voxelization_and_gradient_information\": \"It is an assumption in this work that the change in trajectories caused by inherent noise is less than the change caused by perturbing the parameters of the controller. Hence, by voxelization, even though some local deviations vanish, the remaining deviations are caused by perturbing the parameters of the controller. This assumption is valid in physical systems (robots) which are built with reasonable accuracy. If the system exhibits too much noise such that its controller effect is dominated by the noise, the noise must be alleviated at the hardware level.\"}",
"{\"title\": \"Response to Anonymous Reviewer 4\", \"comment\": \"We thank the respected reviewer for the careful reading of our work and the constructive remarks. In the following, we separate the mentioned points and answer each in a distinct paragraph.\", \"regarding_low_dimensional_experiments\": \"This is true but it is due to the nature of the robotic arm and the controller. We don\\u2019t see this as a limitation of the method since we work with parametric controllers here not neural network policies. \\n\\nRegarding Novelty, practical interest and the need for extensive experimentation: \\nThis paper is an initial step towards learning a quantity from physical systems which is independent of a specific task. Compared to conventional RL where the reward function is an inseparable component, this work tries to test the feasibility of learning a useful quantity other than value functions.\\nWe believe learning physical derivatives is a fundamental learning problem and has not been addressed before. Therefore, showing its feasibility and methods to deal with its issues is a challenging problem by itself that we addressed in this work. Comparing with state of the art methods in RL and Control in downstream tasks is, of course, important but will be the next steps.\", \"regarding_exploration_in_the_parameter_space\": \"Physical derivatives is an unsupervised quantity and we do not learn it to solve a specific task. It might not be a good idea to learn physical derivatives for a wide range of parameters only to solve a single RL task. One can think of physical derivatives as the derived proxy model of the system that can be useful in all RL and Control tasks not only one. \\n\\nRegarding the number of required samples, Yes, it can be true. However, the exact purpose of learning Gaussian Process regressors from perturbations in parameter to deviations in trajectories and showing the prediction accuracy on test data is to emphasize the generalizability of the method that was shown qualitatively in the figures 16-24 in the appendix and also in Table 1.\", \"regarding_high_dimensional_policy\": \"Yes, this is an important question but is beyond the scope of this paper as the first work in learning physical derivatives. Moreover, many practical controllers live in low-dimensional parameter space. For example 3 parameters in a PID controller or a few parameters in a nonlinear controller (e.g. bang-bang controller with a single on-off threshold).\", \"regarding_high_dimensional_state_space\": \"We constructed separate predictors from the parameter perturbations to each dimension of the state space and showed the high accuracy of the predictions in Table 1. Please notice that we built a robot to conduct the experiments since we intentionally wanted to go beyond simulations and show the efficacy of the method on physical systems. One can build another robot with a higher dimensional state space that satisfies the safety requirements of the intended experiments in order to test the method in higher dimensions.\", \"regarding_direct_comparison_against_methods_that_learn_dynamics\": \"We agree these are interesting comparisons but can be future steps. 
Learning physical derivatives from physical systems (not simulators) by itself has challenges, and the goal of this paper is to address those challenges and finally to showcase one of its applications as in section 4.4.\", \"regarding_the_usefulness_of_the_method_in_downstream_tasks\": \"Please see Section 4.4, where the usefulness of physical derivatives is demonstrated on a downstream task.\", \"regarding_the_content_of_table_1\": \"The prediction performance is evaluated on validation data which is left out of the initial training set.\", \"regarding_zero_shot_experiment\": \"We did not argue that using physical derivatives is the best way to perform the task of 4.4. The goal of this experiment is to show that the trained mapping from controller perturbations to trajectory deviations generalizes well and consequently can be used to tackle a downstream task.\", \"regarding_the_purpose_of_figure_4\": \"Figure 4 shows the effect of perturbing the parameters of the controller on the trajectories of the physical system. The reference was mistakenly omitted from the text.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes learning physical derivatives, the derivative of the trajectory distribution with respect to policy parameters. The proposed method estimates changes in trajectories at a particular theta by using finite differences,\\nthen fitting Gaussian Processes per timestep to generalize to new dtheta's. The paper then proposes techniques to robustify the process against noise in the system.\\nTo deal with temporal noise, where trajectories are approximately equal up to a time shift, they simply\\nestimate the optimal shift and use the shifted version to estimate. To address more complicated noise, they assume sensitivity of trajectories to noise is small relative \\nto sensitivity to the parameters, and discretize the state space at a level such that trajectories that \\ndiffer primarily due to inherent noise look the same at the discretized level, while perturbed policy \\nparameters still lead to different trajectories. They then use the discretized trajectories to estimate \\nthe finite differences.\\n\\nExperiments illustrate how the learned predictions compared to actual resulting perturbations and \\nillustrate resulting trajectories on certain toy domains and a physical robotic finger. All experiments\\nare done with very low dimensional state and policy spaces (1-3 dimensions each). \\n\\nWithout much more extensive experimental validation, this paper should be rejected. While I am not aware \\nof any prior work on learning physical derivatives, the actual methods used are not novel in of themselves\\nbeyond being applied towards learning derivatives with respect to the policy. As such, the method should be\\nof practical interest in order to be accepted. With limited experiments on a single very low dimensional \\ndomain and no comparisons against any alternative methods, there is little evidence demonstrating the actual \\neffectiveness of the proposed method, especially on more complex domains and for downstream tasks.\", \"suggested_experiments\": \"- Stability in gradient estimation\\nIt seems like it could require huge amounts of samples to be able to estimate gradients at parameters \\nwhere for which the system is not very stable, as states in at later timesteps can easily change in \\nhard to predict ways as the dynamics are propogated through time. This should be an especially big issue \\nif we do not already have good stable controllers close to the desired solution and needed to actually \\nconduct exploration in parameter space to solve a taask. I would appreciate more extensive evaluation \\nacross multiple different (simulated) domains and assessing the effectiveness of gradient estimation \\nalong random parameters.\\n\\n- Dimensionality of policy parameters and state spaces\\nThe current experiments only involve very small parameter spaces. It would be important to see how well\\nusing finite differences and GP regression scales with a higher dimensional search space, which can be \\ndemonstrated on varying dimensionalities of an LQR system for example. It would also be important to see\\nhow gradient estimation scales with high dimensional state spaces even with a small parameter space (like\\nin the PD controller experiment in the paper). 
\\n\\n- Direct Comparison against learning dynamics models\\nUsing the same data, compare (with same metrics as in table 1) physical derivatives estimated with the \\nproposed approach against learning GP dynamics models and rolling out the perturbed policy with the learned\\nmodel. Without a direct comparison against learning dynamics models and understanding what situations \\nlearning physical derivatives provides better estimates, it is unclear when or why one would prefer to learn \\nphysical derivatives in this way compared to a model based approach. \\n\\n- Quantitative results measuring costs of learned controllers\\nDespite the name of the paper and a description of how to compute a policy gradient via physical derivatives, \\nthere are no experiments involving such policy gradient updates as far as I can tell. While one advantage of the \\nmethod (as well as model based approaches) is the ability to learn in unsupervised manner, it would be extremely \\nhelpful to validate how well the physical derivatives are estimated in terms of how useful they are for a downstream \\ntask, such as optimizing a controller for a cost function. Right now, experimental results lack any comparisons\\nto other methods or any other way to assess the effectiveness of estimating physical derivatives. \\nA comparison against regular RL policy gradient methods (or other model free algorithms) and model based RL \\nwould give an idea as to whether the physical derivatives learned are actually useful.\", \"other_questions_and_comments\": \"- Table 1: is this evaluating the accuracy of the physical derivatives on the shaking data that it was\\n\\t used for learning, or on a validation set? If on a validation set, would the validation perturbations be\\n\\t drawn from the same distribution as the training set?\\n\\t- The zero shot planning experiment in section 4.4 seems very contrived. It does not seem like a useful task\\n\\t to adjust the parameters of the PD controller in order to reach a state that isn't the target. The figures \\n\\t illustrating trajectories are also not very convincing and unclear. Two points are labelled source state and\\n\\t target state, but it is not clear which is the intermediate state it is supposed to reach. In any case, most \\n\\t of the trajectories seem to vastly overshoot the target? final state, and it is hard to assess how close the\\n\\t trajectories end up being to the intended states from a 2d representation of a 3d space. Quantitative results\\n\\t would perhaps have been more useful in illustrating the effectiveness of using physical derivatives.\\n\\t- What is the purpose of figure 4? It does not appear to be referenced in the text and it is not clear what is\\n\\t being shown.\", \"other_notes_not_part_of_decision\": \"Paper exceeds the 8 page recommended length\\n\\tLots of small typos in the text\"}",
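The estimator summarized at the top of the review above (finite differences over "shaken" rollouts, then a Gaussian Process per timestep mapping parameter perturbations to trajectory deviations) might look roughly as follows. Here `rollout`, `n_shakes`, and `sigma` are hypothetical names, and the kernel choice is a placeholder rather than the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_physical_derivative(rollout, theta0, n_shakes=50, sigma=0.05):
    """Fit one GP per timestep mapping d_theta -> d_x_t.
    rollout(theta) is assumed to return an array of shape (T, state_dim)."""
    base = rollout(theta0)  # nominal trajectory around which we "shake"
    d_thetas = sigma * np.random.randn(n_shakes, theta0.size)
    d_trajs = np.stack([rollout(theta0 + d) - base for d in d_thetas])

    gps = []
    for t in range(base.shape[0]):
        gp = GaussianProcessRegressor(kernel=RBF())
        gps.append(gp.fit(d_thetas, d_trajs[:, t, :]))
    return gps  # gps[t].predict(d_theta[None]) estimates the deviation at time t
```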
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a method for control by estimating the gradient of trajectories w.r.t. the policy parameters by fitting a GP to a set of noisy trajectories executing the same controller. This is opposed to the majority of current RL methods that either learn a forward model or learn a policy. They argue that learning this gradient is a middle step between model-based and model-free RL. The method is shown to estimate gradients on simple policies (linear and nonlinear open-loop controllers, and a linear PD controller) for a free-space reaching robot, and update a controller to add a trajectory constraint to pass an intermediate state.\\n\\nThe paper does show that they can learn these derivatives on controllers from data, which is a cool proof of concept. The method to estimate gradients by \\u201cshaking\\u201d in a probabilistic way by fitting a GP to noisy trajectories is clever and interesting. But there are a few reasons why I believe this work is not ready for publication.\\n\\nThe paper only considers free-space reaching as a task, which is not a difficult problem as it does not have contacts. The policies considered are also very simple: an affine open-loop controller (U = Wt + B with 6 parameters), a simple nonlinear open-loop controller (U = Asin(wt) with 2 parameters) and a PD controller with 2 parameters. The motivation is not too convincing without showing some results on hard tasks: model-based RL methods work great in this setting, and are very likely to outperform the method proposed in the paper. The motivation for the proposed method avoids explicit model learning which is a similar motivation as model-free methods, so the paper should at least show that it works as a proof of concept in settings where model-free learning has some advantages, eg. environments with contacts. The paper should probably also compare to existing methods in those settings, although I understand that it might not outperform existing methods.\\n\\nThe results in section 4.4 which is the result of using the model to plan really shows that using the learned model to update the policy is probably not straightforward. The parameters of the PD controller that go from x_0 to x* are updated to pass a waypoint x*_t using the learned model. But in practice what this is basically doing is changing k_p to introduce a large, possibly inefficient deviation in the path from x_0 to x* that hits x*_t at time t. Directly planning for a path between x_0 to x*_t and then x*_t to x* would probably give a much cleaner path.\\n\\nAt a high level, the proposed method is likely to be difficult to apply on real problems because estimating the gradient of T w.r.t. pi is probably just much noisier than estimating the forward model directly, which is already a significant challenge. Perhaps one useful experiment is to somehow explicitly show how these two methods compare (eg. measure the variance of trajectory predictions of this method vs rolling out a learned forward model repeatedly).\", \"comments\": \"Equation 11 and 12 do not make sense/do not use standard notation. 
I suggest defining n (as a signal or a value, it is not clear at the moment) and defining a new output signal y_t instead of the x_t <- \\u2026 notation. In particular, the way equation 11 is written seems to say the output x_t is a value-shifted version of the input x_t, NOT a time-shifted one.\\n\\nThe preliminaries section 1.1 does not discuss environment dynamics. This is significant because the paper seems to assume deterministic dynamics but this is never explicitly stated.\\n\\nVoxelization as a solution to spatial noise is a bit surprising because discretizing the space throws away local gradient information, which seems valuable to the method. It would be good to understand the effect of this design decision better with an ablation.\", \"minor_comments\": \"\", \"page_9\": [\"constrain -> constraint\", \"Assuem -> Assume\", \"such controller -> such a controller\"]}",
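For the temporal-noise handling the reviewer questions here (equations 11-12 and the correlation procedure of §3.0.1), a standard way to estimate and undo a pure time shift between two trajectory signals is cross-correlation. The sketch below assumes a shift-only noise model on equal-length 1-D signals and is not necessarily the paper's exact procedure.

```python
import numpy as np

def align_time_shift(x_ref, x_obs):
    """Estimate the time shift between two equal-length 1-D signals by
    cross-correlation, then roll the observed signal back into alignment."""
    corr = np.correlate(x_obs - x_obs.mean(), x_ref - x_ref.mean(), mode="full")
    shift = int(corr.argmax()) - (len(x_ref) - 1)  # positive: x_obs lags x_ref
    return np.roll(x_obs, -shift), shift
```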
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"*Summary of paper*\", \"This paper investigates the use of random perturbations applied to a robotic policy to learn a local gradient useful for policy optimization. The method aims to learn a policy directly on a real physical robotic system, bypassing both simulation models and model-free RL. Training pairs are gathered by perturbations of a starting policy, and the \\\"gradient\\\" is captured in a probabilistic model learned from the training data. The paper includes experiments on a custom 3-DOF robotic platform.\", \"*Decision*\", \"I vote for rejecting this paper. While the idea is interesting, the paper lacks precision in key areas and the method is not placed in context among related work. Further, it fails to communicate key ideas (particularly in the experiments) to a non-robotics reader. Without sufficient clarity and background, it is not suited to a general machine learning conference.\", \"Lemma 3, which attempts to justify the use of voxelization, and its proof are both imprecise and inadequate. To improve precision, please define \\\"error causes by voxelization\\\" in mathematical terms, e.g. ||c_i - x_i||. Also, while the statement of the lemma un-intuitively implies that larger voxels introduce smaller errors, the proof seems to say that larger errors will result for smaller gradients if larger voxels are used.\", \"Related work: How does this work relate to random search/evolutionary computation? How does it compare to performing those methods or a model-free RL method directly on the robot? How does it compare to learning using an inaccurate model for robot dynamics? Presumably there are numerous methods that have been tried in this area, so further context is needed.\", \"The evaluation is unclear, at least to a non-expert in robotics. A lack of quantitative evaluation further exacerbates this issue: nearly all experiments, even those with associated plots, are characterized qualitatively and without reference the performance of related methods.\", \"In addition to addressing the limitations above, I would encourage the authors to consider the use of experiments in simulation to thoroughly and quantitatively investigate the convergence/bias/variance of the gradient model w.r.t. #DoF of the robot, length of the trajectory, voxelization, # sampled trajectories, perturbation sampling method, and robot reliability/reproducibility\", \"*Additional feedback*\", \"spelling errors throughout; please check thoroughly\", \"the captions/labels/etc. in most figures is far too small to read in a printed copy of the paper\", \"What is the intuition for the \\\"empirical distribution p_e(T|\\\\pi) = ...\\\" on page 2? Is it counting the exact matches between the trajectory T and the M observed trajectories? (This may be more clear in the context of voxelization introduced later.)\", \"Figure 3: what are the units for \\\\gamma? what is the time step?\", \"many of the figures are out of order w.r.t. their introduction in the text\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper addresses a very good question - can we do better in terms of model learning, so that we can find the much sought after middle ground between model free and model based RL. In particular, the authors ask, can we find a way to learn a model that is reward/task independent, so that a new task can be equally well handled. This is timely and the general thrust of the thinking, in terms of learning from perturbation around trajectories, is good but I am not sure the proposed methods are sufficiently well developed to merit publication. I am also concerned that the authors do not consider numerous issues with the setup that are fairly well understood as issues for system identification.\\n\\nThe main idea, as laid out in \\u00a71.1, is to observe that the parameter update depends mainly on the way a small perturbation in parameters is reflected as a variation in the optimal trajectory (by asking for the probability of a trajectory, this variation becomes the probability of a nearby trajectory). The authors then approach the approximation of this in terms of a discrete finite differences estimate. There are some extensions, such as using a local GP model instead of a local linear model and consideration of ways in which the system might not be exactly repeatable given initial states. These are all proper questions but there are many more important unanswered ones:\\n\\n1. Starting with where the model setup begins, it is not clear why a complex nonlinear dynamical system, i.e., the typical multi-jointed robot taken as a dynamical system (so, not just kinematics and quasi-static movements), can be sufficiently well approximated using a discretised finite point set that is used at the start of \\u00a72 - how does one find the correct T, the correct step size, how does one change these for the local nature of the dynamics (some places might be smoother than others, in phase space), etc.? Even more importantly, are we assuming we know the proper state space ahead of time so that there is no history dependence due to unobserved variables?\\n\\n2. As such, the authors are proposing to perform closed-loop system identification in a completely data-driven manner. It is well known that this is hard because in the absence of suitable excitation, not all necessary modes in the dynamics will be observed. The only controlled example considered, in \\u00a74.3, and subsequent discussion about 'zero-shot' generalisation is getting at this. However, neither at the conceptual level nor in terms of the detailed experiment do I see a good account of what allows this approach to learn all aspects of the dynamics of the system from just small perturbations around a closed loop trajectory.\\n\\n3. In light of all this, I find the evaluation really weak. 
Some experiments I would have liked to have seen include - (i) a control experiment based on a standard multi-link arm to show how bad the issue of model mis-match is for the task being considered (I suspect, not much), (ii) experiments with local linearizations, and perhaps piecewise local linearizations, to show how much innovation is needed or is being achieved by the proposed advances, (iii) for us to be talking about 'zero shot' generalisation and the like, more sophisticated tasks beyond merely changing the reaching point (as I said before, it is not even clear that a good PID controller with a roughly plausible linearization is not sufficient to achieve similar effects, and certainly there is a plethora of more sophisticated baselines one could have drawn upon).\\n\\n4. Some of the discussion comes across as a bit naive, e.g., we have a lemma 3 whose proof is simply a geometric argument about cubes without sufficient consideration of properties of dynamics. I don't doubt the result but in the way it is presented here, it seems shoddy.\\n\\nAlso, some smaller questions not properly explained:\\na. How do you know which kernels are good for the GP in equations 9-10?\\nb. Why should we expect the correlation procedure in \\u00a73.0.1 to always work without aliasing and what is the way to get at the suitable domain?\"}"
]
} |
HylfPgHYvr | Occlusion resistant learning of intuitive physics from videos | [
"Ronan Riochet",
"Josef Sivic",
"Ivan Laptev",
"Emmanuel Dupoux"
] | To reach human performance on complex tasks, a key ability for artificial systems is to understand physical interactions between objects, and predict future outcomes of a situation. This ability, often referred to as intuitive physics, has recently received attention and several methods were proposed to learn these physical rules from video sequences. Yet, most of these methods are restricted to the case where no occlusions occur, narrowing the potential areas of application. The main contribution of this paper is a method combining a predictor of object dynamics and a neural renderer efficiently predicting future trajectories and explicitly modelling partial and full occlusions among objects. We present a training procedure enabling learning intuitive physics directly from the input videos containing segmentation masks of objects and their depth. Our results show that our model learns object dynamics despite significant inter-object occlusions, and realistically predicts segmentation masks up to 30 frames in the future. We study model performance for increasing levels of occlusions, and compare results to previous work on the tasks of future prediction and object following. We also show results on predicting motion of objects in real videos and demonstrate significant improvements over state-of-the-art on the object permanence task in the intuitive physics benchmark of Riochet et al. (2018). | [
"intuitive physics",
"objects",
"occlusions",
"object dynamics",
"segmentation masks",
"results",
"occlusion resistant learning",
"videos",
"human performance"
] | Reject | https://openreview.net/pdf?id=HylfPgHYvr | https://openreview.net/forum?id=HylfPgHYvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0RDHYAzwB4",
"Bklh6GB2sH",
"ryed9h43oS",
"HJeCH2V3iH",
"HyxH-2NhiH",
"HJlwAevmoB",
"Bylu2aeMoB",
"rJejkkaRqr",
"S1eWc4LTtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746862,
1573831364294,
1573829776319,
1573829701638,
1573829629138,
1573249230677,
1573158320432,
1572945635463,
1571804296677
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2351/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2351/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2351/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2351/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2351/AnonReviewer6"
],
[
"ICLR.cc/2020/Conference/Paper2351/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2351/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2351/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper studies the problem of modeling inter-object dynamics with occlusions. It provides proof-of-concept demonstrations on toy 3d scenes that occlusions can be handled by structured representations using object-level segmentation masks and depth information. However, the technical novelty is not high and the requirement of such structured information seems impractical real-world applications which thus limits the significance of the proposed method.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer to Reviewer#6\", \"comment\": \"Thank you for your helpful comments on the writing as well as the missing references.\\n\\nWe agree with your assessment that segmentation masks would be more difficult to obtain in real videos; probably, also the depth information would be less reliable than in the synthetic datasets we are using. Still, the main point of this paper is that a level of representation like object masks+depth, a kind of a 2.5D representation is a good level of representation to compute long term predictions, provided that occlusion can be addressed. Now, we agree that the next steps should address what happens when noisy masks are used instead of ground truth ones.\", \"ablation_studies\": \"We agree that additional ablation studies on the refinement network are interesting. Following suggestions from Reviewer#6, we trained the same Recurrent Interaction network on a forward prediction task with 15 and 25 degrees tilted views, without using position refinement with the renderer. Hence the locations of objects used as input of the renderer exactly match the centroid of each segmentation mask. Note that we choose the 15 and 25 degrees tilted views so that occlusions occur frequently (otherwise refinement is not so useful). On the 25 degree tilted view, we obtain a L2 distance (in pixels) to target of 13.1/23.8 for a prediction horizon of 5 and 10 frames respectively. On the 15 tilted view, we obtain a L2 distance to target of 18.5/31.2 for prediction horizon of 5 and 10 frames. These results are to be compared with those in Table1 in the main body. We observe that in such an environment where occlusions appear frequently, refining estimates positions with the renderer reduces prediction errors by about 30%, which confirms intuition coming from qualitative results in supplementary material.\", \"stochasticity\": \"the stochasticity of the model is indeed restricted to the prediction of an uncertainty term. It seems hard to compare this with the stochasticity of the ground truth data since this data is (almost) deterministic. One way to do it would be to add white noise to the input data and estimate the distribution of the resulting trajectories. This indeed looks interesting. In the present model, we have observed that in long rollouts behind occlusions uncertainty increases, as well as after objects contact (bouncing events).\"}",
"{\"title\": \"Answer to Reviewer#5\", \"comment\": \"Thank you for your helpful comments on the writing as well as the missing references which we will include.\", \"on_stochasticity\": \"Our model is not stochastic. The use of the word Proba-RIN may be misleading, but we only predicts a term of error (or uncertainty). One way to make it stochastic and have various possible outcomes would be to sample predictions from this distribution (centered around the prediction, with std equal to predicted uncertainty). We mainly used this term of error to stabilize learning and represent uncertainty, not to model possible alternative outcomes.\", \"comparison_with_state_of_the_art_video_prediction_approaches\": \"Indeed it is interesting to compare our results with video prediction models. To that aim we compare with Riochet et al. baseline, trained to predict future video frames in a similar setup, and which predicts segmentation masks which we can compare with our approach. Other models that directly produce RGB images are difficult to compare because they predict in a different space where colour and texture rendering matters a lot. A nice thing with predicting in mask space is that it enables to concentrate on issues of position and interaction.\"}",
"{\"title\": \"Answer to Reviewer#3\", \"comment\": \"Yes, we agree that our paper does not tackle prediction from images, but from segmentation masks. Here, we use ground truth masks, which is possible because of the synthetic nature of the dataset. Application to real videos would require to use a segmentation system. While it is true that initial position and velocity can be estimated from a pair of masks, the task we are tackling is still not trivial. First, during total or partial occlusions, position estimates are incorrect or missing. Second, masks correspondence across frames is NOT provided, which makes it challenging to recover full trajectories, especially across long occlusions. Both of these challenges are tackled by our use of the neural renderer.\", \"on_the_need_of_ground_truth_segmentation_of_occluded_objects\": \"The model does not have access to the segmentation masks of occluded objects (at any time). During training, the position of partially occluded object is refined thanks to the renderer. Fully occluded object have no corresponding mask and therefore the refinement does not modify anything (no gradient loss). We will clarify this in the rewrite.\"}",
"{\"title\": \"Answer to Reviewer#2\", \"comment\": \"End-to-end training: Yes we tried to train end-to-end, but generalization wasn\\u2019t as good, and it was also longer to train (by a factor of 10). The resulting masks and trajectories were similar but generalization to longer rollouts was qualitatively not convincing. It seems therefore that the renderer trains faster than the prediction network, and that an incorrect initial renderer is detrimental to prediction learning. This pleads for a kind of a curriculum learning (giving more weights initially to rendering accuracy), but this would require many more experiments that we thought would be better suited to a followup study.\", \"remark_on_figure_3\": \"Thank you, we will update Figure 3 to make clearer and self-explanatory.\\n\\nDifference from Battaglia et al.: Training in full sequence (RNN) without the supervised velocity was not explored in previous work. Interaction Networks are being widely used in synthetic dataset where ground truth velocity and position is known. Showing that velocity can be kept as a latent variable, using only supervision of the a sequence of positions to train the Interaction Networks opens doors to real world applications where position is much easier to get than velocity.\", \"claim_on_similar_performance_with_battaglia_et_al\": \"We will soften this claim, saying that our approach performs similar to Battaglia et al. on long rollouts. Also note that on short rollouts, object dynamics are close to linear and the fact that the baseline is uses ground truth velocity may explain the gap in performance.\\n\\nConcerning the IntPhys benchmark, and the \\u201cmaximum error through the whole sequence\\u201d. The mask prediction is done through the whole sequence. For each frame, we get the reconstruction loss. We give to each sequence a loss corresponding to the maximum loss of its frames (maximum across time).\\n\\nConcerning comparison with Riochet et al. on other tasks, it is difficult to compare on the state prediction (predicting location of each object). However it is possible to compare on mask prediction, which we did but kept in the supplementary material (see table S1). In this table we can see that our method performs better than baseline Riochet et al., trained on future mask prediction.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #6\", \"review\": \"* Note: emergency review, done under a shorter time frame than good reviews require.\\n\\nIn this paper, the authors develop a highly structured model to predict motions of objects defined by segmentation masks and depths. The model trains a physics model (in the form of a slightly modified interaction network) and a renderer composed of a per-object renderer combined with an occlusion model which composes the per-object segmentation and depth into a scene segmentation and depth.\", \"positives\": [\"The jury is still out on the degree of structure required to do proper object processing (neural nets with large amounts of data, mildly structured nets like networks with attention, more structured nets like this, or a full fledged renderer-like probabilistic program); this work contributes novel work to the line of research which attempts to do object-level processing with structured models while still leveraging the power of neural networks.\", \"The experimental section appears very thorough and convincing, even if the dataset is relatively simple.\"], \"negatives\": [\"The model requires highly privileged information (segmentation mask, depth) at training and test time. Given that the segmentation/depth data are not too far from the actual images, it would have been interesting to see if it were possible to work with pixels (a variant of the occlusion model would probably still work), at least at test time.\", \"Regarding using segmentation/depth as input to the model: for a real dataset, segmentation is more relevant: it is both less informative than positions (due to significant occlusions) and easier to measure. In this highly synthetic dataset, this feels more debatable: objects are more entangled in the segmentation (which makes using segmentation more challenging), but only weakly, with many frames with no occlusion; furthermore, segmentation provides object shapes as information.\", \"The paper is generally well written, but could benefit from some reorganization - instead of defining each module separately, it would be better to describe the flow of information through different modules, then describe the module. I was wondering for a while how the initial positions were estimated (required as input to both the interaction net and the renderer), but this only comes at the end of the paper.\", \"Some ablation experiments felt missing, for instance, the importance of the refinement network (also unfortunate that the details of refinement were not given in the main body).\", \"The stochasticity of the interaction network appears a bit weak (simple Gaussians) - it would be interesting to display some data to see if the ground truth data is indeed Gaussian like .\", \"Missing potential references:\", \"Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects\", \"Learning to Decompose and Disentangle Representations for Video Prediction\"], \"monet\": \"Unsupervised Scene Decomposition and Representation\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The key contribution of this paper is a model that can predict the dynamics of pre-segmented image patches under multiple frames of occlusion. The input image is processed by a CNN, the dynamics are predicted by a recurrent interaction net, and the output image is generated by a (deconv) CNN.\", \"the_key_weaknesses_i_see_are\": [\"The objects must be pre-segmented by some externally defined mechanism. Where does this mechanism come from? Segmenting the objects is challenging, and there are various recent methods that explore how to learn to do this (van Steenkiste et al., 2018). But if one has the segmentation masks, that simplifies things considerably and also offers a good estimate of the location and velocity (if there are 2+ frames).\", \"During training, the error is computed on all frames, including occluded ones, and backpropagated into the weights. But if I understand this correctly, this means that for training you need access to ground truth rendered trajectories. It would be better if the model didn't require the ground truth segmentations for objects that are occluded. How would they be made available to a learning system?\", \"Generally the writing wasn't that clear and I struggled to understand some details of the model and training procedure.\", \"Overall I don't believe this work is ready for publication, as there isn't that much novelty and the requirements are impractical.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"This paper proposes a method to predict future trajectories by modeling partial and full occlusions. Although it is well-written and the topic sounds interesting, I failed to catch why this approach is required for this setting. So, to strengthen the message of this paper, I listed a couple of suggestions and comments below (from the most important to the least important):\\n\\n\\n1. It is a bit hard to catch how this model handles \\\"diversity.\\\" Specifically, when predicting the futures, it should be able to generate stochastic outputs. However, I failed to find how diverse the output of the model is. If the output is not that stochastic, then it would be tough to believe that the model can \\\"predict\\\" the future; instead, it may \\\"extrapolate\\\" the current condition only. To reassure such concerns, I recommend reporting how diverse your output is. (One easy way is to report the variance of the predicted center mass values between multiple samples while reporting the l2 distance.)\\n\\n\\n2. For the future prediction task, it would be much better if it is compared with various state-of-the-art future prediction approaches [1, 2, 3, 4, 5, 6]. For some of the models, it could not be able to compare directly with this approach (e.g., lack of 'center of mass' information). However, it would be still okay once it is compared with other state-of-the-art results without feeding some 3D information (e.g., provide projected 2D video as an input). By doing so, I believe the readers can easily catch (1) why it is better to predict physical interaction in 3D space (instead of directly predicting from a 2D space), and (2) also why predicting occlusion is essential in this problem setting.\\n\\n\\n3. Minor comments:\\n(a) It is a bit hard to catch how the author computes the \\\"aggregate pixel reconstruction error\\\" in Table S1. I recommend adding an equation number there to make it clear.\\n\\n(b) There are a couple of missing references: the last sentence on page 4, the first paragraph in Supplementary, the last sentence in Supplementary page 3, etc.\\n\\n(c) \\\\citep is often misused. Please replace some inappropriate \\\\citep with \\\\citet.\\n\\n(d) Please check the format of the reference, as well; currently, it has various styles even for the same source/conference.\\n\\n\\n\\n\\n------------------------------------------------------------------\\n\\n[Some comments based on the authors' rebuttal]\\n\\n\\nI thank the authors for their thorough comments and detailed explanations for each question. I carefully read the whole (not just my part), but it didn't change my mind; it would be much better if the claim comes with a more directly comparable result.\", \"some_additional_comments\": \"Q1-comment) I think the limitation of \\\"learning to extrapolate\\\"-style video prediction approach is partially presented in Reviewer #2's claim as well. Therefore, in this context, I recommend the author to show a better result to reassure the reader's concern. \\n\\nQ2-comment) I at least strongly recommend to add more experiments with other baselines, rather than relying mainly on the original model of the dataset. 
Although the input condition of a model could be different, I at least do believe that it will help the readers to catch the benefit of your setting more clearly.\\n\\nI hope this review phase would make your paper more powerful. \\n\\n\\n\\n\\n[1] Liang et al., Dual Motion GAN for Future-Flow Embedded Video Prediction, in ICCV, 2017\\n[2] Denton and Fergus, Stochastic Video Generation with a Learned Prior, in ICML, 2018\\n[3] Wichers et al., Hierarchical Long-term Video Prediction without Supervision, in ICML, 2018\\n[4] Wang et al., Video-to-Video Synthesis, in NeurIPS, 2018\\n[5] Heish et al., Learning to Decompose and Disentangle Representations for Video Prediction, in NeurIPS, 2018\\n[6] Minderer et al., Unsupervised Learning of Object Structure and Dynamics from Videos, in NeurIPS, 2019\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper proposes a method that combines a recurrent neural network that predicts values that are used as inputs to a rendered which interprets them and generates an object shape map and a depth map for every step of the dynamics predicted by the recurrent neural network. The proposed method is able to handle object occlusions and interactions. In experiments, the authors show improved performance against baselines for future prediction, object tracking, and object permanence.\", \"pros\": [\"Rendering network used with RNN\", \"Outperforms chosen baselines\", \"Weaknesses / comments:\", \"Compositional Rendering Network has to be pretrained: Did the authors try to train the model end-to-end? It would be interesting to see if this can be done so the proposed network is more unified.\", \"Figure 3 is not self explanatory: It would be good if the authors add labels to the predicted and gt frames. It is not easy to parse this figure from just looking at it.\", \"Difference from Battaglia et al., 2016: It seems that the only difference between the proposed method and this baseline is the change of input/outputs (including output with variance), and training in full sequence (RNN)? This looks like a minor change to me and reduces the novelty of the proposed method.\", \"Table 1 (trained on ground-truth positions): The authors claim that their network performs similar to Battaglia et al., 2016, but it seems that the baseline is better than the proposed method for the short term predictions with a relative improvement of about 20% and for long term when the baseline is better (half of the tests) it\\u2019s by a relative improvement of about 10%. Can the authors comment on this? Am I missing something?\", \"Implausibility score: What do the authors mean by \\u201cthe maximum error through the whole sequence\\u201d? How is this defined?\", \"The authors compare with Riochet et al., 2018 in Table 4, but not in the rest of the evaluations. Can the authors comment on why this is the case?\"], \"conclusion\": \"The paper proposes an interesting method, dataset, and seems to perform baselines in the quantitative evaluation. To the best of my knowledge, the current state of the method is novel in the rendering network. However, the rest of components have limited novelty. In addition, I have some comments about the paper which stated above.\"}"
]
} |
B1eZweHFwr | Statistical Verification of General Perturbations by Gaussian Smoothing | [
"Marc Fischer",
"Maximilian Baader",
"Martin Vechev"
] | We present a novel statistical certification method that generalizes prior work based on smoothing to handle richer perturbations. Concretely, our method produces a provable classifier which can establish statistical robustness against geometric perturbations (e.g., rotations, translations) as well as volume changes and pitch shifts on audio data. The generalization is non-trivial and requires careful handling of operations such as interpolation. Our method is agnostic to the choice of classifier and scales to modern architectures such as ResNet-50 on ImageNet. | [
"adversarial robustness",
"certified network",
"randomised smoothing",
"geometric perturbations"
] | Reject | https://openreview.net/pdf?id=B1eZweHFwr | https://openreview.net/forum?id=B1eZweHFwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"We5cRt6sTb",
"VAeW3Ppohl",
"BkxnCnHooS",
"HkloYnSosB",
"r1l0Tjrsjr",
"rkx4_jSsoS",
"r1x1Xg40tH",
"S1l6RuK2Kr",
"SkeL435jtB"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1609773431605,
1576798746834,
1573768403683,
1573768323412,
1573768134075,
1573768044173,
1571860502812,
1571752149124,
1571691566418
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2349/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2349/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2349/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2349/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2349/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2349/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2349/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2349/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Published Version\", \"comment\": \"An updated version of this paper was accepted at NeurIPS 2020 and can be found here: https://arxiv.org/abs/2002.12463\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a smoothing-based certification against various forms of transformations, such as rotations, translations. The reviewers have concerns on the novelty of the work and several technical issues. The authors have made efforts to address some of issues, but the work may still significantly benefit from a throughout improvement in both presentation and technical contribution.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"> Non-isotropic covariance matrices\\n\\nWe have addressed this point in our response to all reviewers.\\n\\n> To certify this attack, given a base classifier, we can simply do, for example, grid search, on the low dimensional space to find the worst case with very good accuracy. And it should be able to give much better than the randomized smoothing method.\\n\\nWith interpolation scheme such as linear and bilinear interpolation, every legal floating-point parameter would produce a (potentially) different interpolation. Thus enumeration (even though theoretically possible as floats are countable) is infeasible. For the case of nearest-neighbor interpolation Pei et al. [1] investigated an enumerative approach.\\n\\n[1] Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems, Pei et al., https://arxiv.org/abs/1712.01785\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"> How about the scenario when both rotation and translation exist?\\n\\nWe currently do not support this as a rotation concatenated with translation (and interpolation artifacts) does not commute with translation and rotation. We hope to address this issue in future work.\\n\\n> In experiments, can the authors consider the accuracy of the certified model under the adversarial setting, i.e., find the transformation that makes the deep nets performs worst?\\n\\nSimilar to Salman et al. [3] we could write an optimization problem to reflect this. Depending on the transformation the problem might be (piece wise) differentiable in the argument. However for rotations, prior research [1] found that randomly sampling perturbations is more effective. We will strive to include experimental results for this in the next version of the paper. \\n\\n> Randomized Nets and Manifold Interpolation by Wang\\n\\nWe were unaware of this line of work and thank the reviewer of the pointer. Similar to Cohen et al. [2] this addresses norm-ball perturbations. We added it to our related work. \\n\\n[1] Exploring the Landscape of Spatial Robustness, Engstrom et al., ICML 2019, https://arxiv.org/abs/1712.02779\\n[2] Certified Adversarial Robustness via Randomized Smoothing, Cohen et al. ICML 2019, https://arxiv.org/abs/1902.02918\\n[3] Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, Salman et al., https://arxiv.org/abs/1906.04584\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"> Concerns about contribution and novelty of dealing with interpolation noise & Non-isotropic Sigma\\n\\nWe have addressed these points in our response to all reviewers.\\n\\n> How are $\\\\sigma_i$ vs $\\\\sigma_r$ chosen?\\n\\nTo certify for large angles $\\\\gamma$, in Equation (2) we want as large values for $\\\\sigma_i$ and $\\\\sigma_r$ as possible. The largest possible values for $\\\\sigma_i$ are dictated by the current results in smoothing for L2-perturbations. By Table 6 in Appendix G of Salman et al. [1] we choose 0.75, as this provides a good trade-off between robustness and accuracy.\\nThe choice of $\\\\sigma_r$ matters less than the choice of as $\\\\sigma_i$ large values don\\u2019t decrease the accuracy as much. As we are only certifying values of $\\\\gamma$ up to $\\\\pm 8$ degrees, $\\\\sigma_r = 5$ was a good trade-off between accuracy and certifiability.\\n\\n> Audio experiments missing\\n\\nWe will add audio experiments in the next iteration of the paper.\\n\\n[1] Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, Salman et al., 2019, https://arxiv.org/abs/1906.04584\"}",
"{\"title\": \"Common Response\", \"comment\": \"We thank all reviewers for their comments and valuable feedback. We have addressed the individual reviewer comments below and now address common concerns here.\\n\\n> Impact of work\\n\\nAll reviewers have questioned the novelty of the presented work. We would like to address this by pointing out that the problem of these general perturbations is interesting to the research community [1 - 4]. Dealing with interpolation artifacts is a key challenge in achieving robustness to geometric perturbations of images in our and other works [3, 4]. We believe our approach is a suitable and non-straight-forward way to approach the problem.\\n\\n> Non-isotropic covariance matrices\\n\\nCohen et al.\\u2019s [5] original approach requires only a single (co)variance that is the same for all dimensions. For the perturbations discussed in this work (specifically in section 4) a diagonal covariance matrix is required. Since Theorem 3.2 is the same of diagonal and full covariance matrices we stated it is such. However. for perturbations parameterized by multiple (non-noise) parameters the off-diagonal entries can be used to encode dependencies between the parameters, such as the translation in the x direction and the translation in the y direction. While the diagonal version can proof safety for L2-balls in the parameter space, the off-diagonal entries admits general ellipsoids. \\n\\n\\n[1] Exploring the Landscape of Spatial Robustness, Engstrom et al., ICML 2019, https://arxiv.org/abs/1712.02779\\n[2] Geometric Robustness of Deep Networks: Analysis and Improvement, Kanbak et al., CVPR 2018, https://arxiv.org/abs/1711.09115\\n[3] Certifying Geometric Robustness of Neural Networks, Balunovic et al., to appear in NIPS 2019, https://www.sri.inf.ethz.ch/publications/balunovic2019geometric\\n[4] Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems, Pei et al., https://arxiv.org/abs/1712.01785\\n[5] Certified Adversarial Robustness via Randomized Smoothing, Cohen et al. ICML 2019, https://arxiv.org/abs/1902.02918\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThis paper introduces a new smoothing algorithm which produces classifiers with certified robustness against adversarial perturbations. In particular, the authors are interested in settings where the input vectors need not be robust simply to \\\\ell_2 perturbations, for which previous authors (e.g., Cohen et al.) have already introduced a \\\"randomized smoothing\\\" classifier which possesses the desired properties. Instead, motivated by the desire to certify robustness to other families of transformations, the authors propose to introduce a smoothing transform that operates in an additive way on the underlying parameter space (rather than on the raw input vectors, as in the case of \\\\ell_2 randomized smoothing). The authors provide details for a few examples from image and audio data processing, and then show experimental results on ImageNet data.\", \"comments\": \"My assessment of this paper is that the level of novelty is relatively low. On the technical side, I do not believe this paper makes any significant contributions. The authors themselves note that the proof of the main result, Theorem 3.2, closely parallels the proof of Cohen et al. Algorithmically, the proposals of the authors are also minor tweaks of the algorithms in Cohen et al., since the types of perturbations considered in this paper are essentially single-parameter problems. (As a question: The authors prove Theorem 3.2 where the randomized smoothing operation adds possible nonisotropic Gaussian noise in the parameter space. However, it seems that in the examples, the noise added is isotropic -- it is not clear how the authors are choosing \\\\sigma_i in relation to \\\\sigma_r for the interpolation example. Can the authors clarify this, and give examples where smoothing with nonisotropic noise would actually be useful?)\\n\\nThe only real tweak that the authors introduce when presenting their smoothing algorithm is to introduce a small correction term in order to deal with interpolation/rounding errors. However, I do not think that this change is particularly novel.\\n\\nAs a final comment, I think it would have been interesting for the authors to include some simulations with audio data as well, since this constitutes one of the main motivating applications discussed by the authors.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors generalize the randomized smoothing type of robustness certification to handle many types of attacks beyond norm-based attacks, e.g., geometric perturbation, volume change, pitch shifts on audio data. At the core of the proposed generalization is using some interpolation which I think is quite straightforward.\\n\\n1. How about the scenario when both rotation and translation exist?\\n\\n2. In experiments, can the authors consider the accuracy of the certified model under the adversarial setting, i.e., find the transformation that makes the deep nets performs worst?\\n\\n3. Besides randomized smoothing on the input images, recently Wang et al showed that randomize the deep nets can\\nalso improve the deep nets and they gave it a nice theoretical interpretation. Here is the reference: Bao Wang, Binjie Yuan, Zuoqiang Shi, Stanley J. Osher. ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies, arXiv:1811.10745, NeurIPS, 2019\\n\\n4. Interpolation idea has been used in improving robustness of deep nets, for example:\\n1). Bao Wang, Xiyang Luo, Zhen Li, Wei Zhu, Zuoqiang Shi, Stanley J. Osher. Deep Neural Nets with Interpolating Function as Output Activation, NeurIPS, 2018 \\n2). Bao Wang, Alex T. Lin, Zuoqiang Shi, Wei Zhu, Penghang Yin, Andrea L. Bertozzi, Stanley J. Osher. Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization, arXiv:1809.08516, 2018 \\n3). B. Wang, S. Osher. Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning, arXiv:1907.06800\\n\\nIn sum, this paper studies the problem of certifying a broad class of adversarial attacks which is decent, however, the novelty is quite limited. Please address my questions during rebuttal.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper applied the framework of randomized smoothing classifier proposed by Cohen et al. to certify the adversarial attacks other than pixel change, including image rotating, brightness change in RGB space, Volume and pitch shifts for audio.\\n\\nCertifying general adversarial attack is an interesting direction. However, I do not believe this paper is qualified for publishing in ICLR. Below please find my comments:\\n\\nDifferent from change-pixel attack, the certification for rotation/brightness change in image classification, volume and pitch change in audio perturbations is much easier. The studied perturbation can be parameterized by a very low dimension vector (i.e. one dimension (angle and volume) for image rotation and volume change, 3 dimensions (brightness for each channel) for brightness perturbation). To certify this attack, given a base classifier, we can simply do, for example, grid search, on the low dimensional space to find the worst case with very good accuracy. And it should be able to give much better than the randomized smoothing method.\", \"one_contribution_of_this_paper_is_in_that\": \"Theorem 3.2 is valid for random smoothing using gaussian with general covariance matrix. However, the effectiveness of using a general covariance matrix is not studied. In the experiment section, I find only isotropic ones are used.\"}"
]
} |
SyegvgHtwr | Localised Generative Flows | [
"Rob Cornish",
"Anthony Caterini",
"George Deligiannidis",
"Arnaud Doucet"
] | We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem. LGFs are composed of stacked continuous mixtures of bijections, which enables each bijection to learn a local region of the target rather than its entirety. Our method is a generalisation of existing flow-based methods, which can be used without modification as the basis for an LGF model. Unlike normalising flows, LGFs do not permit exact computation of log likelihoods, but we propose a simple variational scheme that performs well in practice. We show empirically that LGFs yield improved performance across a variety of common density estimation tasks. | [
"Deep generative models",
"normalizing flows",
"variational inference"
] | Reject | https://openreview.net/pdf?id=SyegvgHtwr | https://openreview.net/forum?id=SyegvgHtwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oJ4lESJm-a",
"ryeRNDV2ir",
"B1gQMDEhiB",
"S1xw68VhjS",
"HylQcUE3oB",
"H1gWPLNhsr",
"HyltFrE2jS",
"BJlmkymy5S",
"HJgjqGLwtH",
"r1xPlCLSFS",
"S1eB7k4Zdr",
"r1x0meQ2vH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798746806,
1573828405785,
1573828363157,
1573828287085,
1573828234962,
1573828185431,
1573827968869,
1571921627106,
1571410578624,
1571282414688,
1569959708734,
1569628197555
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2348/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2348/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2348/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2348/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2348/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2348/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2348/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2348/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2348/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2348/Authors"
],
[
"~Kevin_Zhang2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to overcome some fundamental limitations of normalizing flows by introducing auxiliary continuous latent variables. While the problem this paper is trying to address is mathematically legitimate, there is no strong evidence that this is a relevant problem in practice. Moreover, the proposed solution is not entirely novel, converting the flow in a latent-variable model. Overall, I believe this paper will be of minor relevance to the ICLR community.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Official Blind Reviewer #2\", \"comment\": \"Thank you for your review.\\n\\nWe first address the claim that LGFs lack novelty. Please see above our discussion about RAD. In particular, our use of continuous mixing variables is not simply an incremental design choice, but provides a method for circumventing the computational cost which is exponentially increasing with the depth of a standard deep mixture model (as mentioned in the Appendix and related work section) _without_ introducing discontinuities into the loss function or explicit partitioning schemes. Moreover, continuous mixing variables require a significantly different approach to training (namely a variational scheme) which further distinguishes LGFs from these other methods.\", \"please_see_below_our_responses_to_your_other_points\": \"1) Regarding the claim that our paper is pure intuition, please see the discussion above. As we discuss there, although we did not previously formulate our discussion in terms of concrete mathematical statements, we did take care to be precise where this was possible. In particular, note that when we talk about \\\"preserving topology\\\", we mean so in the well-defined mathematical sense that the topology (i.e. the open sets) of a space is preserved under continuous mappings that have continuous inverses. Such functions indeed preserve _every_ topological characteristic, including properties such as connectedness, compactness, genus (i.e. the number of \\\"holes\\\" in the space), etc. These are standard mathematical concepts, and we refer you to https://en.wikipedia.org/wiki/Homeomorphism for more information and references.\\n\\nRegarding the comment that the example in Figure 1 would be fixed by a simple mixture model - we acknowledge this directly in the 4th paragraph of section 2. However, as we argue there, this sort of approach does not scale to complicated datasets (like CIFAR10) where the topology of the target is completely opaque. In these instances we would like a method that can somehow learn the topology of the target on its own.\\n\\n2) As we discuss in section 2, the maximum likelihood objective corresponds to a mode-covering KL objective. In other words, the loss function is encouraged to ensure the support of our model covers all the modes of the target. This is a standard feature of likelihood-based training and is not specific to our model.\\n\\n3) Refer to the general comment above for a discussion on testing the improvement LGFs make on other flow models. As for other variational improvements on flow models, those are discussed in the related work section. These methods are either outside of the scope of LGFs (e.g. Das et al., (2019), Gritsenko et al. (2019)), orthogonal improvements that could be combined with LGFs (e.g. Ho et al. (2019)), or RAD (Dinh at al., 2019) which is discussed above. \\n\\n4) The linked work (Semi-Implicit Variational Inference (SIVI)) may indeed seem superficially similar, but it is notably different in both its motivation and application. In particular, SIVI is a method which looks to improve the approximate variational distribution in variational inference; LGF is a method which looks to improve the bijection approach in density estimation using normalizing flows. LGFs require a variational scheme to optimize their parameters (for which methods like SIVI could potentially be useful), but this is different from considering LGFs to be a variational method on their own. 
Plus SIVI has no notion of stacking models to obtain even greater expressiveness - there is only one hierarchical layer. \\nGANs are another instance of an implicit model, but they provide no reliable method to approximate log-likelihoods. \\n\\n5) It is not clear what is referred to here. Note that we report the average test set log-likelihood, for which higher is better. In all cases in Table 1, the log-likelihood is higher for LGF-MAF than for MAF.\\n\\n6) Thank you for pointing this out. We have corrected this typo.\\n\\nFinally, we wonder whether you can comment further on the mistakes you perceive in our experiments? Please note point 5) above.\"}",
"{\"title\": \"Response to Official Blind Reviewer #3\", \"comment\": \"Thank you for your review.\\n\\nRegarding your points about the theoretical underpinnings and empirical evaluation of our method, please see our general replies in the Common thread above.\\n\\nThank you very much for your other feedback also. We agree a diagram could help to convey intuition. We have also fixed the typos and clarity issues that you have pointed out in the uploaded version of the paper.\"}",
"{\"title\": \"Response to Official Blind Reviewer #1\", \"comment\": \"Thank you for your review.\\n\\nFirst and foremost, we wish to state our objection to certain uncharitable comments made within this review. Regarding your claim that our experiments are oversold -- we respectfully disagree with this. The statement of ours that you quote describes improved performance on a variety of _tasks_. We stand by this claim: compared with the standard flow-based baselines that we consider, we found that LGFs do improve density estimation for 2D densities, real-world tabular data, and high-dimensional image data, all of which have very distinct structures and dimensionalities. Please see our general reply above for further discussion of this point, as well as further empirical results.\\n\\nIn a similar vein, when describing our contribution, you state that our \\\"(e)xperiments suggest an improvement over MAF\\\". We again refer you to our discussion above, and emphasise that, for the models considered, our results demonstrate that our method yields significant and unambiguous benefit. Moreover, in addition to MAF, which we apply to tabular data, we also consider a large-scale RealNVP model that makes use of fully convolutional networks and a multi-scale architecture. We believe these changes yield a density model with a significantly different structure and characteristics to MAF, and it bears emphasising that our method provides benefit within this quite distinct context also.\\n\\nIn response to your claim that our paper is purely based on intuition, we refer you to our general reply above.\\n\\nRegarding BSDS300 - although we omitted this benchmark, we did consider Fashion-MNIST and CIFAR10, both of which have far higher dimensionality than this dataset (784 and 3072 as opposed to 63 dimensions). Even with this increase in dimension, LGFs still yielded a performance benefit over the baselines we considered. Please also see above for our results using Glow that we have obtained subsequently.\\n\\nRegarding RAD - please see above.\\n\\nFinally, regarding your points about the inexact log-likelihoods: as we argue in section 3.3.1 of the paper, this does not pose a major limitation for our method. When log-likelihoods are required at evaluation time, it is straightforward to obtain an estimate using importance sampling as described. This estimate is consistent in the sense that it is possible to achieve as much accuracy as desired simply by taking more importance samples. This approach is standard, for example, within the VAE literature -- and indeed your comment would seem to apply equally to all of these models as well as ours.\\n\\nFor implicit models like GANs, no straightforward estimate of the likelihood is available at all. In this setting it is also typically impractical to estimate the latent distribution $p(z|x)$ for a given $x$, which can be useful for downstream tasks. We also mention that various normalising flow models exist that do provide fast sampling as well as density estimation -- for instance, RealNVP, which we also consider in this paper.\\n\\nFinally, we have uploaded a revision of the paper that is not longer than 8 pages.\"}",
"{\"title\": \"Common Response to All Reviewers - Part III\", \"comment\": \"We likewise considered B-NAF (De Cao et al, 2019) model, but also encountered problems. In particular, we note that the default choice of $\\\\tanh$ nonlinearity suggested in the paper (and which we believe their results are based on) means that their model is not a bijection, since $\\\\tanh$ is not surjective. The suggested alternative choice of LeakyReLU does fix this, but then introduces a second problem of removing the gradient signal on the Jacobian terms in the loss, which become constant almost everywhere. We sought to address this by considering a soft version of the LeakyReLU defined by $x \\\\mapsto \\\\epsilon x + (1 - \\\\epsilon) \\\\log(1 + e^x)$, where $\\\\epsilon \\\\in (0, 1)$ corresponds to the slope on the negative part of the real line. However, while improving over the \\\"hard\\\" LeakyReLU, this did not train successfully even for simple 2D experiments. It was again unclear for us how best to consider an LGF version of B-NAF.\\n\\nWe have updated our repository with code to reproduce all these results, including for SoS and B-NAF.\\n\\n3. Why didn't we explicitly compare against RAD?\\n\\nAll reviewers cited RAD (Dinh et al., 2019) as a particular benchmark that we ought to have compared against. We agree that RAD is a very interesting paper. However, we do not believe that the version of RAD that exists online at present is ready to be used as a benchmark yet.\\n\\nThe choice of a discrete mixing variable brings significant difficulties that we discuss in section B of our appendix. In particular, naive stacking entails that the cost of evaluating likelihoods grows exponentially in the depth of the model, which quickly becomes intractable as larger models are used e.g. for image datasets.\\n\\nRAD avoids this by partitioning the input space in a way that achieves a linear cost in the depth of the model. However, this partitioning scheme means that the loss landscape becomes discontinuous. Some guidance is provided in their appendix on how to resolve this problem for the case of a one-dimensional density with three components, but it is not immediately clear how to extend this to higher dimensions or more complicated targets. It therefore fell outside the scope of this paper to establish RAD as a comparable benchmark for the problems we wished to consider. We also note that the RAD paper itself does not consider problems in higher than 2 dimensions.\"}",
"{\"title\": \"Common Response to All Reviewers - Part II\", \"comment\": \"2. Were our experiments comprehensive enough?\\n\\nAll reviewers argued that our experiments were not comprehensive enough to serve as a justification for our method, suggesting that, while we demonstrate improvement on the baseline models we consider, we may encounter diminishing returns when using LGFs in conjunction with a stronger underlying model.\\n\\nTo this, we first emphasise that the experiments we report do indeed demonstrate a conclusive and comprehensive benefit of LGFs within the scope considered. For the MAF and RealNVP models that we use, LGFs _uniformly_ improve performance across a variety of interesting density tasks. These tasks vary significantly in size, with the dimensionality of the data ranging over 3 orders of magnitude from fewer than 10 dimensions (for the 2D datasets, GAS, and POWER) to 3072 dimensions (for CIFAR-10). These tasks also vary significantly in terms of structure; in particular, we show that LGFs scale to handle large-scale, fully convolutional, multi-scale density models (i.e. the RealNVP model considered) that exploit the pixel structure of MNIST and CIFAR-10. (Indeed, this has not been demonstrated for several of the benchmarks suggested in the reviews.) While we do not include results for current state-of-the-art density models, we believe the consistent improvement over these baselines alone is compelling.\\n\\nHowever, we do also agree that more baselines are always better. To this end, we report additional experimental results here. First, we considered the Rational Quadratic Spline from the Neural Spline Flows (NSF) paper (Durkan et al., 2019). Here, LGF yielded improved stability and test scores for a variety of 2D experiments, which can be reproduced using an update that we have pushed to our repository above. (See for example the \\\"rings\\\" dataset, which is quite topologically distinct from the standard Gaussian prior and therefore difficult to train on for the baseline.) We also compared NSF with LGF-NSF on the POWER, GAS, and MINIBOONE UCI datasets (the limited time for rebuttal precluded the inclusion of the HEPMASS dataset). For each dataset and model, we report the test-set log likelihood averaged over three random seeds (higher is better). We use the hyperparameter settings provided in the Appendix of the NSF paper for the baseline. On POWER: NSF achieves 0.65, with 3004722 parameters; LGF-NSF achieves 0.66, with 3064418 parameters. On GAS: NSF achieves 12.50, with 3128392 parameters; LGF-NSF achieves 12.29, with 3161104 parameters. On MINIBOONE: NSF achieves -10.80, with 212102 parameters; LGF-NSF achieves -9.51, with 169168 parameters. So, when both models have a number of parameters that is approximately equal, we obtained the same behaviour on POWER, slightly lower likelihoods on GAS, but a significant improvement on MINIBOONE. We also note that we achieved these results with the first hyperparameter settings (namely, the choice of $p$, $q$, and $s$ and $t$ networks) for our model that we attempted, and therefore expect further improvements to be possible here.\\n\\nWe additionally experimented with a large-scale Glow model (Kingma & Dhariwal, 2018). As a baseline, we used exactly the model used in (Kingma & Dhariwal, 2018), but replaced their act norm layers with batch norm. We made our LGF model shallower, using only 2 multi-scale steps as opposed to the default of 3. 
We also reduced the number of hidden channels in the coupling networks from 512 (the default) to 256. Both our baseline and LGF here achieve the same 3.40 BPD on CIFAR10 after 1000 epochs. However, our model uses only 15M parameters, as opposed to the 44M required by Glow. Again, we achieved these results with minimal tuning of our method, and we believe it possible to improve performance here even further.\\n\\nWe also tried SoS (Jaini et al, 2019), but found we could not reproduce the results described in the paper. In particular, we found the method to be unstable -- to the extent that, for the UCI datasets, we did not achieve a non-NaN value for the average test-set log likelihood on any run (i.e. at least one test value was NaN). We believe this is due to the fact that the suggested configuration in Jaini et al. (2019) means the bijection considered involves 9th order polynomials, which can produce extremely large values when given inputs with values outside the range $[-1, 1]$. It was therefore unclear how to incorporate LGFs into this context.\\n\\n[Continued below....]\"}",
"{\"title\": \"Common Response to All Reviewers - Part I\", \"comment\": \"We thank the reviewers for their feedback and comments. In this thread, we respond to several points that were common across all reviews. More specific replies are also given in the individual reviewer threads below.\\n\\n1. Is our paper purely conjectural?\\n\\nAll reviewers were critical of what they perceive to be a lack of solid theoretical justification for our approach. It is indeed true that our discussion involves some conjecture, which we are careful to make explicit whenever it occurs. However, many of our statements are mathematically precise. For instance, the discussion in the second paragraph of section 2 constitutes a proof of the following proposition:\\n\\n* Proposition: If the supports of $p^*_X$ and $p_Z$ are not homeomorphic, then no normalising flow model with $p_Z$ as the prior will yield $p_X^*$ exactly.\\n\\nThis proposition pinpoints a concrete, well-defined problem with normalising flows that, to our knowledge, has not been addressed in the literature to date.\\n\\nSimilarly, although presented informally, our discussion in section 3.2 of the paper also provides a clear theoretical account of the potential benefits of LGF models compared with standard normalising flows. This discussion can be summarised by the following proposition:\\n\\n* Proposition: Suppose that $\\\\text{supp} p_X^*$ is open and $G(\\\\cdot; u)$ is continuous for each $u$. Suppose further that, for each $x \\\\in \\\\text{supp} p_X^*$, the set\\n\\\\[\", \"b_x\": \"= \\\\{u : p_Z(G(x; u)) |\\\\det DG(x; u)| > 0 \\\\}\\n\\\\]\\nhas positive Lebesgue measure. Then there exists $p_{U|Z}$ such that $\\\\text{supp} p_X = \\\\text{supp} p_X^*$.\\n\\nIn other words, a sufficiently expressive $p_{U|Z}$ can correct the problem identified in the earlier proposition, which stems from the fact that homeomorphisms do not exist between sets with different topologies. The condition on $B_x$ is trivially satisfied in the standard case that $p_Z$ has full support and $|\\\\det DG(x;u)| > 0$ for all $x$ and $u$. The proof of this result is straightforward and proceeds exactly as described in section 3.2 of our paper, with $p_{U|Z}$ constructed so as to downweight regions of mass that fall outside the support of the target.\\n\\nTo better clarify our discussion in the paper, we have stated both results more formally in an updated version resubmitted above.\\n\\n[Continued below....]\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThis paper conjectures that normalizing flows are fundamentally limited due to the architecture assumption that the generative function g is continuous in x. It is argued that this constraint makes maximum likelihood estimation difficult in general. Localised generative flows are proposed as a solution and consist in modeling the generative model as a continuous mixture of bijections. Experiments suggest an improvement over MAF.\", \"decision\": \"The observation that continuity imposes a hard constraint on the network is sound, and the proposed solution appears to show some improvement. However, in its current state, this work appears to be quite fragile both from a theoretical and experimental point of view. First, it is only conjectured that this constraint poses actual problems. Second, the experimental evaluation is weak and insufficient. It omits comparisons with more recent generative flows that have shown to be able to model discontinuous densities. For this reason, I do not recommend the paper for acceptance.\", \"further_arguments\": [\"The whole paper rests on intuition without strong theoretical backup.\", \"The experiments are quite poor and results frankly oversold. It is said the method \\\"improves performance across a variety of common density benchmarks\\\". While we see improvements in Table 1 over MAF, the comparison omits all recent architectures based on Normalizing Flows, such as TAN (Olivia et al, 2018), NAF (Huang et al, 2018), B-NAF (De Cao et al, 2019) or SOS (Jaini et al, 2019). All of those methods have reported better results than those provided in Table 1. They have also been shown empirically to work for discontinuous densities. While I understand that LGF can be combined with any flow architecture, the question remains whether using a continuous mixture translates into significant improvements for those baselines as well. The experimental benchmarks also omit datasets such as BSDS300, for which the higher dimensionality is usually challenging. The same goes for Table 2 which omits recent and better results, such as Glow or FFJORD.\", \"Closer to LGF, a proper experimental comparison to RAD (Dinh et al, 2019) would be appreciated.\", \"The proposed architecture supposedly enables better generative models. However, this comes at the price that the density can no longer be evaluated exactly and analytically. Since normalizing flows are also typically slow for sampling, this makes the benefits of the proposed architecture quite limited. In particular, it is not clear why generative models that are good at sampling only (e.g., GANs) should not then be preferred?\", \"As a result of the point above, the experimental results are reported only in terms of approximated negative log-likelihood. I do not think this is fair, since models like MAF do provide exact values. It also makes the comparison with previous methods more difficult.\"], \"further_feedback\": [\"As per ICLR policy, higher standards should be applied to papers with 9 or more pages. I am confident the paper could be written within 8 pages only.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose to extend flow-based density models by replacing a single bijection with a hierarchical mixture of bijections. Each component in the mixture is then required to only push the prior onto a local region; this helps improve the coverage of $\\\\mathcal{X}$. This is motivated by the conjecture that in many cases the topology of $\\\\mathcal{X}$ might be overly complicated to be effectively captured by a single bijection. Formally, this is achieved by introducing a conditional random variable $U|Z$. In doing however, the log-likelihood is rendered intractable and a variational approximation must instead be resorted to. A recursive formula for computing the ELBO is introduced in this vain.\\n\\nOverall, I think this is a generally interesting contribution to the normalizing-flow literature that I expect to spark further research. However, there are some rough edges to this paper. The initial motivation is well-presented and relatively easy to follow, though a diagram would serve to cement the intuition regarding the support mismatch. The issue mentioned in footnote 2 deserves further discussion. At the same time, while well-reasoned, their justifications are nonetheless largely conjectural and further theoretical or empirical evidence would be welcome, both for characterising the pathologies they aim to redress and their proposed solution.\\n\\nFor the experiment section, I would have liked to have seen comparisons not only to the simplest baseline but also to some of the other methods mentioned in related works. In general, the experiment section is quite short and I didn't get a very good sense of how well this method performs.\", \"the_following_should_be_addressed\": [\"provide more evidence for the conjectures surrounding the motivation and derivation\", \"supply more varied baselines (e.g. RAD model)\"], \"minor_comments\": [\"at the top of page 4, you refer to $p_{U|X}$ several times, but I think you mean $p_{U|Z}$\", \"in the paragraph after equation (2), the $\\\\theta$ superscript seems to be missing from the $p_X^\\\\theta$\", \"in the first sentence of section 3.1, you refer to \\\"the single $g$ used in equation 2\\\", but equation 2 mentions no $g$\", \"in the first line on page 3, you talk about some region of supp $p_X^\\\\theta$ being pushed out of supp $p_X^*$, shouldn't this be the other way round since the KL is infinite only if the the support of $p_x^*$ is not contained within the support of $p_X^\\\\theta$?\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduces a straight-forward way to expand the flow models by considering mixture of flow distributions. The idea is not very novel since several previous work have tried the mixture of flow such as the mentioned RAD and Deep Mixture. The paper studies some further improvements such as using the continuous auxiliary variable and stacking multiple mixture layers.\", \"the_major_concerns_are_the_following\": \"1) The paper tries to solve a \\u201cproblem\\u201d build upon intuition. The paper explains as \\u201cthe normalizing flow places global constraint on the bijection\\u201d, \\u201cit need to match the topology of X to the topology of Z \\u201d, \\u201ccontinuous functions necessarily preserve topology\\u201d. What kind of topological properties are referred to here? Are all topological properties preserved under continuous function? It needs to be more accurate when using such terminologies. The intuition of the paper is weak and heuristic. The example in Figure 1 can potentially be easily solved with a two component Gaussian mixture of input Z to a vanilla flow model.\\n\\n2) For the mixture p(X), will the proposed method generate samples concentrated on one or some of the components? Why or why not?\\n\\n3) Considering there are a plenty of improvements of flow models, it is neccesary for the proposed method to compare with, at least for some methods explained in the Related Work section.\\n\\n4) Since the proposed methods inevitably lose the advantage of analytic density property of flow methods, it is better to show some advantage over implicit or semi-implicit methods. For example, (https://arxiv.org/abs/1805.11183) also uses a hierarchical model with continuous auxiliary variables and a marginalization similar to Eq.(4) in this paper. How does the proposed method related to or compared to these methods?\\n\\n5) In experiment section, in Table 1, the proposed method is worse than MAF for 2 datasets out of 4? In Table 2, what are the numbers refer to?\\n\\n6) On page 4, it should be p(U|Z) not p(U|X)?\\n\\nIn sum, I think though the paper makes contribution on exploring better flow models but the novelty is relatively weak, the discussion and comparison of related work is insufficient and the experiments are not convincing or have mistakes. I think a modification is necessary before publishing.\\n\\n################\\n\\nI have read the author's feedback.\"}",
"{\"comment\": \"Hi Kevin,\\n\\nWe are working hard to release our code as quickly as possible, and at this stage plan to do so by October 4th, when reviewers will be assigned to papers.\\n\\nWe respectfully disagree with your assessment that this practice is unfair, since the facility to upload code past the submission deadline is available to all authors, either via the method we have chosen, or by posting an updated link as a comment to their work.\\n\\nWe also note that there is nothing in the ICLR call for papers that forbids or even discourages this approach.\\n\\nWe will be fully transparent about the exact times that we upload our work, which will be timestamped in our git commits. We leave any decisions about how to make use of this information to the discretion of the reviewers.\", \"title\": \"Response\"}",
"{\"comment\": \"Hi,\\nAs of close to 56 hours after submission deadline , no code is present in the provided github link. It is not fair to provide a placeholder link for code submissions (which impact the review process) and submit code taking considerable buffer time after submission deadline.\", \"title\": \"No code in provided github link even after 56 hours of submission deadline\"}"
]
} |
HJxkvlBtwH | Certifying Neural Network Audio Classifiers | [
"Wonryong Ryou",
"Mislav Balunovic",
"Gagandeep Singh",
"Martin Vechev"
] | We present the first end-to-end verifier of audio classifiers. Compared to existing methods, our approach enables analysis of both the entire audio processing stage and recurrent neural network architectures (e.g., LSTM). The audio processing is verified using novel convex relaxations tailored to feature extraction operations used in audio (e.g., Fast Fourier Transform), while recurrent architectures are certified via a novel binary relaxation for the recurrent unit update. We show the verifier scales to large networks while computing significantly tighter bounds than existing methods for common audio classification benchmarks: on the challenging Google Speech Commands dataset we certify 95% more inputs than the interval approximation (the only prior scalable method), for a perturbation of -90dB. | [
"Adversarial Examples",
"Audio Classifier",
"Speech Recognition",
"Certified Robustness",
"Deep Learning"
] | Reject | https://openreview.net/pdf?id=HJxkvlBtwH | https://openreview.net/forum?id=HJxkvlBtwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"RISiI_bTQ6",
"SklqlcwuiS",
"r1gpOKwOiB",
"Hye17tPOsB",
"SJeB2PvusB",
"BJeL0admqr",
"rJlkTjrTFB",
"S1lGzu0hYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746774,
1573579250380,
1573579125047,
1573579031371,
1573578668848,
1572208077545,
1571802039305,
1571772426171
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2345/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2345/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2345/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2345/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2345/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2345/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2345/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper developed log abstract transformer, square abstract transformer and sigmoid-tanh abstract transformer to certifiy robustness of neural network models for audio. The work is interesting but the scope is limited. It presented a neural network certification methods for one particular type of audio classifiers that use MFCC as input features and LSTM as the neural network layers. This thus may have limited interest to the general readers.\\n\\nThe paper targets to present an end-to-end solution to audio classifiers. Investigation on one particular type of audio classifier is far from sufficient. As the reviewers pointed out, there're large literature of work using raw waveform inputs systems. Also there're many state-of-the-art systems are HMM/DNN and attnetion based encoder-decoder models. In terms of neural network models, resent based models, transformer models etc are also important. A more thorough investigation/comparison would greatly enlarge the scope of this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply for Reviewer#1\", \"comment\": \"Thank you for the feedback and interest in our work. Below we answer the concerns:\", \"q1\": \"Can you provide background on neural network certification?\", \"a1\": \"Certification of neural networks is an emerging area which aims to prove the robustness of neural networks against adversarial perturbations. For an introduction to the area, we refer the reviewer to related work listed in the introduction, particularly [1] which provides a unifying framework for many of the existing approaches. You can also check the answer we provide to Reviewer 2 for more technical details.\\n\\nCertification of neural networks is still at an early stage and none of the existing approaches scales to state-of-the-art networks and datasets (e.g. ImageNet). While here we do not directly consider the state-of-the-art architectures you listed, our method for certifying recurrent networks is general and can be applied to those architectures as well. However, certification methods for computer vision tasks have scaled from verifying small networks with 200 neurons to large residual networks with thousands of neurons and we expect the same progress in the audio domain in the future.\\n\\nOur work is the first one to certify audio classifiers. None of the existing approaches can handle this task and are mostly limited to computer vision. The only alternative approach which can handle recurrent networks, POPQORN, does not scale to the audio benchmarks as we demonstrated in our experiments in Appendix E.\", \"q2\": \"Section 2. Threat model: what kind of noises are using?\", \"a2\": \"Please see the definition of our threat model at the beginning of Section 2.\\n\\n\\n[1] Salman, Hadi, et al. \\\"A convex relaxation barrier to tight robust verification of neural networks.\\\", NeurIPS 2019\"}",
"{\"title\": \"Reply for Reviewer#2\", \"comment\": \"Thank you for the feedback and interest in our work. Below we answer the concerns:\", \"q1\": \"There should be a more thorough introduction on how to verify the robustness of a neural network classifier for noise perturbation. There should be some background summary such as what are the existing approaches and how to measure the robustness of a neural network classifier, etc..\", \"a1\": \"Thank you for the suggestion, we will extend our background section with the description of existing approaches for certification of neural networks. In the meantime, here we provide short summary:\\n\\nMost existing state-of-the-art scalable certification methods aim to capture all possible behaviors of a neural network using convex relaxations. In this perspective, we treat the input with the predefined perturbation range as a multi-dimensional polyhedron and pass it through the operations within the network.\\nThe most basic approach here uses interval propagation - it maintains the minimum and maximum possible value for each neuron in the network. More recent work (listed in Section 2) presents more elaborate relaxations to capture the propagation of the initial perturbation through the network.\\nRobustness certification is performed by checking whether the neuron corresponding to the true label is always greater than neurons corresponding to other labels with respect to the input region. If this is true, then we can establish the network correctly classifies the input under all possible realized perturbations. We will further elaborate on this in the next revision.\", \"q2\": \"The so-called audio processing pipeline here is actually speech processing pipeline. I am not sure if \\\"audio\\\" is the right term in a strict sense.\", \"a2\": \"Thank you for pointing this out, we will modify our terminology to more precisely match what we describe in the paper.\", \"q3\": \"Can you provide in-depth discussion and comparison between your approach and POPQORN?\", \"a3\": \"Please see our answer to the main points above where we provide answers to these questions and an in-depth comparison between POPQORN and our approach.\"}",
"{\"title\": \"Reply for Reviewer#3\", \"comment\": \"Thank you for the feedback and interest in our work. Below we answer the concerns:\", \"q1\": \"While I found section 3 to be useful to get an intuition of the proposed method, I still feel that it could be condensed a bit to add in additional details. For example, the authors don\\u2019t describe \\u201cback-substitution\\u201d in the work, which I believe should be described in the main text.\", \"a1\": \"We added more details on back-substitution in the last paragraph of Section 3. We have not described back-substitution in full detail in the main text as it is not part of our main contributions. Full details of this algorithm can be found in [1]. We also provide full derivation of bounds in our overview example using back-substitution in Appendix C.\", \"q2\": \"How sensitive was the provability metric to the choice of these 100 test examples?\", \"a2\": \"To check the sensitivity of our results to the choice of 100 samples that we verified we ran the verification on 10 random permutations of our test set (each time with a different seed). We describe this experiment in Appendix F and show the verification results in Figure 9 with error bars indicating the variance. These results show that provability is not significantly affected by the choice of 100 element subset used for verification. In the next revision, we will include these error bars for each plot.\", \"q3\": \"The section on \\u201cProvable defense for audio classifiers\\u201d was not very clear to me. The authors state that \\u201cTo train, we combine standard loss with the worst case loss obtained using interval propagation.\\u201d I was not clear on what the modified loss is. Could the authors please clarify this in the text, preferably a mathematical formulation? Also, I\\u2019m curious why these experiments are only conducted on the FSDD set, but not on the GSC set.\", \"a3\": \"In Appendix G we have now provided the detailed description of our provable defense procedure, including a mathematical formulation of a modified loss, which closely follows Gowal et al. (2018). We also ran this experiment for GSC network \\u2014 these results are also presented in Appendix G.\", \"q4\": \"Why does the interval analysis technique perform so much worse on the GSC set relative to the FSDD set? Could you describe more details about the model architectures used for the two tasks?\", \"a4\": \"Based on our experiments, we believe that certifying GSC is more difficult for two reasons: (i) the main factor which impacts provability is the number of frames \\u2014 each additional frame causes accumulation of errors introduced by loose convex relaxations which ultimately makes the certification more likely to fail. This reflects in the fact that GSC, where samples have 19 frames on average, is significantly more difficult to certify than FSDD which has samples with 15 frames on average, and (ii) given that GSC and FSDD have 30 and 10 output classes, respectively, in the case of GSC verifier needs to prove that the final logit of correct class is greater than the logit of all other 29 classes which is naturally more difficult than proving it for only 9 other classes as is the case with FSDD.\", \"q5\": \"Could you quantify and report the difference in the volume between POPQORN and your approach? 
Also, why are the approximation volumes not comparable between the two systems?\", \"a5\": \"Please see our answer to the main points above where we provide answers to these questions and in-depth comparison between POPQORN and our approach.\", \"q6\": \"Could you clarify that there is significant body of work which operates directly on the time domain signal?\", \"a6\": \"Yes, thank you for the helpful references. We made a clarification and added the references you suggested.\\n\\n\\n[1] Singh, Gagandeep, et al. \\\"An abstract domain for certifying neural networks.\\\" Proceedings of the ACM on Programming Languages 3.POPL (2019): 41.\"}",
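A rough sketch of the combined loss described in the answer to Q3 above, in the Gowal et al. (2018) style. Here `interval_logit_bounds` is a hypothetical helper returning elementwise logit bounds for the input box, and `kappa` is an assumed mixing weight; this is illustrative, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def provable_training_loss(model, interval_logit_bounds, x, y, eps, kappa=0.5):
    """kappa * standard cross-entropy + (1 - kappa) * worst-case cross-entropy,
    where the worst case puts every wrong class at its upper logit bound and
    the true class at its lower logit bound (interval propagation)."""
    standard = F.cross_entropy(model(x), y)
    lower, upper = interval_logit_bounds(model, x - eps, x + eps)
    worst = upper.clone()
    # Replace the true-class entry with its lower bound (worst case for it).
    worst.scatter_(1, y.unsqueeze(1), lower.gather(1, y.unsqueeze(1)))
    return kappa * standard + (1 - kappa) * F.cross_entropy(worst, y)
```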
"{\"title\": \"Main Points for Common Concerns\", \"comment\": \"We thank the reviewers for their comments. We first answer the main points, followed by specific questions:\\n\\n - Comparison with POPQORN:\\n\\n We provide detailed quantitative comparison between our approach and POPQORN in Appendix E. The main takeaways are:\\n\\n 1) Our method produces bounds strictly better than interval bounds (see Theorem 1). This means the maximum distance between the true function and our bounds cannot grow arbitrarily large. POPQORN offers no such guarantees. Although POPQORN uses gradient descent to optimize for the bounds with minimum volume, there are no convergence or optimality guarantees. This can cause imprecise results in practice \\u2014 we found many inputs in our audio benchmarks for which POPQORN produces bounds worse than intervals, please see Figure 6 in Appendix E for one example. Our bounds are strictly better than intervals and do not suffer from this problem.\\n\\n 2) While we experimented with POPQORN on synthetic inputs and found that it indeed produces relaxations with slightly smaller volume than our method, for realistic inputs which appear in our audio benchmarks it often performs worse. As it is non-comparable to intervals, meaning that all points obtained via intervals are not included inside those obtained via POPQORN and vice-versa, its bounds are often worse than intervals, e.g. see Figure 6.\\n\\n 3) As POPQORN is relatively slow (108 minutes per sample), we evaluated it only on 10 samples. We plugged their relaxation of tanh * sigmoid instead of our bounds and demonstrated that it results in 0 verified samples on our benchmarks. At the same time, we verify 4 out of these 10 samples (in 29 seconds per sample). We believe the core reason is the existence of many pathological cases as described in the previous point where gradient descent used by POPQORN converges to suboptimal solution which ends up being worse than interval bounds. We note that we contacted the authors to confirm that we are using their framework correctly.\\n\\n 4) POPQORN is 100 - 1000 times slower than DAC as it relies on optimization via gradient descent whereas DAC produces bounds in constant time. This makes POPQORN less practical for verification at the scale of audio classifiers.\\n\\n\\n - Speedup of the existing implementation:\\n\\n We optimized our implementation which improved the runtime of back-substitution. This results in a significant speedup of end-to-end verification: for update depth of 3 now it takes 17.74 seconds per input, compared to 87.92 seconds reported in Table 2. For larger back-substitution depths, the speedup is even larger. We will update running time analysis for experiments in Table 2.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this work, the authors study the task of building neural network classifiers for audio tasks which can be certified as being resistant to an adversarial attack. One of the contributions of this work is the development of abstract transformers which can be used for the data processing frontend used in typical audio applications. The work also proposes an abstract transformers for LSTMs which is stated to be much faster to use in practice than previous work.\\n\\nOverall, this work is interesting and I think it would be a great addition to the conference. The paper is generally well written in the initial sections, and the main ideas are very clearly presented. However, there are a number of missing details, particularly in the final sections which discuss the experimental validation. In its present form, I am rating this work as \\u201cweak reject\\u201d, but I would increase my scores if the authors can improve the final sections in the revised draft.\", \"main_comments\": \"1. While I found section 3 to be useful to get an intuition of the proposed method, I still feel that it could be condensed a bit to add in additional details. For example, the authors don\\u2019t describe \\u201cback-substitution\\u201d in the work, which I believe should be described in the main text.\\n2. A clarification question: When computing provability, the authors state that \\u201cWe randomly shuffled the test data and then, for every experiment, inferred labels one by one until the number of correctly classified samples reached 100. We report the number of provably correct samples out of these 100 as our provability.\\u201d How sensitive was the provability metric to the choice of these 100 test examples? Was the metric computed by repeatedly sampling 100 test cases, for example?\\n3. The section on \\u201cProvable defense for audio classifiers\\u201d was not very clear to me. The authors state that \\u201cTo train, we combine standard loss with the worst case loss obtained using interval propagation.\\u201d I was not clear on what the modified loss is. Could the authors please clarify this in the text, preferably a mathematical formulation? Also, I\\u2019m curious why these experiments are only conducted on the FSDD set, but not on the GSC set. \\n4. Figure 5c. Why does the interval analysis technique perform so much worse on the GSC set relative to the FSDD set? On a related note, it would also be useful to describe some more details about the model architectures for the two tasks.\\n5. The section on \\u201cExperimental comparison with prior work\\u201d similarly left me with a number of questions. The authors mention that \\u201cWe found that, in practice, optimization approach used by POPQORN produces approximations of slightly smaller volume than our LSTM transformer (although non-comparable).\\u201d Could these be quantified and reported in the paper. Also, why are the approximation volumes not comparable between the two systems.\", \"minor_comment\": \"It is true that most works in audio classification and speech recognition use processed frontend features such as MFCCs. However, there is also a significant body of work which operates directly on the time-domain signal. 
Perhaps it would be better to clarify this in the text?\", \"for_example\": \"Pascual S, Bonafonte A, Serra J. SEGAN: Speech enhancement generative adversarial network. arXiv preprint arXiv:1703.09452. 2017 Mar 28.\\nSainath TN, Weiss RJ, Senior A, Wilson KW, Vinyals O. Learning the speech front-end with raw waveform CLDNNs. In Sixteenth Annual Conference of the International Speech Communication Association 2015.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents an end-to-end neural network verifier that is specially designed for audio signal processing to certify the robustness of a system when facing noise perturbation. The approach is based on abstract transformers to deal with non-linearity in the audio signal processing pipeline and LSTM acoustic model. The authors implement the approach in a so-called \\\"deep audio certifier\\\" system and conduct experiments on various datasets and network architectures. The results seem to be supportive. The idea is good and the mathematical derivation is meticulous (although appears to be a bit tedious). This is an interesting paper globally but I have some concerns.\\n\\n1. There should be a more thorough introduction on how to verify the robustness of a neural network classifier for noise perturbation. There should be some background summary such as what are the existing approaches and how to measure the robustness of a neural network classifier, etc..\\n\\n2. The so-called audio processing pipeline here is actually speech processing pipeline. I am not sure if \\\"audio\\\" is the right term in a strict sense. \\n\\n3. This paper is closely related to the POPQORN paper. So it is good to see some in-depth discussion and comparison between the two. One thing that is not clear to me is that the authors claim POPQORN is very time-consuming but DAC is much faster. I wonder if the authors can elaborate a bit more on this issue. What exactly makes POPQORN time-consuming in this case? \\n\\nP.S. rebuttal read. I will stay with my score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes an approach for the certification of speech classification neural networks against adversarial perturbations. The network is based on a simple pipeline starting from MFCC to make an utterance level classifier via a last hidden state of an LSTM acoustic model. This approach can perform analysis through this pipeline. I feel that this paper is very difficult to follow because of the lack of background technique explanations based on neural network certification, and the lack of technical surveys of speech recognition and related areas. This paper requires such major restructuring and more surveys to make it in good shape.\", \"Comments\", \"The authors only list CTC related techniques as state-of-the-art ASR, but state-of-the-art ASR is still based on the HMM/DNN hybrid system or attention-based encoder-decoder/RNN transducer. They seriously lack the surveys o this area. Also several technical terminologies are not common in the speech recognition are (e.g., automated speech recognition --> automatic speech recognition)\", \"\\\"Additionally, audio systems typically use recurrent architectures (Chiu et al., 2017)\\\": There are a lot of state-of-the-art ASR systems including TDNN (Kaldi), CNN, and transformer. Again the paper does not have enough surveys.\", \"The font size of the characters in figure 1 is too small.\", \"I cannot understand why the paper uses MFCC. The community was already moved from MFCC to log mel filterbank. We don't need final DCT.\", \"Section 2. Threat model: what kind of noises are using?\", \"Page 3, power operation: Either side must be a conjugate to get the power spectrum.\"]}"
]
} |
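On Review #1's last point above (the power-spectrum conjugate): in an MFCC front end, the power spectrum of a frame is the FFT multiplied elementwise by its complex conjugate. A tiny illustrative check in numpy (not the authors' code):

```python
import numpy as np

frame = np.hanning(512) * np.random.randn(512)   # one windowed audio frame
X = np.fft.rfft(frame)                           # complex spectrum
power = (X * np.conj(X)).real                    # |X|^2: one factor conjugated
assert np.allclose(power, np.abs(X) ** 2)        # equivalent formulation
```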
SkeJPertPS | Collaborative Training of Balanced Random Forests for Open Set Domain Adaptation | [
"Jongbin Ryu",
"Jiun Bae",
"Jongwoo Lim"
] | In this paper, we introduce a collaborative training algorithm of balanced random forests for domain adaptation tasks that avoids the overfitting problem. In real scenarios, most domain adaptation algorithms face challenges from noisy or insufficient training data. Moreover, in open set categorization, unknown or misaligned source and target categories add difficulty. In such cases, conventional methods suffer from overfitting and fail to successfully transfer the knowledge of the source to the target domain. To address these issues, the following two techniques are proposed. First, we introduce an optimized decision tree construction method, in which the data at each node are split into equal sizes while maximizing the information gain. Compared to conventional random forests, it generates larger and more balanced decision trees due to the even-split constraint, which contributes to enhanced discrimination power and reduced overfitting. Second, to tackle the domain misalignment problem, we propose the domain alignment loss, which penalizes uneven splits of the source and target domain data. By collaboratively optimizing the information gain of the labeled source data as well as the entropy of the unlabeled target data distributions, the proposed CoBRF algorithm achieves significantly better performance than the state-of-the-art methods. The proposed algorithm is extensively evaluated in various experimental setups on challenging domain adaptation tasks with noisy and small training data as well as open set domain adaptation problems, with two backbone networks, AlexNet and ResNet-50. | [
"balanced random forests",
"source",
"domain adaptation tasks",
"noisy",
"information gain",
"collaborative training",
"collaborative training algorithm",
"overfitting problem"
] | Reject | https://openreview.net/pdf?id=SkeJPertPS | https://openreview.net/forum?id=SkeJPertPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fTzxnkK3iD",
"B1gHvfE_oH",
"r1gYn-4diB",
"S1ecq-Ndir",
"S1l_sCFXcS",
"ryg6fy_ptB",
"H1gbR6xTtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746739,
1573565021419,
1573564849016,
1573564817839,
1572212383883,
1571811092753,
1571782089395
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2344/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2344/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2344/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2344/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2344/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2344/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes new target objectives for training random forests for better cross-domain generalizability.\\n\\nAs reviewers mentioned, I think the idea of using random forests for domain adaptation is novel and interesting, while the proposed method has potential especially in the noisy settings. However, I think the paper can be much improved and is not ready to publish due to the following reviewers' comments:\\n\\n- This paper is not well-written and has too many unclear parts in the experiments and method section. The results are not guaranteed to be reproducible given the content of the paper. Also, the organization of the paper could be improved.\\n\\n- The open-set domain adaptation setting requires more elaboration. More carefully designed experiments should be presented. \\n\\n- It remains unclear how the feature extractors can be trained or fine-tuned in the DNN + tree architecture. Applying trees to high-dimensional features sacrifices the interpretability of the tree models, hampering the practical value of the approach.\\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"The answer to Reviewer #3\", \"comment\": \"Thank you for helpful comments on our paper.\\n\\n1. We use noisy labels only for the \\u2018source\\u2019 domain in the experiments since we design this experiment to validate the robustness of the proposed algorithm. \\nWe randomly change the original label to create a noisy setting. The specified portion of changed noise data is 40% in Table 5 and 60% and 80% in Table 4. This noisy setting is also described as label corruption in [1].\\nAccordingly, we revised the paper to introduce this detail in page 7, as below.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014 Revised paper \\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nIn this experiment, the training labels of the specified portion of the source domain are randomly changed for the noise condition, which is also referred to as the label corruption in Shu et al. (2019). \\nCorruption levels are set to $40, 60$, and $80\\\\%$ of the source domain (refer to the supplementary material for full experimental results.\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--\\n\\n[1] Yang Shu, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Transferable curriculum for weakly supervised domain adaptation. In AAAI Conference on Artificial Intelligence, 2019\\n\\n\\n2. We revised the paper to describe the training process of the proposed method more clearly. As mentioned by R2, the hyperparameters in training the CoBRF and ablation studies are added in page 6 and 7 as below. \\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--Revised paper \\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nTo train the CoBRF, we use 100 trees with a maximum depth of 8.\\nWhen there is no training data fallen on a node, we prune the tree at that node.\\nThe number of randomly selected feature dimension for the SVM training is set to 250.\\nThe input feature of SVM is normalized for the stable learning of the hyperplane.\\nWe repeat the SVM training 15 times to select the optimal split in each node.\\n\\n Depth | The number of trees (T)\\n | 5 | 10 | 50 | 100 | \\n 6 | 62.2 | 65.4 | 67.5 | 67.7 |\\n 7 | 60.2 | 63.5 | 67.1 | 67.8 |\\n 8 | 58.9 | 63.3 | 67.5 | 68.0 |\\n 9 | 55.6 | 60.0 | 66.2 | 67.6 |\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\u2014-\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n \\n3. Yes, for open set domain adaptation, we need to elaborate train models so that it does not simply minimize or adjust domain shifts. The source and target domains in the open set condition have a different set of classes, including unknown classes. Therefore, due to this condition of the open set domain adaptation, the overfitting problem should be suppressed. 
To do this, we take advantage of the good property of random forests [2], which rarely overfit to the training data since they form weak ensembles of multiple decision trees as studied in previous works [3][4]. Therefore, we argue that the proposed CoBRF is robust to the overfitting problem due to the unbalanced classes and the existence of the unknown class of the open set domain adaptation task.\\n\\n[2] Leo Breiman. Random forests. Machine Learning, 45(1):5\\u201332, 2001.\\n[3] Abraham J Wyner, Matthew Olson, Justin Bleich, and David Mease. Explaining the success of adaboost and random forests as interpolating classifiers. The Journal of Machine Learning Research, 18(1):1558\\u20131590, 2017.\\n[4] Heitor M Gomes, Albert Bifet, Jesse Read, Jean Paul Barddal, Fabr\\u00edcio Enembreck, Bernhard Pfharinger, Geoff Holmes, and Talel Abdessalem. Adaptive random forests for evolving data stream classification. Machine Learning, 106(9-10):1469\\u20131495, 2017.\\n \\n4. First of all, we apologize for missing the detailed description of the Openset1 experiment. We follow the setting of Saito et al. (2018) [5], which do not use 21-31 classes of the source data for training, and thus only 1-10 classes of source domain are included in the training step. \\nWe detect the unknown class by thresholding the estimated probability of the CoBRF for target data. The threshold value is set to 0.3 in the experiments. \\n\\nWe revised the paper as follows, and you can find the revised version on page 8 of the paper.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014 Revised paper \\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nall data with label 21$\\\\sim$31 in the target domain are used as one unknown class. According to Saito et al. (2018), the unknown class of the source data is not used in training, and the unknown class of the target data is classified by thresholding the class probability. The thresholding value is set to 0.3 in the epxeriments.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014---\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n[5] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In European Conference on Computer Vision, pp. 153\\u2013168, 2018. \\n\\n5. We use 10% of source data and all (100%) of target data. We validate the performance of CoBRF on the small labeled training data (source domain). We also train other baseline methods with the same setting in the experiments for the fair comparison.\"}",
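A minimal sketch of the two experimental protocols described in the response above: label corruption of the source domain and unknown-class detection by probability thresholding. The uniform reassignment of corrupted labels is my assumption (the response only says labels are "randomly changed"):

```python
import numpy as np

def corrupt_labels(y, num_classes, fraction, rng=None):
    """Randomly reassign `fraction` of the source labels (label corruption).
    A reassigned label may coincide with the original; handling that would
    be a straightforward extension."""
    rng = rng or np.random.default_rng(0)
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = rng.integers(0, num_classes, size=len(idx))
    return y

def open_set_predict(class_probs, threshold=0.3):
    """Predict the argmax class, or 'unknown' (-1) when the estimated class
    probability falls below the 0.3 threshold used in the experiments."""
    pred = class_probs.argmax(axis=1)
    pred[class_probs.max(axis=1) < threshold] = -1
    return pred
```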
"{\"title\": \"The answer to Reviewer #1\", \"comment\": \"Thank you for helpful comments on our paper.\\n\\n1. Taking advantage of the ImageNet dataset\\nAll baseline methods in this paper use pretrained models from the ImageNet dataset.\\nFor example, OSVM, ATI-\\u03bb + OSVM and Saito et al. in Table 6 utilize AlexNet pretrained by the ImageNet dataset. JAN, ATI-semi, and CDA in Table 7 also select ResNet-50 that were pretrained with the ImageNet dataset as their backbone networks.\\n\\nWe also use AlexNet for Table 6 and ResNet-50 for the other tables where two backbone networks are pretrained by ImgaeNet.\\n\\nThus both the proposed CoBRF and state-of-the-art baseline works take advantage of the ImageNet dataset to train domain adaptation methods.\\n\\n\\n2. End-to-end learning of neural networks\\nPlease note that the main focus of CoBRF is to improve the last classification layer in the neural network assuming that the deap features are fixed.\\n\\nAlthough CoBRF does not update the neural network parameters, it outperforms the state-of-the-art works in various experiments. We argue that the pre-trained network provides generalized features for generic classification, which makes CoBRF work well for domain adaptation tasks. Also, most state-of-the-art works learn their models on the pretrained networks from ImageNet, which indicates that they also depend on the capabilities of pretraining on the large dataset.\\n\\n3. Training from scratch\\nAlthough CoBRF is mainly for learning more robust and generic classifier, we can train a network with CoBRF from scratch by combining another domain adaptation method. We first train a deep neural network with another method from scratch, and then we can construct CoBRF on the feature set for the trained deep neural networks. In addition, we can train a deep neural network with triplet sampling from the split result of random forests proposed in [1]. Using this method, a deep neural network can be trained from scratch with random forests.\\n[1] Under review. Submitted to International Conference on Learning Representations, 2020\\n\\nWe will revise our paper based on the answer in the final version if you give the opportunity to present the paper to the conference.\"}",
"{\"title\": \"The answer to Reviewer #2\", \"comment\": \"Thank you for helpful comments on our paper.\\n\\n1. We will revise the paper to improve readability. We will do our best to refine the rough expressions of the paper. We are working on improving the paper, and we will make it better for the final version. It would be greatly appreciated if more detailed comments could be provided.\\n\\n2. We added hyperparameter settings such as the number of trees, maximum depth, feature dimensionality of the SVM training, and the number of repeats in training a random forest to page 6. We also supplemented the ablation study with regard to the number of trees and maximum depths in Table 3. In the ablation study, the maximum depth is 8 with 100 decision trees to consider both accuracy and complexity. \\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--Revised paper \\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nTo train the CoBRF, we use 100 trees with a maximum depth of 8.\\nWhen there is no training data fallen on a node, we prune the tree at that node.\\nThe number of randomly selected feature dimension for the SVM training is set to 250.\\nThe input feature of SVM is normalized for the stable learning of the hyperplane.\\nWe repeat the SVM training 15 times to select the optimal split in each node.\\n\\n Depth | The number of trees (T)\\n | 5 | 10 | 50 | 100 | \\n 6 | 62.2 | 65.4 | 67.5 | 67.7 |\\n 7 | 60.2 | 63.5 | 67.1 | 67.8 |\\n 8 | 58.9 | 63.3 | 67.5 | 68.0 |\\n 9 | 55.6 | 60.0 | 66.2 | 67.6 |\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\u2014-\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\nPlease refer to page 6, 7, and Table 3 for more information on the hyperparameters and ablation study.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes an approach to building random forests that are\\nbalanced in such a way as to facilitate domain adaptation. The authors\\npropose to split nodes not only based on the Information Gain, but\\nalso so that the sizes of each set passed to left and right children\\nare equal. Another extension to the standard random forest training\\nprocedure is the use of a collaborative term subtracted from the\\ninformation gain over the source domain. This term encourages\\nalignment of the source and target domains in the leaves of trees in\\nthe forest. Experimental results are given on a range of standard\\nand open-set domain adaptation datasets.\", \"the_paper_has_a_number_of_issues\": \"1. There are some problems with clarity, and the English is somewhat rough\\n throughout. These problems are not terribly distracting, but the\\n manuscript could use more polish.\\n2. I don't see a detailed discussion anywhere about the\\n hyperparameters used for fitting the random forests. How many trees\\n are used? What is the max depth? These parameters should be\\n discussed and included in the ablations in order to appreciate the\\n complexity/performance tradeoffs.\\n\\nThis paper has some interesting ideas in it, and the experimental\\nresults are excellent. I would encourage the authors to move salient\\nmaterial from the supplementary material to the main article and to\\nprovide a more thorough discussion of the complexity of the models\\n(the structural parameters of the trees/forests).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new target objects for training random forests that has better generalizability across domains. The authors demonstrated that the proposed method outperforms existing adversarial learning based domain adaptation methods.\\n\\n\\nStrength\\n\\nThe paper is clearly-written. The two objectives(balanced split and common split distribution between source and target domain) are well motivated and explained in the paper.\\n\\nThe authors show that empirically the proposed method outperform several existing adversarial learning based domain adaptation methods.\\n\\n\\nWeakness\\n\\nOne of the main draw back of the method is that it relies on the features extracted from existing pre-trained neural networks, and cannot be used to update the representation of the neural networks. While the adversarial learning based method could do end to end training.\\n\\nIt would be great if the authors could clarify the setup of the baseline methods(e.g. Whether the baseline methods also take benefit of imagenet dataset, and is trained end to end).\\n\\nWhat will happen if you do not have the imagenet models and have to train all the models from scratch?\\n\\nOverall I think it is a borderline paper that might be interesting to some audiences in the conference.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper introduces a method for domain adaptation, where each domain has noisy examples. Their method is based on a decision tree in which the data at each node are split into equal sizes while maximizing the\\ninformation gain. They also proposed a way to reduce domain alignment. Their method is tested on several noisy domain adaptation settings and performs better than other baseline methods.\", \"pros\": \"Their idea to utilize a decision tree for domain adaptation sounds novel. \\nExperiments indicate the effectiveness of their method.\", \"cons\": \"This paper is not well-written and has many unclear parts. \\n1, The presentation of the problem set is unclear throughout this paper. In the abstract, they mentioned that they tackle the situation where both source and target domains contain noisy examples. However, they did not define the exact problem setting in any section. I could not understand what kind of problem setting motivated their method, which makes it hard to understand their method. \\n2, How they actually optimized the model is also unclear. From Eq 1~4, it is hard to grasp how they trained the model. \\n3, In open-set domain adaptation, simply minimizing domain-distance can harm the performance. How does the method avoid this issue? It was also unclear. \\n4, Experimental setting seems to be wrong and unclear. In Openset1, they say that \\\"The labels from 1 to 10 of both source and target domains are marked as the known class, and all data with label 11\\u223c20 in the source domain and label 21\\u223c31 in the\\ntarget domain are used as one unknown class\\\". However, Saito et al. (2018) used 21-31 classes in the target domain as one unknown class. In addition, \\\"According to Saito et al. (2018) the target data of the unknown class is not used in training, \\\", they used the 21-31 classes for training in an unsupervised way. How is this method used to detect unknown class? Is there any threshold value set for it?\\n5, The experimental setting is unclear. In 4.4, \\\", we use only 10% of training samples\\\", does it mean 10 % training source examples or target examples? This setting is also unclear. \\n\\nFrom the cons written above, this paper has too many unclear parts in the experiments and method section. I cannot say the result is reproducible given the content of the paper and the result is a reliable one. They need to present more carefully designed experiments.\"}"
]
} |
HkgR8erKwB | PAC-Bayesian Neural Network Bounds | [
"Yossi Adi",
"Alex Schwing",
"Tamir Hazan"
] | Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to `effortlessly' extract desired representations from many large-scale datasets. However, generalization bounds for this setting are still missing.
In this paper, we present a new PAC-Bayesian generalization bound for the negative log-likelihood loss, which utilizes the \emph{Herbst Argument} for the log-Sobolev inequality to bound the moment generating function of the learner's risk. | [
"PAC-Bayesian bounds",
"PAC-Bayes",
"Generalization bounds",
"Bayesian inference"
] | Reject | https://openreview.net/pdf?id=HkgR8erKwB | https://openreview.net/forum?id=HkgR8erKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Yg3ZN-5cc",
"r1em1aSnsS",
"rkxF8d2tsB",
"r1llUSjHoS",
"rkg-AVjBiS",
"BJggBmjHsH",
"Skxxg4nDqH",
"rkgITCHrqS",
"Bye8oCvAKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746710,
1573833947184,
1573664849006,
1573397831559,
1573397704585,
1573397303686,
1572484072396,
1572327101611,
1571876509956
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2343/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2343/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2343/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2343/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2343/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2343/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2343/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2343/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes PAC_Bayesian bounds for negative log-likelihood loss function. A few reviewers raised concerns around 1) distinguish their contributions better from prior work (eg Alquier). 2) confounders in their experiments. Both reviewers agreed that the paper, as it is written, does not provide sufficient evidence of significance. In addition, experiments shown in the paper varies two things - # parameters (therefore expressiveness and potential generalizability) and depth at each setting. As pointed out, this isn\\u2019t right - in order to capture the effect, one has to control for all confounders carefully. Another concerned raised were around Theorem 2 - that it contains data-distribution on the right hand side, which isn\\u2019t all that useful to calculate generalization bounds (we don\\u2019t have access to the distribution). We highly encourage authors to take another cycle of edits to better distinguish their work from others before future submissions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"We would like to thank you for clarifying your concern.\\nThe distribution dependence concern can be mitigated by standard concentration of measure bounds (e.g., Chebyshev's inequality), while assuming that the variance of the gradient norm is bounded (which holds since the gradients decay very fast).\\nRegarding the experiments. The MGF is correlated with the gradients norm (fig. 1a) which may be correlated with the depth, depends on model architecture and the chosen transfer functions. However, we would like to clarify that the bound (computed in Table 2) is both the expected gradient norm and the KL term, hence, there is an interesting balance between both of them which causing lower generalization bounds for deeper models.\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Thank you for your response.\\n\\nI'm not convinced by your argument about the distribution dependence. I believe it is OK to use the training set information on the right hand side but since we don't have access to the distribution, it does not make sense to have distribution dependence on the right hand side.\", \"about_the_depth_experiments\": \"My observation is that your theoretical bound is highly correlated with depth and it is not clear if the correlation with generalization is because of that or not. In particular, in Table 2, your bound is always lower for deeper networks; however, it doesn't always give the right order for generalization. For example, four layer neural nets perform worse than three layer ones on MNIST and Boston but your bound is lower for four layer networks.\"}",
"{\"title\": \"Authors response\", \"comment\": \"We thank the reviewer for taking the time to review our paper.\\nRegarding the Gaussian/log-concave assumption: \\nWe believe such an assumption is reasonable for various input domains (especially compared to the assumptions in previous work, such as linearity, convexity, bounded loss, etc.).\", \"regarding_the_connection_between_the_bound_and_the_results\": \"The premise is that using the proposed bound we can choose better prior distribution over the weights, which results in better posterior distribution over the weights which translates to better generalization and better uncertainty estimates.\"}",
"{\"title\": \"Authors response\", \"comment\": \"Regarding the significance concern:\\nWe kindly disagree with the reviewer that the classification loss is always bounded. While the zero-one loss function is bounded, it cannot be used for training. The hinge-loss can be bounded under the assumption of bounded inputs and Lipschitz function corresponding to the model.\\nWhile computing the Lipschitz constant in NP-Hard for deep nets (https://arxiv.org/pdf/1805.10965.pdf), The Lipschitz constant increases in the depth of the network, while our bound holds for unbounded input and non-Lipschitz functions, and decreases in the depth of the network. Hence, Lipschitz-type bounds are very crude, e.g., they do not distinguish between different activation functions with the same Lipschitz constant. For example, our bound achieves fast-rate (1/m) for deep networks, and as far as we know this is the only bound that achieves it under this setting.\", \"regarding_previous_work_for_unbounded_loss_functions\": \"We kindly ask the reviewer to clarify. As far as we know, the other results for unbounded loss functions consider linear models, while our bound considers deep networks, which is certainly a significant difference. Can you please elaborate on the difference, which we might overlooked. We also point out that although our treatment holds for any (almost anywhere) smooth loss function, the NLL loss is the most widely used loss these days, and this extension only is probably of interest to the community.\", \"regarding_the_herbst_argument\": \"To the best of our knowledge, the Herbst argument was never used in PAC-Bayesian setting (nor in any generalization bound). Moreover, our use of the Herbst argument is different than previous works in functional analysis (e.g., Gross 1975, Ledoux 2001) that use it for Lipschitz functions, while our work use it for *non-Lipschtiz* functions.\", \"regarding_the_experiments\": \"We agree with the reviewer that depth is not the only parameter that is being changed, however, since the goal of this experiment was to explore the effect of the new MGF bound (the complexity of the model), which is dominated by the norm of the gradients w.r.t the input to the model, keeping the number of model parameters roughly the same is essential in order to create comparable KL values between the models. Increasing the number of parameters may cause higher KL values which will greatly affect the generalization bound.\\nRegarding the repeated experiments, we will add mean and std measures for all experiments in the final manuescript.\", \"regarding_writing_and_title\": \"We will clarify all above comments for the final manuscript\"}",
"{\"title\": \"Authors response\", \"comment\": \"Regarding comparison with Alquier et al. (2016):\\nWe kindly disagree with this assessment, which treats deep networks as linear models. Alquier et al. proved their bound for the hinge loss for the linear case which does not hold for deep networks. Our bound holds for any (almost everywhere) smooth loss function, which includes both linear models and deep neural models with the hinge loss. To better emphasize the above point, we presented the applicability of our bound to the linear case of the NLL loss function, which is the logistic regression case. We were focused on the NLL loss function for deep networks due to its extensively usage nowadays. Moreover, our bound achieves fast-rates (1/m), as presented in Figure 1(b), for deep networks while maintaining fixed prior variance, in contrast to Alquier et al.\", \"regarding_theorem_2_format\": \"Traditionally, this is the view when presenting a generalization bound, a view which we also shared for a long time. Over time we came to the conclusion that this limits our understanding of Bayeisan deep networks. Our bound utilizes the distribution D and the prior distribution over the weights p, both distributions do not rely on the training data S. \\n\\nRegarding why inequality (1) is better than inequality (2):\\nWe appreciate the reviewer allowed a specific example to emphasize our contribution. Inequality (1) cannot be computed since it requires to sample training set S multiple times and evaluate the Moment Generating Function (MGF) with respect to it. Since S dimension is about 60k (MNIST/Fashion-MNIST/CIFAR), one would need an infinite amount of time to estimate the MGF. \\nInstead, one can use the independence assumption of S to decompose the MGF (similar to Equation (12) in the appendix). We tried to go this path, and since it estimates the expectation of an exponential function, it reaches infinity (nan) for lambda that is approximately ~15-20 on MNIST (recall, m=60k), i.e., the rate of this approach is much lower than the conventional $\\\\sqrt{m}$. In contrast, since our bound relies on gradients in the log-domain, we can show fast-rate bound (i.e., lambda = 60k) when the network turns deeper (see Figure 1b).\", \"regarding_the_depth_experiments\": \"It is not true that deeper networks generalize better in all cases. The generalization bound consists of two terms, the complexity of the model (our new MGF bound, which is dominated by the norm of the gradients) and the complexity of the learning (which is controlled by the KL-divergence). The goal of this experiment was to explore the effect of the new MGF bound (the complexity of the model). Hence, we kept the number of model parameters roughly the same in order to create comparable KL values between the models. Increasing the number of parameters may cause higher KL values which will greatly affect the generalization bound.\\nWe agree that if we keep the KL-divergence fixed, then the MGF decreases with the depth, but this varies with the components that are being used (convolutional layers, skip-connections, fully connected, etc.). According to Figure 1(a), we see that it is not the depth that matters but the norm of the gradient w.r.t the input, which is exactly what\\u2019s dominates the MGF bound.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper suggests a PAC-Bayesian bound for negative log-likelihood loss function. Many PAC-Bayesian bounds are provided for bounded loss functions but as authors point out, Alquier et al. (2016) and Germain et al. (2016) extend them to unbounded loss functions. I have two major concerns regarding this paper:\", \"1__technical_contribution\": \"Since Alquier et al. (2016) has already introduced PAC-Bayesian bounds for the hinge-loss, I think the technical contributions of this paper is not significant enough for the publication. Moreover, the particular format of the bound in Theorem 2 is problematic since the right hand side depends on the data-distribution. When presenting the generalization bound, we really want the right hand side to be independent of the distribution (given the training set) and that is the whole point of calculating the generalization bounds. In particular, I don't see why inequality (1) is any better than inequality (2).\", \"2__experiments\": \"The main issue with the correlation analysis done in Section 6 is that authors only change depth of the networks and then check the correlation of the generalization bound to the test error. The problem is that in all those networks deeper ones generalize better so it is not clear that the correlation is due to a direct relationship to generalization or a direct relationship to depth. For example, if we take 1/depth as a measure, it would correlate very well with generalization in all these experiments but 1/depth is definitely not the right complexity measure or generalization bound. To improve the evaluation, I suggest varying more hyperparameters to avoid the above issue.\\n\\n\\n***************************\", \"after_rebuttals\": \"Unfortunately, my concerns are not addressed adequately by authors. Therefore, my evaluation remains the same.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"This is a minor issue but worth mentioning. The title is vague and confusing. In the first read one might think this paper provides PAC-Bayesian bounds for usual NNs (which has been considered and written about many times in the literature). The authors should mention that the considered networks are Bayesian NNs.\", \"review\": \"Summary:\\nThis paper proposes a PAC-Bayesian generalization bound for Bayesian neural networks. The author discuss that earlier generalization bounds hold for bounded loss function, whereas the proposed generalization bound considers negative log likelihood loss which can be unbounded. Therefore, earlier results do not hold here. \\n\\nThe technical approach used is along lines of PAC Bayes framework and specifically for this loss function which requires bounding the log-partition function, the authors follow the Herbst Argument for bounding the log-partition function.\", \"contribution\": \"The paper uses straight forward PAC Bayes approach and the only bottleneck is bounding the log-partition function for which the authors use an earlier result (Herbst Argument for bounding the log-partition function. )\", \"significance\": \"My biggest concern with this work is its significance. As we know classification loss is bounded and for regression loss, as long as we have bounded input and a Lipschitz function corresponding to the NN, the output is bounded. Also as authors mention, there have been two earlier results covering other unbounded loss functions. Therefore, I do not feel that extending those results to NLL is a good enough contribution. Especially since the extension uses a known approach (Herbst Argument for bounding the log-partition function. )\", \"experiments\": [\"From the explanations, it seems each architecture is trained once, which is not acceptable. How can one refute the effect of a specific initial value? A good scientific practice entails having mean and variance bar for different values or at least repeating the experiment multiple times and reporting the avg.\", \"According to the paper, the architectures in table 2, fig 1 are made by keeping the number of parameters roughly the same. Then the authors increase the depth. Note that to keep the # of parameters the same, they have to decrease the width as they increase the depth. Therefore, this cannot be a correct analysis of effect of the depth. As depth is not the only parameter that is changed between the architectures.\"], \"writing\": \"The writing is overall ok which some vague places such as \\n*first page, last paragraph, line 1: \\\".. our PAC Bayesian bound is tightest when we learn...\\\" the authors do not discuss what is the space of options for the bound and only mention the case when it is tightest. Therefore the claim is confusing\\n*first page, last paragraph, last line: \\\"..better uncertainty than the baseline\\\". The authors do not specify the baseline which makes this whole claim vague.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper offers PAC-generalization bounds for Bayesian Neural Networks based on a previous result by Alquier et al. (Theorem 1) which connects the generalization gap to the log partition function of the same gap for the prior distribution on the learned parameters (which is identical to the ELBO bound used in Bayesian neural networks for NLL loss). Due to the fact that the optimal bound occurs for the true posterior, the PAC-bayesian bounds offer a novel interpretation as an objective for BNNs.\\n\\nThe authors note that the log partition function can in general be easily unbounded for loss functions based on NLL (as in the BNN case); their result shows that if the norm of the gradient is bounded, that is enough to bound the overall generalization gap. \\n\\nWhile this appears to be a technically impressive feat, the the assumptions involved in Theorem 2 seem significant (probably unavoidable for a theoretically tractable statement). Primarily, the conditional of x given y is Gaussian/log-concave (or at least unimodal, more generally ) but the motivation is based on deep neural networks (for why the gradient is bounded). \\n\\nThe authors also specialize their bound to the case of logistic regression. Interestingly, the gap in this case has an additive term proportional to the product of the label cardinality and the input dimension (I'm not sure whether how significant this is in terms of tightness).\\n\\n In experiments, the authors explore and analyze the tightness of the proposed bounds for various hyperparameters like the variance of the weights prior.\\n\\nThey also perform an exhaustive comparison of the BNN models against non-bayesian alternatives, but it is not clear how the new contributions from the generalization bounds are relevant to the results in, say Section 6.2\"}"
]
} |
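The exchange above keeps returning to the generic PAC-Bayes template: expected loss bounded by empirical loss plus (KL divergence + log(1/delta) + log-MGF term) divided by lambda. As a minimal sketch of how such a bound is evaluated numerically for a diagonal-Gaussian posterior and prior, consider the Python snippet below; the function names and the `mgf_term` placeholder (standing in for the gradient-norm-based log-MGF bound the authors describe) are illustrative assumptions, not the paper's actual theorem.

```python
import math
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL(q || p) between diagonal Gaussians, summed over all weight dimensions.
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    return 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0).sum()

def pac_bayes_bound(emp_nll, mu_q, logvar_q, mu_p, logvar_p, lam, mgf_term, delta=0.05):
    # Alquier-style template: empirical risk + (KL + log(1/delta) + log-MGF) / lambda.
    # `mgf_term` is assumed to be given; it stands in for the model-complexity term
    # driven by gradient norms. Choosing lambda = m is the fast-rate (1/m) setting
    # discussed in the author responses.
    kl = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
    return emp_nll + (kl + math.log(1.0 / delta) + mgf_term) / lam
```

Seen through this shape, the depth experiments debated above amount to holding the KL term roughly constant (fixed parameter count) while varying the gradient-norm-driven `mgf_term`.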
SyeRIgBYDB | Semi-Implicit Back Propagation | [
"Ren Liu",
"Xiaoqun Zhang"
Neural networks have attracted great attention for a long time, and many researchers are devoted to improving the effectiveness of neural network training algorithms. Though stochastic gradient descent (SGD) and other explicit gradient-based methods are widely adopted, there are still many challenges, such as gradient vanishing and small step sizes, which lead to slow convergence and instability of SGD algorithms. Motivated by error back propagation (BP) and proximal methods, we propose a semi-implicit back propagation method for neural network training. Similar to BP, the differences on the neurons are propagated in a backward fashion and the parameters are updated with proximal mappings. The implicit update for both hidden neurons and parameters allows choosing large step sizes in the training algorithm. Finally, we also show that any fixed point of convergent sequences produced by this algorithm is a stationary point of the objective loss function. Experiments on both MNIST and CIFAR-10 demonstrate that the proposed semi-implicit BP algorithm leads to better performance in terms of both loss decrease and training/validation accuracy, compared to SGD and a similar algorithm, ProxBP. | [
"Optimization",
"Neural Network",
"Proximal mapping",
"Back propagation",
"Implicit"
] | Reject | https://openreview.net/pdf?id=SyeRIgBYDB | https://openreview.net/forum?id=SyeRIgBYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"65qJOnY3zm",
"BJgOydfpFH",
"SyeIeeDnKH",
"rkg4bK5jYr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746679,
1571788768237,
1571741677935,
1571690748484
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2342/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2342/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2342/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers equivocally reject the paper, which is mostly experimental and the results of which are limited. The authors do not react to the reviewers' comments.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an implicit update scheme for the back propagation a anlgorithm.\\nThe idea is quite simple and is based on proximal mappings that lead to implicit update.\\nSpecifically, every update in the back propagation algorithm is being replaced by an implicit update except for the intermediate parameters that receive a \\\"semi-implicit\\\" update.\\n\\nThe idea is reasonable and seems to lead to good performance. This is more-or-less expected thanks to the superior performance of implicit updates in general. So, it's good that the authors could make this work in the context of deep nets as well. Here are some more critical thoughts about the paper:\\n\\n\\n1) There is not much theoretical justification about the idea in the paper. Proposition 1 is a simple argument about the fixed point of the procedure. The argument could be made more rigorous, right now it is a bit of a sketch. \\nApart from Proposition 1, there is no more theory offered. The authors could appeal in the theory of implicit SGD for that, e.g., [1,2,3,4]. This theory suggests a lot of stability properties for the implicit SGD update of Equation (27).\\n\\n2) Somewhat related to (1), the authors could make a more clear connection to prior work. \\nFor example, there is not mention of a very similar idea of \\\"implicit back propagation\\\" [5]. \\nAlso the literature in implicit SGD procedures is highly relevant.\\n\\n3) Can we explain the results in Table 1 theoretically?\\n\\n\\n\\n[1] Bertsekas, \\\"Incremental proximal methods for large scale convex optimization. Mathematical\\nprogramming\\\", 2011\\n[2] Kulis and Bartlett, \\\"Implicit online learning\\\", 2010\\n[3] Toulis and Airoldi, \\\"Asymptotic and finite-sample properties of estimators\\nbased on stochastic gradients\\\", 2017\\n[4] Toulis, Airoldi, Rennie, \\\"Statistical analysis of stochastic gradient methods for generalized linear models\\\", 2014\\n[5] Fagan and Iyengar, \\\"Robust Implicit Backpropagation\\\", 2018\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduces a novel algorithm for computing update directions for neural network's weights.\\nThe algorithm consists of the modified backpropagation procedure where a layer's error is computed using implicitly-updated weights.\\n\\nThe proposed idea is interesting, but its presentation and evaluation could be significantly improved.\\nFirst, it is not very clear what motivates the exact form of Semi-implicit BP.\\nSecond, I find the notation a bit cumbersome, especially intermediate ^{k+1/2} updates. \\nI also suspect that eq. 9 contains an error, probably the l.h.s. of the first part should be \\\\delta_i^{k+1}?\\nAlgorithm 1 is not very helpful because only the forward pass is explained in detail and a reader must refer to the main text to understand the backward pass.\\n\\nThe experimental evaluation also raises a number of questions.\\n1) Why did authors chose only one value of the learning rate and the lambda hyperparameter? A more appropriate comparison would require slightly more extensive hyperparameter search as it may well be that ProxBP would work better with different values.\\n2) It is also unclear if 5 CG iterations is enough to solve the intermediate problem. Also all convergence guarantees are only provided for the exact implicit update, so at least one should ensure it is computed well enough.\\n3) Isn't it suspucious that ProxBP performed so bad compared to other methods on MNIST? \\n4) Should not the set of baselines include more advanced optimizers such as RMSProp, Adam etc? They don't seem to add more computational burden than Semi-implicit BP.\\n\\nI also think it is worth discussing/investigating if the obtained update directions can be used in other gradient-based optimizers instead of pure gradients and if it can have any advantages.\\n\\nOverall, it does not feel to me that the paper is ready for publication.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The work is based on the recent paper 'Proximal Backpropagation', Frerix et al., ICLR 2018, which views error back propagation as block coordinate gradient descent steps on a penalty functional comprised of activations and parameters.\\n\\nInstead of taking proximal steps on the linear layers as in (Frerix et al., 2018), the authors also pull the non-linearity \\\\sigma into the proximal steps. Another interesting deviation is the idea to consider the newly updated weights {W_i}^{k+1} when updating the activations F_i^{k+1} in the backward pass.\\n\\nWhile potentially offering a faster convergence with respect to epochs, the nonlinear updates have two major drawbacks:\\n\\n1) While there are preliminary theoretical results (fixed points of the method are critical points), it remains unclear whether the computed update is still a descent direction on the original energy. While not crucial, such a result would be reassuring and might give further insights into the method.\\n\\n2) Each update requires the solution of a nonconvex, nonlinear least squares problem which is prohibitively expensive to solve. Note that such nonlinear least squares updates are already proposed in (Carreira-Perpinan & Wang, 2014). When using ReLU activation, the non smoothness might be an issue for standard nonlinear least squares solvers such as Levenberg-Marquadt. \\n\\nFurthermore, the numerical results are unfortunately a bit discouraging. The experiments evaluate toy models on toy datasets and even there only a minor improvement with respect to epochs over SGD and Prox-BP is shown. Furthermore, the plots only consider epochs and not the running time. Due to the non-linear least squares problem, I assume that each epoch for the proposed method is way more costly. Therefore I consider the experimental evaluation too preliminary. A proper evaluation would require an implementation as an optimizer in state-of-the-art deep learning frameworks and a comparison with respect to running time to standard optimizers such as SGD with momentum or Adam on the GPU. \\n\\nThe reported performances for MNIST are surprisingly poor. Note that vanilla SGD with momentum reaches ~98.6% test set performance on such an architecture, while the overall highest reported accuracy in this paper is 98.0%. This might be due to momentum, and it would be interesting whether the proposed method could be combined with momentum or other optimizers such as Adam as in (Frerix et al. 2018).\\n\\nOverall, I don't see this a practical algorithm for training deep networks and there are few theoretical results. Therefore, I cannot recommend acceptance at this stage. \\n\\nTo improve the paper, I would like to see an implementation of the method on the GPU in a recent deep learning framework and an evaluation on larger models / datasets. But I am doubtful this will reach competitive performance to standard optimizers. Also, it would be interesting to see how the precision in the inner nonlinear conjugate gradient solver effects outer convergence. 
It might be that the subproblem does not have to be solved with very high accuracy.\", \"minor_comments\": [\"Missing citation: 'Difference Target Propagation', https://arxiv.org/abs/1412.7525, studies a similar type of algorithm.\"]}"
]
} |
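All three reviews contrast explicit SGD steps with the implicit (proximal) updates at the heart of Semi-Implicit BP and ProxBP. As a reference point for that discussion, here is a minimal Python sketch of a generic proximal step approximated by a few inner gradient iterations; the helper name, the inner-solver settings, and the use of plain SGD inside are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def proximal_step(w, subproblem_loss, lam=1.0, inner_steps=5, inner_lr=0.1):
    """Approximately solve  w+ = argmin_v  subproblem_loss(v) + ||v - w||^2 / (2*lam).

    An explicit SGD step linearizes the loss around w; the implicit (proximal)
    step instead solves this regularized subproblem, which is what permits the
    larger step sizes lam that implicit schemes are known for.
    """
    v = w.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([v], lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        obj = subproblem_loss(v) + ((v - w.detach()) ** 2).sum() / (2.0 * lam)
        obj.backward()
        opt.step()
    return v.detach()
```

For linear layers the subproblem is quadratic and can be solved with conjugate gradients (the "5 CG iterations" Reviewer #2 asks about); once the nonlinearity is pulled inside, it becomes the nonconvex nonlinear least-squares problem Reviewer #1 flags as expensive.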
ByxaUgrFvH | Mutual Information Gradient Estimation for Representation Learning | [
"Liangjian Wen",
"Yiji Zhou",
"Lirong He",
"Mingyuan Zhou",
"Zenglin Xu"
Mutual Information (MI) plays an important role in representation learning. However, MI is unfortunately intractable in continuous and high-dimensional settings. Recent advances establish tractable and scalable MI estimators to discover useful representations. However, most of the existing methods are not capable of providing an accurate estimation of MI with low variance when the MI is large. We argue that directly estimating the gradients of MI is more appealing for representation learning than estimating MI itself. To this end, we propose the Mutual Information Gradient Estimator (MIGE) for representation learning based on the score estimation of implicit distributions. MIGE exhibits a tight and smooth gradient estimation of MI in the high-dimensional and large-MI settings. We expand the applications of MIGE to both unsupervised learning of deep representations based on InfoMax and the Information Bottleneck method. Experimental results have indicated significant performance improvements in learning useful representations. | [
"Mutual Information",
"Score Estimation",
"Representation Learning",
"Information Bottleneck"
] | Accept (Poster) | https://openreview.net/pdf?id=ByxaUgrFvH | https://openreview.net/forum?id=ByxaUgrFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YsrnjBMa3",
"HyeJ_8BnoB",
"rJxEhSHhoH",
"r1lRPLxjjr",
"rylHOHJsiS",
"H1ezeV1siH",
"BylD9OC9oS",
"B1glaFo5jS",
"B1xOHQzqiH",
"HyliJBZciS",
"HkliG6Ktor",
"HygathtKor",
"r1gDQ2YKiB",
"rkePFjKYor",
"SyeM9cFYjB",
"SJx1qthYqB",
"BklSOaCu9S",
"B1gYH_Xuqr",
"BylvZbsnFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746642,
1573832295283,
1573832107667,
1573746278377,
1573741933407,
1573741546344,
1573738638523,
1573726648124,
1573688127551,
1573684450820,
1573653779235,
1573653637206,
1573653535263,
1573653375149,
1573653130291,
1572616583491,
1572560236824,
1572513856900,
1571758334541
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2341/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes the Mutual Information Gradient Estimator (MIGE) for estimating the gradient of the mutual information (MI), instead of calculating it directly. To build a tractable approximation to the gradient of MI, the authors make use of Stein's estimator followed by a random projection. The authors empirically evaluate the performance on representation learning tasks and show benefits over prior MI estimation methods.\\nThe reviewers agree that the problem is important and challenging, and that the proposed approach is novel and principled. While there were some concerns about the empirical evaluation, most of the issues were addressed during the discussion phase. I will hence recommend acceptance of this paper. We ask the authors to update the manuscript as discussed.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for recommence.\\nWe will add the statement related to [1] to the appendix in the revision.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your response\\n\\n>> Q1Q2\\nEven some empirical statistics would suffice. I find it misleading to plot only one realization of the gradient estimate. How about plotting 10000 realizations and calculate the variance per value of \\\\rho? If that number is lower than MINE, it would strengthen your argument.\\n\\nR\\uff1aThank you for your constructive suggestion.We agree that the method of quantitive measures of proposed by reviewer is effective to strengthen our argument. Due to the limited response time, we will add the quantitive measures for tightness and variance of MIGE in the camera-ready.\\n\\n\\n>> Q3\\nOk. Then it\\u2019s a fair comparison. If you used their code in that way, please consider citing it as such.\\n\\nR\\uff1aYes, of course.\\n\\n>> Q4\\nOk. Do you plan to include that table in the revision?\\nYes, of course.\\n\\n>> Q5\\nIn other comments in this thread you mentioned the concurrent work [1]. How does [1] relate to the statement you make in the paper? \\nWe will add the statement related to [1] to the appendix\\n\\n>> Q6Q8Q9\\nUnderstood. Please include in the revised version\\n\\nR\\uff1aYes, of course.\\n\\n[1] https://openreview.net/forum?id=rkxoh24FPH\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for responding\\n\\n>> On the copy-editor. I don\\u2019t think hiring a copy editor is necessary. I recommend some off-the-shelf grammar and spelling checkers. Also, having the paper proofread by colleagues generally helps. They would point out missing definitions of variables, like \\u2018AnonReviewer2\\u2019 pointed out.\\n\\n>> Q1Q2\\nEven some empirical statistics would suffice. I find it misleading to plot only one realization of the gradient estimate. How about plotting 10000 realizations and calculate the variance per value of \\\\rho? If that number is lower than MINE, it would strengthen your argument.\\n\\n>> Q3\\nOk. Then it\\u2019s a fair comparison. If you used their code in that way, please consider citing it as such.\\n\\n>> Q4\\nOk. Do you plan to include that table in the revision?\\n\\n>> Q5\\nIn other comments in this thread you mentioned the concurrent work [1]. How does [1] relate to the statement you make in the paper? \\n\\n>> Q6Q8Q9\\nUnderstood. Please include in the revised version\\n\\n[1] https://openreview.net/forum?id=rkxoh24FPH\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the response. It would be great if you could add the DIM(L) results to the paper and discuss the issues, e.g. in the appendix.\\n\\nYour paper is a piece of some really solid work. I strongly recommend acceptance!\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"R1 : We will revise our paper to include your valuable comments.\", \"r2\": \"Note the performace of Random Project in different layers is different. And the perfomace of the last layer is quite invariant with different RP dimensons. We will conduct more extensive expreiments with different dimensions, and will show the trend curve in the camera-ready upon acceptance.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"\", \"r1\": \"We have conducted experiments of applying MIGE for DIM(L) in the CIFAR dataset and we show the results in the following Table. Surprisingly, there is a significant gap to DIM(L). To analyze this result, we find during training of CIFAR-10, the testing accuracy gets stable after reaching over 50%, while the training accuracy soon reaches 99%. This may suggest that some regularization technique may be needed to the gradient estimation of DIM(L).\\n\\nTo our knowledge, the principle of DIM(L) is still unclear. As argued in [2], the success of these methods cannot be attributed to the properties of MI alone, and they strongly depend on the inductive bias in both the choice of feature extractor architectures and the parameterization of the employed MI estimators. \\n\\u00a0\\nFor MIGE, we are investigating the behind reason, e.g., to investigate the distribution of the patches.\\n\\n.---------------------------------------------------------------------------------------------------\\n.\\t\\t\\t\\t\\t\\t\\tCIFAR10\\t\\t\\t\\t\\tCIFAR100\\n.\\t\\t\\t\\t\\tconv\\tfc\\t\\tY\\t\\tconv\\tfc\\t\\tY\\n.---------------------------------------------------------------------------------------------------\\n.DIM(L)(JSD)\\t\\t\\t72.16\\t67.99\\t66.35\\t41.65\\t39.60\\t39.66\\n.DIM(L)(JSD+PM)\\t\\t73.25\\t73.62\\t66.96\\t48.13\\t45.92\\t39.60\\n.DIM(L)(infoNCE)\\t\\t75.05\\t70.68\\t69.24\\t44.11\\t42.97\\t42.74\\n.DIM(L)(infoNCE+PM)\\t75.21\\t75.57\\t69.13\\t49.74\\t47.72\\t41.61\\n.---------------------------------------------------------------------------------------------------\\n.MIGE(L)\\t\\t\\t59.72\\t56.14\\t54.01\\t30.00\\t28.96\\t27.65\\n.---------------------------------------------------------------------------------------------------\", \"r2\": \"Thanks for providing the insight. We can use MINE (as used in [1], and MINE is better than InfoNCE due to the high bias of InforNCE) for estimating MI stored in representations. Due to the limited response time, we will add the metric in the camera-ready.\"}",
"{\"title\": \"Response to Authors\", \"comment\": \"\\\"For experiments, we follow the experiments of Deep InfoMax and Information Bottleneck to set the experimental setup as in [1, 2], and we also refer to their source code [3, 4]. Under these experimental settings, we use our MI Gradient Estimator to replace the MI estimator in Deep InfoMax and Information Bottleneck\\\"\\n\\nThanks - Please clarify this in the main text. Also i just looked over the PDF again and i could not find a link to the code you refer to?\\n\\n\\\"For definitions of q(z)_\\\\psi(z) and q(x,z)_\\\\psi, q(z)_\\\\psi(z) corresponds to the distribution of representation of x via the encoder E_\\\\psi as described in Equation (11). As mentioned in our paper, \\\"we can obtain the samples from the marginal distribution of z by pushing samples from the data empirical distribution p(x) through E\\\\psi(.) for representation learning.\\\" q_\\\\psi is an implicit distribution determined by the encoder parameters \\\\psi. q(x,z)_\\\\psi is the joint distribution of (x,z), which is determined by the encoder parameters \\\\psi.\\\"\\n\\nAgain thanks for the clarification - I would strongly encourage you to include this in the main text as well as it is not clear how E_psi, q_psi etc is related.\\n\\nThanks for providing the STL results - Can you give any intuition for why you see decreased performance when increasing the dimensionality of RP?\"}",
"{\"title\": \"proofreading\", \"comment\": \"> We will carefully revise our manuscript and hire professional copy editors to proofread our paper.\\n\\nThanks; I don't think professional service is necessary. Please just use spell-check and have someone proofread the paper.\"}",
"{\"title\": \"response\", \"comment\": \"> In addition, we note that the literature has argued that maximizing tighter bounds in DIM(L) leads to worse results [2].\\n\\nThe above might be true for a specific setting when a more expressive critic is used to estimate the InfoNCE bound but has no significance to the paper. As you point out yourself, you are not estimating MI, but only its gradient. If using MIGE for DIM(L) leads to worse results than in the original paper, then it should be clearly stated and discussed -- it will not diminish the value of this paper. \\n\\n> We cannot provide the value of MI to evaluate the representation, because MIGE directly estimates the gradient of MI to optimize MI, rather than estimating the value of MI.\\n\\nWhat you can do is to train a separate critic to estimate InfoNCE as in [2], and evaluate InfoNCE with a large batch size. This result would be very helpful as it would shed light on how much information is actually stored in representations learned with MIGE.\\n\\nThanks for the experiments on STL-10 and further clarification. I'm hoping to read a response to the above comments before the end of the discussion period.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for acknowledging that our idea is interesting and the results are encouraging. We will revise our paper according to the comments and hire a copy-editor to carefully polish the writing.\", \"q1\": \"No comparison on downstream tasks for more datasets except MNIST. In the end, a key question is a final accuracy on different datasets and how to maximize the information effect on it.\", \"r\": \"It is possible to apply RP for MINE. However, due to the dependence on the discriminator used to estimate the mutual information, RP+MINE may lead to results with high variance. And we will investigate this issue in the future work.\", \"q2\": \"There is no discussion about the effect of the random projection on the representation. For example, how it affects performance? How much the algorithm sensitive to this projection?\", \"table\": \"Classification accuracy (top 1) results on STL-10.\\nRP denotes Random Projection.\\n.----------------------------------------------------------------\\n.\\t\\t\\t\\t\\t\\t\\tSTL-10\\t\\n.\\t\\t\\t\\t\\tconv\\tfc\\t\\tY\\t\\n.----------------------------------------------------------------\\n.DIM(JSD)\\t\\t\\t42.03\\t30.28\\t28.09\\t\\n.DIM(infoNCE)\\t\\t43.13\\t35.80\\t34.44\\t\\n.----------------------------------------------------------------\\n.MIGE\\t\\t\\tunaffordable computational cost\\t\\n.MIGE+RP to 1024d\\t49.08\\t40.09\\t38.95\\t\\n.MIGE+RP to 512d\\t49.89\\t41.05\\t38.56\\t\\n.MIGE+RP to 256d\\t49.91\\t40.24\\t38.83\\t\\n.----------------------------------------------------------------\", \"q3\": \"What is the performance of the MINE if it combined with random projection\"}",
"{\"title\": \"Response to Reviewer 3 (2)\", \"comment\": \"\", \"q4\": \"One contribution of the paper is to make the gradient estimator work in high dimensions. To this end, the authors propose Random Projections. It is not clear how this approximation influences the results. An experiment regarding this topic would make the point clearer. Is RP used in the current experiments? Then how does this influence the results? Is the RP used for computational purposes? Then can we quantify the gain in computation?\", \"r\": \"We use the same setting except that the initial learning rate of 2e-4 was set for Adam optimizer, and exponential decay with decaying rate by a factor of 0.96 was set for every 2 epochs.\\n\\n[1] Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. Mine: mutual information neural estimation. arXiv preprint arXiv:1801.04062, ICML, 2018.\\n[2] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019.\\n[3] Ben Poole, Sherjil Ozair, Aaron van den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In International Conference on Machine Learning, 2019.\", \"table\": \"Classification accuracy (top 1) results on STL-10.\\nRP denotes Random Projection.\\n.----------------------------------------------------------------\\n.\\t\\t\\t\\t\\t\\t\\tSTL-10\\t\\n.\\t\\t\\t\\t\\tconv\\tfc\\t\\tY\\t\\n.----------------------------------------------------------------\\n.DIM(JSD)\\t\\t\\t42.03\\t30.28\\t28.09\\t\\n.DIM(infoNCE)\\t\\t43.13\\t35.80\\t34.44\\t\\n.----------------------------------------------------------------\\n.MIGE\\t\\t\\tunaffordable computational cost\\t\\n.MIGE+RP to 1024d\\t49.08\\t40.09\\t38.95\\t\\n.MIGE+RP to 512d\\t49.89\\t41.05\\t38.56\\t\\n.MIGE+RP to 256d\\t49.91\\t40.24\\t38.83\\t\\n.----------------------------------------------------------------\", \"q5\": \"In practice, we do not care about MI estimation\\u2019. Please explain further or refer to previous work. *\", \"q6\": \"Two points 1. The information \\u2018between z and z\\u2019 is probably a typo? 2. How does sufficiency relate to an optimization problem? Doesn\\u2019t sufficiency mean in this context I(X;Y)=I(Z;Y)?\", \"q7\": \"Equation 12: In the part . Why do we take gradient w.r.t. x? It seems to me that the reparametrization is a function of x only via. If not, then please explain what this tuple means.\", \"q8\": \"Table 1 has no units. How to interpret the numbers in this table?\", \"q9\": \"Section 4.3, authors note their experiment is \\u2018a little bit different\\u2019 from other related research. How and what exactly is different?\"}",
"{\"title\": \"Response to Reviewer 3 (1)\", \"comment\": \"Thank you for acknowledging that our idea is interesting and the results are encouraging. We will revise our paper according to the comments and hire a copy-editor to carefully polish the writing.\", \"q1\": \"The estimator for the gradient is shown at first on a toy task. Take a correlated Gaussian distribution and estimate gradients of the MI. The correlated Gaussian has an analytical form of MI, which makes this a useful experiment. The paper claims that this estimator \\u2018provides a tighter and smoother\\u2019 gradient estimate. I don\\u2019t see how this experiment and this claim tie together.\", \"r\": \"For consistent comparison, the baseline of DIM and our proposed MIGE are all based on non-linear classification, which is also mentioned in [2]. We did not include any result of linear SVM for DIM or any other methods.\\nThe same classifiers are used for all methods. Our baseline results are directly copied from [2] or by running the code in https://github.com/rdevon/DIM. We haven\\u2019t changed the code of the non-linear classification.\", \"q2\": \"Could the tightness or smoothness be quantified? It seems the MIGE has a lower variance, could empirical results or bounds on the variance be obtained?\\nMoreover, this plot concerns random quantities, whereas we see only one realization. Both the MIGE and the MINE hold under expectation of samples from the data distribution. This is a toy example where we can sample infinitely from the data distribution. That means we can either a) plot more samples or b) obtain (empirical) error bounds on the gradient under these sampling distributions.\", \"q3\": \"The authors train a quote \\u2018small fully connected neural network classifier\\u2019. However, the work of Hjelm trains a linear SVM on the representations.Is it the improved representations (as obtained by using MIGE) or is it the change classifier?\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for acknowledging that our idea is interesting and the results are encouraging. We will revise our paper according to the comments and hire a copy-editor to carefully polish the writing.\\n\\nIn the following, we will answer the concerns of the reviewer.\\n\\n1) For experiments, we follow the experiments of Deep InfoMax and Information Bottleneck to set the experimental setup as in [1, 2], and we also refer to their source code [3, 4]. Under these experimental settings, we use our MI Gradient Estimator to replace the MI estimator in Deep InfoMax and Information Bottleneck.\\nWe will provide an algorithm description in the revision, and we will release our code upon acceptance with more detailed settings.\\n\\n2) Due to the page limits (recommended 8 pages in ICLR CALL for Papers), we cannot provide more details of missing concepts and definitions. We will add an appendix and make our paper more self-contained in the revision.\", \"q1\": \"In section 3. Please define q(z)_psi(z), q(x,z)_psi and describe how they relate to E_psi.\", \"r\": \"Thank you for your valuable suggestions about writing style, we will fix these problems in the revision.\\n\\n[1] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019. \\n[2] Alemi, A. A., Fischer, I., Dillon, J. V., and Murphy, K. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016. \\n[3] https://github.com/rdevon/DIM\\n[4] https://github.com/alexalemi/vib_demo\", \"q2\": \"What exactly are the contributions by the authors wrt to spectral stein gradient descent (sec 2.1) e.g. is it the scalable approach based on random projections described in sec 3? Further i would like some discussion on the quality of this approximation?\", \"table\": \"Classification accuracy (top 1) results on STL-10.\\nRP denotes Random Projection.\\n.----------------------------------------------------------------\\n.\\t\\t\\t\\t\\t\\t\\tSTL-10\\t\\n.\\t\\t\\t\\t\\tconv\\tfc\\t\\tY\\t\\n.----------------------------------------------------------------\\n.DIM(JSD)\\t\\t\\t42.03\\t30.28\\t28.09\\t\\n.DIM(infoNCE)\\t\\t43.13\\t35.80\\t34.44\\t\\n.----------------------------------------------------------------\\n.MIGE\\t\\t\\tunaffordable computational cost\\t\\n.MIGE+RP to 1024d\\t49.08\\t40.09\\t38.95\\t\\n.MIGE+RP to 512d\\t49.89\\t41.05\\t38.56\\t\\n.MIGE+RP to 256d\\t49.91\\t40.24\\t38.83\\t\\n.----------------------------------------------------------------\", \"q3\": \"Please provide some more details on the DeepInfoMax and Information bottleneck experiments e.g. How exactly did you estimate the MI gradients in these settings? how is the downstream task setup and is it identical to prior work?\", \"q4\": \"About writing style\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the acknowledgment of our paper and the valuable suggestions. We are glad that the reviewer found the work to be novel and well-motivated. We will carefully revise our manuscript and hire professional copy editors to proofread our paper.\\n\\n1) R: Thanks for pointing out the recommendation to apply MIGE on DIM(L) as this is a good suggestion. Due to limited responce time and computational resources, we are still in the process of experimenting and analyzing preliminary results. In addition, we note that the literature has argued that maximizing tighter bounds in DIM(L) leads to worse results [2]. We will consider this as future work and discuss these new findings in a new paper.\\n\\nWe cannot provide the value of MI to evaluate the representation, because MIGE directly estimates the gradient of MI to optimize MI, rather than estimating the value of MI.\\n\\n2) R: For comments on evaluating on larger datasets, we have evaluated our method on a large scale dataset, i.e., STL-10. The results can be found in the following table.\", \"table\": \"Classification accuracy (top 1) results on STL-10.\\nRP denotes Random Projection.\\n.----------------------------------------------------------------\\n.\\t\\t\\t\\t\\t\\t\\tSTL-10\\t\\n.\\t\\t\\t\\t\\tconv\\tfc\\t\\tY\\t\\n.----------------------------------------------------------------\\n.DIM(JSD)\\t\\t\\t42.03\\t30.28\\t28.09\\t\\n.DIM(infoNCE)\\t\\t43.13\\t35.80\\t34.44\\t\\n.----------------------------------------------------------------\\n.MIGE\\t\\t\\tunaffordable computational cost\\t\\n.MIGE+RP to 1024d\\t49.08\\t40.09\\t38.95\\t\\n.MIGE+RP to 512d\\t49.89\\t41.05\\t38.56\\t\\n.MIGE+RP to 256d\\t49.91\\t40.24\\t38.83\\t\\n.----------------------------------------------------------------\", \"r\": \"Indeed, we have cited these two papers. MINE optimizes the Donsker-Varadhan representation rather than the InfoNCE bound. MINE-f optimizes the f-divergence representation. InfoNCE is a different estimator with high bias. For more details please refer to [3, 4].\\n\\n\\n\\n[1] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019.\\n[2] On mutual inforamtion maximization for representations, https://openreview.net/forum?id=rkxoh24FPH.\\n[3] Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. Mine: mutual information neural estimation. arXiv preprint arXiv:1801.04062, ICML, 2018.\\n[4] Ben Poole, Sherjil Ozair, Aaron van den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In International Conference on Machine Learning, 2019.\", \"q2\": \"Section 4.2 paragraph 2: \\u201cshrinking\\u201d for different layers wasn\\u2019t mentioned before, and is not immediately clear what it means; the reader needs to be intimately familiar with the DIM paper to understand.\", \"q3\": \"explain the difference between q_\\\\psi and p_\\\\psi, which seem to be used interchangeably.\", \"q4\": \"Section 4.3 mentions \\u201cthreshold\\u201d for stein gradient estimator, which was not mentioned before. Please explain what it is.\", \"q5\": \"The authors talk about MINE, which optimizes the InfoNCE bound [1], which is also used in DIM and CPC [2]. 
I strongly encourage the authors to cite [1] and [2] and mention them in the related works. Additionally, it would be clear if Figure 1 and related references and description used \\u201cInfoNCE\\u201d instead of \\u201cMINE\\u201d as the name of the method since InfoNCE is\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes the Mutual Information Gradient Estimator (MIGE) for estimating the gradient of the mutual information (MI) instead of calculating it directly in learning representation. They are using Stein's estimator following by a random projection to build a tractable approximation to the gradient of the MI.\\n The MIGE is evaluated on several of unsupervised and supervised tasks, and shown improvement over prior MI estimation approaches in maximize the MI and learning features for classification.\\n\\nIn general, I think that the idea of estimating the gradient of the MI instead of directly calculating it is an exciting research direction, and this paper combines a few pieces together (As mentioned in the paper, there was a work of Li & Turner, 2017 that applied Stein's estimator for implicit models).\\nHowever, the experimental part of this paper is lacking. My main concern is regarding the performance on downstream tasks. Although the experiments demonstrate wins over different models in maximizing MI for CIFAR10 and CIFAR100, the only comparison for downstream tasks is for Permutation-invariant MNIST. One more concern is regarding the random projection. It is not clear what is the effect of it on the representation, and how it impacts on the gradine's estimation.\", \"strengths\": [\"Interesting new model for representation learning based on an estimation of the MI gradients'.\", \"Good set experiments looking at MI maximization performance.\", \"+A well-written and well-organized paper.\"], \"weaknesses\": \"No comparison on downstream tasks for more datasets except MNIST. In the end, a key question is a final accuracy on different datasets and how to maximize the information effect on it. \\nThere is no discussion about the effect of the random projection on the representation. For example, how it affects performance? How much the algorithm sensitive to this projection? What is the performance of the MINE if it combined with random projection...\", \"minor_comments\": \"-Typos and English mistakes - there are many typos. For example -\\n In the introduction - \\\"Another closely related work is the the Information\\u2026\\\"\\n In section 2 - \\u201cIn order to overcome this disadvantages\\\" \\n In section 2.20 - \\\"In optimization, it should be achieved by maximizing the information between z and z.\\\"\\n- There should be more detailed explanations of the experiments. For example - what is the projected dimension (for all the experiments).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper argues that directly estimating the intractable mutual information (MI) for representation learning is challenging in high dimensions. Instead the authors propose to estimate the needed MI gradients directly using a score function based approach. Using some identities of MI the authors arrive at an expression for the gradient of the mutual information between input and latent representation (eq 10) and proposes to use a generalization of the reparameterization trick and spectral stein gradient descent to approximate this gradient. In Toy experiment and MNIST/CIFAR10 experiments the authors demonstrate that their method produces latent representations that are more informative than competing MI methods for downstream classification tasks. I found the approach and content of the paper interesting and the results seems encouraging. My main concern is that I did not find the that I did not find the method and experimental section to be fully comprehensive and further lacking many details which makes it hard to compare the results with prior work.\", \"pros\": \"1) I find the approach taken by the authors interesting and different from current MI estimation approaches. The paper convincingly motivates their approach by describing the deficiencies of current MI estimators and why targeting the gradients directly might have merits.\\n2) The authors propose to use SSGD and 'generalized' reparameterization in a (well motivated) new setting.\\n3) The cifar10 experiments in table 1 are encouraging and the toy experiment in 2D is illustrates nicely the deficiencies of the current MI estimators \\n\\nCons\\n1) The experimental section is lacking many details to fully understand how and what experiments were performed and how comparable they are to prior work\\n 2) The paper would benefit greatly from a thorough editing to clarify the presentation - there are many missing concepts and definitions that makes it hard to follow without intimate knowledge of related literature. \\n \\n \\nFurther suggestions / questions \\n\\n1. In section 3. Please define q(z)_psi(z), q(x,z)_psi and describe how they relate to E_psi.\\n\\n2) What exactly are the contributions by the authors wrt to spectral stein gradient descent (sec 2.1) e.g. is it the scalable approach based on random projections described in sec 3 ? Further i would like some discussion on the quality of this approximation?\\n\\n3) Please provide some more details on the DeepInfoMax and Information bottleneck experiments e.g. How exactly did you estimate the MI gradients in these settings? how is the downstream task setup and is it identical to prior work?\\n\\n\\n4) About writing style:\\nI think it would benefit the paper if you let the reader decide for them self what adjectives should be used to describe a result. A few concrete suggestions:\\n - Use remarkable/y about your own findings a bit more sparingly (used 4x). \\n - Consider deleting \\u201cmuch\\u201d and \\u201cvast\\u201d in a sentence like: \\u201cour approach MIGE gives much more favorable gradient direction, and demonstrates more power in controlling information flows without vast loss\\u201d.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper works out estimators for the gradient of Mutual Information (MI). The focus is on its recent popular use for representation learning. The insight the authors provide is to see encoding the representation as a \\u2018reparametrization\\u2019 of the data. This insight enables mathematical tools from the literature on \\u2018pathwise derivatives\\u2019. With gradients on the MI, one can estimate models that aim to maximize this quantity. For example in unsupervised learning one can learn representations for downstream tasks. This is shown in Table 1. Another application in supervised learning is the Information Bottleneck. This shown in Table 2.\", \"three_points_for_review\": \"1)\\nThe estimator for the gradient is shown at first on a toy task. Take a correlated Gaussian distribution and estimate gradients of the MI. The correlated Gaussian has an analytical form of MI, which makes this a useful experiment. The paper claims that this estimator \\u2018provides a tighter and smoother\\u2019 gradient estimate. I don\\u2019t see how this experiment and this claim tie together. Could the tightness or smoothness be quantified? It seems the MIGE has a lower variance, could empirical results or bounds on the variance be obtained? \\n\\nMoreover, this plot concerns random quantities, whereas we see only one realization. Both the MIGE and the MINE hold under expectation of samples from the data distribution. This is a toy example where we can sample infinitely from the data distribution. That means we can either a) plot more samples or b) obtain (empirical) error bounds on the gradient under these sampling distributions.\\n\\n2)\\nThe major experimental result in the paper shows advantage of the gradient estimator in Transfer learning. Specifically, the authors compare against the recent DIM of Hjelm et al 2019. The authors train a quote \\u2018small fully connected neural network classifier\\u2019. However, the work of Hjelm trains a linear SVM on the representations. It is not clear where the increase in performance originates. Is it the improved representations (as obtained by using MIGE) or is it the change classifier? \\n\\n3)\\nOne contribution of the paper is to make the gradient estimator work in high dimensions. To this end, the authors propose Random Projections. It is not clear how this approximation influences the results. An experiment regarding this topic would make the point clearer. Is RP used in the current experiments? Then how does this influence the results? Is the RP used for computational purposes? Then can we quantify the gain in computation?\", \"minor_comments\": \"*\\u2019In practice, we do not care about MI estimation\\u2019. Please explain further or refer to previous work.\\n *\\u2019In optimization, it should be achieved by maximizing the information between z and z.\\u2019 (Section 2). Two points\\n 1. The information \\u2018between z and z\\u2019 is probably a typo?\\n 2. How does sufficiency relate to an optimization problem? Doesn\\u2019t sufficiency mean in this context I(X;Y)=I(Z;Y)?\\n * Equation 12: In the part $\\\\nabla_psi (x, E_\\\\psi(x))$. Why do we take gradient w.r.t. x? It seems to me that the reparametrization is a function of x only via $E_\\\\psi(x)$. 
If not, then please explain what this tuple means.\\n * Table 1 has no units. How to interpret the numbers in this table?\\n * Section 4.3, authors note their experiment is \\u2018a little bit different\\u2019 from other related research. How and what exactly is different?\\n\\nTypographic comments\\n *Just below eqn17, \\u2018minibatche\\u2019 => \\u2018mini-batch\\u2019 or \\u2018mini batch\\u2019\\n *Section 4.2 \\u2018images classification\\u2019 => \\u2018image classification\\u2019\\n *\\u2019However A tractable density is\\u2019 => \\u2018However, a tractable density is\\u2019\\n *\\u2019Estimating gradients of MI than\\u2019 -> \\u2018Estimating gradients of MI rather than\\u2019\\n *Section 3, circumstance 1 \\u2018representation\\u2019 => \\u2018represent\\u2019\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes MIGE---a novel estimator of the mutual information (MI) gradient, based on estimating the score function of an implicit distribution. To this end, the authors employ the spectral Stein gradient estimator (SSGE) and propose its scalable version based on random projections of the original input. The theoretical advantages of the method are presented using a toy experiment with correlated Gaussian random variables, where both the mutual information and its gradient can be computed analytically. In this setting, MIGE provides gradient estimates that are less biased and smoother than baselines. The method is also evaluated on two more complicated tasks: unsupervised representation learning on Cifar-10 and CIfar-100 via DeepInfoMax (DIM) and classification on MNIST with Information Bottleneck (IB), where MIGE outperforms all baselines by a significant margin.\\n\\nI recommend ACCEPTing this paper. This work discusses a vital problem and proposes a novel, well-motivated, principled and very performant solution; additionally, it demonstrates the broad applicability of the introduced method. While the proposed technique consists of previously known building blocks (spectral Stein gradient estimator and random projections), it is cleverly applied in a novel context of estimating MI gradients.\\n\\nWhile the paper is solid, I believe that it could be improved in the following ways. Firstly, I would like to see 1) more extensive and 2) larger-scale evaluations. In the DIM experiment, 1) would correspond to trying the DIM(L) approach, which maximises patch-wise MI. In fact, I strongly recommend including this experiment as it corresponds to and could improve current state-of-the-art. If it turns out that MIGE does not work well on DIM(L), then this would correspond to a serious issue with the method. In this experiment, 1) would also include providing other metrics for learned representations. It would be much more convincing to include estimates of true mutual information (e.g. InfoNCE bound evaluated with a large number of samples [1]) and showing that MIGE can attain higher values than baselines. 2) would correspond to evaluation on bigger datasets: (tiny) ImageNet and STL-10 dataset. Also, the toy experiment would benefit from a higher-dimensional setting (e.g. d=256 to d=1024), since these are often used in practice. \\nSecondly, the paper is sloppily-written, which quite a few grammar and stylistic mistakes (e.g. sentence in sec 3, paragraph 2: \\u201cwe assume obtain to\\u2026\\u201d, which starts with a lower-case letter and doesn\\u2019t make sense). 
Finally, the paper would benefit from the following clarifications: 1) explain what the Nystr\u00f6m method is, 2) provide either a proof or a citation for eq (19); also the error bound for SSGE should be provided for the paper to be self-contained, 3) explain the difference between q_\\psi and p_\\psi, which seem to be used interchangeably.", "additional_remarks": "Sec 2.2, 2), \u201cstreamlining\u201d is unclear\nCircumstances 2 and 3 can be quite easily derived from circumstance 1; also, they are not evaluated empirically; it would be nice to have experiments for them, and they can be moved to the appendix in case of lack of space\nEq. (19), while nice, seems to bear no significance for the proposed method and the rest of the paper; consider removing it\nSection 4.2 paragraph 2: \u201cshrinking\u201d for different layers wasn\u2019t mentioned before, and it is not immediately clear what it means; the reader needs to be intimately familiar with the DIM paper to understand.\nSection 4.3 mentions a \u201cthreshold\u201d for the Stein gradient estimator, which was not mentioned before. Please explain what it is.\nEquations (8-10) are just simple derivations and are not necessary; it would be enough to provide Eq. (10).\nThe authors talk about MINE, which optimizes the InfoNCE bound [1], which is also used in DIM and CPC [2]. I strongly encourage the authors to cite [1] and [2] and mention them in the related works. Additionally, it would be clearer if Figure 1 and the related references and descriptions used \u201cInfoNCE\u201d instead of \u201cMINE\u201d as the name of the method, since InfoNCE is an estimator and MINE is just a particular implementation of the method.\n \n[1] Poole et al., \u201cOn variational bounds of mutual information\u201d, ICML 2019.\n[2] van den Oord et al., \u201cRepresentation Learning with Contrastive Predictive Coding\u201d, arXiv 2018."}"
]
} |
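The scalable SSGE variant discussed in the record above relies on Gaussian random projections of the input before kernel-based score estimation. A minimal, hypothetical Python sketch of that generic building block (the function name and shapes are our own; this is not the paper's code):

```python
import numpy as np

def random_project(x, k, seed=0):
    # Project (n, d) samples to k dimensions with a Gaussian random matrix --
    # the standard trick the review refers to for making kernel-based score
    # estimators such as SSGE tractable in high dimensions.
    rng = np.random.default_rng(seed)
    r = rng.normal(size=(x.shape[1], k)) / np.sqrt(k)  # (d, k), entries ~ N(0, 1/k)
    return x @ r  # (n, k) projected samples
```

Pairwise geometry is approximately preserved under such projections, which is why a score estimate computed in the projected space can stay informative while the kernel computations become far cheaper.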
HygTUxHKwH | Qgraph-bounded Q-learning: Stabilizing Model-Free Off-Policy Deep Reinforcement Learning | [
"Sabrina Hoppe",
"Marc Toussaint"
] | In state of the art model-free off-policy deep reinforcement learning (RL), a replay memory is used to store past experience and derive all network updates. Even if both state and action spaces are continuous, the replay memory only holds a finite number of transitions. We represent these transitions in a data graph and link its structure to soft divergence. By selecting a subgraph with a favorable structure, we construct a simple Markov Decision Process (MDP) for which exact Q-values can be computed efficiently as more data comes in - resulting in a Qgraph. We show that the Q-value for each transition in the simplified MDP is a lower bound of the Q-value for the same transition in the original continuous Q-learning problem. By using these lower bounds in TD learning, our method is less prone to soft divergence and exhibits increased sample efficiency while being more robust to hyperparameters. Qgraphs also retain information from transitions that have already been overwritten in the replay memory, which can decrease the algorithm's sensitivity to the replay memory capacity.
| [
"deep learning",
"reinforcement learning",
"model-free reinforcement learning",
"Q-learning",
"DDPG"
] | Reject | https://openreview.net/pdf?id=HygTUxHKwH | https://openreview.net/forum?id=HygTUxHKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jfZlT8lP5L",
"SkgJmBIIoH",
"rJe54QLUiS",
"ryeeYMIUiH",
"rJl9BeUUiB",
"HJg27xDRtH",
"HJgus3ATFH",
"rygpYvGTKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746613,
1573442839213,
1573442353795,
1573442167999,
1573441601936,
1571872803755,
1571839135993,
1571788676779
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2340/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2340/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2340/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2340/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2340/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2340/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2340/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method to reduce the instability issues of off-policy deep reinforcement learning. The proposed solution constructs a simple MDP from the experience in the agent's replay memory. This graph is used to compute a lower bound for the values from the original problem. Incorporating this bound can make the learning system less prone to soft divergence.\\n\\nThe reviewers appreciated the motivation of the paper and the direction of this research. However, the reviewers were not convinced that the formulation was sufficiently complete. There were concerns that the method makes additional assumptions about the data distribution (the presence of state aggregation and the absence of repeated states in continuous spaces). Reviewers found related work was missing. The reviewers also found multiple aspects of the presentation unclear even after the author response. \\n\\nThis paper is not ready for publication as the generality of the proposed method was not sufficiently clear to the reviewers after the author response.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer to Review #3\", \"comment\": \"Thank you for those comments.\\nAs you suggested, we will of course re-iterate the paper to find typos and grammatical errors; add more details on related work, include more explicit pros/cons; explicitly add a list of our contributions (graph-perspective on the replay memory; thereby insights into different classes of nodes and their impact on convergence; the q-graph-based bounds for Q-learning which have a set of positive effects). \\nAlso the conclusions will be extended to better link back to the theoretical insights from section 4.\\n\\nFor the remaining points you raised, we would like to confirm our understanding of your comments:\\n1. \\\"the challenges you face when dealing with this issue\\\". The challenge we face is soft divergence, but that is a widely known issue. Do you think the paper would benefit from re-iterating more contents from prior work on soft divergence in Q-learning?\\n2. Would pseudo-code for our method, including the replay memory, data graph, q-graph and network training, provide a \\\"clear illustration for the proposed method\\\"? \\n3. What could a similar illustration for the experiments section look like? Do you think an additional paragraph in the beginning of the experiments section that describes the subsection strucuture would be helpful?\\n\\nIf you feel that any of your requests is not reflected in our suggestions here, please let us know. Thanks.\"}",
"{\"title\": \"One more comment\", \"comment\": \"Adding to the question whether our graphs are also useful if states are never re-visited.\\nThis touches on the question if we can have sufficiently many nodes in the qgraph to derive lower bounds. A 'cheat' related to this issue that we introduce in the paper are zero actions: often some action is known to not change the state (e.g. apply zero force). Then, this zero-actions can be applied to any state and introduce self-loops. Effectively this means that loose ends (which cause highest variance in our educational example) are eliminated.\"}",
"{\"title\": \"Answer to Review #2\", \"comment\": \"Thank you for your comments.\", \"we_are_now_re_writing_parts_of_the_paper_and_will_integrate_the_following_answer_eventually_to_make_sure_the_paper_already_conveys_these_messages\": \"You are right in your concern that the graph may consist of several disconnected components (or chains, as you call them).\\nActually, re-visiting states happens slightly more often than you may think: at corners of the state space, at edges of an obstacle, at narrow goal areas. However, of course, this depends on the environment and may not happen at all. One technique we propose in the paper to deal with this are zero-actions: often some action is known to not change the state (e.g. apply zero force). Then, this zero-actions can be applied to any state and introduce self-loops.\\nAdditionally, in off-policy algorithms you can use exploration to re-visit states. Overall we have the impression that passing information along full trajectories is more useful than the cross-trajectory information exchange at re-visited states. Note that we pass this information without introducing the high gradient variance that is typically linked to full trajectory backups in Monte Carlo methods.\\n\\nThe Q-values we derive from the simplified MDP are only computed on the Qgraph, which is a subgraph of the data graph. Thus, the 'chains' you mentioned would not be included in the Qgraph. We would still use all samples to train the Q-function on the original problem though, just the number of samples for which a lower bound can be provided, varies. If zero actions are used, there is at least one lower bound for every sample.\\n\\nDoes this make sense to you, do you have any further or follow-up questions?\"}",
"{\"title\": \"Answer to Review #1\", \"comment\": \"Thank you for your clear review and suggestions/questions. While we are re-writing parts of the paper in the background, we'd like to answer your questions already. Eventually we will integrate these answers into the paper such that it clearly communicates these points from the beginning.\\n\\nThe two pieces of related work you pointed out seem related indeed. We will discuss them in the next version of our paper. These approaches are complimentary to our work in the sense that they also point out aspects of Q-learning that lead to instabilities. However, they focus on the off-policy property and suggest ways to constrain the actions that are selected. In our work, the action selection is not constrained or altered at all. Instead, we introduce bounds to the Q-function that we enforce during learning. It may actually be of interest for future work to investigate how these methods can be combined and whether this further stabilizes Q-learning.\\n\\n1) One state does correspond to a single node. We never merge different states into one node, but of course this is a matter of floating point precision. In our implementation, states are measured in meters and rounded to 4 digits.\\n2) Our method focusses on learning a Q-function. How actions are selected is not altered by our method and therefore any existing method may be plugged in. For our experiments, we use the standard DDPG setup in which an actor network is trained and used as a policy. For exploration, we add Gaussian noise to the output of the actor network.\\n3) There is no assumption about the initial state distribution. One of the most important insights of our work is that state-of-the-art DDPG and DQN only work on samples in the replay memory. These samples may originate from a problem with any initial state distribution, but the full distribution is never used - only the finite number of samples that is stored in the replay memory. Note that these model-free methods do not execute any virtual rollouts (since no model is learned that could be used). If you could point us to the part of our paper that made you think there were constraints to the distribution of initial states, we'd love to clarify or re-write that paragraph.\\n4) If they are actually the same state (up to precision), the nodes are merged; enabling a cross-over of experience from differen trajectories.\\n5) The considered environments are deterministic. We are not sure what you refer to as \\\"stochastic approximation\\\"? Dynamic programming is used to obtain Q-values for the simplified MDP - however, any other way to obtain the correct Q-values works with our algorithm. Function approximation is used because the state-action space is continuous.\\n\\nThe approach is indeed limited to deterministic tasks (although smaller violations to this assumptions may not be practically relevant).\", \"there_are_ways_to_extend_our_work_to_non_deterministic_tasks_that_we_will_also_discuss_in_the_paper\": \"If the environment is non-deterministic, less tight bounds can be established under additional assumptions.\\nFor instance, let's assume that for all states and any given series of actions \\\\mathfrak{A}, the empirical returns R_i that an agent can observe when following \\\\mathfrak{A} differ by at most \\\\delta.\\nThen all Q-values from the simplified MDP apply as lower bounds with margin \\\\delta:\\nQ_true(s,a) >= Q_simplified(s,a) - delta\\n\\n\\nOur method works on the graph structure only. 
\\nIt is possible to build a similar graph from high-dimensional input, see for instance the graph of image inputs in this ICLR submission we discovered recently: https://openreview.net/pdf?id=HkxjqxBYDB\\nGiven the graph structure, our method only depends on the number of nodes and edges in the graph but it is independent of the dimensionality of the original input. \\nWe will add more details on the complexity of our algorithm in the paper.\\nAnother question is whether a graph structure is still useful in high-dimensional spaces, because it may seem that it is even less likely for the same node to reappear: (1) we believe that in most cases, only a manifold of this high-dimensional space is actually visited (e.g. only some specific type of image), (2) there are typical attractor-states in many cases, e.g. in the corner of the state space or along edges of an obstacle, (3) in off-policy algorithms you can actively use exploration to re-visit states, (4) overall we have the impression that passing information along full trajectories is more useful than the cross-trajectory information exchange. Note that we pass this information without introducing the high gradient variance that is typically linked to full trajectory backups in Monte Carlo methods!\\n\\nDo these comments fully answer your questions, or are there any follow-up questions? Thanks!\"}",
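To ground the exchange above, here is a minimal, hypothetical Python sketch of the pipeline the authors describe (state rounding to merge revisited nodes, zero-action self-loops, and value iteration on the resulting tabular MDP). The names, the 4-digit rounding, and the zero-value treatment of unvisited successors are our assumptions, not the authors' implementation:

```python
import numpy as np
from collections import defaultdict

GAMMA = 0.99

def build_graph(transitions, precision=4):
    # One node per rounded state, so revisited states (up to floating-point
    # precision) merge into the same node, as described in the rebuttal.
    # States and actions are assumed to be array-like.
    node = lambda x: tuple(np.atleast_1d(np.round(x, precision)))
    graph = defaultdict(dict)  # graph[s][a] = (reward, next_node, done)
    for s, a, r, s2, done in transitions:
        graph[node(s)][node(a)] = (r, node(s2), done)
    return graph

def add_zero_action(graph, zero_action=(0.0,)):
    # Self-loops for a known "do nothing" action; per the rebuttal, this
    # eliminates loose ends so every node has at least one outgoing edge.
    for s in list(graph):
        graph[s].setdefault(zero_action, (0.0, s, False))
    return graph

def qgraph_values(graph, sweeps=500):
    # Value iteration on the simplified tabular MDP (the "Qgraph").
    # Unvisited successor nodes default to value 0 in this sketch.
    V = defaultdict(float)
    for _ in range(sweeps):
        for s, actions in graph.items():
            V[s] = max(r + (0.0 if done else GAMMA * V[s2])
                       for r, s2, done in actions.values())
    return {(s, a): r + (0.0 if done else GAMMA * V[s2])
            for s, actions in graph.items()
            for a, (r, s2, done) in actions.items()}
```

The resulting tabular values could then clip the TD targets of the continuous learner from below, e.g. `target = max(target, q_lower[(s, a)])` for transitions that appear in the Qgraph.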
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper aims to build an understanding of deep RL. Because RL remains under-investigated from a theoretical point of view. Many algorithms use function approximation, off-policy learning and bootstrapping together--This is an unstable combination of techniques. In this paper, the authors propose a graph-perspective on the replay memory which allows to analyze the structure of deep RL.\\n\\nThe paper aims at an important issue in deep RL. The motivation of the paper is meaningful.\\nThe paper gives a good summary of the previous related works in Section2.\\n\\nThe paper in the current form needs to be polished again. To obtain a better score, I suggest the authors to modify this paper in these ways: First, the introduction section needs to provide more details, including the pros and cons of previous related works on the research problem of this paper, the challenges you face when dealing with this issue and the contributions; Second, there were more than a few spelling and grammatical errors, please proofread the work and improve the writing; Third, the paper lacks logic in writing. The writing from Section.2 to Section.6 needs to be organized better. It is difficult for readers to grasp the key ideas of the paper through a quick assessment.\\nThe paper focuses on the understanding of RL when deep Q-learning diverges, however, most of the conclusions in the paper are not based on the necessary theoretical proof, but the observations on the experiments.\\n\\nIt would be better if this paper can provide a clear illustration for the proposed method as well as the experiments section.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper is trying to tackle the soft divergence issue in deep RL when algorithms combine function approximation, off-policy learning and bootstrapping, which is also called deadly triad by Sutton & Barto (2018). The paper proposes a way to represent the transitions in the replay memory as a data graph, then construct a simple MDP from it. Much more accurate Q values could be computed from the simple MDP and it provides a lower bound for the Q-values in the original problem. In this way, the method becomes less prone to soft divergence.\\n\\nThe idea of constructing a smaller MDP whose Q-values can be computed exactly by dynamic programming on tabular states, then use these Q-values to help dealing with the instability issues in deep RL is very interesting. In the rebuttal, I'd like the authors to address my major concern of the paper, where the proposed method seems to assume that the finite number of transitions could form a graph, which might not be always true. In typical continuous state spaces, the same state might not appear twice in the sampled transitions. In these cases, the graph becomes a number of disconnected chains and the Q-values from this MDP might not be accurate. Maybe I'm missing something, it's not very clear to me how the proposed method could be applied in the common case in deep RL where there's seldom a loop and the states are rarely visited twice.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes Qgraph, an algorithm that addresses the problem of extrapolation error that appear in RL tasks with continuous action spaces. The authors describe a method to construct a graph from transitions generated by some policy. In this graph nodes correspond to states and (s, a, r, s\\u2019, t) define transitions between these nodes. Then this representation is simplified and used to compute Q-values using methods for tabular MDPs.\\n\\nThe related work section is missing several methods that attempt to address the same problem. Batch Constrained Q-learning (Fujimoto etal, 2018) introduces a formulation of Q-learning that constrains action selection with a generative model trained on a replay buffer in order to omit unseen actions. BEAR-QL (Kumar etal, 2019) describes a similar approach that uses a hard constraint based on MMD. It would be interesting to discuss connections with the recent work on off-policy batch RL.\\n\\nThe clarity of the paper can be improved. In particular, I have several questions regarding the method:\\n1) How are node of the graph are constructed? Does one state correspond to a single node or several states are merged into a different node?\\n2) How the actions are selected?\\n3) What are the assumption regarding the initial state distribution? Does the set of initial states have to be finite?\\n4) If two similar states appear in different branches of the graphs, are the corresponding nodes merged or not?\\n5) If the considered environments are deterministic, what is the motivation for stochastic approximation of dynamic programming?\\n\\nThe approach has several major limitations. One of the main limitations of the approach is that it can be applied only to deterministic tasks. Although it is not stated clearly in the paper, it seems also requires to have a finite set of initial states. \\n\\nThe experimental evaluation is performed on a limited set of tasks and it is rather unclear whether the method can be scaled to higher dimensional control problems.\\n\\nOverall, I feel that the paper needs to be significantly improved.\"}"
]
} |
Bkl2UlrFwr | Iterative Deep Graph Learning for Graph Neural Networks | [
"Yu Chen",
"Lingfei Wu",
"Mohammed J. Zaki"
] | In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly learning the graph structure and graph embedding. We first cast the graph structure learning problem as a similarity metric learning problem and leverage an adapted graph regularization for controlling the smoothness, connectivity and sparsity of the generated graph. We further propose a novel iterative method for searching for a hidden graph structure that augments the initial graph structure. Our iterative method dynamically stops when the learned graph structure comes close enough to the ground-truth graph. Our extensive experiments demonstrate that the proposed IDGL model can consistently outperform or match state-of-the-art baselines in terms of both classification accuracy and computational time. The proposed approach can cope with both transductive training and inductive training. | [
"deep learning",
"graph neural networks",
"graph learning"
] | Reject | https://openreview.net/pdf?id=Bkl2UlrFwr | https://openreview.net/forum?id=Bkl2UlrFwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"O0qn9S8g0n",
"HJxUglOIoB",
"SkxbT1dLsB",
"rJeFkAvIir",
"Bkx0HaPUor",
"SJgD3hvUoB",
"Bkxq-J6pFr",
"rJlSbRu6tH",
"S1xl73mpYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746584,
1573449709959,
1573449656531,
1573449184991,
1573449030220,
1573448878964,
1571831553963,
1571814908909,
1571793943963
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2339/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2339/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2339/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2339/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2339/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2339/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2339/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2339/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission proposes a method for learning a graph structure and node embeddings through an iterative process. Smoothness and sparsity are both optimized in this approach. The iterative method has a stopping mechanism based on distance from a ground truth.\\n\\nThe concerns of the reviewers were about scalability and novelty. Since other methods have used the same costs for optimization, as well as other aspects of this approach, there is little contribution other than the iterative process. The improvement over LDS, the most similar approach, is relatively minor. \\n\\nAlthough the paper is promising, more work is required to establish the contributions of the method. Recommendation is for rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author response to Review #2 (continued)\", \"comment\": \"5) Although this method is claimed efficient, it is indeed slower than the classic GNNs due to the iterative operation. The details of training time comparison between this method and GNNs such as GCN and GAT will be helpful.\\n\\nYes, this is indeed true in terms of run time performance. But note that when input graph structures are noisy or even not available, existing GNNs such as GCN and GAT cannot perform well or even cannot work. We proposed our learning framework IDGL to exactly overcome these limitations. \\n\\nNevertheless, based on the reviewer's comments, we conducted additional experiments and reported the training time for all these models. As we explained to the reviewer #1 as well, we previously reported the training time of IDGL w/o IL (instead of IDGL) by mistake. Below is the corrected training time (mean and std.) for various models. IDGL is consistently faster than LDS, but in general, they are comparable. And yes, both IDGL and LDS are slower than GCN and GAT, which is expected since they did not need to learn graph structure simultaneously. We have also updated this part in the revision.\\n\\n Models / Benchmarks | Cora | Citeseer | Wine | Cancer | Digits\\n GCN | 3 (1) | 5 (1) | -- | -- | --\\n GAT | 26(5) | 28(5) | -- | -- | --\\n LDS | 390 (82) | 585 (181) | 33 (15) | 25 (6) | 72 (35)\\n IDGL | 237 (21) | 563 (100) | 20 (7) | 21 (11) | 65 (12)\\n IDGL w/o IL | 49 (8) | 61 (15) | 3 (2) | 3 (1) | 2 (1)\\n\\n6) Pubmed is not used in this work. I conjecture that the new method cannot handle such a big dataset efficiently.\\n\\nWe admit that scalability might be an issue when applying this method to large networks. It becomes challenging to compute similarity scores for any pair of N nodes when N is a large number. However, this is also a challenge for other graph learning methods. Fo example, in the original LDS paper, the authors admitted that it cannot currently scale to large datasets (due to at least quadratic complexity of node number N). Following the LDS work, we did not evaluate our model on Pubmed either. Note that our current graph learning component (using pairwise similarity metric) can be replaced by other fast and scalable graph learning modules, which we leave it as one of the future works. \\n\\n7) I was wondering why this method is faster than LDS.\\n\\nPlease see the above response. The run times are actually comparable, though our approach is slightly faster.\"}",
"{\"title\": \"Author response to Review #2\", \"comment\": \"We first thank the reviewer for your valuable feedback! Please refer to the overall responses and the responses to Reviewer #1 for a detailed statement on the main contributions of this work.\", \"below_we_address_the_concerns_mentioned_in_the_review\": \"1) Compared with LDS, this work seems to overlook the bi-level optimization problem\\n\\nUnlike LDS, in our model, we optimize a joint loss combining both task-specific prediction loss and graph regularization loss. It will be very interesting to see if our model can benefit from adopting the bi-level optimization technique, which we will leave it as future work. However, one severe limitation of the LDS work is that it essentially only optimizes the edge connectivities of the graph assuming the set of nodes are known, which makes it unable to handle the new set of nodes during the testing (the inductive setting). \\n\\n2) The feature matrices in experiments are not strictly independent with graph structures\\n\\nWe are sorry about the confusion made here. We are not claiming that the raw node features are independent with graph structures, which is not the focus of this work. Our rationale is as follows. Some previous works rely solely on raw node features to learn the graph structure based on some attention mechanism, which we think have some limitations since raw node features might not contain enough information for learning good graph structures. In this work, we propose a novel iterative learning framework that is able to learn better graph structures with updated node embeddings, and in the meanwhile, learn better node embeddings with updated graph structures. Empirical experiments verify the effectiveness of additionally learning graphs with updated node embeddings. We have updated our manuscript to make it more clear. \\n\\n3) As shown in Appendix B, too many hyper-parameters are involved. I conjecture it will be difficult to reproduce the experimental results.\\n\\nWe will release the code and the preprocessed data upon the acceptance of this paper in order to promote the reproducibility of our work. Also, we have listed all hyperparameters associated to IDGL on all benchmarks in Appendix - Sec. B MODEL SETTINGS in our original submission. \\n\\n4) Eqs.(2), (3) and (10) are problematic. Node embeddings Z should be included in them. Eq.(10) does not have theoretical proof. According to Eq.(10), the method cannot handle graphs with noisy edges. In experiments, there are edge deletions, but no edge addings. Experiments with attacked graph are expected. \\n\\nIt seems like that there are some misunderstandings here mainly because our notations are confusing. We have made it more clear in our revision. In particular, In Eqs. (1), (2), the two vectors (originally denoted as x_i and x_j and now we renamed them as v_i and v_j to avoid confusion) are served as the inputs for our similarity score function. Note that these two vectors could be any vectors such as raw node features or computed node embeddings. Therefore, Eqs (1), (2), (3) and (10) are all correct. \\n\\nEq. (10) is based on our assumption that the optimal graph structure is potentially a shift from the initial graph structure, and our goal is to learn this shift. In particular, it could be interpreted as some form of the skip-connection where a hyperparameter \\\\lambda is used to balance the trade-off between using the learned graph structure and the initial graph structure. 
Our preliminary experiments showed that it is harmful to totally discard the initial graph structure, and using this type of \u201cskip-connection\u201d to incorporate the initial graph structure with the learned graph structure is much more effective. \n\nWe do not quite understand the comment made by the reviewer that \u201caccording to Eq.(10), the method cannot handle graphs with noisy edges\u201d. We hope the reviewer can kindly clarify this point. In fact, our method is designed exactly to enable GNNs to cope with graphs with noisy or incomplete edges. \n\nHowever, as suggested by the reviewer, we have done additional experiments on Cora and Citeseer with randomly added edges. Below are the results for test accuracy (std). Through these experiments, we found that adding random edges is more challenging than removing random edges, and our model is much more robust than GCN in this scenario. We have added these results to the updated manuscript.\", \"cora\": \"Methods \\ added edges percentage | 0% | 25% | 50% | 75% \nGCN | 81.0 (0.2) | 27.6 (8.6) | 20.5 (9.5) | 17.1 (7.7)\nIDGL | 84.5 (0.3) | 62.9 (2.3) | 62.4 (0.9) | 61.7 (1.7)\", \"citeseer\": \"Methods \\ added edges percentage | 0% | 25% | 50% | 75% \nGCN | 70.9 (0.3) | 20.6 (3.0) | 20.9 (2.8) | 19.9 (2.7)\nIDGL | 74.1 (0.2) | 60.4 (2.2) | 62.9 (2.5) | 60.6 (1.6)\"}",
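One plausible reading of the graph learning step defended in this response -- a learnable weighted cosine metric, thresholded for sparsity, then mixed with the initial adjacency through the hyperparameter lambda as in Eq. (10) -- can be sketched in a few lines of Python. The function and argument names are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def learn_adjacency(Z, w, A_init, lam=0.8, eps=0.3):
    # Z:      (N, d) node embeddings (raw features in the first iteration).
    # w:      (d,) learnable weight vector of the similarity metric.
    # A_init: (N, N) normalized initial adjacency (given graph or kNN graph).
    H = Z * w                                            # weighted features
    S = F.cosine_similarity(H.unsqueeze(1), H.unsqueeze(0), dim=-1)  # (N, N)
    S = torch.where(S > eps, S, torch.zeros_like(S))     # sparsify
    A_learned = S / S.sum(dim=-1, keepdim=True).clamp(min=1e-8)
    return lam * A_init + (1.0 - lam) * A_learned        # Eq. (10)-style mix
```

Setting lam = 1 recovers pure reliance on the initial structure, while lam = 0 discards it entirely, which the response reports to be harmful.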
"{\"title\": \"Author response to Review #3\", \"comment\": \"We thank the reviewer for providing valuable feedback! Please refer to the response to Reviewer #1 for a detailed statement on the main contributions of this work.\", \"below_we_address_the_concerns_mentioned_in_the_review\": \"1) The analysis of \\\\lambda is necessary. However, this part is missing in the paper.\\n\\nA hyperparameter \\\\lambda is used to balance the trade-off between using the learned graph structure and the initial graph structure. Our preliminary experiments showed that it is harmful to totally discard the initial graph structure and using this type of \\u201cskip-connection\\u201d to incorporate the initial graph structure is more effective than using the initial graph structure as an attention mask, as done in Graph Attention Networks. Below we show the results of using different values of \\\\lambda on Cora. However, in practice, we use validation set to fine tune this hyperparameter and all hyperparameters are listed in our Appendix - Sec. B. Model Setting.\", \"idgl_on_cora\": \"Models \\\\ \\\\lambda | 0.9 | 0.8 | 0.7 | 0.6 | 0.5\\nIDGL | 83.6 (0.4) | 84.5 (0.3) | 83.9 (0.3) | 82.4 (0.1) | 80.9 (0.2)\\n\\n\\n2) The improvements comparing with LDS is not significant.\\n\\nOur model outperforms LDS in 4 out 5 datasets in the transducting setting as shown in Table 1. Our model IDGL is comparable when the initial graph is available (Cora and Citeseer). For the datasets (Wine, Cancer, and Digits) when only initial node features are available, our model consistently achieved better accuracy with much smaller standard deviations. More importantly, one significant advantage of our model compared to LDS is that it can handle inductive setting while LDS cannot as shown in Table 2. \\n\\n3) LDS only uses the optimized graph structure to train GNN.\\n\\nIn fact, LDS uses the original graph structure (or the kNN one if the original one is unavailable) to initialize the edge probabilities of the underlying graph. Please see Algorithm 1 Line #3 in the original LDS paper. We have the same settings as LDS in the experiments, and our comparisons are indeed fair. \\n\\n4) The learned graph should be better interpreted. For example, the cosine similarity on citation graphs with sparse features is very likely to be zero. As a result, the learned graph can be extremely sparse with very few non-zero entries. It would be interesting.\\n\\nYes. Just as the reviewer said, the raw node features of the citation graphs (e.g., Cora and Citeseer) are very sparse. And it is challenging to learn meaningful graph structures solely based on such sparse and indistinguishable features that might not contain enough information about the graph structures. This actually motivates us to propose our iterative graph learning framework where the core idea is to learn better graph structures based on better node embeddings, and in the meanwhile, to learn better node embeddings based on better graph structures. \\n\\n5) The authors design experiments on five datasets.\\n\\nWe actually conducted experiments on 7 datasets (5 transductive datasets + 2 inductive datasets) instead of 5 datasets.\"}",
"{\"title\": \"Author response to Review#1\", \"comment\": \"We thank the reviewer for giving valuable feedback! However, there are some points of misunderstanding that we address in this rebuttal.\\n\\nWe emphasize at the outset that the main contribution of this work is the iterative learning of graph structures and graph node embeddings, which iteratively learn a better graph structure with the updated node embeddings, and learn better node embeddings with the updated graph structure. To the best of our knowledge, we are the first to successfully apply the idea of iterative learning in the literature of graph learning. \\n\\nIn addition, our method dynamically stops when the learned graph structure approaches close enough to the optimal graph based on our proposed stopping criterion. Compared to using a fixed number of iterations globally, the advantage of applying our strategy becomes more clear when we are doing mini-batch training since we can adjust when to stop dynamically for each example graph in the mini-batch. \\n\\nCompared to LDS that can only handle the transductive learning setting, our model can additionally handle the inductive learning set. We conducted extensive experiments to verify the effectiveness of the iterative learning idea (see Sec 3.4 ablation study) and the dynamic stopping strategy (see Sec 3.5 for model analysis). We also theoretically analyzed and empirically examined the convergence of the proposed iterative learning method. Compared to LDS, our model outperforms it in 4 out of 5 benchmarks.\", \"below_we_address_the_concerns_mentioned_in_the_review\": \"1) The main issue here is that the regularization terms in Eqs. 4, 5, 6 (which should be considered to be the most important in the paper) are exactly similar to those in [1] (see Eq. 12 in [1]). This reduces the novelty of the paper.\\n\\nGraph regularization terms play an important role in our proposed model, however, as we have clarified above, this component should not be considered as the most important for this work. As we mentioned in Sec. 2.2, we borrowed effective techniques in Eqs. 4, 5, 6 from graph signal processing literature (with appropriate citations [1]) and adapted them to our model to regularize the learned graph structure. Compared to [1] that directly learns a graph topology using these techniques, our proposed strategy of training a graph learning model with a joint loss combining this graph regularization loss with the task-specific prediction loss is still novel. \\n\\n2) I am not sure why the authors say that LDS [2] does not support inductive learning?\\n\\nIn the original LDS paper, the authors evaluated LDS only in the transductive setting and admitted that \\u201cAdding additional nodes after training (the inductive setting) would currently require retraining the entire model from scratch.\\u201d From our understanding, the restriction to the transductive setting is because LDS aims at directly optimizing the discrete probability distribution on the edges of the underlying graph (i.e., all pairs of nodes in the graph). Hence, it cannot handle unseen nodes/graphs in the testing phase. Unlike LDS, our model instead optimizes a shared learnable similarity function between any two node features, which thus can be used to construct a graph for any set of new nodes. 
\n\n3) \"For the running time comparison between IDGL and LDS, what are the size and number of parameters used in each model since they greatly affect the running time.\"\nOn the Cora data, the number of trainable parameters of IDGL is 28,836, and for LDS, it is 23,040. So they are comparable in terms of model size. We actually reported the training time of IDGL w/o IL (instead of IDGL) by mistake. Below is the corrected training time (mean and std.) for the various models. IDGL is consistently faster than LDS, but in general, they are comparable. We have corrected this part in the revision.\n\n Models / Benchmarks | Cora | Citeseer | Wine | Cancer | Digits\n GCN | 3 (1) | 5 (1) | -- | -- | --\n GAT | 26 (5) | 28 (5) | -- | -- | --\n LDS | 390 (82) | 585 (181) | 33 (15) | 25 (6) | 72 (35)\n IDGL | 237 (21) | 563 (100) | 20 (7) | 21 (11) | 65 (12)\n IDGL w/o IL | 49 (8) | 61 (15) | 3 (2) | 3 (1) | 2 (1)\n\n4) More related work\n\nWe thank the reviewer for referring us to several relevant and interesting works. We could do better by placing our work in a broader context. \n\n5) Minor point: please clean up duplicates in the reference list.\n\nYes, we will fix this.\"}",
"{\"title\": \"Overall responses for clarifying the main contributions of this work\", \"comment\": \"We thank all reviewers for their thorough reading and valuable comments. Before we address the specific technical questions from each reviewer, we would like to firstly focus on clarifying the key contributions of this work (our fault for not being more clear).\\n\\nVery different from the main baseline method - LDS (Franceschi et al., ICML 2019), which jointly learns graph structure and graph node embeddings by learning joint probability distribution on the edges of the graph, we achieve this goal by proposing a novel learning framework consisting of three key components:\\n (a) Iterative learning framework to refine the graph structures and graph embeddings.\\n (b) Graph learning as similarity metric learning;\\n (c) Graph regularization to control smoothness, sparsity, and connectivity;\\nAmong them, the first component (a) is the first time to be proposed (to the best of knowledge) and thus is the most important contribution. The rationale of (a) is to achieve i) refining the adjacency matrix with the updated node embeddings; ii) refining the node embeddings with the updated adjacency matrix. The components of (b) and (c) are combined to learn the graph structure with controlled sparsity and connectivity, which also play an important role in the final performance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an extension of learning graph structure and GNN concurrently, by considering that real-world graphs are often noisy and incomplete. The idea of optimizing the intrinsic graph structure iteratively for down-stream prediction tasks is interesting. Experimental results demonstrate the effectiveness of proposed method.\", \"strengths\": \"1\\uff09the paper proposes a learnable similarity metric function and a graph regularization for learning an optimal graph structure for prediction.\\n2\\uff09Besides raw node features, the paper attempts to optimize graph structures via learned node embeddings in an iterative manner. \\n3\\uff09The paper is easy to read, and experiments show that the proposed method performs well.\", \"weaknesses\": \"1\\uff09Compared with LDS [1], this work seems to overlook the bi-level optimization problem for learning model parameters based on the optimal graph structure. The reason behind this method is expected. \\n2\\uff09Although the paper claims that the dependence of raw node features for learning graph structure has been weakened, empirical analysis on this point is not given. The feature matrices in experiments are not strictly independent with graph structures.\\n3) As shown in Appendix B, too many hyper-parameters are involved. I conjecture it will be difficult to reproduce the experimental results.\\n4) Eqs.(2), (3) and (10) are problematic. Node embeddings Z should be included in them. Eq.(10) does not have theoretical proof. According to Eq.(10), the method cannot handle graphs with noisy edges. In experiments, there are edge deletions, but no edge addings. Experiments with attacked graph are expected.\\n5) Although this method is claimed efficient, it is indeed slower than the classic GNNs due to the iterative operation. The details of training time comparison between this method and GNNs such as GCN and GAT will be helpful. I was wondering why this method is faster than LDS. Is it due to removing the bi-level optimization problem ? \\n6) Although the method can handle inductive training, it is hardly scale to big networks. Pubmed is an open citation network with around 20,000 nodes similar to Cora and Citeseer. Those three datasets are popularly used in GNNs as testbed. However, Pubmed is not used in this work. I conjecture that the new method cannot handle such a big dataset efficiently.\\n\\nOverall, this proposed method is well motivated, but the technical novelty is limited.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper introduces an iterative method called IDGL for learning both the graph structure (more precisely adjacency matrix) and parameters of the graph neural network.\\n\\nThe idea of iteratively refining an adjacency matrix A to obtain sparsity and smoothness is interesting and the experimental results are quite supportive. The main issue here is that the regularization terms in Eqs. 4, 5, 6 (which should be considered to be the most important in the paper) are exactly similar to those in [1] (see Eq. 12 in [1]). This reduces the novelty of the paper.\\n\\nOther parts in the paper such as using similarity between nodes (e.g. self-attention) to compute adjacency matrix or using the learned adjacency matrix with graph neural network is not new and have been done by many other works. There is also the rich related literature on graph generation (e.g., as in drug design), graph transformation (e.g., as in chemical reaction), structure learning in classical probabilistic graphical models, graph pooling (which is essentially building new latent graphs from an original graph), knowledge-graph completion, etc. This is not to say that the problem is solved (it isn't), but it is fair to place this work in a broader context.\\n\\nAbout the experiments, I have several concerns. First, I am not sure why the authors say that LDS [2] does not support inductive learning? LDS uses input node features to learn the unknown graph structure so I think it should be able to do inductive learning. DeepWalk or Node2Vec are examples of transductive methods because they do not use the node features. Second, for the running time comparison between IDGL and LDS, what are the size and number of parameters used in each model since they greatly affect the running time.\", \"minor_point\": \"please clean up duplicates in the reference list.\\n\\n[1] How to learn a graph from smooth signals, Kalofolias et. al. 2016\\n[2] Learning discrete structures for graph neural networks, Franceschi et. al. 2019.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper leverages metric learning to learn graph structure jointly with the learning of graph embedding. Firstly, it defines the similarity between any pair of nodes as the cosine similarity of nodal representations learned from attributes from nodes. Some tricks such as multi-head and sparsification are applied to learned cosine similarity to enhance the performance. Secondly, the authors introduce several graph regularizations to make the learned graph smooth, connected, sparse and non-trivial. Finally, the learned graph is linearly combined with the existing graph, using a hyperparameter \\\\lambda. The convergence and time complexity are analyzed. The authors design experiments on five datasets. The paper also contains some issues:\\n\\n1. Actually, the proposed framework is learning an extra graph adjacency matrix from nodal features, and further train GNN jointly on those two graphs. Therefore, the analysis of \\\\lambda is necessary. However, this part is missing in the paper.\\n\\n2. The improvements comparing with LDS is not significant. Besides, LDS only uses the optimized graph structure to train GNN, while the proposed framework use both learning structure and the original one (or the kNN result). It raises the question if the proposed framework can still out-perform LDS if LDS also takes the original graph as input to train GNN.\\n\\n3. The learned graph should be better interpreted. For example, the cosine similarity on citation graphs with sparse features is very likely to be zero. As a result, the learned graph can be extremely sparse with very few non-zero entries. It would be interesting.\"}"
]
} |
BJxnIxSKDr | Mint: Matrix-Interleaving for Multi-Task Learning | [
"Tianhe Yu",
"Saurabh Kumar",
"Eric Mitchell",
"Abhishek Gupta",
"Karol Hausman",
"Sergey Levine",
"Chelsea Finn"
] | Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data. Applications of neural networks often consider learning in the context of a single task. However, in many scenarios what we hope to learn is not just a single task, but a model that can be used to solve multiple different tasks. Such multi-task learning settings have the potential to improve data efficiency and generalization by sharing data and representations across tasks. However, in some challenging multi-task learning settings, particularly in reinforcement learning, it is very difficult to learn a single model that can solve all the tasks while realizing data efficiency and performance benefits. Learning each of the tasks independently from scratch can actually perform better in such settings, but it does not benefit from the representation sharing that multi-task learning can potentially provide. In this work, we develop an approach that endows a single model with the ability to represent both extremes: joint training and independent training. To this end, we introduce matrix-interleaving (Mint), a modification to standard neural network models that projects the activations for each task into a different learned subspace, represented by a per-task and per-layer matrix. By learning these matrices jointly with the other model parameters, the optimizer itself can decide how much to share representations between tasks. On three challenging multi-task supervised learning and reinforcement learning problems with varying degrees of shared task structure, we find that this model consistently matches or outperforms joint training and independent training, combining the best elements of both. | [
"multi-task learning"
] | Reject | https://openreview.net/pdf?id=BJxnIxSKDr | https://openreview.net/forum?id=BJxnIxSKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"nnqOsjlyKL",
"r1eSE7CioB",
"HkgASG0sjB",
"HyxLMz0oir",
"HJxlJG0ior",
"rkep9-0siB",
"rJlCxtZ59S",
"rJxF5WxU9S",
"rygCTmYRtr",
"BylK3m7RtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746556,
1573802796631,
1573802566245,
1573802509808,
1573802455580,
1573802389283,
1572636917654,
1572368785512,
1571881925798,
1571857328746
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2338/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2338/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2338/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2338/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2338/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2338/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2338/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2338/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2338/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Reviewers put this paper in the lower half and question the theoretical motivation and the experimental design. On the other hand, this seems like an alternative general framework for solving large-scale multi-task learning problems. In the future, I would encourage the authors to evaluate on multi-task benchmarks such as SuperGLUE, decaNLP and C4. Note: It seems there's more similarities with Ruder et al. (2019) [0] than the paper suggests.\\n\\n[0]\\u00a0https://arxiv.org/abs/1705.08142\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author Response to R3\", \"comment\": \"Thank you for your review! We have uploaded a revised version of the paper to address your feedback and concerns.\\n\\n1) Questions regarding tensor factorization-based approaches to multi-task learning.\\nThank you for pointing out this related literature. We have added a discussion and cited all of these methods in Section 5. We also ran experiments with [1] on MT10 and found that it did not perform well, achieving only 10% success rate. We will work to tune the implementation of this method before the final version. We note that, while tensor factorization approaches are general and interesting approaches, Mint is simpler and easier to implement and build upon, which we view as a benefit. Further, Mint is less computationally expensive, as it required about 2-3x less computation. \\n\\n2) \\\"The interpretation of why Mint works is not clear: it is not clear that the universality is what makes it work, and there are no experimental analyses of what Mint learns. \\nBeyond performance, analysis on what Mint actually learns would be clarifying. Can the sharing behavior be analyzed by looking at the trained Mint layers? Is Mint actually able to learn both of the extreme settings in practice? The non-synthetic experiments in the paper are only performed on tasks that are closely related.\\\"\\nIn light of your feedback, we have performed an additional experiment in a multi-task regime where one task is duplicated. In this case, we show that the task-specific matrices learned by Mint for the two instances of this duplicate task are more similar than the comparison between one of the duplicate task\\u2019s matrices and the matrices of other distinct tasks. See Section 6.1 for a more detailed analysis.\\n\\n3) \\\"However, in the Mint experiments, a non-linear activation is added between the two components of each layer. This could void the universality property. Is there some reason why this is not an issue in practice?\\\" \\nWe have corrected the figure in our paper to reflect our implementation of Mint in which the task-specific matrices and shared matrices are not separated by a non-linearity.\\n\\n4) \\\"More generally, it is not clear that universality is the important advantage of Mint. Some existing DMTL methods already have this property, including Cross-stitch, which is compared to in the paper. The intriguing difference with Mint is that shared and unshared structure are applied sequentially instead of in parallel. Could there be an advantage in this difference? E.g., is Mint a stronger regularizer because it forces all tasks to use all shared layers (learning the identity function for shared layers is hard), while something like cross-stitch could more easily degenerate to only use task-specific layers even when tasks are related?\\\"\\nThe usage of a sequential, rather than parallel, flow of data is significant. As the reviewer indicates, it is possible that this architecture might act as a regularizer due to the difficulty of learning to ignore useless shared transformations. In addition, we note the phenomenon observed in residual networks where many residual blocks do not learn useful features and simply converge to the identity function, suggesting that encouraging the network to take advantage of all parallel processing streams is difficult. 
Using a sequential flow of information prevents this degeneracy.\\n\\n5) \\\"As a final note, adding layers to non-Mint models to make the topologically more similar to Mint models may not help these other models. It may make them more difficult to train or overfit more easily, since they are deeper, but do not have Mint method to assist in training. Comparisons without these extra layers would make the experiments more complete.\\\"\\nWe performed such comparisons and added the results to the revised version of the paper (see the plot on the right in Figure 4). Mint still outperforms the baselines without the extra layers in the setting of MT10.\\n\\n6) \\\"What exactly are the 'two simple neural networks' that produce the goal-specific parameters for goal-conditioned RL? Do these maintain the universality property?\\\"\\nThey are 2-layer ReLU networks that take in the goals and return the goal-specific Mint layers. We have added this information to Section 4 in the revised version of the paper. In Lemma 1 in the paper, there are no requirements on the Mint layers, and thus the universality property is still maintained.\\n\\n7) \\\"Can Mint be readily extended to layer types beyond FC layers? This may be necessary when applying to more complex models.\\\"\\nConceptually, Mint can be readily extended to any type of layer in the sense that we can include blocks of \\u201ctask specific -> shared\\u201d layer for any type of neural network layer. However, the requirement on invertibility (required for universality) is a stronger assumption in layers such as convolutions.\"}",
"{\"title\": \"Author Response to R2\", \"comment\": \"Thank you for your review. We have uploaded a revised version of the paper to address all of your concerns. Below, we address the feedback that you provided.\\n\\n(1) \\u201cSection 2 did not clearly illustrate what's being trained and what's being tested, and whether we care about the generalization performance of each task, or the generalization performance to new tasks (generated from P(T)).\\u201d\\nThank you for pointing out this lack of clarity. We have revised Section 2 to explain that we care about performance across all the tasks that we train on. Specifically, we have a fixed set of K tasks, and we wish to obtain high performance across all of these K tasks. We do not care about generalization performance.\\n\\n\\u201cSome notations are confusing there--for instance, i seems to be indicating tasks, but then there is a z_k as task indicator.\\u201d\\nWe have changed the notations so that the tasks are denoted by T_k and the task indicators are z_k. The distinction between T_k and z_k is that T_k is the task itself whereas z_k is an indicator of the task that is provided as input to a neural network. \\n\\n(2) \\u201cWhile having an universal expressive power is good, it is easily achieved by adding an indicator variable (z_k) per layer (similar to task-specific-all-fc in the experiments). So the guarantee does not seem to be closely related to explaining the proposed approach.\\u201d\\nAdding an indicator variable per layer is not sufficient for achieving universal expressive power. Specifically, using task-specific weights in this way amounts to adding task-specific bias terms to the activations of the network which processes the inputs. We have extended the theory (Lemma 1 and its proof) to the case where a task indicator is added to each layer to explain why this does not achieve universal expressive power. \\n\\n\\u201cIt is strongly suggested to introduce FiLM in more detail and compare it with the proposed approach more clearly in design, theory, and experiments.\\u201d\\nWe have added a more in-depth discussion and theoretical comparison of Mint, FiLM, and task indicator conditioning in Section 3.2, and we have a direct empirical comparison to FiLM in the MT10 experiments (see Figure 3).\\n\\n(3) \\\"I believe the authors have *not* answered their first proposed question \\\"does our method enable effective multi-task learning both in settings where there is substantial overlap in tasks and where there is little overlap\\\" properly. \\\"\\nWe agree that defining \\u201clittle overlap\\u201d and \\u201csubstantial overlap\\u201d between tasks is difficult. In our RL experiments, we selected MT10 and MT50 to highlight multi-task learning with relatively little overlap, as the agent must learn distinct skills. In contrast, we selected the goal-conditioned pushing environment as a multi-task learning environment with relatively larger overlap between tasks. In both experiments, we observed the benefits of Mint over other multi-task learning approaches.\\n\\n(4) \\\"It is suggested to analyze the matrices learned by the proposed approach. Do the matrices contain reasonable task correlations?\\\"\\nTo perform this analysis, we ran a multi-task experiment among a set of tasks where two tasks are exactly the same, and compared the matrices learned for these two tasks, in comparison to those from two different tasks. 
Specifically, we computed the L1 norm of the difference between the two matrices of the same task and compared that with the L1 norm of the difference between two task matrices corresponding to different tasks. We have added this analysis to Section 6.1 (see Figure 4). \\n\\n(5) \\\"It looks a bit strange to me that there is no discussion on regularizing the linear transformation matrices...Have the authors considered the possibility?\\\"\\nWe experimented with regularizing the linear transformation matrices of Mint by maximizing the pairwise cosine distance between them. We found that this regularization did not impact the performance of Mint.\\n\\n(6) \\\"The authors are overly-emphasizing what they want to do (interpolating between independent networks and shared network). This occupies multiple redundant paragraphs in the early sections. \\\"\\nWe reduced the redundancy in Section 3.1.\\n\\n(7) \\\"One baseline that could have been considered is to just train a fully-shared network (without z_k), and a fully-independent one.\\\"\\nIn the MT10, MT50, and goal-conditioned pushing experiments, we used a fully-shared network (SAC) and a fully independent one (independent) and compared these methods to Mint. See Figures 3 and 4.\"}",
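The task-matrix analysis described in point (4) reduces to an elementwise L1 distance between learned Mint matrices; a tiny illustrative sketch (variable names are hypothetical):

```python
import numpy as np

def l1_distance(m_a, m_b):
    """Elementwise L1 norm of the difference between two Mint matrices."""
    return np.abs(m_a - m_b).sum()

# If tasks 0 and 1 are duplicates of the same task while task 2 differs,
# one would expect l1_distance(M[0], M[1]) << l1_distance(M[0], M[2]).
```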
"{\"title\": \"Response to all Reviewers\", \"comment\": \"To address the reviewers\\u2019 concerns, we have ran several new experiments & made several updates to the paper listed below:\\n(R1, R2) Improvement in clarity and removal of redundancy\\n(R3) Comparison to shallower network\\n(R3) Discussion of tensor factorization approaches\\n(R2) More detailed discussion of and comparison to FiLM and task-indicator conditioning\\n(R2, R3) New analysis of learned task matrices\\n\\nLastly, we discovered a bug in the CIFAR experiments, derived from the open-source implementation of routing networks. The bug was that methods were being trained for 3 epochs instead of 50. We unfortunately did not have time to rerun and verify all of the CIFAR results in time for the paper revision, so we are omitting those experiments from the current revision of the paper. We will add these experiments to the final version of the paper, once completed.\\nEven without the results on CIFAR, we believe the goal-conditioned RL experiment, the two multi-task RL experiments, as well as the new analysis and additional comparisons, sufficiently illustrate the merit of Mint.\"}",
"{\"title\": \"Author Response to R1\", \"comment\": \"Thank you for your review. Below, we address the feedback that you provided.\\n\\n\\u201cIf the relations among tasks make the model between the both extreme cases, how is the performance of the proposed model?\\u201d\\nIn our goal-conditioned pushing experiments, the relation among the tasks is between the extreme cases. In this setup, Mint outperforms independent training and performs slightly better than joint training.\\n\\n\\u201cWhat does \\u2018when the shared weight matrices are not learned\\u2019 mean?\\u201d\\nWe have removed this statement to avoid confusion and revised the text. What we meant by this statement was that even if the shared weight matrices are no longer changing (e.g. they have been fully optimized), there exist task-specific Mint layers which can allow the Mint network to express the same transformations of the input as an optimal task-specific network.\\n\\n\\u201cTheorem 1 requires that each W^(l) is invertible, which implies that W^(l) is a square matrix. This requirement may not be satisfied in many neural networks. In this case, does Theorem 1 still hold? If not, Theorem 1 is not so useful.\\u201d\\nIn practice, we can design the Mint network such that W^(l) is always an MxM square matrix for some M (M can be different for each layer), and thus satisfy the conditions of the theorem. Specifically, we can first apply a Mint layer which consists of an MxN weight matrix and then apply the shared fully-connected layer which consists of an MxM weight matrix. Therefore, the Mint layer will transform an N-dimensional input to an M-dimensional output, and the shared network contains only square matrices.\"}",
"{\"title\": \"Author Response to R4\", \"comment\": \"Thank you for your comments. We have uploaded a revised version of the paper that addresses your concerns.\\n\\nRegarding methods that use both joint and independent training, we note that for our RL experiments, we compare with superposition, which is a method that combines joint training and independent training and find that Mint outperforms superposition in both MT10 and goal-conditioned RL (see Figure 3 and Figure 5). We have also made various stylistic and typo fixes to improve readability of the text.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper, the authors propose a simple but effective matrix-interleaving method (mint) for multi-task learning, which aims to represent both joint training and independent training.\\nThe model achieves good performance on several supervised and reinforced learning datasets. Though the model resembles FiLM(Perez et al., 2018), it outperforms FiLM by a larger margin in three dataset.\\n\\nIt would be better for authors to give more detailed comparisons with models that work on combining both joint training and independent training.\\n\\nI would like to see the paper to be accepted for its simplicity and effectiveness.\", \"typos\": \"chnages -> changes?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a matrix-interleaving (Mint) based on neural networks for multi-task learning. The Mint contains a share parameter matrix and a task-specific parameter matrix.\", \"authors_claim_that_the_proposed_mint_have_the_ability_to_represent_both_extremes\": \"joint training and independent training. However, if the relations among tasks make the model between the both extreme cases, how is the performance of the proposed model?\\n\\nWhat does \\\"when the shared weight matrices are not learned\\\" mean? Are the shared weight matrices randomly initialized and then fixed without updating?\\n\\nTheorem 1 requires that each W^(l) is invertible, which implies that W^(l) is a square matrix. This requirement may not be satisfied in many neural networks. In this case, does Theorem 1 still hold? If not, Theorem 1 is not so useful.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper propose a single-network approach to multi-task learning by adding a task-specific linear transformation layer after each fully connected layer. The authors prove that the addition of such a layer keeps the expressive power of the network for each task. They also discuss how the linear transformation (parameterized by a transformation matrix and a bias vector) can be represented in a discrete manner in usual multi-task supervised learning and in a continuous manner (by two other neural networks) in goal-conditioned reinforcement learning. Experiments demonstrate the superiority of the proposed single-network approach.\\n\\nThe proposed approach is single and elegant. It is recommended to weak-reject the paper because of the following key reasons.\\n\\n(1) Problem formulation is far from clear, perhaps because of the lack of clarity in writing. In particular, the super short Section 2 did not clearly illustrate what's being trained and what's being tested, and whether we care about the generalization performance of each task, or the generalization performance to new tasks (generated from P(T)). Some notations are confusing there---for instance, i seems to be indicating tasks, but then there is a z_k as task indicator. Even for the main proposed approach in Section 3.1, the notations are loosely used in nature. For instance, it is hard to understand what the authors mean by \\\"train separate neural networks to output the Mint matrices and biases\\\"---there is no information about the \\\"training data\\\" for learning those neural networks.\\n\\n(2) Theoretical justification is at best shallow, or at least in the context that the authors have put it. While having an universal expressive power is good, it is easily achieved by adding an indicator variable (z_k) per layer (similar to task-specific-all-fc in the experiments). So the guarantee does not seem to be closely related to explaining the proposed approach (though the guarantee is nice to have). The authors contrast the guarantee with what FiLM (a competitor approach) can do, but in the experiments FiLM is not taken as a competitor in multi-task supervised learning, leaving a big gap between theory and practice. In the flow presented by the authors, it is strongly suggested to introduce FiLM in more detail and compare it with the proposed approach more clearly in design, theory and experiments.\\n\\n(3) It is hard to understand whether the experiments are reasonably designed. In particular, the two settings take different sets of competitors, and there is little information on why those competitors are selected, whether they represent state-of-the-art, etc.. The authors highlight that the proposed approach uses much fewer parameters but other than that it is hard to infer why the proposed approach is better. Is it better because there is more overfitting for the competitor's approaches given more parameters? Is it better because it is easier to tune? The task-specific-all-fc (which is of similar # parameters to the proposed approach) result particularly looks suspicious to me but there is no other information to double-check on why the proposed approach is better. 
In particular, I believe the authors have *not* answered their first proposed question \\\"does our method enable effective multi-task learning both in settings where there is substantial overlap in tasks and where there is little overlap\\\" properly---their best evidence may have been MT10 and MT50 experiments, but even in those experiments, I am not sure whether the authors want to take the results as suggesting there are \\\"substantial overlap\\\" or \\\"little overlap.\\\"\", \"some_other_suggestions\": \"(4) It is suggested to analyze the matrices learned by the proposed approach. Do the matrices contain reasonable task correlations (i.e. for two similar tasks, are the matrices somewhat similar) to understand more about the proposed approach.\\n\\n(5) It looks a bit strange to me that there is no discussion on regularizing the linear transformation matrices, as it seems possible to embed the task relations through the regularization. Have the authors considered the possibility?\\n\\n(6) The authors are overly-emphasizing what they want to do (interpolating between independent networks and shared network). This occupies multiple redundant paragraphs in the early sections. It is better to remove some of those and use the space for more solid results, such as clarifying the notations.\\n\\n(7) One baseline that could have been considered is to just train a fully-shared network (without z_k), and a fully-independent one. Then, use validation to select the better network and compare with the proposed approach.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a factorization method for learning to share effectively in deep multitask learning (DMTL). The approach has some very satisfying properties: it forces all sharable structure to be used by all tasks; it theoretically captures the extremes of total sharing and total task independence; it is easy to implement, so would be a very useful baseline for future methods; and it is able to effectively exploit task similarities in experiments, and outperforms some alternative DMTL methods.\", \"i_have_two_main_concerns_with_the_work\": \"(1) It is most closely related to MTL factorization methods, but does not discuss this literature, or provide these experimental comparisons; (2) the interpretation of why Mint works is not clear: it is not clear that the universality is what makes it work, and there are no experimental analyses of what Mint learns.\\n\\nW.r.t. (1), there are several DMTL approaches that factorize layers across shared and task-specific components, e.g., [1], [2]. Such approaches are extensions of factorization approaches in the linear setting, e.g., [3], [4]. Compared to previous DMTL approaches, Mint is more closely related to these linear methods, as it takes the idea of factorizing each model matrix into two components and applies it to every applicable layer. In particular, the formal definition (i.e., without nonlinear activation between M and W) of Mint appears to be a special case of the more general factorizations in [1]; an experimental comparison [1] would make the conclusions more convincing, e.g., that universality is important.\\n\\nHowever, in the Mint experiments, a non-linear activation is added between the two components of each layer. This could void the universality property. Is there some reason why this is not an issue in practice? \\n\\nMore generally, it is not clear that universality is the important advantage of Mint. Some existing DMTL methods already have this property, including Cross-stitch, which is compared to in the paper. The intriguing difference with Mint is that shared and unshared structure are applied sequentially instead of in parallel. Could there be an advantage in in this difference? E.g., is Mint a stronger regularizer because it forces all tasks to use all shared layers (learning the identity function for shared layers is hard), while something like cross-stitch could more easily degenerate to only use task-specific layers even when tasks are related?\\n\\nBeyond performance, analysis on what Mint actually learns would be clarifying. Can the sharing behavior be analyzed by looking at the trained Mint layers? Is Mint actually able to learn both of the extreme settings in practice? The non-synthetic experiments in the paper are only performed on tasks that are closely related. \\n\\nAs a final note, adding layers to non-Mint models to make the topologically more similar to Mint models may not help these other models. It may make them more difficult to train or overfit more easily, since they are deeper, but do not have Mint method to assist in training. Comparisons without these extra layers would make the experiments more complete. Do cross-stitch and WPL share the conv layers across all tasks like Mint in Table 1? 
They should, to make it a clear comparison.\", \"other_questions\": \"-\\tWhat exactly are the \\u201ctwo simple neural networks\\u201d that produce the goal-specific parameters for goal-conditioned RL? Do these maintain the universality property?\\n-\\tCan Mint be readily extended to layer types beyond FC layers? This may be necessary when applying to more complex models.\\n\\n\\n[1] Yang, Y. & Hospedales, T. M. \\u201cDeep Multi-task Representation Learning: A Tensor Factorisation Approach,\\u201d ICLR 2017.\\n[2] Long, M., Cao, Z., Wang, J., & Philip, S. Y. \\u201cLearning Multiple Tasks with Multilinear Relationship Networks,\\u201d NIPS 2017.\\n[3] Argyriou, A., Evgeniou, T., & Pontil, M. \\u201cMulti-task Feature Learning,\\u201d NIPS 2007.\\n[4] Kang, Z., Grauman, K., & Sha, F. \\u201cLearning with Whom to Share in Multi-task Feature Learning,\\u201d ICML 2011.\"}"
]
} |
Byl28eBtwH | Learning Cluster Structured Sparsity by Reweighting | [
"Yulun Jiang",
"Lei Yu",
"Haijian Zhang",
"Zhou Liu"
] | Recently, the paradigm of unfolding iterative algorithms into finite-length feed-forward neural networks has achieved great success in the area of sparse recovery. Benefiting from available training data, the learned networks have achieved state-of-the-art performance in terms of both speed and accuracy. However, the structure behind sparsity, imposing constraints on the support of sparse signals, is often essential prior knowledge but is seldom considered in existing networks. In this paper, we aim at bridging this gap. Specifically, exploiting the iterative reweighted $\ell_1$ minimization (IRL1) algorithm, we propose to learn the cluster structured sparsity (CSS) by reweighting adaptively. In particular, we first unfold the Reweighted Iterative Shrinkage Algorithm (RwISTA) into an end-to-end trainable deep architecture termed RW-LISTA. Then, instead of element-wise reweighting, global and local reweighting manners are proposed for cluster structured sparse learning. Numerical experiments further show the superiority of our algorithm against both classical algorithms and learning-based networks on different tasks. | [
"Sparse Recovery",
"Sparse Representation",
"Structured Sparsity"
] | Reject | https://openreview.net/pdf?id=Byl28eBtwH | https://openreview.net/forum?id=Byl28eBtwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oNJhPl8LlN",
"HygwxY8niB",
"rkgDrRBhoB",
"SylaFTH2sH",
"SyxMwM1B5B",
"HyxE0iR-cS",
"rylJ1Qhx9H"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746526,
1573837038808,
1573834303496,
1573834117142,
1572299354272,
1572101068037,
1572025047417
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2337/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2337/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2337/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2337/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2337/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2337/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper is proposed a rejection based on majority reviews.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for you constructional review\", \"comment\": \"Q1: \\\"Motivation for CSS is that there is structure in the recovered signal \\u2014 however no comparison of the recovered structure is made. While it is true, that is the signal is perfectly recovered, it would follow the structure from this data was obtained, however no such guarantees can be made for non-zero errors. \\\"\", \"a1\": \"This paper is discussing cluster structured sparsity (CSS) recovery, and the main goal is to utilize the structure priori knowledge to help the recovery of the signal. So we focus more about the signal itself and only meaured our algorithm by NMSE. However, to make this paper more convicing, we've tested support error rate (SER) of LISTA and RW-LISTA. Specifially, let T(x) denote the support of x and n denote its length, we define SER as $\\\\frac{[T(x)\\\\cup T(x^*) - T(x)\\\\cap T(x^*)]}{n}$. On synthesized data, the comparison of LISTA and RW-LISTA is:\\n| SNR | 15dB | 20dB | 25dB | 30dB | 35dB | 40dB | \\n| LISTA | 0.2327 | 0.2234 | 0.2323 | 0.2466 | 0.2504 | 0.2436 | \\n| RW-LISTA | 0.0282 | 0.0175 | 0.0119 | 0.0076 | 0.0059 | 0.0059 |\", \"q2\": \"\\\"Why is LISTA not compared under all varialtions in section 3.2?\\\"\", \"a2\": \"Thanks for your kind notification. Privously we doesn't consider LISTA in section 3.2 because it has been compared in section 3.1. Now we have include LISTA in section 3.3. Please refer to Figure 6 of the paper.\", \"q3\": \"\\\"I would also like to see what happens whens a pure learning based approach (such as denoting auto-encoders) is used to recover the signal. Do they perform worse than Rw-LISTA? \\\"\", \"a3\": \"Thanks for you valuable opinion. There've been some work using pure learning based approaches for compresive sensing. For example, Mousavi et al. (2015) has applied a three layer stacked denoising autoencoder (SDA) for compressive sensing. However, in the original paper Sigmoid is used as the activation of each layer, which is not suitable for recovery of signal with gaussian distribution. Based on our simulation, SDA performs quiet bad on synthesized data with its nonzero coefficients following stardard gaussian distribution , but behaves a little better on MNIST dataset. The result on MNIST dataset is:\\n| SNR | 5 dB | 10dB | 20dB | \\n| SDA | - 8.08dB | -8.56dB | - 8.88dB | \\n| LISTA | -12.51dB | -15.75dB | -23.53dB | \\n| RW-LISTA | -14.05dB | -17.17dB | -24.23dB | \\nStill, the performance of SDA is not as good as deep architectures unfolded by iterative algorithm (LISTA).\", \"q4\": \"\\\"'considered as unsupervised approaches since all the paramters are fixed instead of learning from data.' \\u2014 this is incorrect\\\".\", \"a4\": \"Thanks for your kind remind. We found the use of supervised vs. unsupervised misleading. So we have corrected it and instead use learnable vs. unlearnable to denote the distinction between classical SR solvers and recent deep learning based solvers.\"}",
"{\"title\": \"Thank you for you careful review and kind notification\", \"comment\": \"Q1: \\\"The paper is slightly hard to read due to many typos, and hard-to-read sentence.\\\"\", \"a1\": \"Thanks for your careful reading and kind notification of the paper. We'll fix these typos and meticulously proofread the paper.\", \"q2\": \"\\\"I am not sure how it shows that the sparse representation is learning the underlying structure?\\\" / \\\"Also, I think a more intuitive explanation of how the reweighting helps preserve structure is needed.\\\"\", \"a2\": \"On the one hand, the experiments show that RW-LISTA significantly outcomes LISTA and classical CSS solvers in clustered structuer sparse recovery, revealing that the proposed algorithm has successfully (learned to) utilize the structure priori for better recovery. On the other hand, as shown in Figure 2, the learned reweighting block favors cluster pattern of the signal while depressing isolated coefficients.\", \"q3\": \"\\\"I did not really see any substantial improvements in performance as compared to say LISTA\\\" /\\n\\\"Perhaps some improved experiments in real datasets.\\\"\", \"a3\": \"Thanks for the constructional idea. In different noist conditions, the performance of LISTA against RW-LISTA on MNIST dataset is (measured by NMSE):\\n| SNR | 5 dB | 10dB \\t | 20dB |\\n| LISTA | -12.51dB | -15.75dB | -23.53dB |\\n| RW-LISTA | -14.05dB | -17.17dB | -24.23dB |\\nIt shows that the superority of RW-LISTA against LISTA is more clear under lower SNR conditions, so we've changed the SNR in section 3.3 to 5 dB.\"}",
"{\"title\": \"Thanks for your valuable and professional opinion\", \"comment\": \"Q1: \\\"The proposed approach in too incremental.\\\" / \\\"The proposed approach is a straightforward combination of LISTA and RwISTA and the section of global/local dependence regarding cluster-sparse structures is unsurprising.\\\"\", \"a1\": \"This paper aims at addressing the problem of clustered structure sparse recovery. The main contribution of this paper is the insight of learning (to recover) cluster structured sparsity by reweighting. Note that traditional reweighting algorithm (candes et al, 2008) was not designed for structured spasity. We expand the reweighting mechanism to structured problems by the power of deep learning. Specifically a reweighting block is proposed to introduce local and global dependencies of the signal's coefficients. And existing algorithms of RwISTA and LISTA are chosen for the incarnation of our idea. However, it should not be limited to RwISTA and LISTA, other algorithms (such as AMP) are also applicable.\\n\\nMoreover, detailed analysis and experiments is also an important contribution of our work. In section 3 we conduct exhaustive simulations by comparsion to both classical and deep learning based algorithms in different settings. The result of experiments verifies the effectiveness and superioriy of our algorithm especially in noisy cases.\", \"q2\": \"\\\"Even though signals may exhibit cluster-sparsity, the size of such cluster might vary widely and it is therefore questionable if such patterns can be best captured via connections in the proposed reweighing blocks. Indeed for some blocks wider or lower neighborhoods might be needed to capture various radii of dependence. \\\"\", \"a2\": \"Even though the block size could vary widely, each coefficient is most statistially related to its very nearing neighborhoods. The larger the distances between two coefficients are, the less the dependence is. Considering the variation of block size, a reasonable receptive field of the reweighting block suits best for recovery. This idea is verified by figure 4(b) and (c): in figure 4(b) there is only 5 to 6 elements excitated in each row of the learned adjacency matrix, while in figure 4(c) a reweighting block consists of two 1*3 convolution layers achieves best performance, which is able to couple each coefficient with its neighboring 4 coefficients.\\n\\nIndeed, some previous works also showed that a relatively reasonable field of connection is enough to handle the variation of block sizes. For example, in PCSBL (Fang et al, 2014) and CluSS (Yu et al, 2015), clustered pattern is captured only via the connection of 2 neighboring coefficients, i.e., $x_i$ is only coupled with $x_{i-1}$ and $x_{i+1}$.\", \"q3\": \"\\\"The use of unsupervised vs supervised is misleading.\\\"\", \"a3\": \"Thanks for your kind notification. We've revised it and instead use learnable vs. unlearnable to refer the distinction between classical SR solvers and recent deep learning based solvers.\", \"q4\": \"\\\"A wider variety of block structure with more or less variability in block size etc should be considered. In addition it would be important to compare against vanilla Rw-ISTA, as one of the classical CSS solvers.\\\"\", \"a4\": \"Thanks for you suggestion, we have considered the variety of block structure by changing sparsity and block number of the signal in section 3.3. Please refer to Figure 6(a) and Figure 6(c). 
Also, we've made comparison with vallina Rw-ISTA Figure (4).\", \"q5\": \"Ideally we are looking to improve in less favorable conditions of low SNR.\", \"a5\": \"We've revised the experiments in section 3.3 and set SNR to 5dB, where the difference of LISTA and RW-LISTA is more distincted.\", \"q6\": \"As an alternative to adopting RwISTA it might be pertinent to compare against a counterpart using fused lasso penalty (Tibshirani et al 2005).\", \"a6\": \"To our best knowledge, fused lasso is proposed to encourage both sparsity and smoothness of recovered signal. However, in our setting the magnitude of signal's coefficients could vary so we think it's not proper to compare with fused lasso.\"}",
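The local reweighting block referenced in A2 (two 1*3 convolution layers) can be sketched as follows; this is an illustrative PyTorch reconstruction with assumed channel counts, not the authors' code:

```python
import torch.nn as nn

class ReweightingBlock(nn.Module):
    """Two 1x3 conv layers: each coefficient's weight then depends on its
    4 neighboring coefficients (2 on each side)."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=3, padding=1),
            nn.Softplus(),  # keep the reweighting coefficients non-negative
        )

    def forward(self, x):
        # x: (batch, n) current sparse estimate -> (batch, n) weights
        return self.net(x.abs().unsqueeze(1)).squeeze(1)
```

One would expect such a block to learn larger weights (stronger shrinkage) for isolated coefficients than for clustered ones, consistent with the behavior described for Figure 2.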
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Paper extends LISTA by introducing learned re-weighting for the problem of sparse signal recovery. Paper combines the insights from RW-ISTA (a re-weighted iterative algorithm with fixed parameters and LISTA, a learned iterative algorithm without re-weighting.\", \"results_show_that\": \"(a) On synthetic data, performance of Rw-LISTA is superior to many variants of LISTA for various SNRs\\n(b) On synthetic data, Rw-LIST is better than classical approaches when SNR, Sparsity and other factors are varied. Why is LISTA not compared under all these different variations? \\n(c) On MNIST, Rw-LISTA is much better than non-learned approaches, but seems very close to LISTA. \\n\\nIt strikes me that authors make all evaluations based on NMSE. However, motivation for CSS is that there is structure in the recovered signal \\u2014 however no comparison of the recovered structure is made. While it is true, that is the signal is perfectly recovered, it would follow the structure from this data was obtained, however no such guarantees can be made for non-zero errors. \\n\\nI would also like to see what happens whens a pure learning based approach (such as denoting auto-encoders) is used to recover the signal. Do they perform worse than Rw-LISTA? \\n\\nAt present, I think the paper doesnot meet the standard of ICLR submissions. However, if authors address my concerns, I am happy to change the rating.\", \"minor_comments\": \"\\u201cconsidered as unsupervised approaches since all the paramters are fixed instead of learning from data.\\u201d \\u2014 this is incorrect. A significant human intuition went into design of these systems. While one can argue that even NNet architecture design requires human intuition, the trend is towards using less domain-specific architectures.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"An approach is proposed to learn sparse representations while preserving some structures in the data.\\n\\nThe idea seems quite nice where we want to learn the structure that induces sparsity instead of simply sparse representations. The idea is to extend a recent algorithm RwISTA by adding reweighting block. The reweighing block changes weights to encode whether coefficients in the model learned by the network are similar or dissimilar to each other. To build the reweighing block convolutional layers are used.\\n\\nThe paper is slightly hard to read due to many typos, and hard-to-read sentences. Also, I think a more intuitive explanation of how the reweighting helps preserve structure is needed. Right now it is very difficult to understand (except maybe for experts working on similar problems?) Regarding the experiments, most of them are run with synthetic cases. It seems like the approach is compared with several recent approaches though showing good results. On the MNIST data, results are shown where the images are recovered from the sparse representation. I did not really see any substantial improvements in performance as compared to say LISTA. Maybe I am misunderstanding what is being evaluated in Figure 7. Also I am not sure how it shows that the sparse representation is learning the underlying structure? Maybe some re-writing is needed to make this clearer.\\n\\nI think the paper is interesting but needs some polishing to make it easier to read and perhaps some improved experiments in real datasets.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The paper combines deep learning and compressed sensing. Specifically the RW-LISTA algorithm is proposed for cluster-structured sparse recovery, building upon two existing methods: the Reweighted Iterative Shrinkage Algorithm and the LISTA algorithm. The reweighing process is employed to infer the dependencies between coefficients and encourage cluster structure. Strategies for local and global dependence are presented. The approach is evaluated on synthetic and real datasets.\", \"The paper considers and important topic. However the proposed approach is too incremental and the empirical evaluation could also be improved. In addition the presentation needs more work. Specifically:\", \"the use of unsupervised vs supervised is misleading. Traditional CS approaches are in fact supervised as they map to a regression problem when both input sensing matrix and response vector are available. The distinction has more to do with the ability (or lack of ability) to learn representations.\", \"the proposed approach is a straightforward combination of LISTA and RwISTA and the section of global/local dependence regarding cluster-sparse structures is unsurprising.\", \"Even though signals may exhibit cluster-sparsity, the size of such cluster might vary widely and it is therefore questionable if such patterns can be best captured via connections in the proposed reweighing blocks. Indeed for some blocks wider or lower neighborhoods might be needed to capture various radii of dependence.\", \"As an alternative to adopting RwISTA it might be pertinent to compare against a counterpart using fused lasso penalty (Tibshirani et al 2005).\", \"Experiments are limited: a wider variety of block structure with more or less variability in block size etc should be considered. In addition it would be important to compare against vanilla Rw-ISTA, as one of the classical CSS solvers.\", \"It is somewhat disappointing that the proposed approach should be better the higher the SNR, where other approaches can do well enough. Ideally we are looking to improve in less favorable conditions of low SNR.\"]}"
]
} |
B1liIlBKvS | Selfish Emergent Communication | [
"Michael Noukhovitch",
"Travis LaCroix",
"Aaron Courville"
] | Current literature in machine learning holds that unaligned, self-interested agents do not learn to use an emergent communication channel. We introduce a new sender-receiver game to study emergent communication for this spectrum of partially-competitive scenarios and put special care into evaluation. We find that communication can indeed emerge in partially-competitive scenarios, and we discover three things that are tied to improving it. First, that selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive. Second, that stability and performance are improved by using LOLA (Foerster et al, 2018), especially in more competitive scenarios. And third, that discrete protocols lend themselves better to learning cooperative communication than continuous ones. | [
"multi agent reinforcement learning",
"emergent communication",
"game theory"
] | Reject | https://openreview.net/pdf?id=B1liIlBKvS | https://openreview.net/forum?id=B1liIlBKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mRjBB3rwc3",
"HJliF6cniH",
"SklHlnc2ir",
"SJgddF_hoS",
"BJgC_BOnir",
"r1eSxdPjsr",
"SkgaB38jsH",
"H1lAfjIosH",
"rylU7YLiiB",
"BJlPGfe9iH",
"SkgUaRgFoS",
"Bklu1pgFiB",
"SygY0oetiB",
"SyxjGoHDoS",
"Hklb-YSPir",
"Skla0QSwsB",
"r1gZEZSwoB",
"S1eiWjvIir",
"HJlic5wLoB",
"BygopOPLjr",
"rJl-EdvIir",
"H1gm5SDUiH",
"BkezmBvUoB",
"Bkxo5-v8sB",
"HkleHtCd9S",
"H1eW6F5J9S",
"r1xb36iCFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746497,
1573854594522,
1573854189469,
1573845360506,
1573844341528,
1573775340599,
1573772356910,
1573772054353,
1573771549668,
1573679631357,
1573617342416,
1573616863627,
1573616592986,
1573505810988,
1573505273247,
1573503957462,
1573503272841,
1573448451278,
1573448338900,
1573447874668,
1573447721146,
1573447051007,
1573446937535,
1573446034654,
1572559159615,
1571953081214,
1571892648735
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2336/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"There has been a long discussion on the paper, especially between the authors and the 2nd reviewer. While the authors' comments and paper modifications have improved the paper, the overall opinion on this paper is that it is below par in its current form. The main issue is that the significance of the results is insufficiently clear. While the sender-receiver game introduced is interesting, a more thorough investigation would improve the paper a lot (for example, by looking if theoretical statements can be made).\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper Update and Thanks\", \"comment\": [\"We\\u2019ve updated the paper based on the discussions here:\", \"Fixed Figure reference\", \"Changed papers cited for claim that previous work didn\\u2019t find selfish communication\", \"Rewritten our section on Crawford and Sobel to reflect this discussion\", \"Added results for bias = 180 in the appendix for completeness\", \"We want to thank all the reviewers for their comments and responses, especially Reviewer 2 with whom we\\u2019ve had an in-depth and productive discussion. Though there are still disagreements, we are deeply thankful for their responsiveness and willingness to interact with our work.\", \"Thank you\"]}",
"{\"title\": \"Author Response\", \"comment\": [\"We don't believe Nash equilibria are essential to our specific research but we do think they are useful and future work with a Nash analysis could extend and improve this current work\", \"Knowing nash equilibria does not give full clarity on learning dynamics\", \"A Nash equilibrium is just a local attractor which does not guarantee clarity about general learning dynamics. Neural networks are famous for guarantees at possible minima while explanations of learning dynamics are still an active research area\", \"Even if we found a learning algorithm that converged towards nash equilibria it could trivially find the non-communication equilibrium, which is the lower bound on performance anyways\", \"\\u201cNash equilibrium strategy has no prescriptive force. At best the equilibrium identifies conditions under which learning can or should stop (more on this below) ,but it does not purport to say anything prior to that\\u201d (Shoham et al, 2003)\", \"Nash equilibria can be useful for playing against new opponents but this is not possible without an alternative paradigm (e.g. meta-learning) for emergent communication\", \"Agents co-learn a protocol and two agents trained separately cannot communicate with each other at test time\", \"Even if an agent learned a Nash strategy, it would not be useful against another a new opponent who would not understand their language\", \"Finding and defining Nash equilibria are not necessary for any of our findings\", \"Empirically showing that communication with selfish agents is possible and conditional on the cooperative nature of the game does not require knowing Nash equilibria\", \"We know the upper bound on the effectiveness of communication (total error = bias) without needing to know whether there is a Nash equilibria there\", \"Showing that the regular emergent communication setup fails to learn in competitive scenarios does not require Nash equilibria. Proving or disproving possible equilibria in the competitive failure cases would not definitively prove or disprove whether agents theoretically could learn to communicate (as they could learn a non-equilibrium or an unstable equilibrium)\", \"LOLA is shown to improve cooperation and communication statistically significantly without needing to show whether it converges to equilibria\", \"Our analysis of discrete/continuous communication is a purely practical and deals with learning dynamics\", \"What knowing Nash equilibria could help with\", \"Setting tighter upper bounds on the achievable stable communication. Perhaps the communication found by LOLA is even closer to optimal\", \"Stronger arguments on whether communication is feasible to achieve in highly competitive scenarios. A lack of Nash equilibria does not guarantee infeasibility but it is a possible indicator\", \"Proof of stable competitive communication protocols\", \"Deterministic Receiver\", \"We noted your review but we point out the futility of making the receiver stochastic.\", \"If we implement a stochastic receiver as described, it would be functionally equivalent to a deterministic receiver. The difference is only adding noise during training.\", \"Rock paper scissors is a normal form game, whereas our game is extended form and therefore the second player is conditioned on the first and can be deterministic\", \"SOTA MARL\", \"The convergence guarantees for those two algorithms are only for differentiable games. 
Emergent communication with discrete messages is not a differentiable game\", \"SOS is an extension of LOLA that only improves performance in situations such as the tandem game.\", \"In IPD, SOS was shown to be nearly identical to LOLA.\", \"If our situation resembled the tandem game, LOLA would not improve upon regular RL\", \"Since situation does not resemble the tandem game, we should not expect SOS to do any better than LOLA\", \"SGA\", \"requires knowing the loss and jacobian of other players\", \"converges to a Nash equilibria, which could be trivially non-communication\", \"Analysis of Failures\"], \"best_case_learning_is_clearly_defined\": \"a sum of players $L_1$ errors = the bias and one agent is not manipulating the other. This is not the best **stable** configuration but why does it need to be stable? Agents can maintain a reward with a communication protocol in flux\\nWe're not actually at equilibria previously so the issue with more competitive scenarios must be that learning dynamics break down (which is why LOLA, with better dynamics, can help)\"}",
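Since LOLA is central to the discussion above, a minimal sketch of a first-order LOLA-style update (Foerster et al., 2018) for two agents maximizing values V1 and V2 may help; this is an illustrative reconstruction, not the paper's implementation, and all names are assumptions:

```python
import torch

def lola_step(theta1, theta2, V1, V2, lr=0.1, opp_lr=0.1):
    """Agent 1 differentiates through one anticipated naive-learning
    step of agent 2, then ascends its own value at that point."""
    # Opponent's anticipated gradient step (kept in the autograd graph).
    grad2 = torch.autograd.grad(V2(theta1, theta2), theta2, create_graph=True)[0]
    theta2_lookahead = theta2 + opp_lr * grad2
    # Agent 1's gradient evaluated at the opponent's anticipated parameters;
    # this includes the path through theta2_lookahead.
    grad1 = torch.autograd.grad(V1(theta1, theta2_lookahead), theta1)[0]
    return theta1 + lr * grad1
```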
"{\"title\": \"Marginalizing and Simplicity Experiments\", \"comment\": \"Interesting Reciprocity\\nWe agree it is different and reciprocity is indeed interesting. We think that learning to agreeing to a protocol under competition is also interesting\\n\\nDifferentiability \\nI think we may have slightly different meanings for \\u201cdifferentiable\\u201d. In RL terminology, gradient estimators are used specifically when something is not differentiable. If it were differentiable, we wouldn\\u2019t need RL and would just use the exact gradient. And \\n\\nDoes Marginalizing Across Messages Help?\\nMarginalizing across the messages is possible and it is just about allowing the sender to backpropogate through the receiver. We coded the marginalizing sender (https://controlc.com/4b0f1f50 ) but after running found no significant difference to the regular reinforce setup (https://pasteboard.co/IGPUlSC.png ) which implies that REINFORCE is a good enough gradient estimator and the instabilities are not from gradient estimation. Simply calculating the return as a function of the two agents isn't sufficient to get to equilibria, you must, at minimum, also take into account the learning dynamics (e.g. LOLA) \\n\\n\\nLearnability\\nI think we\\u2019re very much on the same page about learnability, that is what we\\u2019re most interested in. The question is whether \\u201clearning works properly\\u201d as you state. Previous works have not managed to make it \\u201cwork properly\\u201d and we wanted to demonstrate that it could! \\n\\nCao et al argue that you need prosocial agents, we show that you don't. Jaques et al argue that you need their SOTA learning rule for selfish communication, but we show that you regular RL is sufficient given the right setup. We think a toy task is just the scenario to show these details and carefully investigate why previous approaches may have been unsuccessful. We believe it is necessary so future work doesn\\u2019t automatically assume you need SOTA or fully cooperative agents to emerge communication.\\n\\nNotation\\nWe think that notation is consistent. We explain generally that agents get some loss between their target and the action ($L$) and then we specify that we choose this loss to be an $L_1$ loss on the circumference of the circle.\\n\\nHyperparameters Tuning\\nWe think that hyperparameter searches are generally done in all of machine learning and especially deep learning which is very sensitive to it. Our situation even more strongly demands hyperparameter search as we are not just looking to find the \\u201cbest\\u201d model but make arguments about whether something is feasible or not. If we do not do an exhaustive search, we cannot argue that something is infeasible (e.g. communication in high competition)\\n\\nFinally, the whole question is how to resolve the issue of communication vs manipulation (we use $L_2$). Your suggestion to \\u201cuse exact gradients\\u201d does not address that because our situation is not that simple (see below)\\n\\nIs Our Setup Too Simple?\\nLooking at similar situations (e.g. https://github.com/facebookresearch/EGG/tree/master/egg/zoo/language_bottleneck/guess_number ) and the architectures present there, our game is more complex but our architectures are comparable. \\n\\nGraphs\\nThe shading is indeed the standard error of the mean. This is in the description for Figure 2 but we will add it to the other descriptions as well to make it more clear. 
\\n\\nError Bars\\nGiven that even fully cooperative emergent communication does not converge occasionally, and our situation is complicated by divergent interests, we don\\u2019t think our graphs are too high variance. Each point is a different experiment where 5 random seeds run for a particular bias, so the graphs definitely look higher variance than usual RL graphs charting reward over time. We are honest with our random seeds and make no attempt at tuning them. At the very least, our results are significant despite the variance.\"}",
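The marginalizing sender mentioned under "Does Marginalizing Across Messages Help?" replaces the REINFORCE estimate with the exact expectation over the discrete messages. A simplified, illustrative sketch (the linked paste is the authors' actual version; names here are assumptions):

```python
import torch
import torch.nn.functional as F

def marginalized_sender_loss(message_logits, receiver, target, loss_fn):
    """Exact E_{m ~ sender}[L] over all discrete messages, so gradients
    flow through the receiver instead of relying on REINFORCE.
    loss_fn(action, target) is assumed to return a per-example loss."""
    probs = torch.softmax(message_logits, dim=-1)  # (batch, vocab)
    batch, vocab = probs.shape
    losses = []
    for m in range(vocab):
        # Fixed one-hot message m, broadcast over the batch.
        msg = F.one_hot(torch.tensor(m), vocab).float().expand(batch, vocab)
        action = receiver(msg)                     # receiver's response to message m
        losses.append(loss_fn(action, target))     # (batch,)
    losses = torch.stack(losses, dim=-1)           # (batch, vocab)
    return (probs * losses).sum(dim=-1).mean()     # exact expectation
```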
"{\"title\": \"Negotiation Game Followup\", \"comment\": \"You\\u2019re right, to emerge communication in the negotiation game we should carefully construct the sampling of agent preferences (weights) and sampling of items to be more cooperative. This would indeed be a nice addition to the paper and we would be happy to add it to the camera-ready if we manage to overcome the learning dynamic issues of the game.\\n\\nBy \\u201cunderstand the nature of the game\\u201d we mean that RL agents must discover the equilibria but given the game setup and learning dynamics, one agent learns to dominate early on. The dominating agent can then prevent the other from discovering better options and the learning dynamics of RL can make it infeasible for a badly losing agent to recover in this game.\\n\\nThe game is indeed not zero-sum, we meant to write \\\"fully competitive\\\" (unless one agent has weight 0 for an item, agents are competing on all items, there is no common reward they can optimize together, they must always compromise)\\n\\n@Concision: Agreed.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Sorry, we meant \\\"fully competitive\\\" not \\\"zero-sum\\\", the argument is based on that assumption\\n\\nIt seems we're agreed that a carefully constructed game is necessary (and one of the important ways we improve upon previous emergent communication work). The distinction between reward sharing vs selfish is secondary and we think selfish agents just make understanding the game/reward clearer, though functionally we could change the structure to reward-sharing.\"}",
"{\"title\": \"Reviewer response\", \"comment\": \"@zero-sum:\\nSee my comment above. A two-player negotiation game should not be zero-sum (otherwise there is no point in negotiating). \\n\\n@Reward sharing vs game design: \\nMy claim was that there is no set distinction, that's all. In particular relying only on game design rather than reward sharing is by no means a restrictive assumption, since any reward sharing scheme could be simply implemented as a new 'selfish' game with updated rewards.\"}",
"{\"title\": \"Reviewer response\", \"comment\": \"@It's interesting you mention the negotiation game because figuring out why communication didn't emerge there was the starting point of our research:\\n\\nI am glad to hear that this was the starting point. I believe a thorough investigation into this topic and improvement of the setting to address the issues you find would indeed make for a very interesting contribution.\\n\\n@\\\" We found this to be a stackelberg game where the second player has little recourse if they do not understand the nature of the game.\\\"\\n\\nI am not sure what this sentence is supposed to mean. What does it mean for an RL agent to 'understand' the nature? I am going to come back to the point that at the end of the day it's about the possible equilibria of the game. In particular I still don't understand whether this is an issue with the game design (all good strategies are trivial) or the learning methods. \\n\\n@\\\"highly competitive\\\": \\nThis is not inherent in the game setup. You can easily imagine that the payouts agents obtain for different objectives are different enough such that cooperation is more encouraged. \\n\\n@ \\\"the game is often zero-sum\\\":\\nThe game is most likely rarely zero-sum (both agents would have to have the same rewards per item).\\n\\n@5000 character limit:\\nIt is good to be thorough, but there also is value in distilling thoughts and arguments into concise form (\\\"If I had more time, I would have written a shorter letter\\\").\"}",
"{\"title\": \"Response\", \"comment\": \"@Exact gradients:\\nI think it would be easy to discretize the state space without changing the fundamental nature of the game (in particular given that there already is a discrete communication channel). \\n\\n@Iteration:\\nYes - that's partially what I am trying to say. It is harder but (I believe) also a lot more interesting from a research point of view. The initial state is easy to deal with in the discrete version of the game, which I would recommend either way. In the discrete version the initial state just becomes a different one-hot.\\nI agree that truth / lying can only ever be defined given the 'convention' that has been established across the channel.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your review and comments, we appreciate your view and we hope that the paper was readable and enjoyable to someone who isn\\u2019t an expert in emergent communication.\\n\\nWe will fix the figure reference and would like to know which acronyms you found unfamiliar that we can define them when they are first used.\\n\\nYou make an important point about the generalizability of our results, our setup is indeed a simplified game that spans only the range of 2-player cooperative/competitive games and our learning is simplified by a simple state/action space. The reasoning for this is to be able to carefully control the learning dynamics:\\n- tunable bias that perfectly reflects level of competition\\n- no communication through a non-linguistic action space\\n- clear way to differentiate between manipulation and communication\\n- quantified optimal cooperative performance (maximizing cooperative reward)\\n\\nMore complex 2-player environments should only differ in the complexity of learning the state, action, and game dynamics. But the feasibility of learning to communicate should still be relevant and our hope is to encourage strong baselines on selfish emergent communication. We also want researchers who don\\u2019t successfully achieve communication to investigate the underlying reasons, from competitiveness to issues with learning dynamics.\\n\\nFor 3+ players, we expect game dynamics to be different and indeed it could be trivially reformulated to be zero-sum (Balduzzi et al, 2019) so that concept is not meaningful. There are competitive situations not covered by 2 players, e.g. even if 3 agents are fully competitive with each other, two of them could be incentivized to cooperate with one another in order to defeat the third. Still, we think our fundamental claim is generalizable from the simple 2-player cooperative/competitive nature to the general idea that communication should emerge naturally if cooperation is always preferred to non-cooperation (which we quantify). \\n\\nOur second two points should also be valid. LOLA should still be an essential tool to model opponents and allow communication even in cases where cooperation can be exploited. And discrete communication may still be preferable to continuous.\\n\\nStill, we would look forward to expanding on the research question of selfish communication to 3+ agents and more complex scenarios (e.g. ad-hoc communication using meta-learning) in future work\"}",
"{\"title\": \"Response To Misunderstanding + Reward Sharing\", \"comment\": [\"Thank you for getting back so promptly! Sorry about the long response, we didn\\u2019t initially realize the 5000 character limit and were just trying to be thorough. Thank you for going through it all and responding, we really appreciate it.\", \"Misunderstanding\", \"We agree that self-interested agents should only be discussed within the context of a game\", \"We don\\u2019t think Cao et al\\u2019s comments were meant to be just within the context of their game\", \"Jaques et al (2019) is from a similar set of authors and cites Cao et al for their point that Jaquest et al\\u2019s self-interested agents should not learn in a completely different game!\", \"We believe our experiments clarify some issues in Cao et al (2018)\", \"Cao et al conjecture that the issue is that the game is not iterated. We show in our experiments that iteration is not necessary to emerge communication under competition which could be counter to their narrative\", \"The real reason communication does not emerge is likely two-fold 1. The game dynamics allow one agent to dominate under non-communication 2. The game is likely too competitive (their setup does not control the level of competition but our experiments found it to be generally high)\", \"We\\u2019ve investigated the negotiation game and gone into more detail in our reply above\", \"Cooperative Reward Function vs Specific Game Structure\", \"There is a fundamental difference, specifically in MARL, between a game structured for cooperation and reward sharing: Reward sharing doesn\\u2019t guarantee cooperation on it\\u2019s own.\", \"Take Cao et al and assume their negotiation game is zero-sum (it isn't always, based on the description). What does it mean for two agents playing a zero-sum game to be 30% competitive?\", \"Let\\u2019s think about being competitive over 30% of the possible reward. Can we define 30% competitive with reward-sharing?\", \"We can give both agents a reward $R = 0.7 * R_{you} + 0.3 * R_{them}$ but what does that do? Given your weight for an item $W_you$ and your opponent's weight for it $W_{other}$ there's three cases:\", \"1. both agree you should take the item ($W_{you} * 0.43 > W_{them}$ )\", \"2. both agree they should have the item ($W_{them} * 0.43 > W_{you}$)\", \"3. you compete over the item ($0.43 * W_{you} < W_{them} < 2.3 * W_{you}$)\", \"Is this 30% competitive? It depends on the weights given to a player and the distributions of items being negotiated over.\", \"Just doing reward sharing does not guarantee a specific level of competition. Even a partially cooperative reward function may not guarantee cooperation as the optimal strategy\", \"Our point is that reward-sharing still needs a carefully constructed game to specify the level of competition and encourage cooperation, so we make a distinction between reward-sharing and actually guaranteeing the level of competition by carefully creating a game with cooperative/competitive dynamics.\"]}",
"{\"title\": \"Negotiation Game Experiment\", \"comment\": [\"It's interesting you mention the negotiation game because figuring out why communication didn't emerge there was the starting point of our research\", \"We could not reproduce all the curves for selfish agents seen in Cao et al (2018)\", \"We contacted the authors and together still did not manage to reproduce their curve for \\u201cProposal\\u201d \\u201cLinguistic\\u201d and \\u201cBoth\\u201d\", \"We could not figure out why their agents performed more fairly in the presence of a linguistic channel (\\u201cLinguistic\\u201d) despite one agent dominating in the non-communication case and therefore having no incentive to communicate\", \"We explained our results and verified all differences we could think of but the authors could not suggest a reason for the difference\", \"Deepmind did not publicly release their code\", \"Instead, we found the first player to be regularly dominating the second player because of learning dynamics\", \"The first player could not make an accept action on their first move which made it more likely that the second player was the first to learn the accept action\", \"The second player would see two possible outcomes if the accept action could succeed: 1. Gain whatever reward was associated with accepting the deal 2. Continue negotiating and likely go past the final round and end up with no reward\", \"Once the second player was constantly accepting the deal given to it, the first player would learn to give it a deal with as little reward as possible\", \"We found this to be a stackelberg game where the second player has little recourse if they do not understand the nature of the game. The first player would give the second one a deal and the second player would just accept it\", \"The main obstacle to communication was the domination of one agent as well as the highly competitive nature of the game\", \"Since the first agent dominated the other under non-communication, it was never in their interest to communicate\", \"We allowed agents to mask their communication and found that the dominating agent always masked their communication\", \"We achieved some communication when we did partial reward sharing (Peysakhovich and Lerer, 2017) but found that we still needed to tune hyperparameters to disadvantage the first agent\", \"If the reward sharing was high enough, it became the first agent\\u2019s best interests to communicate and communication emerged.\", \"We eventually decided that the negotiation game was not a good test bed for these experiments because the learning dynamics allowed one agent to dominate\", \"A simple idea, inspired by Lowe et al (2019), is that communication should only emerge if the possible reward under communication is greater than under non-communication for both agents\", \"Since the negotiation game has one agent dominating and the game is often zero-sum, we do not think any algorithm can lead to communication\", \"We decided to come up with a different game to specifically look at the role of cooperation/communication unfettered by common issues in emergent communication games: communication through the action space, unquantified game dynamics, unquantified optimal possible play, and badly tuned baselines.\", \"Sorry for going over the 5000 character limit with these two posts but we wanted to give more detail on the results of the experiment you were asking about.\"]}",
"{\"title\": \"2/3 Interesting Experiments Response\", \"comment\": [\"Tabular Exact Gradient\", \"We can create exact gradients by having the message be continuous and allow the sender to backpropagate directly through the receiver.\", \"This is functionally similar to your suggestion about \\u201cmarginalizing across messages\\u201d but cleaner to implement\", \"This is also just the continuous game with a more powerful sender that essentially has one-sided access to the receiver\\u2019s parameters\", \"We had a couple experiments but did not pursue them deeply because we preferred a more symmetric setup. We will take a closer look\", \"Because our state space is continuous, we cannot do tabular RL but instead choose to use policy gradient. Would you want to see a discrete state space instead of a continuous one?\", \"This should not improve learning stability and may actually harm it because discrete spaces are not ordered by default\", \"Our agents would need to learn the ordering of the space (e.g. input 1 < input 2) as well as how to use it\", \"Preliminary experiments we made with discrete action spaces when designing the game were not promising\", \"Iteration\", \"We did actually implement an iterative game in exploratory experiments.\", \"It is more difficult to design and train effectively because we had to add a notion of statefulness\", \"One option is to use an RNN to maintain state but this complicates learning dynamics\", \"Another option is to condition on a single previous state (as in LOLA) but the two-step nature of our game making it difficult to specify an unbiased initial state. Agents need to distinguish the first state as having no history but it is not straightforward to have a null previous state for our continuous state space\", \"Training was less stable and learning dynamics were more complicated.\", \"Ultimately, it was much more computationally intensive which made our extensive hyperparameter searches much less feasible to run.\", \"Sadly, we could not achieve baselines we felt to be reasonable\", \"The idea of alternating lies and truth over rounds is interesting but \\u201clying\\u201d is not as straightforward as it seems\", \"Because emergent communication is not separately learning a meaning and use but learning the two simultaneously\", \"It can be hard to learn what is \\u201clying\\u201d because it relies on there being an existing meaning and subverting that meaning\", \"Learning agents would not be able to distinguish between lying and misinterpreting a signal and would just adjust their distribution of meanings (\\\"without somewhat agreeing to meanings, agents cannot use those meanings to compete (Searcy & Nowicki, 2005; Skyrms & Barrett, 2018).\\\")\", \"We chose to focus on the initial problem of learning to communicate honestly but noisily\"]}",
"{\"title\": \"response part 4\", \"comment\": \"@- We could plot each agent\\u2019s individual reward. A couple test runs of our basic setup were found to have non-communication (90/90) error split\\n\\nYes - please include this for completeness. \\n\\n@- If you have specific experiments and plots you would like us to make, we would be happy to investigate\\n\\nSure, there are 3 interesting cases that are missing from the paper:\\n1) Exact gradient version of the game in a tabular setting. This will rule out any learning issues. \\n2) Iterated game with cheap talk. Can agents learn to use the cheap talk channel to reciprocate / communicate? If so, which components of modern MARL are required for this? Note that this should be done in a setting where the cheap-talk channel allows for interesting reciprocity. Ie. I can tell you the truth now or lie to you and then you can reciprocate during the next time step.\\n3) Go back to the negotiation setting (https://arxiv.org/abs/1706.05125) and show that with SOTA MARL self-interested agents can indeed learn to use the cheap-talk channel. This would be a result. Equally, properly understanding why agents do not learn to use the cheap-talk channel would be interesting. \\n\\nAfter carefully reading through the author response I do not believe that the paper in the current form is ready for publication, but if those experiments were carried out I would be happy to reconsider.\"}",
"{\"title\": \"Comments part 3\", \"comment\": \"@ 2. \\u201cIteration in the parameter space\\u201d.\\nThis is fundamentally different from an iterated game. Interesting reciprocity requires agents to be able to respond to the actions of other agents within the episode.\\n\\n@- The loss is not differentiable wrt to the sender:\\nWell, it clearly is - otherwise you would not be able to optimize it using policy gradient (which does nothing but estimate the gradient). My point is that the game is small and simple enough that you don't have to use policy gradient at all. You can simply marginalize across the possible messages and calculate the exact expected return as a function of the weights of the 2 agents. This would not only allow for easier reproducibility but also alleviate that concerns around hyper-parameters etc. \\n\\n\\n@\\\"This would be interesting if the game was complex\\\":\\nThis goes back to my point about 'self-interested' being meaningless outside a specific task. \\nWe can trivially construct a large number of simple games in which all good strategies / equilibria clearly involve communication (one of which you constructed here). \\nSo by construction, if learning works properly, the agents will learn to communicate in these settings. \\nTherefore, the only challenge here is learnability. Indeed - previous claims about 'emergent communication' amongst self-interested agents have been made in complex settings. \\nWe know that SOTA MARL works (ie. can find equilibria) in small games, so a toy task teaches us very little. \\n\\n@\\\"- We use standard ML notation with referring to a generic loss function and specifically referring to the absolute distance loss function. Please let us know if this was not clear.\\\"\\n\\nI don't think you do, at least not consistently: \\\"$L^i = L(a, T^i)$. By using an $L_1$ loss between the angle of the target and\\naction $L^i_1(T^i, a) = min(|T^i\\u2212a|, 360 \\u2212|T^i\\u2212a|)$\\\"\\n\\n@. We are open to other ways of achieving that if you have suggestions.\\n\\nYes - get rid of the extensive hyperparamter tuning. The game is simple enough you can do exact gradients and you should not need tuning. This game does not require Deep-RL at all and can entirely be done in a tabular setting.\\n\\n@Confusing Graphs and Error Bars\\nNone of the graphs mention what the shading is. Is this the standard error of the mean? If so, why is it so large in Figure 3 c)? Figure 2 d) also points to instability in your training process. \\nAgain - the game is simple enough that there should be no question about the training being reliable.\"}",
"{\"title\": \"response to Part 2\", \"comment\": \"@Nash:\\nI strongly disagree. The one big advantage of a toy problem is that it can be studied in terms of equilibria, shedding light onto the learning. \\n\\n@\\\"Deterministic receivers are standard in basic emergent communication (see examples in EGG by Kharitonov et al)\\\":\\nThis makes sense in fully cooperative settings, but not in general sum. As I pointed out in my review, even something as simple as rock-paper-scissors requires a mixed strategy. \\n\\n@SOTA marl:\\nYes, I would start with a stochastic policy and an algorithm that actually has convergence guarantees in general-sum. Examples that come to mind at SGA (https://arxiv.org/abs/1802.05642) or SOS (https://arxiv.org/abs/1811.08469).\\n\\n@2-player 2-action general-sum games,.. or an expected payoff equivalent to the payoff at some Nash equilibrium:\\n\\nThat is not the game you are playing. Also, I believe their statements hold for stochastic policies, not for deterministic ones. Lastly, in your paper you do not at all analyze averaged policies. \\n\\n@Analysis of Failures\\nI disagree. Your paper cannot analyze learning failures since you do not have an understanding of what the best case learning even is. This goes back to understanding the Nash equilibria of the game. \\nFor example, even when the interests are partially aligned the agents stop learning to communicate. Is that happening because it's no longer an equilibrium or because learning is breaking down?\"}",
"{\"title\": \"reply to \\\"part 1\\\"\", \"comment\": \"[Side note: I believe breaking the response into 4 parts goes against the spirit of the 5000 characters limit.]\\n\\n@(Foerster et al 2016) and (Lazaridou et al 2018) are bad citations for previous literature making this claim:\\nNo problem. \\n\\n\\n@\\\"- We could not figure out what is the exact misunderstanding you are pointing to. Could you rephrase it perhaps?\\\"\\n\\nYes - talking about whether or not 'self-interested' agents cooperate / communicate can only ever be meaningfully discussed given a specific game or reward structure. The claim of Cao et al (2018) should be seen in the context of their specific game. Clearly, self-interested agents will learn to communicate in games where their payouts happen to be correlated, with a game of identical payouts being an extreme example.\", \"along_the_same_line\": \"\\\"- We would like to make a small distinction between the reward function being cooperative and the game encouraging cooperation.\\n - In previous work in emergent communication, there have been papers that have simply given the same reward to both agents or given a part of one agent\\u2019s reward to the other explicitly (Lerer and Peysakovich 2018) which is a cooperative reward function. \\\" \\n\\nIt doesn't matter if you explicitly define a new reward function that is R1' = R1 + alpha * R2 or just come up with a game that is inherently cooperative within some range, where R1 happens to be R1' . The resulting problem is the same, so I don't think this distinction is meaningful.\", \"so_to_summarize\": \"Self-interested (or 'selfish' as you like to call it) by itself is a meaningless distinction outside of a specific game or reward structure, since it includes the limiting case of two agents that are both self-interested in optimizing the same reward function.\"}",
"{\"title\": \"Response to Reviewer 1 Part 2\", \"comment\": [\"Semantic Meaning Of Communication\", \"For emergent communication to consistently improve performance over non-communication, it must be semantically meaningful.\", \"So our protocol is meaningful, but we are more concerned with how effective the meanings are as opposed to what they are exactly.\", \"And communicative efficacy is better measured with rewards rather than qualitative methods (see Lowe et al 2019)\", \"Qualitative methods can be tricked by spurious mutual information metrics caused by certain network architectures\", \"Looking at reward with communication vs non-communication is a clear and foolproof way of measuring the efficacy\", \"Could you clarify what you mean by \\u201cgrounded\\u201d?\", \"We use \\u201cgrounded\\u201d to refer to a symbol having a semantic meaning (essentially a mapping of symbols -> meanings). Any effective emergent communication can be said to be at least partially \\u201cgrounded\\u201d since there must be semantic meaning conveyed by the agents in order to be effective.\", \"Kottur et al say that \\u201ccompositional language is one of the optimal policies\\u201d and point to the compositionality of grounded meanings as necessary for generalization\", \"For now, this remains a difficult term to encapsulate and we think the community has many meanings for it. Sidenote: Chris Manning had a great little speech on this exact point at last year\\u2019s ViGIL workshop (https://bluejeans.com/playback/s/jftkhICjhUnEbcglGD4qWWpHsvunBNISIZNdGdUo2AD7vD9nAq5aI2yXus70immP in Chapter 2 starting at 1:05:27)\", \"References\", \"Resnik, David B. \\u201cHow-Possibly Explanations in Biology\\u201d. Acta Biotheoretica (1991) ,39(2):141\\u2013149.\"]}",
"{\"title\": \"Response to Reviewer 1 Part 1\", \"comment\": [\"Thank you for your comments and corrections! We\\u2019ve taken quite some time to mull everything over and address your concerns point by point below. We would be happy to discuss further and more in depth.\", \"Crawford and Sobel\", \"We put a discussion about the differences between our paper and Crawford and Sobel (1986) in a post above.\", \"On your point about our phrase \\u201can existing language\\u201d, we agree and are also happy to revise the wording. Our point was more subtle that they look at specific equilibria where there exists a fixed mutual understanding as opposed to looking at learning dynamics where the language is emerged and in flux. Would it be more appropriate to write they study \\u201cfixed languages at equilibria\\u201d?\", \"Section 4.2 Deterministic Mappings\", \"In section 4.2, we do not make an assumption of deterministic mappings and indeed, during training, the sender is stochastic, choosing a symbol based on categorical distribution over a vocabulary (as is standard in emergent communication).\", \"Our main point was that given a random initialization of a non-learning sender (learning rate close to 0) and a learning receiver with a regular learning rate, it is highly likely that the learning agent would dominate.\", \"This does not necessitate a deterministic sender since a stochastic sender\\u2019s mappings can be mostly learned (and therefore dominated) in nearly all cases.\", \"The only case where a non-learning sender cannot be dominated by a learning receiver would be a sender with a Nash policy (e.g. all states are mapped to the same symbol and communication is uninformative). But initializing a sender to a Nash policy is very unlikely given the random initialization methods of neural networks.\", \"So in the vast majority of cases, the learning agent would indeed do significantly better than its non-learning opponent.\", \"We can revise the example to make it more clear that the situation is highly likely but not guaranteed.\", \"Justifying The Circular Game\", \"We would like to clarify that this game is not a benchmark but closer to a diagnostic tool.\", \"We wanted to run experiments on the full range of 2 player cooperative/competitive games and empirically show that selfish emergent communication can be feasibly achieved\", \"We also wanted to demonstrate that it is indeed the bias of the game that influences the level of communication achieved and can explain why previous literature was mistaken.\", \"We believe the game is really an extension of Crawford and Sobel, made to be smoothly tuneable from fully cooperative to fully competitive.\", \"To our knowledge, there does not exist such a game in game theory literature (Crawford and Sobel\\u2019s original game is not as easily made fully-competitive).\", \"To our knowledge, no existing games in emergent communication literature have fine-grained control of the level of cooperation/competition in the game.\", \"Algorithmic Game Description\", \"We added the extra line about updates to make it more clear when the episode ends and agents can update their weights.\", \"This is indeed not explicitly part of the game and we can remove it if it seems superfluous to understanding how the game is played by our agents.\", \"Realism of Selfish Communication\", \"We would like to stress that emergent communication is a \\u201chow-possibly\\u201d explanation (Resnick, 1991) of language emergence.\", \"In this way, we think that reward-sharing 
and full cooperation is not as realistic of a model as two selfish agents that emergent communication for selfish reasons given an environment that requires cooperation.\", \"Current literature in emergent communication has usually assumed reward-sharing and therefore is less realistic than our setting.\", \"On the topic of \\u201ctoy setting\\u201d, this is indeed exactly a toy setting to see if communication emerges.\", \"Since literature in the field of emergent communication has implied communication does not emerge, we created this toy task to see if it could and whether Crawford and Sobel\\u2019s equilibria could be feasibly achieved in the modern setting of emergent communication.\", \"We do not make any presumptions about all games, nor do we think our game should be a benchmark. What we have shown is that research in emergent communication under competition should make use of strong baselines with selfish agents and take into account the quantifiable cooperation/competitive nature of the games studied, which it sometimes does not.\", \"--- continued below ---\"]}",
"{\"title\": \"Reviewer 2 Response Part 4\", \"comment\": [\"Broken Reference\", \"We thank the reviewer for pointing this out and will fix this to read Figure 2\", \"Experiments for Bias = 180\", \"Since we are plotting the sum of rewards, the curve would be trivially at 180.\", \"We could plot each agent\\u2019s individual reward. A couple test runs of our basic setup were found to have non-communication (90/90) error split\", \"If you have specific experiments and plots you would like us to make, we would be happy to investigate\"], \"references\": \"Kharitonov et al. \\u201cEGG: a toolkit for research on Emergence of lanGuage in Games\\u201d Arxiv 2019. https://github.com/facebookresearch/EGG/\\n\\nKingma, Diederik P. and Max Welling. \\u201cAuto-Encoding Variational Bayes.\\u201d ICLR (2014)\\n\\nLanctot, Marc et al. \\u201cA Unified Game-Theoretic Approach to Multiagent Reinforcement Learning.\\u201d NIPS (2017).\\n\\nResnik, David B. \\u201cHow-Possibly Explanations in Biology\\u201d. Acta Biotheoretica (1991) ,39(2):141\\u2013149.\\n\\nSingh, Satinder P. et al. \\u201cNash Convergence of Gradient Dynamics in General-Sum Games.\\u201d UAI (2000).\\n\\nShoham, Yoav et al. \\u201cMulti-Agent Reinforcement Learning: A Critical Survey.\\u201d (2003).\"}",
"{\"title\": \"Reviewer 2 Response Part 3\", \"comment\": [\"Iterated vs One-shot Game\", \"This is a very interesting point and we are happy you brought this up\", \"Though it seems like a simple one-shot game, two things seem to make cooperation possible here\", \"1. A two stage game (1. Sender sends message 2. Receiver takes action). We did experiments on iterated prisoner\\u2019s dilemma (a one-stage game) and found that LOLA could not emerge cooperation in the one-round case though it does in the iterated game.\", \"2. \\u201cIteration in the parameter space\\u201d. Though the game itself is not iterated, the fact an agent plays with the same opponent throughout training allows them to learn conventions with their opponent e.g. it is trivial to learn a simple coordination game between RL agents that are trained together\", \"LOLA is indeed the reason for improved cooperation and communication efficacy\", \"Communication at bias = 90 is clearly better with LOLA agents than it is with our basic setup\", \"It is specifically LOLA that is helping cooperation as making that one change is sufficient to get our results\", \"You can review our code, the only difference between the LOLA scenario and REINFORCE scenario is that specific agent\\u2019s loss function reflects LOLA updates, all other code is unchanged\", \"Confusing Graphs and Error Bars\", \"Could you please specify which graphs are confusing and which error bars you feel are too large? We deeply care about the clarity and statistical significance of our paper and would be happy to improve it\", \"Sender's Differentiable Loss?\", \"In \\\"note that the loss is also differentiable with respect to the action of the 1st agent\\\" we took \\\"1st agent\\\" to refer to the sender\", \"The loss is not differentiable wrt to the sender\", \"Sender maps its state to a categorical distribution over symbols from a vocabulary and stochastically chooses a single symbol based on the distribution. This symbol is the message\", \"Receiver takes the message and deterministically chooses the target\", \"The loss is differentiable wrt to the message but not wrt to the parameters of the sender because of the stochastic choice\", \"For this reason we need to use a gradient estimator\", \"For the basic setup we use REINFORCE with a mean baseline for variance reduction. This is standard in emergent communication (see EGG by Kharitonov et al, 2019)\", \"For the LOLA setup we use DiCE because it allows us to do higher order gradient estimation\", \"This setup of a differentiable receiver and gradient estimated sender is an SCG\", \"It differs by having two different objectives, one for the sender and one for the receiver\", \"\\\"This would be interesting if the game was complex\\\"\", \"We did not aim to find a difficult existing game and show that communication under competition could be achieved because\", \"Jaques et al (2019) already emerge communication in a more complex game, albeit with a complex learning rule\", \"And if we did do this for an even more complex game without being able to easily control the exact levels of competition/cooperation, it would be more difficult to show when communication is feasible as well as precisely how it can be achieved.\", \"A complex game would likely have dynamics that are much harder to control (e.g. communication through a visual action space instead of utterances). 
This would make our arguments and conclusions weaker\", \"We believe our results are still interesting despite the simplicity of our game\", \"Previous research has looked at more complex games (e.g. Jaques et al, 2019) but has not quantified the level of cooperation/competition.\", \"We needed to create a game that not only had a quantifiable level of cooperation/competition but also allowed for it to easily tunable, spanning the range of fully cooperative to fully competitive\", \"We believe we have shown the feasibility of achieving communication under competition, setting an example for future research to use better baselines and better quantify the level of competition/cooperation\", \"Loss Function Notation\", \"We use standard ML notation with $L$ referring to a generic loss function and $L_1$ specifically referring to the absolute distance loss function. Please let us know if this was not clear.\", \"L2 as a hyperparameter metric\", \"The game with bias = 180 is indeed constant sum (see our proof in the appendix).\", \"Is there a difference between \\u201cgeneral constant sum\\u201d and \\u201cconstant sum\\u201d?\", \"We are indeed implicitly biasing towards fairness by using the $L_2$ metric\", \"We believe this is reasonable to recover the difference between \\u201ccommunication\\u201d and \\u201cmanipulation\\u201d because fair communication cannot be \\u201cmanipulation\\u201d. We are open to other ways of achieving that if you have suggestions.\", \"We also show both agents' $L_1$ losses for all hyperparameter search runs in Figure 4 to allow readers to understand the distribution of results on top of just the best hyperparameters we picked\", \"--- continued below ---\"]}",
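To make the estimator discussion above concrete, a rough sketch (our shapes and names; a single shared target is assumed for simplicity, whereas the actual game biases the two agents' targets) of the stochastic sender trained with REINFORCE plus a mean baseline, next to the deterministic receiver trained by direct gradient descent:

```python
import torch

vocab = 16
sender = torch.nn.Linear(1, vocab)       # state -> logits over symbols
receiver = torch.nn.Embedding(vocab, 1)  # symbol -> action (an angle)
params = list(sender.parameters()) + list(receiver.parameters())
opt = torch.optim.SGD(params, lr=1e-2)

def circular_l1(a, t):
    d = torch.abs(a - t) % 360
    return torch.min(d, 360 - d)

state = torch.rand(64, 1) * 360                        # batch of target angles
dist = torch.distributions.Categorical(logits=sender(state))
msg = dist.sample()                                    # stochastic choice: no gradient path
action = receiver(msg).squeeze(-1)                     # deterministic action
loss = circular_l1(action, state.squeeze(-1))          # per-example loss

reward = -loss.detach()
baseline = reward.mean()                               # mean baseline for variance reduction
sender_loss = -((reward - baseline) * dist.log_prob(msg)).mean()  # REINFORCE term
total = sender_loss + loss.mean()                      # receiver part differentiates exactly
total.backward()
opt.step()
opt.zero_grad()
```

The only gradient path into the sender runs through `log_prob`, which is the sense in which the setup is a stochastic computation graph: a score-function estimator on one side and ordinary backpropagation on the other.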
"{\"title\": \"Reviewer 2 Response Part 2\", \"comment\": [\"Nash Equilibria and MARL\", \"Though studying equilibria and Nash may be useful, we believe that it is not something our paper on multi-agent learning should focus on. Our view is strongly influenced by Shoham et al (2003) which has informed modern deep MARL and we give a brief summary of related points here\", \"We are studying how agents \\u201cshould\\u201d learn and therefore how agents \\u201cshould\\u201d act\", \"One possible agenda is \\u201cequilibrium\\u201d and asks whether a vector of learning strategies forms an equilibrium\", \"Another possible agenda is \\u201cAI\\u201d and asks what is the best learning strategy given a fixed class of possible opponents\", \"The main difference between the two agendas is \\u201cbounded rationality\\u201d\", \"\\u201cEquilibrium\\u201d assumes perfect reasoning and infinite mutual modelling\", \"\\u201cAI\\u201d starts from a base of bounded rationality and only adds mutual modelling when necessary\", \"These two agendas are not necessarily mutually exclusive but there is distinct philosophical difference\", \"Why our situation is better represented by the \\u201cAI\\u201d agenda and therefore should not be focused on convergence or Nash equilibria\", \"We believe the \\u201cbounded rationality\\u201d assumption to be more appropriate for language emergence which could be said to give a \\u201chow-possibly\\u201d model (Resnick, 1991) of the emergence of human language\", \"We already model our fixed class of opponents as being SGD learning models with similar loss structures. This class of opponents is an assumption made by LOLA and we believe it is reasonable for deep MARL.\", \"SOTA MARL\", \"We believe that, though simple, our methods are indeed state of the art for the specific problem they are tackling.\", \"We do not know of better complex gradient estimators being used in such low-dimensional input situations\", \"We believe that our results are quite good and relatively close to the theoretical optimum\", \"Do you have specific MARL learning algorithms you believe would perform better for our setup? We would be glad to implement and test them\", \"Analysis of Failures\", \"We believe we do a fair analysis of successes and failures.\", \"We find the issue that likely was underlying why previous works did not emerge communication with selfish agents (competitiveness) and do a careful analysis to show how it could be possible\", \"We look at two different popular ways of emergent communication (continuous vs discrete) and analyse how they affect our situation and the achievability of good selfish emergent communication\", \"Deterministic Receiver\", \"Deterministic receivers are standard in basic emergent communication (see examples in EGG by Kharitonov et al)\", \"We could change our receiver to be stochastic, for example, by making its output a gaussian distribution over actions and then sampling from that. 
This would not make things too different.\", \"Since we train with gradient descent, we could use the reparametrization trick to get the gradient for the receiver (Kingma and Welling, 2014)\", \"Doing this would make our stochastic receiver no different from a deterministic receiver that has an added gaussian noise in its output\", \"This would essentially only be adding variance to the learning of our receiver and simply be worse from an optimization perspective without being too functionally different\", \"IGD and Convergence\", \"We would like to clarify that we are not making theoretical arguments about all general-sum games\", \"Theoretically, Singh et al (2013) have shown that for 2-player 2-action general-sum games, independent gradient ascent with an infinitesimally small step size will lead to either convergence at a Nash equilibrium or an expected payoff equivalent to the payoff at some Nash equilibrium\", \"Please see our point about Nash and MARL for a discussion on why we are not specifically looking for equilibria or convergence\", \"--- continued below ---\"]}",
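A tiny sketch (variable names and the fixed noise scale are ours) of the equivalence claimed above: a Gaussian receiver trained with the reparametrization trick is literally the deterministic receiver plus additive noise on its output:

```python
import torch

mu = torch.tensor([90.0], requires_grad=True)  # the receiver's deterministic output
sigma = 5.0                                    # fixed noise scale (an assumption here)

eps = torch.randn(1)
action = mu + sigma * eps                      # reparametrized sample; the same gradient
                                               # path as Normal(mu, sigma).rsample()
action.sum().backward()
print(mu.grad)                                 # tensor([1.]): the noise adds variance only
```

Because the forward pass is just the deterministic output plus noise, the stochasticity contributes variance to the receiver's updates without changing the structure of the gradient, which is the sense in which the stochastic receiver is "not too different" above.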
"{\"title\": \"Reviewer 2 Response Part 1\", \"comment\": [\"Thank you for the in-depth comments, corrections, and suggestions. We\\u2019ve tried to address all your concerns point by point below and would be happy to discuss further and more in-depth.\", \"Papers Claiming Selfish Communication Doesn\\u2019t Work\", \"After reviewing all the papers we agree that (Foerster et al 2016) and (Lazaridou et al 2018) are bad citations for previous literature making this claim, many thanks for bringing this to our attention.\", \"We still believe that the view of emergent communication being possible only in cooperative settings is prevalent in the literature and believe this is an important misunderstanding to address.\", \"For Cao et al (2018), a main claim is that selfish agents cannot learn to effectively emerge communication whereas agents that share a reward function do.\", \"\\u201cSelfish agents do not appear to ground cheap talk\\u201d\", \"They conjecture in section 3.2 that this is because the game is not iterated but we show this is not necessary (more on this lower down in our comment)\", \"Instead, the game is likely too competitive and it is not necessary to share a reward function in order to communicate\", \"Jaques et al (2019) reiterate the claim of Cao et al (2018) and claim that their learning rule allows for communication between competitive agents whereas regular methods do not, without explicitly quantifying the cooperative/competitive nature of their games.\", \"\\u201cThe IC metrics demonstrate that baseline agents show almost no signs of coordinating behavior with communication, i.e. speakers saying A and listeners doing B consistently. This result is aligned with both theoretical results in cheap-talk literature (Crawford & Sobel, 1982), and recent empirical results in MARL (e.g. Foerster et al. (2016);Lazaridou et al. (2018); Cao et al. (2018)).\\u201d\", \"We also found that Lanctot et al (2017) imply that emergent communication is a purely cooperative task (in the sense that they take communication to be a paradigm of cooperation):\", \"\\u201cIn MARL, several agents interact and learn in an environment simultaneously, either competitively such as in Go [92] and Poker [39,106,73], cooperatively such as when learning to communicate [23, 94, 36], or some mix of the two [59, 96, 35].\\u201d\", \"\\\"Reward Function Is Cooperative\\\"\", \"We would like to make a small distinction between the reward function being cooperative and the game encouraging cooperation.\", \"In previous work in emergent communication, there have been papers that have simply given the same reward to both agents or given a part of one agent\\u2019s reward to the other explicitly (Lerer and Peysakovich 2018) which is a cooperative reward function.\", \"The reward function for our agents is purely their own \\u2014 selfish. They do not, a priori, have cooperative intentions. 
It is only through discovering the nature of the current game\\u2019s setup that they should realize cooperation is advantageous.\", \"Cao et al (2018) study a cooperative \\u201cprosocial\\u201d reward function in a game that is quite competitive whereas we study purely selfish reward functions but in a game whose competitive nature can be tuned by the bias.\", \"Regardless of the game \\u201cprosocial\\u201d agents are going to be cooperative\", \"We take the view of Shoham et al (2003) that if agents are not being controlled by a central designer then the interesting scenario is when \\u201clearning takes place by self-interested agents\\u201d, as opposed to prosocially-interested agents\", \"Misunderstanding\", \"We could not figure out what is the exact misunderstanding you are pointing to. Could you rephrase it perhaps?\", \"\\\"Uninteresting\\\"\", \"Though the result may seem uninteresting from the perspective of static analysis where Crawford and Sobel\\u2019s result is clear, we perform a dynamic analysis (this is explained in more detail in our Crawford and Sobel discussion).\", \"We believe it is, at minimum, interesting for the emergent communication community that seems to hold an opposing belief.\", \"Since there are two well-cited publications from top conferences (ICLR, ICML) that clearly and unambiguously state that selfish agents do not learn to communicate and one implying emergent communication is purely cooperative, we believe that there is indeed a misconception of selfish emergent communication in the field that deserves to be clarified\", \"We believe that our toy task is sufficient to show that selfish emergent communication should be feasible to achieve with modern deep RL methods, overturning that belief\", \"-- continued below --\"]}",
"{\"title\": \"Crawford and Sobel Response\", \"comment\": [\"Since both Reviewer 1 and Reviewer 2 have brought up issues related to how our work differs from Crawford and Sobel (1986), we thought we would address them together and try to clarify the differences.\", \"Crawford and Sobel do a static analysis at equilibria and give existence proofs and guarantees. In contrast, we do a dynamic analysis and focus on showing empirical feasibility of competitive selfish communication in the modern ML paradigm of emergent communication.\", \"Static Analysis vs Dynamic Analysis\", \"Crawford and Sobel study possible equilibria and the fixed communication protocols at those equilibria.\", \"They prove the existence of equilibria and their properties but do not show how to achieve those equilibria nor whether certain learning rules could lead to and maintain those equilibria\", \"We specifically study the standard setup used in emergent communication: learning by gradient descent on the receiver and REINFORCE (or variants) on the sender. We show that even in our basic scenario, it is feasible to achieve communication when the game is more cooperative than competitive.\", \"Our agents are also not seeking equilibria when they achieve communication, they are simply seeking selfish reward without taking the opponent into mind\", \"Crawford and Sobel show that communication is not possible after a degree of divergence in interests\", \"We empirically demonstrate that emergent communication is feasible with regular agents in the modern paradigm, overturning a previous misconception.\", \"We quantify the exact circumstances (level of competition) that would cause this misconception (when the game is more competitive than cooperative)\", \"We make a strong case for all future papers in competitive emergent communication to precisely quantify the level of competitiveness in the game (something that is not currently done e.g. Leibo et al (2017)). This would give a better perspective on the efficacy of achieved and achievable communication.\", \"Knowledge of the Game and Opponent\", \"Crawford and Sobel suppose that agents are \\u201cperfectly rational\\u201d and are given perfect knowledge (see Shoham et al 2003)\", \"Both players are fully aware of the nature of the game, and have knowledge of their and the other player\\u2019s reward for all situations, and always take the rational best response.\", \"Messages are modelled as states of the world with added noise\", \"We start from the assumption of \\u201cbounded rationality\\u201d and only add modelling of the opponent and game as necessary\", \"We suppose nothing about an agent\\u2019s knowledge of the other player, the rules of the game, or how they should act.\", \"Our agents use RL to discover all knowledge through trial and error, with the goal of optimizing their own reward. Our agents do not see the other player\\u2019s reward and are solely optimizing their own.\", \"We only add opponent modelling as necessary for LOLA. Agents use a model of their opponent and built into LOLA is the assumption that opponents learn with gradient descent and have similar loss objectives.\", \"Messages are simply mappings of the world to symbols and do not necessarily need to be ordered or completely cover all states of the world\", \"In short, our work seeks to offer a fundamental contribution to the field of emergent communication and machine learning. 
We are leaning on the work of Crawford and Sobel as a guide for possible equilibria but fundamentally we wish to show feasibility with learning dynamics not possibility and theoretical guarantees. Our hope with this work is to correct a misconception about selfish emergent communication prevalent in the field, to bring the theoretical contributions of Crawford and Sobel into the fold of emergent communication literature, and to give guides about how to correctly measure and possibly improve emergent communication under competition.\"], \"references\": \"Leibo, Joel Z. et al. \\u201cMulti-agent Reinforcement Learning in Sequential Social Dilemmas.\\u201d AAMAS (2017).\\n\\nShoham, Yoav et al. \\u201cMulti-Agent Reinforcement Learning:a critical survey.\\u201d (2003).\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper looks at the question of emergent communication amongst self-interested learning agents. The paper finds that \\\"selfish\\\" (ie. self-interested) agents can learn to communicate using a cheap talk channel as long as the objective is partially cooperative.\\nThe paper makes states that this is is a novel finding that contradicts the previous understanding of emergent communication in the literature (side point: at least some of the papers referenced for this claim did not at all make the claim).\", \"i_believe_there_is_a_major_miss_understanding_here\": \"As noted in the paper, self-interested agents can learn to communicate in settings in which the reward function is cooperative. Furthermore, it is also known that in 2 player zero-sum there is no incentive to learn a communication protocol.\\nThis clearly shows that talking about whether or not \\\"selfish\\\" agents can learn to communicate only ever makes sense within the context of a specific game / reward structure. \\n\\nWith this in mind, the main finding, agents learn to somewhat communicate with each other in a simple toy setting, with more communication happening when the payouts are more cooperative, is not very interesting. \\n\\nThis doesn't mean that there isn't a good paper to be written here, in principle. Finding simple settings in which SOTA multi-agent learning \\\"fails\\\", ie. doesn't find Nash policies, understanding why it fails and then finding ways to mend things is generally a good research direction. However, this would require a few things which are currently lacking from the paper: (1) clear understanding of the Nash policies for the different reward settings (2) Implementation of SOTA methods for MARL which are appropriate for this setting (3) In depth analysis of learning successes and failures, ideally in settings which have previously been studied in literature (given how task-specific this analysis necessarily is).\", \"regarding_2\": \"General sum games will generally have mixed-strategies as Nash equilibria (just think 'rock-paper-scissors'). With this in mind, using a deterministic policy for the receiver is inappropriate for making any claims about learning in general sum games.\\nFurthermore, it is well known that independent gradient descent (IGD) is not generally going to converge in general sum games (consider the loss functions X * Y and - X *Y or matching pennies). So looking at the outcome of IGD without checking for convergence means the results could be just about anything. Indeed, we don't have to go all the way to writing about emergent communication or complex \\\"sequential social dilemma\\\" to study this, those issues can easily be found in (iterated) matrix games. \\n\\nThis gets us to the second major point of the paper. To the authors' credit, LOLA [1] has been shown to help with convergence in general sum settings and to lead to the emergence of cooperation and reciprocity in iterated games. \\n\\nHowever, the key point for the \\u2018cooperation\\u2019 part is iterated. In a single shot setting (which is explored in this paper), there is simply no way for the agents to reciprocate with each other. 
So in short, I do not believe the authors' interpretation that agents learn to cooperate with each other because of LOLA, but I do believe that LOLA can help with the learning of mixed strategies (at least for the sender, given that the receiver is deterministic) and with stabilizing convergence. Lastly, the part of the experimental section is dominated by large error bars and graphs that are difficult to interpret.\", \"other_points\": \"-\\\"..but train agents to emerge their own.\\\" (and many other instances). AFAIK \\\"to emerge something\\\" is grammatically wrong (and also sounds really odd). \\n-\\\"Since the loss is differentiable with respect to the receiver, it is trained directly with gradient descent, so we are training in the style of a stochastic computation graph (Schulman et al., 2015).\\\". This is a weird statement. You don't need SCGs for training a supervised objective. Also, note that the loss is also differentiable with respect to the action of the 1st agent. It is trivial in this setting to compute the true expected return, if that is what you are after. Note my point above about deterministic policies\\n-\\\"We perform a hyperparameter search to over both agents\\u2019\\\" -> spurious \\\"to\\\"\\n-\\\"We investigate a similar scenario but concern ourselves with learning agents as opposed to fully-rational agents that have full knowledge of the structure of the game, and we do not assume that agents use an existing language, but train agents to emerge their own\\\" .This would be interesting, if the game was complex.\\n- L_1 vs L - these symbols are used inconsistently, with the subscript _1 sometimes being applied and sometimes not.\\n-\\\"we can look to extant results\\\" - s/extant/extent?\\n-\\\"We use the L2 metric only on hyperparameter search and keep L1 as our game\\u2019s loss to maintain a constant-sum game for the fully competitive case.\\\" - A few points: (a) the game is not in general constant sum (b) By doing this hyperparameter search the evaluation is strongly biased towards 'fair' attributions. This seems highly problematic. \\n-\\\"We report our results in Figure ??\\\" -> Broken reference. \\n-\\\"We do not test b = 180\\u25e6 because the game is constant-sum and therefore trivially Ls1 + Lr1 = 180\\u25e6.\\\" -> So? It would still be interesting to see what learning agents do in this setting. \\n\\n[1]: \\\"Learning with Opponent Learning Awareness\\\", Foerster et al. \\n\\n[update: I have updated the score based on the discussion with the authors]. While the paper lacks execution and conceptual clarity, I believe the game itself is interesting and could serve as a starting point for more thorough investigation.\"}",
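The bilinear example in the review above is easy to verify numerically; a quick sketch (step size is ours) of simultaneous gradient descent on the losses L1 = x*y and L2 = -x*y, which spirals away from the equilibrium at the origin instead of converging:

```python
x, y, lr = 1.0, 0.0, 0.1
for step in range(5):
    gx, gy = y, -x            # dL1/dx = y, dL2/dy = -x
    x, y = x - lr * gx, y - lr * gy
    print(step, round(x, 4), round(y, 4))
# Each simultaneous update scales x**2 + y**2 by (1 + lr**2), so the iterates diverge.
```

This is the kind of non-convergence the review argues must be checked for before interpreting the outcome of independent gradient descent in a general-sum game.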
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This ICLR submission deals with a problem of whether selfish agents can learn to use an emergent communication channel, using a sender-receiver game as a case study. It is found that communication can emerge in partially-competitive scenarios, and conditions in which this can happen are investigated.\\nThis review is delivered with the caveat that I am not an expert in this particulat field.\\nThe investigation seems relevant and the paper is well written and structured, being within the scope of the conference. Proofs in the appendix are sound to the best of my understanding.\\nThe literature review is up to date and seems overall relevant.\\nThis study should be understood as a proof of concept, given that the setting seems rather restrictive, so I am unsure that he results could be generalized.\\nThey seem anyhow promising and partially challenge the current understanding of the problem.\", \"minor_issues\": \"All acronyms in the text should be defined the first time they appear in the text.\\nLaTex problem with Fig. reference at the beginning of section 5.1.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThis paper introduces a new sender-receiver game to study emergent communication in partially-competitive scenarios. The authors find that communication can also emerge in partially-competitive scenarios and demonstrate how to encourage communication: 1) selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive, 2) stability and performance are improved by using LOLA, and 3) discrete protocols are better than continuous ones.\", \"strengths\": [\"This is an interesting paper that is well written and motivated.\", \"They have justified a new sender-receiver game that can be tuned for various levels of competition which then allows them to analyze the effects of various levels of cooperation and competition.\", \"They perform sufficient experimental analysis to show that LOLA outperforms standard methods like REINFORCE in these settings and that discrete communication lends to cooperative communication.\", \"Evaluation is good in the sense that they repeat their experiments multiple times across different random seeds.\"], \"weaknesses\": [\"Given that cheap talk is an extremely well-studied topic in economics, I feel that the authors should have devoted more time to explain the difference in setting between their work in classic pieces like those of Sobel and Crawford. The authors should properly define what they mean by learning agents versus fully rational agents, and the key differences between the two. Furthermore, nowhere do the authors in the cheap talk paper assert that agents use an existing language: the equilibrium itself assigns meaning to each sender\\u2019s message; this is not part of the problem definition per se. In fact, the work of Sobel and Crawford does not even constrain the size of the vocabulary (as was done in this paper): one of its key contributions is to show that in strictly non-cooperative settings, all equilibria must be partition equilibria, with only a finite number of messages used.\", \"Section 4.2: \\u201cinitial random mapping of targets to messages.\\u201d The authors made the assumption that this mapping has to be deterministic. Absent a proof or a citation, I find this difficult to accept. This is especially so since mixed strategies are a crucial component of games of imperfect information.\", \"The introduction of the circular game is suspect. There already exist numerous games involving cheap talk, one of them from the Sobel and Crawford paper. Why is there a need for this new benchmark?\", \"The description of the game is given as an algorithm in the appendix. This comes across as counterintuitive: why are the gradient steps being included as part of the game description? A game\\u2019s specification and the algorithm which is being used to solve it are two different things.\", \"It is difficult for me to assess the significance of these results since the authors have not presented real-world scenarios and experiments that demonstrate the importance of selfish communication. For cooperative communication we see it a lot in examples like grounded language learning, visual dialog, multi-agent communication etc. 
But I am concerned that the new setting proposed in this paper seems like a 'toy setting' to investigate if emergent communication would happen.\", \"Are the communicated symbols (discrete or continuous) semantically meaningful? It was shown in Kottur et al. (2017) that for emergent communication to occur and generalize to unseen test instances, it was crucial that the communication protocol was grounded i.e. one symbol learning to represent the color, one representing the shape, one representing the size. What is the final communication protocol learned in this case, and it is useful/interpretable in a similar sense?\", \"Typo: 'Figure ??' in line 3 of section 5.1\"]}"
]
} |
BkljIlHtvS | Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning | [
"Sébastien M.R. Arnold",
"Shariq Iqbal",
"Fei Sha"
] | Meta-learning methods, most notably Model-Agnostic Meta-Learning (Finn et al, 2017) or MAML, have achieved great success in adapting to new tasks quickly, after having been trained on similar tasks.
The mechanism behind their success, however, is poorly understood.
We begin this work with an experimental analysis of MAML, finding that deep models are crucial for its success, even given sets of simple tasks where a linear model would suffice on any individual task.
Furthermore, on image-recognition tasks, we find that the early layers of MAML-trained models learn task-invariant features, while later layers are used for adaptation, providing further evidence that these models require greater capacity than is strictly necessary for their individual tasks.
Following our findings, we propose a method which enables better use of model capacity at inference time by separating the adaptation aspect of meta-learning into parameters that are only used for adaptation but are not part of the forward model.
We find that our approach enables more effective meta-learning in smaller models, which are suitably sized for the individual tasks.
| [
"meta-learning",
"MAML",
"analysis",
"depth",
"meta-optimizers"
] | Reject | https://openreview.net/pdf?id=BkljIlHtvS | https://openreview.net/forum?id=BkljIlHtvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-5p4ADkiR",
"H1gp7puhjH",
"B1guno_3jB",
"HygiFsd3oH",
"S1lBNiOhiH",
"S1eNiqdnsH",
"HylCzquhoH",
"Byl46dOhjS",
"rygbXAUK9B",
"rygbTkp-5r",
"B1ecO9yx5H",
"SJg63_1CYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1576798746468,
1573846308864,
1573845936420,
1573845890649,
1573845804850,
1573845659716,
1573845525685,
1573845179613,
1572593177294,
1572093881249,
1571973745759,
1571842229492
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2335/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2335/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2335/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2335/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2335/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2335/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2335/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2335/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2335/AnonReviewer2"
],
[
"~Mikhail_Khodak1"
],
[
"ICLR.cc/2020/Conference/Paper2335/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a number of experiments involving the Model-Agnostic Meta-Learning (MAML) framework, both for the purpose of understanding its behavior and motivating specific enhancements. With respect to the former, the paper argues that deeper networks allow earlier layers to learn generic modeling features that can be adapted via later layers in a task-specific way. The paper then suggests that this implicit decomposition can be explicitly formulated via the use of meta-optimizers for handling adaptations, allowing for simpler networks that may not require generic modeling-specific layers.\\n\\nAt the end of the rebuttal and discussion phases, two reviewers chose rejection while one preferred acceptance. In this regard, as AC I did not find clear evidence that warranted overriding the reviewer majority, and consistent with some of the evaluations, I believe that there are several points whereby this paper could be improved.\\n\\nMore specifically, my feeling is that some of the conclusions of this paper would either already be expected by members of the community, or else would require further empirical support to draw more firm conclusions. For example, the fact that earlier layers encode more generic features that are not adapted for each task is not at all surprising (such low-level features are natural to be shared). Moreover, when the linear model from Section 3.2 is replaced by a deep linear network, clearly the model capacity is not changed, but the effective number of parameters which determine the gradient update will be significantly expanded in a seemingly non-trivial way. This is then likely to be of some benefit.\\n\\nConsequently, one could naturally view the extra parameters as forming an implicit meta-optimizer, and it is not so remarkable that other trainable meta-optimizers might work well. Indeed cited references such as (Park & Oliva, 2019) have already applied explicit meta-optimizers to MAML and few-shot learning tasks. And based on Table 2, the proposed factorized meta-optimizer does not appear to show any clear advantage over the meta-curvature method from (Park & Oliva, 2019). Overall, either by using deeper networks or an explicit trainable meta-optimizer, there are going to be more adaptable parameters to exploit and so the expectation is that there will be room for improvement. Even so, I am not against the message of this paper. Rather it is just that for an empirically-based submission with close ties to existing work, the bar is generally a bit higher in terms of the quality and scope of the experiments.\\n\\nAs a final (lesser) point, the paper argues that meta-optimizers allow for the decomposition of modeling and adaptation as mentioned above; however, I did not see exactly where this claim was precisely corroborated empirically. For example, one useful test could be to recreate Figure 2 but with the meta-optimizer in place and a shallower network architecture. The expectation then might be that general features are no longer necessary.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Khodak\", \"comment\": \"Thank you for your interest, we look forward to complementary theoretical explanations to the questions this manuscript raises.\", \"our_introduction_omits_a_theoretical_discussion_of_the_synthetic_binary_classification_experiment_for_expository_reasons\": \"as explained below, our experimental setup differs from the existing literature making comparisons difficult.\\n\\nThe remaining of this answer addresses the questions point-by-point.\", \"q1\": \"How do you reconcile the experimental results results in Figure 1 with the theory in Finn et al. [Corollaries 1 & 2] ?\\n\\nTo the best of our knowledge, existing theoretical literature for MAML requires strong convexity of the MAML loss. This is the case for Finn et al. (Assumption 2) as well as Khodak et al. and Denevi et al. in the form of L2 regularization. In our binary classification experiments, we directly minimize the binary cross-entropy -- which is not strongly convex -- and violate the assumptions in those works. For example, to match the assumptions in Finn et al., we would have to set the fast adaptation learning rate to 0 thus recovering the multi-task scenario.\\n\\nWhile we can not comment on the failure of the assumptions, we can provide the following insights. As pointed out in the manuscript, when initializing the weights of the convex model at the origin we obtain a high (>90%) post-adaptation accuracy. However, and despite our best hyper-parameter tuning efforts, reaching that point seems infeasible via gradient descent; in other words, shallow models can be hard to meta-learn. This issue is delicate to diagnose, as the training difficulty is induced by the MAML loss (non-convexity) and its evaluation (stochasticity). We note that when including L2-regularization in the MAML loss, meta-learning of the convex model becomes possible and the model reaches approximately 90% accuracy.\", \"q2\": \"How many shots were used, and how does performance improve with more shots ?\\n\\nAt every timestep we sample a new dataset consisting of 1,000 data points in $\\\\mathbb{R}^{100}$, and allow for 1 adaptation step. The meta-batch size is set to 1. In preliminary experiments, using 10x more data points does not improve learnability.\", \"q3\": \"What is the argument mentioned in the Appendix ? Does that mean the learning rate were not tuned for that experiment ?\\n\\nThis sentence in the Appendix is indeed poorly phrased and we have modified it. Naturally, all learning rates in this experiment were tuned to the best of our ability. The argument we refer to is the (empirical) one presented in Section 3.2.\", \"q4\": \"Do the experimental results in Figure 1 also hold for Reptile, or only MAML ?\\n\\nWe do not have results for Reptile, but we believe that our conclusions apply. (c.f. response to AnonReviewer2.)\", \"references\": \"1. Finn, Rajeswaran, Kakade, Levine. Online Meta-Learning. ICML 2019.\\n2. Khodak, Balcan, Talwalkar. Provable Guarantees for Gradient-Based Meta-Learning. ICML \\n3. Denevi, Ciliberto, Grazzi, Pontil. Learning-to-Learn Stochastic Gradient Descent with Biased Regularization. ICML 2019.\"}",
"{\"title\": \"References\", \"comment\": \"References:\\n1. Bernstein, Dennis S. 2018. Scalar, Vector, and Matrix Mathematics: Theory, Facts, and Formulas - Revised and Expanded Edition. Revised, Expanded edition. Princeton University Press.\\n2. Petersen, Kaare Brandt, and Michael Syskind Pedersen. n.d. \\u201cThe Matrix Cookbook.\\u201d Perrylea.com. http://www.perrylea.com/Perry_and_Dawns_Home_Page/Free_Engineering_and_Math_Text_files/Matrix%20Cookbook.pdf.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank AnonReviewer3 for their review.\\n\\nWe have substantially added to the Appendix in order to clarify some of our results.\\n\\nRegarding the issue of overparameterization of depth vs width, we have added extensive results in Appendix A.2.1 where we trained the binary linear network with varying width (w=2, 4 \\u2026 , 256) and depth (l=1, 2, 3, 4). We observe that the linear network is always able to adapt and solve the tasks regardless of the width of the hidden layers, so long as the model has at least one hidden layer.\\n\\nA discussion of the difference in behaviour between C1-C3 and FC is provided in Appendix A.2.2. For each layer of a meta-trained model, we scale the weights of the layer by a given factor before fast-adaptation. We observe that for C1-C3, this scaling does not impact the post-adaptation accuracy. However, for C4 and FC, scaling weights pre-adaptation is catastrophic for post-adaptation accuracy: by perturbing those layers, the model is not able to compute a fast-adapting update and its post-adaptation accuracy drops to chance. For more details, including a discussion of post-adaptation scaling, please refer to Appendix A.2.2.\\n\\nOn the effect of non-linearity enabling fast-adaptation, we point out that all models in Section 5.2 use non-linearities. Yet, while they are able to adapt better than chance, the non-linearity does not allow them to perform as well as deeper models.\\n\\nAs for the expository issues, we have added references [1, Section 9.1; 2, Section 10.2.2] to the derivation of Equation 5, a schematic of the Kronecker product, and a schematic and pseudo-code for our proposed method. Those are available in Appendix A.3.\"}",
"{\"title\": \"References\", \"comment\": \"References:\\n1. Duan, Yan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. 2016. \\u201cRL2: Fast Reinforcement Learning via Slow Reinforcement Learning.\\u201d arXiv [cs.AI]. arXiv. http://arxiv.org/abs/1611.02779.\\n2. Castiello, Ciro, Giovanna Castellano, and Anna Maria Fanelli. 2005. \\u201cMeta-Data: Characterization of Input Features for Meta-Learning.\\u201d In Modeling Decisions for Artificial Intelligence, 457\\u201368. Springer Berlin Heidelberg.\\n3. Rakelly, Kate, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. 2019. \\u201cEfficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1903.08254.\\n4. Baydin, Atilim Gunes, Robert Cornish, David Martinez Rubio, Mark Schmidt, and Frank Wood. 2017. \\u201cOnline Learning Rate Adaptation with Hypergradient Descent.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1703.04782.\\n5. Nichol, Alex, Joshua Achiam, and John Schulman. 2018. \\u201cOn First-Order Meta-Learning Algorithms.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1803.02999.\\n6. Rothfuss, Jonas, Dennis Lee, Ignasi Clavera, Tamim Asfour, and Pieter Abbeel. 2018. \\u201cProMP: Proximal Meta-Policy Search.\\u201d http://arxiv.org/abs/1810.06784.\\n7. https://github.com/openai/supervised-reptile/#reproducing-training-runs \\n8. Glorot, X., and Y. Bengio. 2010. \\u201cUnderstanding the Difficulty of Training Deep Feedforward Neural Networks.\\u201d Proceedings of the Thirteenth International Conference. http://www.jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf?hc_location=ufi.\\n9. https://github.com/cbfinn/maml\\n10. Lee, Kwonjoon, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. 2019. \\u201cMeta-Learning with Differentiable Convex Optimization.\\u201d arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1904.03758.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank AnonReviewer2 for taking the time to write such an extensive review. We will address the reviewer\\u2019s concerns point-by-point below.\\n\\n1. We believe MAML and many followup works such as Repitle (specifically pointed by you) share a common modeling architecture: the adaptation mechanism shares the same set of networks as the model\\u2019s learning weights for encoding inductive bias for the target tasks. Thus, we believe similar dependency on depth would likely be observed (the exact tradeoff would be different).\\n\\nNote that for drastically different architectures for meta-learning such as RL2 [1], meta-features [2], PEARL [3], the adaptation mechanism is separated (RL2 using the LSTM\\u2019s hidden states and PEARL using external embedding space). Our observation on MAML is not necessarily applicable.\\n\\n2. The choice of the Kronecker product over other decomposition methods was mostly motivated by the observation that the identity lies in the span of \\u201cKronecker factorizable\\u201d matrices. Concretely, this means that by initializing L, R to the identity, we recover gradient descent as the first adaptation step. In contrast, low-rank factorizations do not span the identity. (By definition, since the identity is full-rank.) This makes it unclear how to initialize the low-rank factors, which can make or break deep learning methods [8] Nonetheless, we report the following results for the Cholesky decomposition in Appendix A.2.3: a rank 1 Cholesky decomposition (SCNN w/ CFC1) gets approximately 70% accuracy on Omniglot, while a rank 10 decomposition (SCNN w/ CFC10) \\u2014 approximately the same number of parameters as KFC \\u2014 obtains around 80%. For CIFAR-FS, SCNN w/ CFC1 gets 32% and SCNN w/ CFC10 gets 48%. For mini-ImageNet, SCNN w/ CFC 1 gets 16% and SCNN w/ CFC10 gets 21%.\\n\\n3. We indeed state the shallow LR models (i.e. without hidden layers) induce a convex loss in Section 3.2 That is because for a single task, we are trying to solve a logistic regression problem, which is convex. As pointed out in the review, the linear network (LR + LinNet) on the same setting induces a non-convex loss due to overparameterization. \\n\\n4. # of adaptation steps for shallow and deep models (section 3.2): both the shallow and linear network encode a linear decision boundary, they will both obtain comparable performance if properly adapted long enough. The contrast using the same # of adaptation steps, however, illustrates the failure mode of MAML: it does not give good initialization points for the shallow model but does give good initialization points for a linear network. In other words, the deeper model is more amenable to adapting, while the two have the same expressiveness.\\n\\n5. Regarding the size and type of models in our experiments, we note that we carefully replicated the original classification experiments from the MAML paper. (available at [9]) To the best of our knowledge, these 4-layer CNNs are still widely used and methods that take advantage of larger networks (e.g. ResNet 12, WRN) were specifically designed for such kind of models. (c.f. Table 1 in [10]) Regarding recurrent networks, we are not aware of any work successfully combining them with MAML.\\n\\n6. Regarding computational metrics (e.g. time and memory complexity, wall-clock timings), we have added an extensive comparison in Appendix A.4. 
Asymptotic complexities for the forward pass of the linear optimizer are provided in Section 4.2, and the backward pass has similar complexity as it is computed by back-propagation. Concretely, for a n-layer meta-optimizer, the time complexity of the forward pass grows to O(n*k*sqrt(k)) and the memory complexity to O(nk). As pointed out in the review, our method trades expressivity for computation; when MAML takes 0.63 seconds to compute 1 meta-gradient (on the CIFAR-FS setting) our method takes 2.05 seconds, resulting in a 3.25x slow-down. With more adaptation steps, (e.g. for Omniglot) meta-training with meta-optimizers can be as much as 10x slower than MAML. Note that this slow-down only affects meta-training times and that inference time remains unchanged. For more information, please refer to the table available in Appendix A.4.\"}",
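A small numerical illustration of point 2 above (hypothetical names, not the paper's code): with identity-initialized Kronecker factors, the first adaptation step reduces exactly to gradient descent, which a low-rank factorization cannot achieve since the identity is full-rank:

```python
import numpy as np

def kfc_adapt(W, G, L, R, lr=0.5):
    # Applying (R.T kron L) to vec(G) equals L @ G @ R, so the full
    # Kronecker matrix never needs to be materialized.
    return W - lr * (L @ G @ R)

W, G = np.random.randn(8, 4), np.random.randn(8, 4)
# Identity factors => the adaptation step is exactly plain gradient descent.
assert np.allclose(kfc_adapt(W, G, np.eye(8), np.eye(4)), W - 0.5 * G)
```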
"{\"title\": \"References\", \"comment\": \"Bibliography:\\n1. Finn C, Rajeswaran A, Kakade S, Levine S. \\\"Online Meta-Learning\\\". 22 Feb 2019. http://arxiv.org/abs/1902.08438\\n2. Rajeswaran A, Finn C, Kakade S, Levine S. \\\"Meta-Learning with Implicit Gradients\\\". 10 Sep 2019. http://arxiv.org/abs/1909.04630\\n3. Triantafillou, Eleni, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. 2019. \\u201cMeta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1903.03096.\\n4. Lee, Kwonjoon, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. 2019. \\u201cMeta-Learning with Differentiable Convex Optimization.\\u201d arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1904.03758.Lee, Kwonjoon, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. 2019. \\u201cMeta-Learning with Differentiable Convex Optimization.\\u201d arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1904.03758.\\n5. Nagabandi, Anusha, Ignasi Clavera, Simin Liu, Ronald S. Fearing, Pieter Abbeel, Sergey Levine, and Chelsea Finn. 2018. \\u201cLearning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1803.11347.\\n6. Mi, Fei, Minlie Huang, Jiyong Zhang, and Boi Faltings. 2019. \\u201cMeta-Learning for Low-Resource Natural Language Generation in Task-Oriented Dialogue Systems.\\u201d arXiv [cs.CL]. arXiv. http://arxiv.org/abs/1905.05644.\\n7. Finn, Chelsea, and Sergey Levine. 2017. \\u201cMeta-Learning and Universality: Deep Representations and Gradient Descent Can Approximate Any Learning Algorithm.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1710.11622.\\n8. Baik S, Hong S, Lee KM. \\\"Learning to Forget for Meta-Learning\\\". 13 Jun 2019. http://arxiv.org/abs/1906.05895\\n9. Raghu A, Raghu M, Bengio S, Vinyals O. \\\"Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML\\\". 19 Sep 2019. http://arxiv.org/abs/1909.09157\\n10. Javed K, Yao H, White M. \\\"Is Fast Adaptation All You Need?\\\". 3 Oct 2019. http://arxiv.org/abs/1910.01705\\n11. Grefenstette, Edward, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, and Soumith Chintala. 2019. \\u201cGeneralized Inner Loop Meta-Learning.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1910.01727.\\n12. Nichol, Alex, Joshua Achiam, and John Schulman. 2018. \\u201cOn First-Order Meta-Learning Algorithms.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1803.02999.\\n13. Wang, Jane X., Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. 2016. \\u201cLearning to Reinforcement Learn.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1611.05763.\\n14. Vanschoren, Joaquin. 2019. \\u201cMeta-Learning.\\u201d In Automated Machine Learning: Methods, Systems, Challenges, edited by Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren, 35\\u201361. Cham: Springer International Publishing.\\n15. Rakelly, Kate, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. 2019. \\u201cEfficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1903.08254.\\n16. Rusu, Andrei A., Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. 2018. 
\\u201cMeta-Learning with Latent Embedding Optimization.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1807.05960.\\n17. Baydin, Atilim Gunes, Robert Cornish, David Martinez Rubio, Mark Schmidt, and Frank Wood. 2017. \\u201cOnline Learning Rate Adaptation with Hypergradient Descent.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1703.04782.\\n18. Park, Eunbyung, and Junier B. Oliva. 2019. \\u201cMeta-Curvature.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1902.03356.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"This paper studies MAML, as it is a \\u201cseminar and widely followed work\\u201d (Reviewer#2). It has been extended [1-3] and widely applied across multiple subfields. (e.g. computer vision [4], robotics [5], and dialogue systems [6]). Particularly relevant to the reviewer\\u2019s concern, there have been multiple empirical and theoretical works dedicated solely to the study of MAML [7-11]. Yet, the understanding of why and how MAML works is far from being complete. Thus, the paper sets to make progress in this direction, hypothesizing \\u201cdepth\\u201d as an (unexplored) and important aspect to MAML.\\n\\nNote that other approaches [12-18] are either too different to analyse using the proposed empirical approaches or simply do not fit the few-shot meta-learning paradigm. Note that when possible, we do compare against methods having a similar flavour as the one we propose. (i.e. MetaSGD, MetaCurvature). However, to make our efforts more precise, we are happy to make it clear that this work specifically addresses MAML (and its alike).\\n\\nRegarding the depth/breadth of our experiments, our paper currently features results on 1 synthetic and 3 popular computer vision datasets, which is as much or more than similar submitted/published works [9, 16, 18]. Maybe more importantly, the results for all experimental settings agree with each other, and some (e.g. the freezing experiments) were independently discovered by other researchers. Altogether, we believe this is a testament to their generality and replicability.\\n\\nWe hope that in light of the above, the content of our paper has become more appealing; our goal is not to propose a new state-of-the-art method, but rather to shine some light on the underlying dynamics of a popular meta-learning algorithm.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an experimental study of gradient based meta learning models and most notably MAML. The results suggest that modeling and adaptation are happening on different parts of the network leading to an inefficient use of the model capacity which explains the poor performance of MAML on linear (or small networks) models. To tackle this issue they proposed a kronecker factorization of the meta optimizer.\\n\\nThe paper is well motivated and well written in terms of clarity in the message and being easy to follow.\\n\\nOne major issue is that the experimental study is not that comprehensive to support the claim of the paper. Especially, in analyzing the failure case of linear models.For example, one may try small (but nonlinear networks) and compare its performance with larger (possibly overparameterized) ones on at least 2 standard network architectures. But, it doesn't mean that I don't like the paper at its current state. The paper yet has a message and it's delivered clearly.\\n\\nI wonder if the overparameterized is just related to depth or overparameterization in width would work too? If not then it might be the \\\"nonlinearity\\\" that is doing the work\\n\\nIn section 3.2 (Figure 2, left) and (Figure2, mid) show that FC follows the pattern of C1-C3. t\\nThen the authors proposed the experiment related to perturbing FC (Figure 2, right) to show that FC is actually not similar to C1-C3 and is important to adaptation. However, one can do similar experiments for C1-C3 and claim they are also important to adaptation. It seems that FC and C4 are really different.\\n\\nFor a non-expert reader it's not readily clear that how the kronecker factorization of A leads to equation 5. An explanation can help. Also, a few sentences or schematic demonstration of kronecker product makes the paper self-contained. \\n\\nThere are a few typos in the paper that can be removed after a thorough proofreading.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper analyzes the popular MAML (Model-Agnostic Meta-Learner) method, and thereafter proposes a new approach to meta-learning based on observations from empirical studies. The key idea of the work is to separate the base model and task-specific adaptation components of MAML. This decoupling of adaptation and modeling reduces the burden on the model, thus enabling smaller memory efficient deep learning models to adapt and give high performance on meta learning tasks. The paper proposes a learnable meta-optimizer consisting of a parametrized function U such that the knowledge of adaptation is embedded into its parameters (A,b), instead of forward model parameters. The computational challenges posed by the proposed method are addressed by expressing the parameter matrix A as a Knonecker product of small matrices which is more efficient from memory and time complexity view point. The results on Omniglot and CIFAR-FS are promising, and the paper shows that the proposed meta-optimize is \\\"more expressive\\\", as well as can adapt a shallower model to the same level of performance as MAML.\", \"+ves:\", \"The discussion on the deficiency of MAML combined with shallow models is well-supported experimentally.\", \"The idea to leverage the parameters of a meta-optimizer for adaptation instead of using model parameters is novel and interesting.\", \"The paper is well-written and easy to follow. It motivates its choices well, both in the proposed method and the experiments.\", \"The paper presents fair comparison in all experiments with appropriately chosen baseline models, and the proposed approach is validated for both linear as well as non linear models using benchmark datasets.\"], \"concerns\": [\"While MAML was a seminal work and is widely followed, there have been many follow-ups of MAML, including another widely used method Reptile (Nichol et al, On First-Order Meta-Learning Algorithms). How is the proposed method relevant more broadly to this genre of methods? Some discussion of this would have been useful to understand the generalizability of the idea.\", \"The choice of the Kronecker product to handle the dimensionality of the meta-optimizer is supported by the paper, but is not very convincing. How important is this choice? What if other decompositions were used?\", \"The paper seems to state that shallow models are convex (Sec 3.2); however, weight symmetry induces non-convexity even in shallow models. This perspective of the problem may not be very well-justified.\", \"In Sec 3.2, the paper compares the 1-step adaptation accuracy of a shallow network and a deeper 4 layered linear network and claim that shallow networks underperform. However this underperformance might be due to the difference in required number of steps to reach optimal performance by the two models, and may not be a fair comparison. Why is this conclusive inference? Considering these inferences motivate the full paper, this is important.\", \"All the presented results are on small CNNs. The paper motivates this as \\u201ceasing the computational burden\\u201d. The original MAML work shows results on state-of-the-art convolutional and recurrent models. 
It may be important to show results on deeper models to be more confident about its applicability.\", \"Although one can obtain smaller meta-learned models using the proposed method, training via this method will incur a higher computational burden than MAML-trained deep models. The paper does not talk about this additional complexity at all. Comparisons of wall-clock times or asymptotic analysis of the proposed method w.r.t. MAML would have greatly helped understand the pros and cons of the method.\", \"I am on the borderline on this work - it is a well-written paper with a clear objective and support. But lack of rigorous analysis of the proposed method in terms of the method (how important is the Kronecker factorization?), experiments (with deeper architectures) and a more generalizable understanding of the proposed idea seems to be limiting the work's impact.\", \"========POST-REBUTTAL COMMENTS===============\", \"I thank the authors for their response, and all the efforts in the updated manuscript. Some of the clarifications sought were answered clearly. However, unfortunately, I continue to remain on the borderline on this work for the reasons below. (I would be willing to increase my rating to 4 or 5, which however are not available on the drop down, but perhaps not beyond).\", \"The response to AnonReviewer1 says that \\\"there have been multiple empirical and theoretical works dedicated solely to the study of MAML [7-11]\\\", hence supporting this work dedicating its focus to MAML alone. However, on close observation, most of these efforts are not published on peer-reviewed avenues and are only on arXiv at this time. Ref [7] (Finn and Levine, 2017) is published but has significantly stronger contributions. Considering the largely empirical nature of this work, showing its generalizability would be required, in my opinion, to make the conclusions of this work useful to the audience. Expecting that it would naturally hold for other methods like REPTILE may not be sufficient. In my opinion, this is a significant limitation.\", \"I personally remained unconvinced about the response to the question on number of adaptation steps, as well as on the lack of deeper models in the empirical studies.\", \"I once again appreciate the authors for all the additional efforts, it may just be good for the work to be more comprehensive to be relevant and useful.\"]}",
"{\"title\": \"Understanding the negative results for convex linear models\", \"comment\": \"This submission makes the interesting claim that initialization-based meta-learning algorithms require over-parameterized models to learn a good starting point. Given that a significant part of the motivation for this work is made using an empirical analysis of the linear case, I think that a set of recent efforts studying exactly this setting [1,2,3] is quite relevant (note: I am a co-author on one of these papers), especially because most of the theoretical results are positive, whereas the experimental results for the (convex) linear case presented in this submission are negative. Note that the discussion below is meant not as a challenge to the motivational claim, which seems plausible, but as a theorist's effort to understand whether our assumptions are failing/whether studying the over-parameterized case can yield better bounds.\\n\\nIn particular, Finn et al. [1] make an argument in support of the (convex) linear setting [1, Appendix A] and show learnability of the MAML base-learner [1, Corollaries 1 & 2], specifically that optimizing the MAML objective yields an initialization whose error converges to that of the optimal initialization as you see more tasks. This goes against the results in Figure 1, especially the left plot: it is possible that the over-parameterization leads to a better optimization geometry and thus better post-adaptation results, but the inability to exceed random accuracy at all is surprising to me. It would help to have answers to the following questions:\\n 1. How many shots were used for the experiments on synthetic data, and how does the relative performance improve with more shots? Finn et al. [1] assume strong-convexity, which will fail in the few-shot setting (number of samples < input dimension), but it is unclear if this assumption is necessary.\\n 2. In A.1.1 it says \\\"Due to the argument presented in the main text, any hyper-parameter setup will replicate the logistic regression (LR) results, but we used meta and adaptation learning rates of 0.01 and 0.5.\\\" Where/what is this argument, and does this mean learning rates were not tuned? Existing theory depends strongly on these learning rates [1,2,3]. \\n\\nThe analyses in the other papers [2,3] deal with (variants of) Reptile, which while less well-known is still quite popular; it would be interesting/surprising if the results in the submission were true for MAML but not Reptile. Furthermore, the Reptile results do not assume strong-convexity. They do all depend on tasks-similarity, i.e. that linear classifiers that perform well on different tasks are close together; the submission argues in Section 3.2 that this may not be true in practice. On the other hand, both Denevi et al. [2] and Khodak et al. [3] report positive experimental results with (convex) linear models. Denevi et al. [2] also include an evaluation on synthetic data, which may be a useful comparison, while Khodak et al. [3, Figure 1] show that the optimal linear classifiers on a toy text classification task are indeed close together. 
So perhaps the claimed need for over-parameterization depends strongly on properties of the data that do not always hold in settings where we might want to use linear models.\", \"minor_point\": \"I suggest a rephrasing of the following statement from Section 3.2, as convex functions may have zero, one, or infinitely many optima: \\\"however, if the model is shallow such that L_\\u03c4 is convex in its parameters, then any initialization that is good for fast adapting to one subset of tasks could be bad for another subset of tasks since all the tasks have precisely one global minimizer and those minimizers can be arbitrarily far from each other.\\\"\", \"references\": \"[1] Finn, Rajeswaran, Kakade, Levine. Online Meta-Learning. ICML 2019.\\n[2] Denevi, Ciliberto, Grazzi, Pontil. Learning-to-Learn Stochastic Gradient Descent with Biased Regularization. ICML 2019.\\n[3] Khodak, Balcan, Talwalkar. Provable Guarantees for Gradient-Based Meta-Learning. ICML 2019.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper investigated the effect of depth on the meta-learning model.\\nThe paper mainly studies through experimental means and does not have mathematical analysis to demonstrate. In this way of analysis, a large number of experiments are necessary. In addition to ensuring a large number of experiments, it is necessary to ensure the diversity of methods. This article only studied MAML, therefore, the conclusion of the experimental inquiry cannot convince me.\\nFor the experimental part, I am afraid the results are also weak. For example, please notice that many meta-learning models have proposed. I believe authors should compare more existing works to demonstrate the superiority of the proposed one.\\n\\n[Update after rebuttal period]\\nIt may seem reasonable that depth enables task-general feature learning. However, in fact, it is not true. The major reason for people to think that the receptive field becomes very large after multiple pooling operation. This is true but not the reason for good performance in feature learning. Because of back-propagation, the feature extraction layers can be trained well to extract features from objects of different scales. The major reason for poor performance in feature learning is that the header that creates an object template is not well trained for objects of different scales. As a result, I still keep the confusion in terms of the effectiveness of the proposed method.\"}"
]
} |
Bkg5LgrYwS | Imitation Learning of Robot Policies using Language, Vision and Motion | [
"Simon Stepputtis",
"Joseph Campbell",
"Mariano Phielipp",
"Chitta Baral",
"Heni Ben Amor"
] | In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn can be used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to influence a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability. | [
"robot learning",
"imitation learning",
"natural language processing"
] | Reject | https://openreview.net/pdf?id=Bkg5LgrYwS | https://openreview.net/forum?id=Bkg5LgrYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"xnDRHE1izH",
"SJev8vVhsB",
"r1xpSgRjiS",
"H1gellRsjr",
"HJe_hkAsjS",
"SyxZV1AojB",
"rJlPACaijH",
"HJecYd1RFS",
"BylouRApKH",
"BJg1cbsTtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746439,
1573828430653,
1573802052592,
1573801960337,
1573801904256,
1573801769134,
1573801678798,
1571842178181,
1571839602757,
1571824006852
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2334/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2334/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2334/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2334/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2334/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2334/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2334/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2334/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2334/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The present paper addresses the problem of imitation learning in multi-modal settings, combining vision, language and motion. The proposed approach learns an abstract task representation, and the goal is to use this as a basis for generalization. This paper was subject to considerable discussion, and the authors clarified several issues that reviewers raised during the rebuttal phase. Overall, the empirical study presented in the paper remains limited, for example in terms of ablations (which components of the proposed model have what effect on performance) and placement in the context of prior work. As a result, the depth of insights is not yet sufficient for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Dynamic Environments\", \"comment\": \"Thank you for taking the time to address these comments.\\n\\nThe closed loop control experiments are very helpful. I think it would be worth conducting a similar experiment in environments where the trajectory itself (not just the goal) needs to change over time, to avoid dynamics obstacles. This would show that the policy is able to appropriately adjust the shape parameters of the DMP in response to changes in the environment.\"}",
"{\"title\": \"Reviewer #3 Response\", \"comment\": \"Summary:\\nThank you for your review. We outlined the problem and novelty of our work more carefully in the introduction and background section. The novelty of our work lies in proposing an approach that fundamentally combines language, vision and motion in an end-to-end fashion, thereby grounding language in motion with minimal human feature engineering.\", \"detailed_response\": \"We would like to thank the reviewer for outlining the excellent reference to Tani's pioneering work, which has been included in the paper. We incorporated additional references and altered our introduction and background section to better outline the problem statement and its relevance in light of current advancements (section 1 and 2).\\n\\nThe main contribution of our work is to look at language, vision and control as a fundamentally connected problem. This allows us to not only ground language in the environment like [1][2] but to ground language in control policies together with the additional information gained from the vision component. We agree that including language into behavioural cloning is fundamentally not a new idea, as outlined by your reference to [3] and other recent work in the same area [4]. However, we propose a system that fundamentally combines these three lines of research that goes beyond previous work by introducing a fully differentiable approach that grounds language and vision in robot motion learned from demonstrations with minimal human feature engineering. Another contribution of our model is that we are able to generate the parameters for a continuous controller while many other approaches use discretization to either limit their input space [3] or output space [1][5]. Our method builds upon a rich literature on learning embeddings for vision [6], language [4] and tasks [7], allowing us to connect these lines of research to create an end-to-end approach that generates a task-specific continuous controller capable to seamlessly adapt to different tasks.\\n\\nOur experiments justify the feasibility of our approach by demonstrating the ability of our MPN model to generalize towards different sentences and environments (Section 4.1) while generating trajectories similar to the demonstrated behaviours (Section 4.4). Preliminary results on dynamically changing environments suggest the methods ability to adapt to changes in the environment (Section 4.3) which allows the model to work in collaboration with human partners. 
Especially when humans are working in close proximity with robots, it is important that robots can adapt to changing conditions while also assessing the feasibility of the given tasks (Section 4.2).\", \"references\": \"(1) \\\"Disentangled Relational Representations for Explaining and Learning from Demonstration\\\" Hristov et al \\n(2) \\\"CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning\\\" Johnson et al \\n(3) \\\"Learning semantic combinatoriality from the interaction between linguistic and behavioral processes.\\\" Sugita et al \\n(4) \\\"Grounding natural language instructions to semantic goal representations for abstraction and generalization\\\" Arumugam et al \\n(5) \\\"Mapping Instructions and Visual Observations to Actions with Reinforcement Learning\\\" Misra et al \\n(6) \\\"Learning Deep Parameterized Skills from Demonstration for Re-targetable Visuomotor Control\\\" Chang et al \\n(7) \\\"Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds, Language and Trajectories\\\" Sung et al\"}",
"{\"title\": \"Reviewer #2 Response (Part 2)\", \"comment\": \"Our network structure was influenced by an automatic parameter search process. However, we agree that a better ablation study regarding the network structure benefits the justification of our model. We extended the existing ablation study with an additional section with regards to certain design choices within the network. We compared the n-gram sizes as well as the use of residual layers in the image processing part of the neural network in section 4.5. Additionally we want to underline the use of a DMP as compared to a simple proportional controller allowing us the teach the robot to reassemble entire trajectory shapes as learned from the initial demonstrations. We further added an additional figure (Figure 6 (b)) to outline this feature of our approach (see section 4.4). While we did not make explicit use of this ability in our current work, we intend to show how it can be used to perform object avoidance or train the robot to approach objects from different sides in future work. Both of these cases require the robot to be actuated differently while having the same goal position, which could not be solved by proportional controllers, thus the choice of a DMP. The results presented in section 4.4 show promising results for our model to further utilize the information extracted from the demonstrated trajectory shapes in future experiments.\\n\\nCombining a difference image with the language in the network as described allows the network to independently learn the features necessary to ground natural language in the perceived environment. Using a manual approach for feature extraction would certainly be possible, but it would probably limit the network's ability to ground the language in the environment while requiring a considerable amount of time to manually engineer the necessary features. To create a comparative method, features would need to be extracted from language as well as the image in a manual fashion before being manually combined to form a informative representation that could be used to be translated into a respective low-level controller. While it is not impossible to use such an approach, we are currently unable to provide information on how the performance would differ as compared to our proposed MPN due to the extensive amount of work required to create a manual feature extractor as described above.\"}",
"{\"title\": \"Reviewer #2 Response (Part 1)\", \"comment\": \"Summary:\\nThank you for your review. We added an appendix that outlines the generation of sentences for random training scenarios in further detail. Both of the references mentioned in your review propose a method to extract action sequences from demonstrations that is used in subsequent low-level controllers. In contrast, our work focuses on dynamically generating low-level controllers in an end-to-end fashion for different tasks. While action sequences are a future goal of our research, we currently do not address this problem in our work.\", \"detailed_response\": \"Thank you for your in-depth review. As pointed out by your review as well as due to concerns from other reviewers regarding our sentence generation, we added appendix A explaining the collection, structure and generation of sentences in further detail. In total, we are able to generate ~180,000 unique sentences which are utilized depending on the environment of the robot. Please refer to the appendix for further details on how sentences are generated. \\n\\nAs compared to R1, our work utilizes natural language to convey instructions to the robot, allowing users to use a natural and intuitive interface for interaction as compared to pre-programming a sequence of desired actions. Furthermore, our approach generates low-level control policies in form of a DMP that allows the robot to resemble the shapes of demonstrated trajectories instead of just going to the predicted goal. To address this feature further, we added a short experiment in section 4.4 comparing the use of a DMP over a goal-directed controller, showing the ability of the DMP to recreate demonstrated trajectories. The work proposed in R1 is focusing on generating action sequences and switching between a set of goal directed motion controllers. In our work, we focus on automatically generating a unique controller for each task at hand from natural language conditioning that generates motions similar to what was demonstrated, even in dynamically changing environments. The work in R1 presents a methodology to run different controllers sequentially to achieve a multi-staged inspection task from pre-defined action sequences with great success. As of right now, we are not focusing on addressing multi-staged tasks, but it is certainly a future direction of our research. \\n\\nThe work presented in R2 mainly focuses on learning to ground inter-object relationships from visual environment perceptions to their respective words. While the work includes a robot experiment, robot control is done by predicting a goal position and using a proportional controller to reach the goal position while disregarding any information from possibly demonstrated movements. While a more versatile language model is a future objective of our work, we are currently not able to use relational descriptions in our work except from global descriptors like \\\"left\\\" or \\\"right-most\\\". We will incorporate the results from R2 in future work to further enhance our method. However, our main contributions is to directly translate unrestricted natural language into low-level control policies for robot actuation. While language grounding is an essential part of our work, we focus on translating high-level semantic task descriptions into complex low-level controllers that reassemble the demonstrated trajectories of the task. 
Our proposed method has the benefit that everything from the high-level semantics to the low-level control is a single differentiable model that can be trained end-to-end.\"}",
"{\"title\": \"Reviewer #1 Response\", \"comment\": \"Summary:\\nThank you for your review. We added additional experiments in section 4.3 and 4.4 to outline our methods ability to react to dynamically changing environments and reassemble the shape of demonstrated trajectories. We also added an appendix that goes into further detail about the human-subject study and how we generate sentences for our random training scenarios. In contrast to your suggested reference, we present an approach that generates low-level robot controllers based on language and images instead of generating the next action from a discrete set of action.\", \"detailed_response\": \"Thank you for your constructive review. As pointed out in your review, we use the network to output the parameters for a DMP describing the entire motion and utilized it to actuate the robot. However, our approach is capable of generating a new DMP at each time step to adapt to potential changes in the environment. To outline this ability further we added an additional experiment in which the robot is attempting to approach a moving object. In each time step, we moved the object by 1.5cm along a predefined trajectory and regenerated the DMP to obtain the updated trajectory. Without needing additional training, the experiments show that our approach is able to adapt to dynamic environments. Please refer to section 4.3 for further details. \\n\\nWhile our neural network utilizes a DMP, which is ultimately converging to a goal position (assuming the goal is feasible), the forcing term allows the DMP to actuate the robot such that it adheres the shape of the trajectory learned from the demonstrations instead of just approaching the goal position. In order to demonstrate this behaviour, we added an additional figure (Figure 6 (b)) to outline the difference between using a DMP as our low-level controller and a goal-directed controller. Please refer to section 4.4 for further details.\\n\\nTo generate the amount of training data necessary to successfully train our network, we could not exclusively rely on human demonstrations. For this reason, we conducted a human-subject study to collect sentences and words related to pick-and-place tasks and further utilized this information to create a sentence generator. In addition to the collected data, we expanded the list of words with common synonyms from respected NLP databases. As requested, we added appendix A describing the basic templates as well as all synonyms together with the sentence generator in greater detail.\\n\\nThe reference suggested in your review has been incorporated in our paper. The paper presents an approach in combining language and images for robot control by generating multi step action sequences. Similar to our method, the work proposed in the reference uses unrestricted natural language to describe the task and combines it with a visual perception of the environment. In contrast to our work, the reference generates an action from a discrete set of actions instead of generating a continuous low-level controller that directly actuates a robot. 
Since our work focuses on generating low-level control policies from latent task representations that can directly be used to actuate a robot with respect to demonstrated joint trajectories for a single task, we feel that a comparison to other papers in this area would be miss-leading to readers since a core contribution of our work is to translate high-level semantics from language and vision into low-level control policies rather than learning to generate actions for a subsequent task planner. While the suggested work is interesting in regards to its ability to generate multi step plans, we do not currently look at performing tasks that require more than one action.\"}",
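As a toy version of the moving-object experiment described above (our sketch: the real system regenerates the full DMP from the MPN's image and language inputs at every step, which we abstract here as simply refreshing the goal of a 1-D attractor):

```python
import numpy as np

def track_moving_goal(goal_at, az=25.0, bz=25.0 / 4, dt=0.01, T=600):
    # Each control step re-reads the goal from the latest "perception",
    # standing in for regenerating the DMP parameters at every time step.
    y, z, Y = 0.0, 0.0, []
    for t in range(T):
        g = goal_at(t * dt)                 # the object has moved since the last step
        z += dt * (az * (bz * (g - y) - z))
        y += dt * z
        Y.append(y)
    return np.array(Y)

traj = track_moving_goal(lambda t: 1.0 + 0.05 * t)   # goal drifts during execution
print(abs(traj[-1] - (1.0 + 0.05 * 6.0)) < 0.05)     # trajectory ends near the moved goal
```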
"{\"title\": \"Thank you for the reviews: General Response\", \"comment\": [\"We would like to thank all reviewers for their constructive and helpful feedback on our paper. An updated version of the paper is uploaded. For individual responses to the reviews, please see our respective posts. However, our changes to the paper can be summarized as follows:\", \"We re-formulated parts of the introduction (Section 1) to outline the problem as well as make our contributions clearer\", \"The Background section (Section 2) has been updated to incorporate the literature suggestions from the reviewers.\", \"We added an experiment that demonstrates the ability of the MPN to adapt to dynamically changing environments at run-time by generating a new low-level controller at each time step (Section 4.3).\", \"MPN leverages learning from demonstration to acquire the skills necessary to perform the reaching task. For this reason we decided to use a DMP over a simpler proportional controller. This allows us to generate trajectories that reassemble the demonstrated behaviour. An explanation of this feature has been added in Section 4.4.\", \"Figure 6 (a) and (b) has been added in support of sections 4.3 and 4.4\", \"Section 4.5 extends the ablations study by evaluating the choices of n-gram sizes as well as the usage of residual layers in the image processing pipeline.\", \"We added appendix A that further elaborates on the human-subject study and how sentences are generated for our experiments.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper addresses the problem of using multiple modalities for learning from demonstration. Approaches that take in task or joint space data to learn a policy for replicating that task are numerous. Doing the same with multiple modalities involved, in particular vision, language and motion, has only been recently considered, so this is a timely paper.\\n\\nThe core contribution is pretty well summarised by the architecture in figure 1, which involves a combination of encodings of the words and sentences, images and parameters of a DMP in order to generate movement commands from a high level instruction. \\n\\nUnless I have missed something in the experimental setup, all of the considered task variations are movement commands of the form <Move> to <Object>. The network setup allows for synonyms of two kinds, so <Move> can be replaced by numerous verbal synonyms such as advance and go, and the object can be specified in terms of shapes, colors and so on, but otherwise this is the only specification of the task. This has been addressed in the recent literature using neural network architectures similar to the one being proposed here, e.g., see the following papers. These papers already solve the proposed problem and provide similar explanations. It would be helpful to see comparative discussion with respect to those methods and a clear statement of novelty with respect to such prior work:\\n[R1] M. Burke, S. Penkov, S. Ramamoorthy, From explanation to synthesis: Compositional program induction for learning from demonstration, Robotics: Science and Systems (R:SS), 2019.\\n[R2] Y. Hristov, D. Angelov, A.Lascarides, M. Burke, S. Ramamoorthy, Disentangled Relational Representations for Explaining and Learning from Demonstration, Conference on Robot Learning (CoRL), 2019. \\n\\nAn interesting feature in R2 that the authors do not explicitly address here is the issue of relational specifications in the language, e.g., in addition to saying \\\"move to the red bowl\\\", we may also wish to say \\\"place on top of red block\\\". In the way that MPN is currently set up to map from the language input directly to hyperparameters of the DMP, and considering the embedding structure, it is not clear if MPN is capable of handling such specifications. If so, the claim of generalisation on the language input should be stated more clearly.\\n\\nThe ablation study is setup somewhat differently than what I would have expected. The authors consider the effect of changing the training set size and if the language input includes synonyms or not. Those two aspects seem to produce the expected results. It would also be interesting to see an ablation study in the sense of replacing or removing aspects of the architecture to see its relative effect on the overall model performance. So, for instance, if one did not have a DMP with the hyperparameters being estimated by a network and instead had a more straightforward encoding of where to move to - does it make a difference and how much? Likewise, how much performance benefit, if any, is being derived from an uninterpreted image I being combined as described in the embedding as opposed to an alternative that detects an objects and combines that position differently. 
The paper would have been stronger if such architectural choices were better justified and also demonstrated in the experiments.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"*Summary\\n\\nThe paper describes a new end-to-end imitation learning method combining language, vision, and motion.\\nA neural network architecture called Multimodal Policy Network is proposed. That can extract internal representations from language and vision to condition the generated motions. \\nIt enables an end-user to influence a robot's policy through verbal communication.\\nThe experiments demonstrate the generalization performance of the method. That can generate behaviors towards different goals depending on different sentences. \\n\\n*Decision and supporting arguments\\n\\nI think the paper is just below the borderline. The reason is as follows.\\n\\nThe concern is about evaluation. They demonstrated the method could work, and the robot can move to appropriate goals. However, there is no comparative methods in the experiment.\\nRelated to this point, the problem was not identified in the Introduction.\\nThe authors might assume that introducing language into behavioral cloning itself is qualitatively new work. However, such a study has a long history. \\nFor example, please refer to Tani's pioneering works.\\nSugita, Yuuya, and Jun Tani. \\\"Learning semantic combinatoriality from the interaction between linguistic and behavioral processes.\\\" Adaptive behavior 13.1 (2005): 33-52.\\n\\nThe author should specify a current challenge or problem in pre-existing studies about imitation learning with language input, clarify their claim, and give empirical support for the claim.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work uses imitations learning (from synthetic data) to train a deep model which takes a natural language instruction, and a visual representation of a robot's environment, and outputs a trajectory for the robot to follow which executes this instruction. The work focuses on a robotic pick-and-place task, where the instruction indicates which of the available bins an item should be placed in. In addition to the trajectory model, a second model is trained which allows the agent to predict whether a given command is actually feasible (i.e. whether the target bin exists). Empirical results show a reasonably high success rate in placing objects in the bin specified by the instruction, though there is still room for improvement in cases where the shape o a combination of features is important to the selection of the correct bin.\\n\\nRather than mapping directly from instructions and observations to control signals, the model trained in this work translates from an instruction, and an image of the agent's environment, to the parameters of a DMP controller. The network therefore outputs the entire motion for the task in a single inference pass. This approach would have advantages and disadvantages. The DMP formulation ensures that the resulting trajectory is relatively smooth. It also means that the network outputs a distinct goal configuration, which the DMP should reach (assuming the goal is feasible) regardless of the other motion parameters. The use of a DMP output space, however, limits the model to generating relatively simple, goal-directed motions, and does not allow the agent to adapt to changes in the layout of the environment (which would only be observed in the static visual input).\\n\\nAs other work has considered visual instruction following (e.g. Misra et. al. \\\"Mapping Instructions and Visual Observations to Actions with Reinforcement Learning\\\") it would strengthen this work considerably to see a direct comparison between this method and existing approaches. It is likely that the approach presented in this work is better suited to the specific problem of robot control, but it would be helpful to see if learning a low-level control policy directly can be successful in this context.\\n\\nThe work needs to expand on the discussion in the second paragraph of section 4, where human annotators were used to generate natural language instructions for different tasks. The paper suggests that this data was not used directly to train the model, but was instead used to build a template for generating natural language instructions. What this template looks like, and how it was constructed based on the human-generated data, remains unclear, and needs to be described in much more detail.\"}"
]
} |
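For readers unfamiliar with the DMP output space discussed in the review above: a minimal 1-D discrete DMP rollout in Python, illustrating why the trajectory converges to the predicted goal regardless of the learned shape parameters. All names and gains here are illustrative defaults, not taken from the paper under review.

```python
import numpy as np

def dmp_rollout(x0, g, weights, tau=1.0, dt=0.01, K=100.0, alpha_s=4.0):
    """Roll out a 1-D discrete Dynamic Movement Primitive.

    The learned forcing term is gated by a phase variable s that decays
    to zero, so late in the motion the system reduces to a critically
    damped spring pulled to the goal g -- hence goal convergence holds
    for any choice of forcing weights.
    """
    D = 2.0 * np.sqrt(K)                                   # critical damping
    centers = np.exp(-alpha_s * np.linspace(0, 1, len(weights)))
    widths = 1.0 / (np.gradient(centers) ** 2)             # basis widths in phase space

    x, v, s = x0, 0.0, 1.0
    traj = [x0]
    for _ in range(int(1.5 / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)         # RBF basis activations
        f = s * (g - x0) * (psi @ weights) / (psi.sum() + 1e-8)
        a = (K * (g - x) - D * v + f) / tau                # transformation system
        v += a * dt
        x += v * dt
        s -= alpha_s * s / tau * dt                        # canonical system: s -> 0
        traj.append(x)
    return np.array(traj)

# Even with random forcing weights, the rollout ends near the goal:
print(dmp_rollout(x0=0.0, g=1.0, weights=np.random.randn(10))[-1])  # ~1.0
```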
HkxcUxrFPS | Improving Visual Relation Detection using Depth Maps | [
"Sahand Sharifzadeh",
"Sina Moayed Baharlou",
"Max Berrendorf",
"Rajat Koner",
"Volker Tresp"
] | State-of-the-art visual relation detection methods mostly rely on object information extracted from RGB images, such as predicted class probabilities, 2D bounding boxes and feature maps. In this paper, we argue that the 3D positions of objects in space can provide additional valuable information about object relations. This information helps not only to detect spatial relations, such as \textit{standing behind}, but also non-spatial relations, such as \textit{holding}. Since 3D information of a scene is not easily accessible, we propose incorporating a pre-trained RGB-to-Depth model within visual relation detection frameworks. We discuss different feature extraction strategies from depth maps and show their critical role in relation detection.
Our experiments confirm that the performance of state-of-the-art visual relation detection approaches can be significantly improved by utilizing depth map information. | [
"Visual Relation Detection",
"Scene Graph Generation"
] | Reject | https://openreview.net/pdf?id=HkxcUxrFPS | https://openreview.net/forum?id=HkxcUxrFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"GA6XwvnMLe",
"f-7L8Z9zZ",
"ryeuECq3sr",
"SJxUbAq3jS",
"H1grl0q2jB",
"r1e-k092oB",
"ryeR2T53oS",
"ByxwGiq2oB",
"r1xH7953sH",
"B1epn-3FsS",
"SylGDVKEqS",
"SylMkK2CYH",
"SkenUk7MFB"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1580642439870,
1576798746410,
1573854767556,
1573854718440,
1573854701305,
1573854681265,
1573854646445,
1573853967241,
1573853724534,
1573663157192,
1572275289926,
1571895514251,
1571069780448
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2333/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2333/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2333/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2333/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"New Version After ICLR Decision\", \"comment\": \"We thank all the ICLR reviewers and meta-reviewer again for their constructive feedback. We encourage the interested readers to follow our discussions during the rebuttals.\\n\\nWe made a large revision during the rebuttal period, including a new contribution, however, the changes seem to have been only noted by reviewer 1, and unfortunately, there was no additional feedback from other reviewers. Nevertheless, by considering all the comments one more time, we further clarified some points and made new changes to our paper. We released the VG-Depth dataset, our code and an updated version of the paper:\", \"code_and_dataset\": \"https://github.com/Sina-Baharlou/Depth-VRD\", \"updated_paper\": \"https://arxiv.org/abs/1905.00966\\n\\n\\n---------------------------------------------------------------------------------------------------------------------\\n\\nSpecifically, considering the points by meta-reviewer, we made the following changes:\\n \\t\\n1. We clarified this in our new version. We do not see any advantages in using a unified architecture when different modalities are involved. For example, if one is dealing with sound data in parallel to RGB images, it is reasonable to have an RNN and a CNN feature extractor. The same applies to depth maps and RGB images. Employing VGG to extract RGB features gives us the benefit of transfer learning (from pre-trained models). On the other hand, since we cannot do the same for depth maps, the choice of ResNet gives us the advantage of training an efficient model with less training data. \\nIn our new version, we focused furthermore on studying (1) the importance of different features for relation detection, and (2) whether the evaluation metrics can properly reflect that. Please note the various quantitative and qualitative results reported in the post-rebuttals paper.\\n\\n2. We provided more qualitative results in the new version of our paper. Providing quantitative measures in our case is not possible as our dataset contains only RGB images and there are no ground truth depth maps available. This is the reason to release the synthetic VG-Depth dataset. One can find the quantitative measures on the test data of NYU dataset available in [1], generalizable to unseen images such as the ones in Visual Genome. The work from 2016 is still one of the only open-source methods that have competitive results for RGB-to-Depth generation and as shown in our study, this is already giving a large boost to our detection rate. The main focus is neither on evaluating nor on engineering a better RGB-to-Depth model, but rather on studying its applicability in improving visual relation detection. \\n\\n3. We took AC\\u2019s comment into account, and provided an additional paragraph in our new version discussing this prior work. \\n\\n4. As mentioned in the rebuttals discussion with reviewers 3, we agree that direct reasoning in 3D would be more beneficial. However, to reconstruct a scene in 3D we would need to collect either (1) more than one RGB image or (2) more than one depth map, coming from different views of that scene. This is a more involved process that has limitations both on data and computation. What we have in our scenario is only one RGB image from the scene which we use to synthetically generate the depth map from. Going through the RGB-to-Depth deep network in our architecture is the only way to acquire those depth maps in the first place. 
Other than the limitation in our current datasets, please note that in many real-world scenarios for humans (or for autonomous agents such as self-driving cars) this is a similar case: a car driving directly forward towards a pedestrian has no access to the rear-view of that person. Depth maps, in this case, can already provide sufficient data on the distance to objects. In summary, we are exploring a real-world scenario where a 3D reconstruction of the scene is not accessible.\\n\\n[1] Laina, Iro, et al. \\\"Deeper depth prediction with fully convolutional residual networks.\\\" 2016 Fourth international conference on 3D vision (3DV). IEEE, 2016.\"}",
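To make the separate-backbone argument above concrete, here is a minimal PyTorch sketch of the design the authors describe (ImageNet-pretrained VGG-16 frozen on the RGB stream, ResNet-18 trained from scratch on single-channel depth, features concatenated). Module and dimension choices are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoStreamFeatures(nn.Module):
    """Separate feature extractors for RGB and depth, as argued above:
    RGB reuses ImageNet-pretrained weights, depth is trained from scratch."""
    def __init__(self, feat_dim=512):
        super().__init__()
        vgg = models.vgg16(pretrained=True)
        self.rgb_net = vgg.features                      # pretrained VGG-16 conv stack
        for p in self.rgb_net.parameters():              # frozen for stability
            p.requires_grad = False
        depth = models.resnet18(pretrained=False)        # trained from scratch
        depth.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)  # 1-channel depth input
        self.depth_net = nn.Sequential(*list(depth.children())[:-1])  # drop the classifier head
        self.proj = nn.Linear(512 * 7 * 7 + 512, feat_dim)

    def forward(self, rgb_crop, depth_crop):
        v = self.rgb_net(rgb_crop).flatten(1)            # visual feature of the crop
        d = self.depth_net(depth_crop).flatten(1)        # depth feature of the crop
        return self.proj(torch.cat([v, d], dim=1))       # fused feature vector

feats = TwoStreamFeatures()
out = feats(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224))
print(out.shape)  # torch.Size([2, 512])
```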
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes to improve visual relation prediction by using depth maps. Since existing RGB images do not contain depth informations, the authors use a monocular depth estimation method to predict depth maps. The authors show that using depths maps, they are able to improve prediction of relations between ground truth object bounding boxes and labels.\\n\\nThe paper got relatively low scores (with 3 initial weak rejects). After the revision and suggested improvements, one of the reviewers updated their score so the paper now has 2 weak rejects and 1 weak accept.\", \"the_paper_had_the_following_weaknesses\": \"1. The paper has limited technical novelty as it combines off the shelf components. The components also used different backbones (ResNet at some places, VGGNet at others) that were directly from prior work. Was there any attempt to have an unified architecture? As the main novelty of the work is not in the model aspect, the paper needs to have stronger experiments and analysis.\\n2. More analysis on the quality of the depth estimation is needed. Ideally, the work should provide some insight into whether some of the errors is due to having bad depth estimation? The depth estimation method used is from 2016, there are newer depth estimation methods now. Would having better depth estimation give improved results? Experiments that illustrates that method works well with predicted bounding boxes instead of ground truth bounding boxes will also strengthen the paper. \\n3. There was the question of whether the related Yang et al. 2018 workshop paper should be included as basis for comparison. In the AC's opinion, Yang et al. 2018 is not concurrent work and should be treated as prior work. However, it is not clear whether it is feasible to compare against that work. The authors should attempt to do so and if infeasible, clearly articulate why that is the case.\\n4. As pointed out by R3, once there is a depth map available, it is also possible to compare against 3D methods (such as those that operate on point clouds)\\n\\nOverall the paper had a nice insight by proposing the simple but effective idea of using depth information to help with visual relation prediction. Still the work is somewhat borderline in quality. In the AC's opinion, the main contribution and insight of the paper is of limited interest to the ICLR community, and it would be more appreciated in a computer vision conference. The authors are encouraged to improve the paper with stronger experiments and analysis, incorporate various suggestions from the reviewers, and resubmit to a vision conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"To all reviewers\", \"comment\": \"We thank the reviewers for their constructive feedback. We have revised the paper by taking into account most of the mentioned concerns.\\n(Additions)\\nThe main mutual concern was regarding the improvement percentage (Reviewer 1 and 2). While we would like to mention that the improvements in visual relation detection community are generally in a smaller range (for example Graph R-CNN improves the previous baseline by 1,5% points and Neural Motifs improves \\u2018no context\\u2019 baseline by 1,4% points), we addressed this concern by updating our paper as follows: (1) we provided a more extensive ablation study. (2) we extended our qualitative results about the under-represented predicates (Figure 4), to quantitative results by proposing to use a more competent metric (Macro R@K). This metric takes into account the improvements within each predicate class individually and can give a better intuition about the results. \\nWe also added the qualitative results on the generated depth maps to the paper, as requested by Reviewer 3 and 2.\\n\\n(Removals)\\nWe removed the 3rd contribution regarding the feature extraction strategy as suggested by reviewer 3.\\nWe removed the results of the VRD dataset as new metric (Macro R@K) evaluations limited our space. VRD dataset is less often employed in state-of-the-art works and we believe this would not hurt our contribution. We updated all the figures accordingly. Please note that Figure 1 is an image both in VRD and VG datasets.\"}",
"{\"title\": \"Response Page 4\", \"comment\": \"********* Points of extensions (improvement) *********\\n\\nQ1. I believe *unsupervised* discovery of depth information for visual relation detection can be an interesting direction since it is not limited to the availability of relevant depth dataset.\\n\\nA1. As mentioned in our first comment, unsupervised training of RGB-to-Depth network would mean generating depth maps from RGB images without having access to any external datasets containing corresponding depth maps (as the supervised signal). The question is whether it is possible to convert one modality to another without having any parallel data (RGB and corresponding depth maps)? We do not see an obvious way how to achieve that (it would be similar to the task of learning a function that generates animal sounds by looking at their images (going from image to sound modality) without having access to any parallel image and sound data).\\n\\nQ2. It is not clearly motivated why one should use two separate networks for depth and RGB inputs in light of the additional complexity. For instance, it is good to discuss what is the advantage of the proposed (computationally more expensive) method over the following two simpler baselines:\\n- Faster RCNN is used on RGBD input to produce a single feature vector\\n\\nA2.Our focus was not on improving object detection using depth maps which is already a well-explored area. Bringing depth to the Faster R-CNN input would mostly affect object detection and we wanted to isolate this effect from visual relation detection. Also, in this case, we either need to (1) apply shared weights for RGB and D signals which is not a good idea as discussed in Section 2.1.2 or (2) use separate weights for RGB and Depth maps. This would be similar to our current architecture.\\nQ3. Above case with RGB input but have the Faster RCNN predict the depth map as an auxiliary loss. \\nA3. We cannot use depth maps in the loss function as we do not have access to any ground truth depth maps.\"}",
"{\"title\": \"Response Page 3\", \"comment\": \"********* Minor points *********\\n\\n- the code is not available. This is especially important since the paper is outperforming prior works which could be a contribution if reproducible.\\n\\nDear reviewer, the code will be made available as a fork from Neural Motifs code, upon the acceptance of the paper. \\n\\n- Section 2.2: is l_{so} concatenation of l_s and l_o?\\nThank you for mentioning this. Yes, l_{so} is the concatenation of l_s and l_o, and we have updated the text to clarify this point\\n\\n- Section 2.2: y_{spo} is defined but never used.\\nSince we assumed the definition of the error function would be trivial, we didn\\u2019t explicitly use y_{spo} and correspondingly y_hat{spo} in a separate formula. To fix this issue we have removed this variable in the updated version of the paper.\\n\\n- Equation 2: why do we have both e_p and f in the exponents? Aren\\u2019t they the same?\\n\\nFixed.\\n\\n- Equation 2: P is never defined.\\nThere was a missing \\\\mathbf. P is defined at the beginning of that section as the set of all predicates.\\n\\n- Page 5: \\u201ca fully connected hidden layer of 64, 200, 4096 and 20 neurons\\u201d: this amounts to 3 hidden layers.\\nPlease note that each of the mentioned layers is only connected to their corresponding feature pairs separately and they are not sequentially connected to each other. We made it more clear within the text.\\n\\n- Why VGG network for visual feature and AlexNet for depth features?\\n\\nWe can use pre-trained weights of VGG for better performance and we also rely on this from Neural Motifs provided weights so we are easily comparable. For depth maps since we train everything from scratch, AlexNet (or in this version ResNet18), are much easier to train and require fewer data.\\n\\n- zero-shot learning results on the visual genome is missing.\\nPlease note that the visual genome dataset doesn\\u2019t provide the zero-shot evaluations. \\n\\n- training procedure is a bit unclear: the text suggest that the fine tuning and/or learning of the three components might happen separately. It is important to clearly state if they are done in an end-to-end fashion and simultaneously or separately; and why. \\nThanks for the feedback. We updated the text. RGB-To-Depth network and RGB feature extraction network weights are frozen for stability. This is a common practice to keep the weights from earlier layers frozen. Weights from the other networks are not frozen. (what about depth?)\\n\\n- It\\u2019s good to name the method in table 2 in the same fashion as table 1. With the current naming (based on architecture) it is a bit confusing to understand the content without additional cross referencing. For instance AlexNet-BN - Raw seems to correspond to Ours_{c,v,l,d}\\n\\nNot applicable in the new version.\\n\\n- Figure 4: the frequency represented as different shades of red or blue is really hard to notice especially on a printed paper. The red vs blue color coding is not necessary since the bars going up or down indicate the same quality. So, it might be better to use red/blue for frequency instead (e.g. dark red high frequency to dark blue low frequency)\\n\\nFixed.\\n\\n- Section 3.2: the AlexNet reference seems wrong, it should be \\\"ImageNet Classification with Deep Convolutional Neural Networks\\\" NIPS , 2012\\nNot applicable in the new version. Anyhow please note that we used an enhanced version of AlexNet with fewer parameters and minor changes which was proposed in that paper. 
However, in our current evaluations, we replace AlexNet with ResNet18, which is more efficient.\\n\\n- The structure of section 3.5 is currently flat while the content seems to be nested (two experiments and two sets of corresponding discussions). It will read better if they are organized into subsections.\\n\\nNot applicable in the new version.\"}",
"{\"title\": \"Response Page 2\", \"comment\": \"********* Final Decision *********\\n\\nQ1. I do not find the paper passing the acceptance bar mainly due to the following reasons together:\\n1) The finding is not surprising since most of the visual relations are either explicitly depth-related (e.g., behind) or are semantically constrained by depth (e.g. riding cannot happen at different depths when the image is taken orthogonal to the rider).\\n\\nA1. We respectfully disagree that the lack of a \\u2018surprising\\u2019 effect in a finding should be the reason for its rejection. Let us consider Neural Motifs and Graph R-CNN: getting improvements by contextualizing the object embeddings is actually a trivial idea, or in case of Yu\\u2019s work: getting improvements by employing the prior knowledge signal from a teacher network also might seem like a good idea. What matters in all the above cases is observing that such properties are missing in the current models, proposing some way to employ them while tackling the possible challenges on the way, and providing a study to extensively study their implications.\\n\\nQ2. an additional depth dataset is used which provides the model with privileged information. Should it have been the case that depth information was inferred without an additional offline dataset, the results would have been more interesting.\\n\\nA2. Dear Reviewer 1, thank you very much for your constructive feedback. As mentioned in an earlier comment, this point was not fully clear to us and we would like to ask you for further elaboration. If we understood correctly this point is connected to Point (1) in \\u201cPoints of extensions (improvement)\\u201d. In that case, unsupervised training of the RGB-to-Depth network would mean generating depth maps from RGB images without having access to any external datasets containing corresponding depth maps (as the supervised signal). The question is whether it is possible to convert one modality to another without having any parallel data (RGB and corresponding depth maps)? We do not see an obvious way how to achieve that (it would be similar to the task of learning a function that generates animal sounds by looking at their images (going from image to sound modality) without having access to any parallel image and sound data).\\n\\n\\n\\nQ3. the improvements due to the additional depth network are not significant or conclusive.\\n\\nA3. This is an important point. We included a more extensive ablation study within the updated version of our paper as well as a more intuitive evaluation metric (Macro R@k) so that it would be easier to spot the improvements when using depth maps. \\n\\nQ4. there is a prior uncited work with the same research question for the effectiveness of depth information in visual relation detection which uses a similar approach.\\n\\nA4. Thank you for pointing us to a very relevant work. We included it in our updated version and compared it to our work. We consider this a parallel work as it has been published after the initial submission of our work to AAAI 2018. 
Some visible differences are:\\n(1) Their feature extraction strategy is very limited (average over mask).\\n(2) This work only studies human-centric relations, whereas we study a much larger and more extensive dataset with a broader range of possible relations.\\n(3) The experiments of this study are very limited, whereas we provide an extensive set of ablation studies and comparisons to relevant works in this area.\\nNevertheless, we cited their work, added the differences, and changed the wording of our paper to better reflect our contributions.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Q1. The final results improve upon the state-of-the-art, especially on the zero-shot learning regime. However, it seems that the improvement is mainly coming from the new architecture as opposed to the inclusion of the depth information. That is, ours_{c,v,l} brings most of the improvement already the last step to ours_{c,v,l,d} is negligible for non-zero-shot case.\\n\\nA1. This is a correct observation. Please note that the improvement in visual relation detection community are generally in a smaller range, for example, Graph R-CNN improves the previous baseline by 1,5% points and neural motifs improve \\u2018no context\\u2019 baseline by 1,4% points. However, to address your concern and shed more light on this, in the updated version of the paper we provided (a) the Macro R@K measure (as mentioned) and (b) a more extensive study on the effect of each feature. Please note that the more relations are detected, the harder it gets to gain improvement with other features. The same effect happens if we assume having only c, d, l and then add v (please refer to the new ablations). In fact, depth maps can be more informative as visual features (l,c,d versus l,c,v). \\n\\nQ2. Along the same line, it\\u2019s possible that this small difference between ours_{c,v,l} and ours_{c,v,l,d} for the standard predicate prediction, can be due to a hyperparameter optimization that is (only or more thoroughly) done for ours_{c,v,l,d}. The hyper-parameter optimization scheme for different baselines is not described.\\n\\nA2. Dear Reviewer, we have reported the best possible results for each model without a focus on the full model. \\n\\nQ3. Given the small difference of ablation levels, the comparison will be stronger if done multiple times and reporting mean and standard deviation of the results.\\n\\nA3. Thank you for the suggestion. Based on your suggestion, we performed bootstrapping by training each model 8 times and updated the results. The maximum reached variance was 0.01 which we added to the text.\\n\\nQ4. For a fair comparison the visual feature vector v_{so} should be tried as the feature of the union bound box of both subject and object same way as it is done for depth feature vector d_{so}. \\n\\nA4. This is a very valid point. Using union was more of an architectural choice for us and it did not have a large effect on the results but we understand that this might have caused confusion. We re-computed the results given the concatenated subject and object depth feature vectors similar to visual features. We updated Figure 2.\\n\\nQ5. The paper refers to \\u201cOurs-d\\u2019_{so}\\u201d as a baseline that *only* uses depth information with no image/label information. However, it seems that the region proposals for this feature are coming from the image-based network that uses image information. \\n\\nA5. Region proposals for all features equally come from the ground truth as we are reporting the predicate prediction settings (since we did not want to include the image detectors error within our results). In models that do not contain l_so, we do not use region proposals as features.\\n\\nQ6. Important related but uncited works: (1) [\\u201cVisual Relationship Prediction via Label Clustering and Incorporation of Depth Information\\u201d ECCV workshops 2018] studies the same question as part of their work.\\n\\nA6. This point has been explained later in the feedback to \\u201cFinal Decision\\u201d.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Q1. The RGB-to-Depth Network is pre-trained on other dataset. Is there any gap when it is used for VG or VRD dataset?\\n\\nA1. Thank you very much for your constructive feedback. Please note that we are interested in the role that depth maps can play in relation detection and evaluating the generalizability power of an RGB-to-Depth model from one dataset to another, is not the focus of our research work. Nevertheless, we updated our paper to provide samples of the generated depth maps from our datasets, as a qualitative measure. Please note that providing quantitative measures in our case is not possible as our dataset contains only RGB images and there are no ground truth depth maps available. However, you can find the quantitative measures on the test data from NYU dataset available in [1] that are supposed to be generalizable to unseen examples.\\n\\n[1] Laina, Iro, et al. \\\"Deeper depth prediction with fully convolutional residual networks.\\\" 2016 Fourth international conference on 3D vision (3DV). IEEE, 2016.\\n\\nQ2.1. Although the depth map feature extraction seems to work well, it seems to be a little trivial. Why a CNN, e.g. AlexNet, or VGG, can be used to extract depth features?\\n\\nA2.1. Thanks for the interesting question. Please note that we only use the architectural design of AlexNet (VGG or ResNet18) and train the weights of the network from scratch. Any other CNN architecture would make a good feature extractor for depth maps as (similar to RGB images), in depth maps: 1) there is high covariance within the local neighborhood which diminishes with distance and 2) the statistics are mostly stationary across the depth maps. CNNs are designed with the imposed inductive bias of locality and translation invariance which perfectly exploits such characteristics of the input domain[1].\\n\\n[1] Battaglia, Peter W., et al. \\\"Relational inductive biases, deep learning, and graph networks.\\\" arXiv preprint arXiv:1806.01261 (2018).\\n\\nQ2.2. And why the AlexNet trained from scratch performs better than AlexNet pre-trained on RGB images for object detection task and VGG net? If the author can give more explanations, this part will be more insightful.\\n\\nA2.2 This has been discussed briefly in Section 2.1.2. While RGB and depth maps share some characteristics (mentioned above) that make them good candidates for CNN-based feature extraction, they are still different modalities representing different information and even having a different pixel range. Therefore, sharing the same CNN weights between them would be sup-optimal. We can elaborate more on this within the text.\\n\\nQ3. From the plot which shows the top 10 percent absolute changes in prediction performance per predicate, the advantage of Depth is not obvious compared with RGB. And Depth does not bring the advantage claimed in Abstract. It\\u2019s a little hard to understand why depth information can rectify the prediction of (Tower, taller, trees). To sum up, the qualitative results are not so satisfying.\\n\\nA3. Thanks for pushing us towards more clarity. You are right. One of the points we wanted to make here was that improvements in under-represented predicates do not get reflected within the overall R@K. To address your concern regarding this, instead of only providing qualitative reports, we now report the result using a better quantitative metric (Macro R@K) which computes the R@K for each predicate separately and reports the mean overall. 
Please find these results in the updated version. We also further updated the mentioned plot and tried to make it clearer. \\nRegarding the predicate \\u201ctaller\\u201d, we removed this example as we had to remove the VRD results due to space constraints. However, the explanation goes like this: a shorter person standing closer to the camera can look taller than a tall person standing further away (perspective). Having access to a depth map helps us tackle this problem.\\n\\nQ4. In Table 1, what really functions seems to be c_so, v_so, and l_so, while the improvement brought by depth is limited.\\n\\nA4. This is a correct observation. Please note that the improvements in the visual relation detection community are generally in a smaller range; for example, Graph R-CNN improves the previous baseline by 1.5% points and Neural Motifs improves the \\u2018no context\\u2019 baseline by 1.4% points. However, to address your concern and shed more light on this, in the updated version of the paper we provided (a) the Macro R@K measure (as mentioned) and (b) a more extensive study on the effect of each feature. Please note that the more relations are detected, the harder it gets to gain improvement with other features. The same effect happens if we assume having only c, d, l and then add v (please refer to the new ablations). In fact, depth maps can be more informative than visual features (l,c,d versus l,c,v).\"}",
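A minimal Python sketch of the distinction drawn above between the standard (micro) R@K and the per-predicate Macro R@K; the toy triplets are made up for illustration, not taken from the datasets.

```python
import numpy as np
from collections import defaultdict

def recall_at_k(gt_triplets, ranked_predictions, k=50):
    """Standard (micro) R@K: fraction of ground-truth triplets recovered
    among the top-k predictions, pooled over all predicate classes."""
    topk = set(ranked_predictions[:k])
    return len(topk & set(gt_triplets)) / max(len(gt_triplets), 1)

def macro_recall_at_k(gt_triplets, ranked_predictions, k=50):
    """Macro R@K: compute R@K separately per predicate, then average,
    so rare predicates weigh as much as frequent ones like 'on'."""
    topk = set(ranked_predictions[:k])
    per_predicate = defaultdict(list)
    for triplet in gt_triplets:                 # triplet = (subject, predicate, object)
        per_predicate[triplet[1]].append(triplet in topk)
    return float(np.mean([np.mean(hits) for hits in per_predicate.values()]))

gt = [("man", "on", "horse"), ("hat", "on", "man"), ("tree", "behind", "man")]
pred = [("man", "on", "horse"), ("hat", "on", "man"), ("man", "behind", "tree")]
print(recall_at_k(gt, pred, k=3))        # 0.67: pooled over all triplets
print(macro_recall_at_k(gt, pred, k=3))  # 0.50: the rare 'behind' class drags the mean down
```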
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Q1. I am not sure if the proposed approach is the way to go. Given a depth image of the scene, we can generate a reconstruction of the scene in 3D...\\n\\nA1. Thank you for your valuable comments. You are right that direct reasoning in 3D would be more beneficial. However, to reconstruct a scene in 3D we would need to collect either 1) more than one RGB image or 2) more than one depth map, coming from different views of that scene. This is a more involved process that has limitations both on data and computation. What we have in our scenario is only one RGB image from the scene which we use to synthetically generate the depth map from. Going through the RGB-to-Depth deep network in our architecture is the only way to acquire those depth maps in first place. Other than the limitation in our current datasets, please note that in many real-world scenarios for humans (or for autonomous agents such as self-driving cars) this is a similar case: a car driving directly forward towards a pedestrian has no access to the rear-view of that person. Depth maps, in this case, can already provide sufficient data on the distance to objects. In summary, we are exploring a real-world scenario where a 3D reconstruction of the scene is not accessible.\\n\\n2. There is very little discussion about the quality of predicted depth maps. Ideally, this needs to be quantified...\\n\\nA2. Thank you for the nice suggestion. Please find the attached qualitative examples in the updated version of our paper. Please note that providing quantitative measures in our case is not possible as our dataset contains only RGB images and there are no ground truth depth maps available. However, you can find the quantitative measures on the test data from the NYU dataset available in [1] that are supposed to be generalizable to unseen images.\\n\\n[1] Laina, Iro, et al. \\\"Deeper depth prediction with fully convolutional residual networks.\\\" 2016 Fourth international conference on 3D vision (3DV). IEEE, 2016.\\n\\n3. To use a siamese (shared weights) feature extractor between RGB and Depth images or not, is not a significant contribution by itself...\\n\\nA3. Thank you for pointing this out. We considered this a contribution as most state-of-the-art works assume otherwise without providing sufficient experiments on it. However, we updated our contributions list to reflect your concern. We removed this item as our contributions and only described it briefly in the feature extraction section.\", \"minor_comments\": \"1. Figure 2 seems to indicate that a Faster-RCNN is used on both RGB and Depth steams which is backed up by text in Section 2 (first paragraph)...\\n\\nA1. We did not aim to indicate that. Please note that Faster-RCNN has the region proposal networks and the network we apply to depth stream in Figure 2 has only a feature extractor (ResNet18 in this case). As shown in the RGB stream, RPN (from Faster R-CNN) is only applied to RGB images and the extracted regions are also used to provide bounding boxes for the depth maps (also explained in the mentioned paragraph).\\n\\n\\n2. The VGG-16 network is pre-trained in ImageNet and finetuned to relevant data but it is not clear for what task? If the task is object detection, it needs to be trained for it (not fine-tuned, unless it is being initialized from COCO pre-training)...\\n\\nA2. This is pre-trained for object detection and fine-tuned for predicate prediction. We appreciate your comment. 
We made it clearer in the paper. \\n\\n3. The AlexNet-BN depth model is trained for relation detection using only depth. But it is not clear if it is using proposals/boxes generated by the RGB detection model or using ground-truth boxes. Basically, the object-detection component of the pipeline is not clear at all.\\n\\nA3. We use the ground-truth bounding boxes. During training, using noisy proposals from RGB detection would be sub-optimal (especially under the predicate prediction setting).\"}",
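To illustrate the pipeline clarified in the responses above (region proposals come only from the RGB stream, and the same boxes crop the synthesized depth features), a minimal PyTorch sketch; `depth_net` stands in for any pretrained monocular RGB-to-depth model such as Laina et al.'s FCRN, and all module names here are placeholder assumptions rather than the authors' code.

```python
import torch
from torchvision.ops import roi_align

def extract_pair_features(rgb, boxes, depth_net, rgb_backbone, depth_backbone):
    """rgb: (1, 3, H, W) image; boxes: (N, 4) RGB-derived boxes as (x1, y1, x2, y2).

    No RPN is ever run on the depth stream: the depth map is synthesized
    from the single RGB image and cropped with the same boxes.
    """
    with torch.no_grad():
        depth = depth_net(rgb)                              # (1, 1, H, W) synthesized depth
    rgb_feat = rgb_backbone(rgb)                            # e.g. VGG-16 conv features
    depth_feat = depth_backbone(depth)                      # e.g. ResNet-18 trained from scratch
    v = roi_align(rgb_feat, [boxes], output_size=(7, 7),
                  spatial_scale=rgb_feat.shape[-1] / rgb.shape[-1])
    d = roi_align(depth_feat, [boxes], output_size=(7, 7),
                  spatial_scale=depth_feat.shape[-1] / rgb.shape[-1])
    return torch.cat([v.flatten(1), d.flatten(1)], dim=1)   # per-box fused features
```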
"{\"title\": \"Question regarding one of the points\", \"comment\": \"Dear Reviewer 1, thank you very much for your constructive feedback. We are working on your points and will soon release the updated paper together with our responses. In the meantime, one of the points was not fully clear to us and we would like to ask you for further elaboration. This is about Point (2) in \\\"Final Decision\\\". If we understood correctly this point is connected to Point (1) in \\u201cPoints of extensions (improvement)\\u201d. In that case, unsupervised training of RGB-to-Depth network would mean generating depth maps from RGB images without having access to any external datasets containing corresponding depth maps (as the supervised signal). The question is whether it is possible to convert one modality to another without having any parallel data (RGB and corresponding depth maps)? We do not see an obvious way how to achieve that (it would be similar to the task of learning a function that generates animal sounds by looking at their images (going from image to sound modality) without having access to any parallel image and sound data).\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"OVERVIEW:\\nThe authors propose to use depth information to better predict the visual relation between objects in an image. They do this by incorporating a pre-trained RGB-to-Depth model within existing frameworks. They claim the following contributions:\\n1. First to utilize 3D information in visual relation detection. They synthesize depth images for existing benchmark datasets of VRD and VG using a pre-trained RGB-to-Depth model trained on NYUv2 to generate RGB-D data for visual relation detection.\\n2. Discuss and empirically investigate different strategies to extract features from depth maps for relation detection.\\n3. Study the quantitative and qualitative benefits of incorporating depth maps. \\\"We show in our empirical evaluation using the VRD and VG datasets, that models using depth maps can outperform competing methods by a margin of up to 3% points\\\".\", \"major_comments\": \"1. I liked the idea of using depth information to inform visual relationships but I am not sure if the proposed approach is the way to go. Given a depth image of the scene, we can generate a reconstruction of the scene in 3D, even if it is partial/imperfect. Direct reasoning in 3D should now be possible instead of going via deep networks as proposed in the paper. I believe a direct 3D approach would make a meaningful baseline at the very least and needs to be discussed.\\n2. The authors use a pre-trained RGB-to-Depth network trained on NYU-v2 to predict depth for the images of VRD and VG. There is very little discussion about the quality of predicted depth maps. Ideally, this needs to be quantified to convince the reader that the generated depth maps are \\\"good\\\" but at the very least the authors need to show qualitative examples (both good, typical and bad) to prove that the pre-trained network generates meaningful depth maps.\\n3. To use a siamese (shared weights) feature extractor between RGB and Depth images or not, is not a significant contribution by itself. In principle, separate feature extractors lead to larger model complexity/learning capability and make sense given domain separation between RGB and Depth.\", \"minor_comments\": \"1. Figure 2 seems to indicate that a Faster-RCNN is used on both RGB and Depth steams which is backed up by text in Section 2 (first paragraph). However, in Section 3.2, under RGB Feature Extraction and Depth Map Feature Extraction, the discussion is about VGG-16 and AlexNet-BN networks. The VGG-16 network is pre-trained in ImageNet and finetuned to relevant data but it is not clear for what task? If the task is object detection, it needs to be trained for it (not fine-tuned, unless it is being initialized from COCO pre-training). The AlexNet-BN depth model is trained for relation detection using only depth. But it is not clear if it is using proposals/boxes generated by RGB detection model or using ground-truth boxes. Basically, the object-detection component of the pipeline is not clear at all.\", \"note\": \"I would like to mention that I have published in monocular object pose estimation and work in the object recognition. I am not as familiar with the visual relation detection field but I understand all the components proposed by the authors in this work. 
I believe I understood the paper and reviewed it fairly (to the best of my ability).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to leverage the depth information for relation prediction, arguing that the depth information benefit the prediction of some predicates. To solve the lack of 3D data, an RGB-to-Depth model is trained on external available dataset and then applied to images from visual relation dataset. In the experiments, they investigate different strategies to extract features from depth maps and the explore effectiveness of depth information by comparing the model that only used depth map as input with those which use RGB information. The comparisons with other methods and ablation studies under both Zero-shot setting and normal setting demonstrate the effectiveness of depth information.\\n\\n+Strength:\\n(1) The motivation is reasonable and what the authors make an attempt to explore is very meaningful. Visual relation especially the spatial relation is not likely to be predicted accurately without 3D information. In other words, it seems that visual relation prediction task will be extended to 3D images rather than staying within 2D images. Thus what the authors do is a good exploration for further extensions.\\n(2) Comparisons with previous methods and the results show that the depth information is useful to some extent, but not so obvious.\\n(3) The writing of this article is good and it\\u2019s very easy to understand.\\n\\n-Weakness:\\n(1) The RGB-to-Depth Network is pretrained on other dataset. Is there any gap when it is used for VG or VRD dataset? \\n(2) Although the depth map feature extraction seems to work well, it seems to be a little trivial. Why a CNN, e.g. AlexNet, or VGG, can be used to extract depth features? And why the AlexNet trained from scratch performs better than AlexNet pretrained on RGB images for object detection task and VGG net? If the author can give more explanations, this part will be more insightful.\\n(3) From the plot which shows the top 10 percent absolute changes in prediction performance per predicate, the advantage of Depth is not obvious compared with RGB. And Depth does not bring the advantage claimed in Abstract. It\\u2019s a little hard to understand why depth information can rectify the prediction of (Tower, taller, trees). To sum up, the qualitative results are not so satisfying.\\n(4) In Table 1, what really functions seems to be c_so, v_so, and l_so, while the improvement brought by depth is limited.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"********* Post Rebuttal *********\\n\\nI appreciate the authors' effort in providing thorough responses and revised manuscript. \\n\\nI agree with the authors that \\\"the finding not being surprising\\\" is not a ground for rejection. I tried to word my final decision carefully but it seems it has still caused confusion for the authors. As I have mentioned in my original review, the rating was a result of the 4 points considered *together* . \\n\\nThat is, if one exploits privileged information that needs extra sensory data and/or annotation (point 2), *and*, this privileged information is clearly related and thus should be normally useful for the final task (point 1), *and* achieve marginal improvements (point 3), it can be a ground for rejection. Especially, given that prior works with similar arguments exists (point 4). \\n\\nThe rebuttal has alleviated the issue of marginal improvements (point 3) by introducing meanR@K (or as the revised paper refer to it, Macro R@K). Here, the improvements are more significant both compared to the state of the art and ablated baselines.\\n\\nThe authors also argue that the related [Yang et al. 2018] paper (point 4) should be considered a concurrent submission since the authors original submission was to AAAI18. \\n\\nThe rebuttal also addresses other clarity or experimental issues which improves the quality of the revised work.\\n\\nFinally, I understand that the privileged information is only required during training time which is a good point.\\n\\nAll in all, *assuming that [Yang et al. 2018] is considered a concurrent work* according to ICLR, I think the revised paper becomes slightly above borderline and thus I change my rating to \\\"weak accept\\\". If [Yang et al. 2018] is not considered concurrent work, then, a conclusive comparison is required for the acceptance of the current work.\\n\\n\\n********* Summary *********\\n \\nThe paper poses the question of whether depth information is informative for visual relationship prediction using still images. It is intuitive that 3D arrangement of objects in an image can be a useful cue for predicting their relationship. As such it is important to see whether and to what extent depth information complements RGB information for visual relation detection. That is the focus of this paper.\\nThe paper proposes to use an off-the-shelf monocular depth estimation networks to augment the available RGB information towards better visual relation detection. For that, it proposes a specific network two-stream structure working on RGB image and (predicted) depth image. The proposed model demonstrates improved results upon state of the art for visual relation prediction.\\n \\n\\n********* Strengths and Weaknesses *********\\n \\n+ A comprehensive set of tests has been conducted. \\n+ Zero-shot prediction results are particularly interesting.\\n+ The experiment on ranking the predicate classes based on the change in prediction accuracy before and after using depth information (Figure 4) is interesting and intuitive.\\n* The final results improve upon the state-of-the-art, especially on the zero-shot learning regime. 
However, it seems that the improvement is mainly coming from the new architecture as opposed to the inclusion of the depth information. That is, ours_{c,v,l} brings most of the improvement already; the last step to ours_{c,v,l,d} is negligible for the non-zero-shot case.\\n- Along the same line, it\\u2019s possible that this small difference between ours_{c,v,l} and ours_{c,v,l,d} for the standard predicate prediction, can be due to a hyper-parameter optimization that is (only or more thoroughly) done for ours_{c,v,l,d}. The hyper-parameter optimization scheme for different baselines is not described. \\n- Given the small difference of ablation levels, the comparison will be stronger if done multiple times and reporting mean and standard deviation of the results.\\n- For a fair comparison the visual feature vector v_{so} should be tried as the feature of the union bounding box of both subject and object the same way as it is done for the depth feature vector d_{so}. \\n- The paper refers to \\u201cOurs-d\\u2019_{so}\\u201d as a baseline that *only* uses depth information with no image/label information. However, it seems that the region proposals for this feature are coming from the image-based network that uses image information. \\n \\n- Important related but uncited works:\\n(1) [\\u201cVisual Relationship Prediction via Label Clustering and Incorporation of Depth Information\\u201d ECCV workshops 2018] studies the same question as part of their work.\\n \\n\\n********* Final Decision *********\", \"i_do_not_find_the_paper_passing_the_acceptance_bar_mainly_due_to_the_following_reasons_together\": [\"1) The finding is not surprising since most of the visual relations are either explicitly depth-related (e.g., behind) or are semantically constrained by depth (e.g. riding cannot happen at different depths when the image is taken orthogonal to the rider).\", \"2) an additional depth dataset is used which provides the model with privileged information. Should it have been the case that depth information were inferred without an additional offline dataset, the results would have been more interesting.\", \"3) the improvements due to the additional depth network are not significant or conclusive.\", \"4) there is a prior uncited work with the same research question for the effectiveness of depth information in visual relation detection which uses a similar approach.\", \"********* Minor points *********\", \"the code is not available. This is especially important since the paper is outperforming prior works which could be a contribution if reproducible.\", \"Section 2.2: is l_{so} concatenation of l_s and l_o?\", \"Section 2.2: y_{spo} is defined but never used.\", \"Equation 2: why do we have both e_p and f in the exponents? Aren\\u2019t they the same?\", \"Equation 2: P is never defined.\", \"Page 5: \\u201ca fully connected hidden layer of 64, 200, 4096 and 20 neurons\\u201d: this amounts to 3 hidden layers.\", \"Why VGG network for visual feature and AlexNet for depth features?\", \"zero-shot learning results on Visual Genome are missing\", \"training procedure is a bit unclear: the text suggests that the fine tuning and/or learning of the three components might happen separately. It is important to clearly state if they are done in an end-to-end fashion and simultaneously or separately; and why.\", \"It\\u2019s good to name the method in table 2 in the same fashion as table 1. With the current naming (based on architecture) it is a bit confusing to understand the content without additional cross referencing.
For instance, AlexNet-BN - Raw seems to correspond to Ours_{c,v,l,d}\", \"Figure 4: the frequency represented as different shades of red or blue is really hard to notice especially on a printed paper. The red vs blue color coding is not necessary since the bars going up or down indicate the same quality. So, it might be better to use red/blue for frequency instead (e.g. dark red high frequency to dark blue low frequency)\", \"Section 3.2: the AlexNet reference seems wrong, it should be \\\"ImageNet Classification with Deep Convolutional Neural Networks\\\" NIPS, 2012\", \"The structure of section 3.5 is currently flat while the content seems to be nested (two experiments and two sets of corresponding discussions). It will read better if they are organized into subsections.\", \"********* Points of extensions (improvement) *********\", \"I believe *unsupervised* discovery of depth information for visual relation detection can be an interesting direction since it is not limited to the availability of a relevant depth dataset.\", \"It is not clearly motivated why one should use two separate networks for depth and RGB inputs in light of the additional complexity. For instance, it is good to discuss what is the advantage of the proposed (computationally more expensive) method over the following two simpler baselines:\", \"Faster RCNN is used on RGBD input to produce a single feature vector\", \"above case with RGB input but have the Faster RCNN predict the depth map as an auxiliary loss.\"]}"
]
} |
S1et8gBKwH | Semi-supervised Pose Estimation with Geometric Latent Representations | [
"Luis A. Perez Rey",
"Dmitri Jarnikov",
"Mike Holenderski"
] | Pose estimation is the task of finding the orientation of an object within an image with respect to a fixed frame of reference. Current classification and regression approaches to the task require large quantities of labelled data. The amount of labelled data for pose estimation is relatively limited. With this in mind, we propose the use of Conditional Variational Autoencoders (CVAEs) \cite{Kingma2014a} with circular latent representations to estimate the corresponding 2D rotations of an object. The method is capable of training with datasets that have an arbitrary amount of labelled images, providing relatively similar performance for cases in which 10-20% of the image labels are missing. | [
"Semi-supervised learning",
"pose estimation",
"angle estimation",
"variational autoencoders"
] | Reject | https://openreview.net/pdf?id=S1et8gBKwH | https://openreview.net/forum?id=S1et8gBKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"p72TsB7FtJ",
"Bklwm1hYsr",
"S1eXpCiYsr",
"Hyg_tCjKjS",
"Byxpp1v0cS",
"B1gUsOVx5B",
"HkgKc3DJqS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746380,
1573662494694,
1573662395228,
1573662336428,
1572921284571,
1571993757659,
1571941521324
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2332/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2332/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2332/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2332/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2332/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2332/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper addresses the problem of rotation estimation in 2D images. The method attempted to reduce the labeling need by learning in a semi-supervised fashion. The approach learns a VAE where the latent code is be factored into the latent vector and the object rotation.\\n\\nAll reviewers agreed that this paper is not ready for acceptance. The reviewers did express promise in the direction of this work. However, there were a few main concerns. First, the focus on 2D instead of 3D orientation. The general consensus was that 3D would be more pertinent use case and that extension of the proposed approach from 2D to 3D is likely non-trivial. The second issue is that minimal technical novelty. The reviewers argue that the proposed solution is a combination of existing techniques to a new problem area. \\n\\nSince the work does not have sufficient technical novelty to compare against other disentanglement works and is being applied to a less relevant experimental setting, the AC does not recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reviewer 2 response\", \"comment\": \"Dear reviewer, I would like to thank you for the time and effort spent in analyzing this paper and for the specific suggestions made to improve this work. We will answer the concerns presented in the review and indicate the future actions to improve the paper.\\n1) The central empirical result stated is that using this approach allows one to reduce amount of labelled data by 10-20 %. First, even if valid, this is not a very convincing reduction in the amount of supervision. However, I feel this claim is not well-established by the experiments:\\n1a) The paper should report a baseline with only using the loss in eqn 3 and only training the encoder (using various fractions of training data) to predict the rotation i.e. purely discriminative training without training a generative model. The current plots of performance vs fraction of labelled data don't mean much until compared to a similar plot for this baseline. The current results don't really highlight the importance of training the generative model or using the unlabeled data.\", \"a\": \"The training/testing dataset were divided at model level i.e. during training the CVAE has received no renders from the testing 3D models. This way we ensure there has been no information leakage about certain models from the training dataset into the testing phase.\\nOnce again, we would like to thank the reviewer for taking the time to analyze the work and provide useful suggestions to improve our work.\\n-- [1] Learning Disentangled Representations with Semi-Supervised Deep Generative Models, NIPS 2017. Siddharth et. al.\"}",
"{\"title\": \"Reviewer 1 response\", \"comment\": \"Dear reviewer, I would like to thank you for the time and effort spent in analyzing our work. We would like to provide some answers to the comments made in the review and to make a small clarification with respect to the results.\\n1)\\t The construction itself would be novel while each component (e.g., CVAE, latent representation for the rotations, and semi-supervised construction of CVAE) have been already known. Experimental results are not surprising but show that the presented method is useful to some extent. In a sense, it is a bit disappointing that we need 50+% images needed to be labeled to achieve < 20-degree error.\", \"a\": \"Thank you for very much for your suggestion. We will include such expansion in a future version of the paper.\\nOnce again, we would like to thank the reviewer for the time taken and providing valuable feedback.\"}",
"{\"title\": \"Reviewer 4 response\", \"comment\": \"Dear reviewer, I would like to thank you for the time and effort spent in analyzing this paper and for the specific suggestions made to improve this work. We will answer the concerns presented in the review and indicate the future actions to improve the paper.\\n1)\\tThe entire section on CVAE's and losses are quite standard in literature. The interesting part is in combining the supervised and unsupervised parts of the method for the task for pose estimation. But in the end this is a simple weighted loss function (equation 5). So I wonder what is the novelty? What are the new capabilities enabled by this approach?\", \"a\": \"Thank you very much for this suggestion. We will include more experiments in a future version of the paper that try to answer such questions and concerns.\\nOverall, we have identified that we are lacking some important experiments to position our method with respect to a baseline. Moreover, our work might lack stronger points of support, we will include experiments with SO(3) rotations and try to explore different directions to improve our contributions. Once again, we would like to thank the reviewer for the time taken into writing a detailed analysis of our work and providing constructive feedback.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": [\"This paper presents a semi-supervised approach to learn the rotation of objects in an image. The primary motivation is that for rotation estimation datasets may not always be fully labeled, so learning partially from labeled and partially for unlabeled is important. The approach is to use a CVAE with a supervised loss and an unsupervised loss and to jointly train the network. Limited experiments that show performance are presented.\", \"First, the paper solves a very interesting problem with potentially wide applications. The paper is reasonably well-written.\", \"Unfortunately, I don't believe that the contributions of the paper meet the standards of ICLR. I justify my opinion below. The experiments are also very weak.\", \"While the high level goal of \\\"pose estimation\\\" is clear. Even after reading the paper multiple times, I did not understand the setting well. It appears like the paper looks at the problem of 2D orientation estimation of objects in images. However, this setting is restrictive and not very practical in reality. We mostly care about 3D pose estimation. It would have been good to see results on 3D rotations at the very least.\", \"Contribution: It is unclear to me what the primary contribution(s) of the paper is. The entire section on CVAE's and losses are quite standard in literature. The interesting part is in combining the supervised and unsupervised parts of the method for the task for pose estimation. But in the end this is a simple weighted loss function (equation 5). So I wonder what is the novelty? What are the new capabilities enabled by this approach?\", \"Related Work:\", \"Implicit 3D Orientation Learning for 6D Object Detection from RGB Images, ECCV 18\", \"I would have loved to see a description of the differences in the loss functions (1) and (2). Perhaps this can help elevate the contribution more?\", \"I also missed justification of why the particular design choice is suitable for this problem? Would direct regression using a simple CNN work better?\", \"In equation (4), how are the two losses balanced?\", \"The dataset generation part is just confusing. ModelNet40 is rendered but only 2D rotation is predicted? What does 2D rotation mean for a 3D object?\", \"Could this method be tested on a dataset like dSprites (https://github.com/deepmind/dsprites-dataset) which has 3D rotations?\", \"Regarding experiments: I was disappointed to see no comparisons with other approaches or even a simple baseline. A CNN that directly regresses orientation could help put the tables and plots in perspective.\", \"Overall, the problem is important (if lifted to 3D) with important applications. However, the paper does not say anything new about how to solve the problem and the experiments are weak. In its current state, I am unable to recommend acceptance.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes to employ conditional variational autoencoder (CVAE) to estimate the geometry of 2D rotations of objects given images partially labeled. Here, the label represents the geometry of the 2D rotation. The proposed method introduces two latent representation. z is the ordinal latent variable and r a latent representation for the rotations where the latent variable is defined in the 1-dimensional circle in R^2 so that it can naturally represent a hyperspherical latent space.\\nThe construction of the proposed CVAE is straightforward. For labeled images, the (evidence lower bound of the) loglikelihood of the image-rotation pairs is maximized. For labeled images. For labeled images, the (evidence lower bound of the) loglikelihood of the images is maximized. \\n\\nThe decision of the reviewer of this paper is weak reject. The major reason is the lack of technical originality. The construction itself would be novel while each component (e.g., CVAE, latent representation for the rotations, and semi-supervised construction of CVAE) have been already known.\\n\\nExperimental results are not surprising but show that the presented method is useful to some extent. In a sense, it is a bit disappointing that we need 50+% images needed to be labeled to achieve < 20-degree error. One interesting observation of this paper is that more labeled images give better results than giving greater number of renders. Expansion to 3D rotations would be a good challenge.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper tackles the task of rotation estimation in a setting where both labelled and unlabelled examples are available for training. It proposes to learn a generative model of images (a VAE), where the \\u2018code\\u2019 is factored into a latent vector z and the object rotation r. As in training a VAE, an image encoder that predicts the distribution over (z, r) and the generator are jointly trained, but with additional supervision on the distribution over r for the labelled examples.\\n\\nI think the overall idea of learning a disentangled generative model in a semi-supervised setting is simple and elegant, and could in principle help leverage unlabelled data. However, I do have some concerns regarding the specific contributions of this work, and several reservations about the experiments reported, and would overall argue for rejection.\", \"concerns\": \"1) The central empirical result stated is that using this approach allows one to reduce amount of labelled data by 10-20 %. First, even if valid, this is not a very convincing reduction in the amount of supervision. However, I feel this claim is not well-established by the experiments:\\n\\n1a) The paper should report a baseline with only using the loss in eqn 3 and only training the encoder (using various fractions of training data) to predict the rotation i.e. purely discriminative training without training a generative model. The current plots of performance vs fraction of labelled data don't mean much until compared to a similar plot for this baseline. The current results don't really highlight the importance of training the generative model or using the unlabelled data.\\n\\n1b) I think there are some inconsistencies in performances reported in Fig 2. I assume the test set is same despite different training data, because the paper states \\\"All of the trained models are evaluated with respect to the complete test set\\\". In this regard, I am puzzled why using 100% labelled data with 16 renders is significantly better than using 50% labelled data with 32 renders -- these should imply similar number of labelled examples, and more unlabelled ones in the former.\\n\\n2) While the discussion points to this, the paper would really benefit from having results in a real setting, in particular as pose estimation is a field with a lot of prior methods that have been shown to work in these settings. The current results are all in a setup with synthetic, unoccluded data, without background variation, equidistant camera uniformly sampled along a circle. The central idea of using a generative model would be much more difficult to operationalize in a realistic setting where these simplifying assumptions are not made, and I'd only be convinced about the applicability of the approach by results in that setting. As a possible setup, one case use many imagenet images in conjunction with labelled examples in PASCAL3D+ to try this approach.\\n\\n3) The overall approach maybe novel in context of pose estimation, but this idea of learning a disentangled generative model is not, and there are several papers which do so with varying amount of supervision e.g. see [1] below for similar ideas, and pointers. 
While some details here may vary, in context of these prior works, I'd view this paper as mostly applying well-established ideas to a new task.\\n\\n--\\nIn addition to the above, I have a question regarding the training/testing data:\", \"q\": \"The dataset description only states data was randomly divided - was this random division at an image level, or model level i.e. could different renderings of the same model be in train and test set?\\n--\\n\\n[1] Learning Disentangled Representations with Semi-Supervised Deep Generative Models, NIPS 2017. Siddharth et. al.\"}"
]
} |
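For concreteness: the method the reviews above describe — a CVAE whose latent splits into an ordinary code z and a rotation r on the unit circle, trained with a weighted supervised/unsupervised objective — can be sketched in a few lines. This is a hypothetical reconstruction from the review text only (the paper's actual "equation 5" and model interfaces do not appear in this dump); `elbo_xr` and `elbo_x` stand in for ELBO estimators the paper would define.

```python
import math

def rotation_embedding(theta):
    # Review #1: the rotation latent lives on the 1-dimensional
    # circle in R^2, i.e. r = (cos theta, sin theta).
    return (math.cos(theta), math.sin(theta))

def semi_supervised_loss(elbo_xr, elbo_x, x_lab, r_lab, x_unlab, alpha=1.0):
    # Supervised term: ELBO of (image, rotation) pairs on labeled data.
    sup = -sum(elbo_xr(x, r) for x, r in zip(x_lab, r_lab)) / len(x_lab)
    # Unsupervised term: ELBO of images alone on unlabeled data.
    unsup = -sum(elbo_x(x) for x in x_unlab) / len(x_unlab)
    # Weighted combination; how alpha balances the two terms is exactly
    # what Reviewer #4's question about equation (4)/(5) concerns.
    return sup + alpha * unsup
```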
HklFUlBKPB | Identifying Weights and Architectures of Unknown ReLU Networks | [
"David Rolnick",
"Konrad P. Kording"
] | The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network's parameters cannot be identified from its outputs. Here, we show that in many cases it is possible to reconstruct the architecture, weights, and biases of a deep ReLU network given the ability to query the network. ReLU networks are piecewise linear and the boundaries between pieces correspond to inputs for which one of the ReLUs switches between inactive and active states. Thus, first-layer ReLUs can be identified (up to sign and scaling) based on the orientation of their associated hyperplanes. Later-layer ReLU boundaries bend when they cross earlier-layer boundaries and the extent of bending reveals the weights between them. Our algorithm uses this to identify the units in the network and weights connecting them (up to isomorphism). The fact that considerable parts of deep networks can be identified from their outputs has implications for security, neuroscience, and our understanding of neural networks. | [
"deep neural network",
"ReLU",
"piecewise linear function",
"linear region",
"activation region",
"weights",
"parameters",
"architecture"
] | Reject | https://openreview.net/pdf?id=HklFUlBKPB | https://openreview.net/forum?id=HklFUlBKPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"IBVseua3oI",
"rJl2PVMsoS",
"HJguCCkijB",
"S1efi0yioS",
"BkgCE01ioH",
"SygDMCksoB",
"HJx7Jukc9H",
"BJxjoEJUcB",
"SJeEalATKB",
"SyxqKXM6KS",
"BJx_nit4Fr",
"Hkl8me6QtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798746351,
1573753955903,
1573744336405,
1573744282438,
1573744182508,
1573744143322,
1572628442715,
1572365475175,
1571836091970,
1571787649714,
1571228591570,
1571176478189
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2331/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2331/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2331/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2331/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2331/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2331/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2331/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2331/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2331/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2331/Authors"
],
[
"~Nicholas_Carlini1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This article studies the identifiability of architecture and weights of a ReLU network from the values of the computed functions, and presents an algorithm to do this. This is a very interesting problem with diverse implications. The reviewers raised concerns about the completeness of various parts of the proposed algorithm and the complexity analysis, some of which were addressed in the author's response. Another concern raised was that the experiments were limited to small networks, with a proof of concept on more realistic networks missing. The revision added experiments with MNIST. Other concerns (which in my opinion could be studied separately) include possible limitations of the approach to networks with no shared weights nor pooling. The reviewers agree that the article concerns an interesting topic that has not been studied in much detail yet. Still, the article would benefit from a more transparent presentation of the algorithm and theoretical analysis, as well as more extensive experiments.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for you reply\", \"comment\": \"I have read your rebuttal, and most of my questions are well addressed. I maintain my original rating on this paper.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": [\"Thank you for the careful review and feedback. To respond to the questions raised:\", \"Detail on algorithmic primitives. We have clarified the text. The algorithm PointsOnLine is able to perform binary search on multiple points simultaneously. Our asymptotic analysis is correct because we know the expected number of boundary points that will be discovered along the line is linear in the total number of neurons of the network (Hanin and Rolnick 2019). For TestHyperplane, the goal is to determine whether all points on a hyperplane do lie within the boundary (since then the hyperplane arises from the boundary of a layer-1 neuron); to do this, we test random points far along the hyperplane - if all these points are indeed on the boundary, then we conclude that the hyperplane is contained in the boundary.\", \"ResNets. We apologize for the lack of clarity, which we have corrected in the text; we intended to emphasize that our algorithm can be modified to learn skip connections, rather than specialty layers that may also occur in a ResNet. Here is the intuition for such modification: In the case of skip connections, each boundary is still given by a bent hyperplane which bends when it intersects the bent hyperplanes associated with neurons at earlier layers. However, potential weights must in this case be considered between any two neurons in different layers. Deriving such skip weights is somewhat more complex than for MLPs, as the \\u201cbend\\u201d is influenced not merely by the skip connection but by the weights along all other paths between the two neurons through the network. Thus, it is necessary to \\u201cmove backward\\u201d through the network - for a neuron in layer k, one must first derive the weights in the preceding layer k-1, then at k-2, and so on. If the reviewers believe that such intuition would not confuse the main argument, we are happy to include it in an appendix.\", \"Evaluation on more complex networks. We have now added experimental verification of our first-layer method on MLPs trained on MNIST and on networks with 3 and 4 layers. Please see Figure 3.\", \"Scaling of weights. As described in 3.2, it is mathematically impossible for any algorithm to learn the \\u201ctrue\\u201d scaling of the weights in a network, since this scaling can be arbitrarily changed without in any way affecting the underlying function. As for how we compared the approximated weights with the true weights, we rescaled both sets of weights vectors to norm 1 for comparison (again, since the scaling is arbitrary).\", \"Figure 3, # of queries. The number of queries shown in the figure is *per parameter learned* - therefore, for larger networks, the number of queries goes up, but not by as much as the number of parameters inferred goes up.\", \"Number of queries for additional layers. Depending on the approach taken to explore intersections between hyperplanes, the number of queries required can grow linearly in the number of parameters inferred, as each weight can be inferred by examining a single intersection between boundaries.\", \"Choice of parameters. All choices of parameters are presented in our publicly available code. The length of line segments used in sampling does not significantly affect the results; nor does the radius.\", \"No prior work has, to our knowledge, been able to deduce even the first layer of an MLP with 2 hidden layers. 
We show empirically that our algorithm is able to deduce the first layer of 2-, 3-, and 4-layer networks, as well as the second layer of 2-layer networks. The mathematical justification for our algorithm holds for any number of layers. We believe that each of these contributions significantly advances the state-of-the-art.\"]}",
"{\"title\": \"Response to Review #3\", \"comment\": [\"Thank you for the careful review and feedback. To respond to the questions raised:\", \"We do suppose a completely black box condition. We first deduce the first layer (and describe mathematically why our algorithm works). We then deduce the next layer recursively using our deduction of the previous layer (and again describe mathematically why our algorithm works). We invite the reviewer to verify in our publicly available code that we are not violating the black box condition in the smallest particular.\", \"We have clarified the description of our experimental setup in Section 6; in particular, we have spelled out that the memorization task involves training an MLP for 1000 epochs using Adam optimizer on a dataset consisting of 1000 ten-dimensional vectors with i.i.d. random coordinates drawn from a unit Gaussian, given arbitrary binary labels. The network must memorize these points, in keeping with the literature on memorization and generalization (e.g. Zhang et al. 2016).\", \"We now, as requested, show the success of our method for a network trained on MNIST. Please see Figure 3.\", \"We have likewise, as requested, added experimental verification of our method for 3-layered MLPs and also 4-layered MLPs. Please see Figure 3.\", \"We have added a reference to the work of Oh et al. Thank you for calling this work to our attention.\", \"No prior work has, to our knowledge, been able to deduce even the first layer of an MLP with 2 hidden layers. We show empirically that our algorithm is able to deduce the first layer of 2-, 3-, and 4-layer networks, as well as the second layer of 2-layer networks. The mathematical justification for our algorithm holds for any number of layers. We believe that each of these contributions significantly advances the state-of-the-art.\"]}",
"{\"title\": \"Response to Review #2\", \"comment\": [\"Thank you for the careful review and feedback. To respond to the questions raised:\", \"Larger networks. We have now added experimental verification of our first-layer method on MLPs trained on MNIST and on networks with 3 and 4 layers. Please see Figure 3.\", \"Number of queries for additional layers. Depending on the approach taken to explore intersections between hyperplanes, the number of queries required can grow linearly in the number of parameters inferred, as each weight can be inferred by examining a single intersection between boundaries.\", \"ResNets. In the case of skip connections, each boundary is still given by a bent hyperplane which bends when it intersects the bent hyperplanes associated with neurons at earlier layers. However, potential weights must in this case be considered between any two neurons in different layers. Deriving such skip weights is somewhat more complex than for MLPs, as the \\u201cbend\\u201d is influenced not merely by the skip connection but by the weights along all other paths between the two neurons through the network. Thus, it is necessary to \\u201cmove backward\\u201d through the network - for a neuron in layer k, one must first derive the weights in the preceding layer k-1, then at k-2, and so on. If the reviewers believe that such intuition would not confuse the main argument, we are happy to include it in an appendix.\", \"No prior work has, to our knowledge, been able to deduce even the first layer of an MLP with 2 hidden layers. We show empirically that our algorithm is able to deduce the first layer of 2-, 3-, and 4-layer networks, as well as the second layer of 2-layer networks. The mathematical justification for our algorithm holds for any number of layers. We believe that each of these contributions significantly advances the state-of-the-art.\"]}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thank you for the careful review and feedback. To respond to the questions raised:\\n\\n1. We have clarified the presentation of Figure 1. The middle panel of the figure shows the output N(x,y) of the network as a function of the two inputs given to the network N. The function N is defined by the network shown on the left panel (where the weights were chosen randomly according to the standard initialization procedure described in Experiments); the network itself is the most succinct description of this function. Regarding the partition of input space, each part within the partition corresponds to the set of input points on which a particular subset of the ReLUs in the network is active, and crossing between two parts within the partition means that (at least) one neuron flips from active to inactive ReLU. Once again, the simplest closed form expression of this partition is given by the network itself - it is not a regular tiling or any other kind of partition that lends itself to succinct description. In some sense, this is the power of the neural network, that it is able to fit complicated functions that cannot be described in other ways.\\n\\n2. As we note in the text (see Section 5), there are cases where our algorithm fails - if certain bent hyperplanes coincide exactly or if the boundaries associated with two neurons never intersect. The former case is vanishingly unlikely for real networks - in particular, slightly perturbing the weights makes it possible for the algorithm to succeed once again, and one will never encounter such a brittle setting as the result of a noisy learning rule. In the latter case, as we describe in the text, it is possible for the \\u201cfailure\\u201d of the algorithm to actually be a consequence of the network being ill-determined from the start, with several possible isomorphic settings of the weights. Regarding computational complexity, we include several related hardness results in our Related Work section (e.g. Goel, Kanade, et al. 2017).\\n\\n3. Reconstructing the input based on the output depends on the nature of the function in question. For example, if the output is of lower dimension than the input, then any continuous mapping will be non-injective - i.e. it will be impossible to recover the input from the output. In cases where the function is injective, it is an excellent question to ask, but one which is likely unrelated to the methods we propose here. Regarding reconstructing the training data based on the model, there is extensive interesting literature on membership inference attacks, such as Shokri et al. 2017, Song et al. 2017, and Carlini et al. 2019.\\n\\nWe have now added experimental verification of our first-layer method on MLPs trained on MNIST and on networks with 3 and 4 layers. Please see Figure 3. No prior work has, to our knowledge, been able to deduce even the first layer of an MLP with 2 hidden layers. We show empirically that our algorithm is able to deduce the first layer of 2-, 3-, and 4-layer networks, as well as the second layer of 2-layer networks. The mathematical justification for our algorithm holds for any number of layers. We believe that each of these contributions significantly advances the state-of-the-art.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper, the authors showed that in many cases it is possible to reconstruct the architecture, weights, and biases of a deep ReLU network given the ability to query the network. The studied problem is very interesting. I have the following questions about this paper:\\n\\n1. Can the authors provide detailed explanation of Figure 1? For instance start from input (x_1, x_2), and the weight in layer 1 and layer 2, what is the exact form of the function plotted in the middle panel? Also, how the input space is partitioned? I appreciate the authors provide this simple example, but detailed math will help readers to understand this easily.\\n\\n2. How about the efficiency of the proposed method? Is it NP-hard? I would like to see some analysis of the computational complexity and also some related experimental results.\\n\\n3. If the ReLU network can be reconstructed, can the input also be reconstructed based on the output? It would be very interesting to show a few example on reconstructing the input. Also, is that possible to even reconstruct the training data based on the released model?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces a procedure for reconstructing the architecture and weights of deep ReLU network, given only the ability to query the network (observe network outputs for a sequence of inputs). The algorithm takes advantage of the piecewise linearity of ReLU networks and an analysis by [Hanin and Rolnick, 2019b] of the boundaries between linear regions as bent hyperplanes. The observation that a boundary bends only for other boundaries corresponding to neurons in earlier network layers leads to a recursive layer-by-layer procedure for recovering network parameters. Experiments show ability to recover both random networks and networks trained for a memorization task. The method is currently limited to ReLU networks and does not account for any parameter-sharing structure, such as that found in convolutional networks.\\n\\nThe networks used in experiments appear to be substantially smaller (e.g. input/output dimensions on the order of 10 neurons) than those used in real applications. Is the proposed approach practical to apply to networks used in actual applications? How does the number of queries per parameter scale? (page 5 mentions sample complexity for recovering the first layer, but it would be helpful to clarify the situation for subsequent layers).\\n\\nPage 7 states that the proposed algorithm also holds for ResNets, with slight modifications, but defers details to future work. If the modifications are indeed slight, it would better to include them here as this is an important special case and would increase the potential impact of the paper.\\n\\nOverall, while the paper does appear to rely heavily on developments made by [Hanin and Rolnick, 2019b], there is a potentially interesting contribution here. I would appreciate clarification on concerns over practicality and the extension to ResNets.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2331\", \"review\": [\"Main contribution of the paper\", \"The paper proposes a new method to recover the unknown structure of the network by utilizing the piecewise linearity of ReLU network.\", \"Some theoretical explanation of the method is provided.\", \"Note & Questions\", \"As far as the author understands, the algorithm does not suppose a fully black-box condition. By seeing the section 4.1 and 4.2, it seems possible to access neurons in the intermediate layers.\", \"Also, the proposed method seems to target only a MLP.\", \"Strong-points\", \"This field is not that thoroughly investigated, and the author proposes a creative method to infer the hidden statistics of the neuron.\", \"Concerns\", \"Most of all, the information the experiments conveys is too small to convince the argument of the author. The reviewer could not find the dataset they train (in the Experiment section), and the graph only shows the case of two-layered networks. Moreover, the reviewer couldn't find the explanation of the graph, including their legend (for example, Memorization).\", \"The author suggests that this method can be applied to various networks. Still, the reviewer couldn't find any clue that the method actually worked for various settings: different activations, convolutional networks, and so on. More experimental results supporting the argument of the authors are required.\", \"Assuming that the network was trained by MNIST and we infer the weight of the networks by the proposed method. Can the recovered network classify the number as well? Then, how the accuracy change?\", \"More quantitative results regarding the asking are required.\", \"Experimental results for more-than-two layered networks should be provided.\", \"Oh.et.al (https://arxiv.org/abs/1711.01768) proposed a blackbox reverse-engineering method and provided experimental settings as well. The author should clarify the novelty and the strong-points of the works compared to the mentioned work.\", \"Conclusion\", \"The author proposes a new method to recover the weight and bias of the network.\", \"The reviewer could not find much clue supporting the author's argument from the experiment section.\", \"inquiries\", \"See the Concerns parts.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces an approach to recover weights of ReLU neural networks by querying the network with specifically constructed inputs. The authors notice that the decision regions of such networks are piece-wise linear corresponding to activations of individual neurons. This allows to identify hyperplanes that constitute the decision boundary and find intersection points of the decision boundaries corresponding to neurons at different layers of the network. However, weights can be recovered only up to permutations of neurons in each layer and up to a constant scaling factor for each layer.\", \"the_algorithm_consists_of_two_parts\": \"Identifying parameters of the first layer and subsequent layers. First they sample a lot of line segments and find their intersections with the decision boundary, i.e. where pre-activations equal 0. Then they sample some points around the intersections and estimate the hyperplanes up to a scaling factor and a sign. For the first layer they check whether some of the hyperplanes belong to it. For the consecutive layers they proceed by moving from the intersection points along the hyperplanes until the decision boundary bends. Again, by identifying which of the bends correspond to the intersection of the current layer's hyperplanes with the previous layers' ones, they are able to recover parameters of the current layer by computing the angles of these intersections.\\n\\nThis paper tackles a very interesting and important problem that might have huge implications for security and many other aspects. However, I'm leaning towards Reject for the following reasons:\\n\\n1. The algorithm's description is either incomplete or unclear. There are such core functions as PointsOnLine and TestHyperplane, whose pseudo-code would be very helpful for understanding. For example, the authors say that PointsOnLine performs a \\\"binary search,\\\" but then this function can find only one (arbitrary) intersection of a line segment with a decision boundary, while each sampled line can intersect multiple ones. If it is not binary search, then the asymptotic analysis given in the end of Sec. 4.2 is incorrect. Even more mysterious is TestHyperplane, from the provided intuition I do not understand how it is possible to distinguish hyperplanes corresponding to the first layer vs. the other layers. In Sec. 4.3, second paragraph, the choice of R is unclear. How to chose it to make sure that the closest boundary intersects it?\\n\\nThe authors consider a very limited setting of only fully-connected (linear) layers with ReLU activations. In this case it is easy to see that the resulting decision boundary is indeed piece-wise linear with a lot of bending. Authors themselves notice, that \\\"the algorithm does not account for weight sharing.\\\" For CNN this will lead to each virtual neuron in each channel to have its own kernel weights, although there must be one kernel per channel. Also the authors admit, that pooling layers affect partitioning of the activation regions, making the proposed approach inapplicable. The authors did not discuss whether the proposed approach can handle batchnorm layers. Such non-linear transformations could pose serious problems. 
All this rules out applications to, for example, all CNN-based architectures, that prevail in computer vision. The authors mention, that their \\\"algorithm holds with slight modification\\\" for ResNets, but as mentioned earlier convolutional, pooling and batchnorm layers make it not so trivial (if at all possible).\\n\\n2. Experimental evaluation is extremely limited: It is all contained in just one paragraph. Although it is mentioned that \\\"it is often possible to recover parameters of deep ReLU networks,\\\" they evaluated their approach on very shallow and narrow networks (only 2 layers, 10 to 50 neurons in each). The immediate question here is why this algorithm is not applied to sufficiently deep NN? At least a network that could classify MNIST reasonably well. Actually, this would be a better proof-of-concept: Given a pre-trained MNIST classifier, apply the proposed method, recover the weights and check if you get the same output as from the original network. Whereas here the evaluation is given as a normalized relative error of the estimated vs. the true weights. Which raises the question of how the scaling factor was chosen? Recall, that the proposed method estimates network's parameters only up to an arbitrary scaling factor. My guess, is that for the Figures 3 and 4 (both right) the estimated weights were re-scaled optimally to minimize the relative error. But in the end, one is interested in recovering the original weights of the network, not relative ones.\\n\\nI am very confused by Fig. 3 left: Why is the number of queries going down as the number of neurons increases? Should it not be that with more neurons the ambiguity also increases, requiring more queries? Again, this analysis is very limited, it would be very interesting to see, how many more queries one needs for deeper layers of the network. But for this experiments with deeper than 2 layers networks are necessary.\\n\\n3. The choice of parameters is unclear and not discussed. How long should the line segments be, how many of them. How many points are sampled and within which radius to identify hyperplanes, how to choose 'R'. And how all these choices affect accuracy and performance.\\n\\nOverall, the paper looks rather incomplete to me and requires a major revision. It will definitely benefit if the \\\"slight modification\\\" for the case of ResNets is included. Also, experimental evaluation should be completely re-done and extended.\"}",
"{\"comment\": \"Great question. Our implementation is not intended to optimize for the number of queries - its efficiency can be improved greatly at the expense of simplicity. For example, in the current implementation, some neurons in the second layer are estimated repeatedly - using several different points on their associated boundaries - before their full weight vectors are determined. Sharing information between these iterations would reduce the number of queries needed.\", \"a_more_subtle_optimization_approach_would_have_an_even_greater_effect\": \"Suppose that boundary B bends when it intersects boundary B', so that B is given by the two hyperplanes H_1 and H_2 on the two sides of B'. If H_1 and B' are known, then H_2 is actually almost completely known already - since the \\\"bend\\\" occurs along the intersection of H_1 and B'. Only a single scalar needs to be determined. Our implementation recalculates the entire hyperplane H_2, but this is in fact unnecessary and leads to a great increase in queries if the input dimension is high. Reusing this information is straightforward and should certainly be included if the goal is speed over simplicity.\\n\\nWe did not know of the paper you mention until after submission, and will in our revision describe their approach for one-layer ReLU networks - as well your excellent recent paper: https://arxiv.org/abs/1909.01838\", \"title\": \"Query efficiency at second hidden layer\"}",
"{\"comment\": \"Thank you very much for making the code of your algorithm available! It really helps to make the algorithm understandable.\\n\\nWhen I run the code to identify the weights of a network with an architecture 10-20-30-1 (so that there are 20 units in the first ReLU layer, and 30 units in the second ReLU layer) it takes 70 million queries to identifying the second layer. \\n\\nIs this to be expected? For context, this is roughly a hundred thousand queries per trainable parameter. In contrast, it takes under 100,000 queries for the first layer (~400 queries per parameter, in line with Figure 3).\\n\\n(You may also be interested in https://arxiv.org/abs/1807.05185 which gives a very similar algorithm to yours for the case of one-layer neural networks.)\", \"title\": \"Query efficiency of the algorithm\"}"
]
} |
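The central primitive debated in this record (Review #1's question about PointsOnLine, and the authors' binary-search reply) can be made concrete. Below is a minimal, hypothetical sketch — not the authors' released code — of locating one boundary crossing of a piecewise-linear black box along a segment; it assumes a single crossing on the segment, which is precisely the limitation Review #1 raises.

```python
import numpy as np

def boundary_point_on_segment(f, a, b, tol=1e-8):
    # f: R^n -> R is piecewise linear for a ReLU network, so it is affine
    # on a sub-segment iff the midpoint value matches the endpoint average
    # (up to the degenerate coincidences the paper notes are vanishingly rare;
    # the absolute tolerance here is a simplification).
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)

    def is_affine(p, q):
        return abs(f((p + q) / 2) - (f(p) + f(q)) / 2) < tol

    if is_affine(a, b):
        return None  # no ReLU boundary detected on this segment
    while np.linalg.norm(b - a) > tol:
        m = (a + b) / 2
        # Keep whichever half still violates affineness, i.e. still
        # contains a boundary crossing.
        a, b = (a, m) if not is_affine(a, m) else (m, b)
    return (a + b) / 2
```

From such a point, the local hyperplane (a first-layer weight direction, up to the sign and scale ambiguity discussed in the rebuttal) can be estimated by finite-differencing f on either side; weights to later layers are then read off from how these boundaries bend at intersections, per the abstract.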
S1lF8xHYwS | Unsupervised Domain Adaptation through Self-Supervision | [
"Yu Sun",
"Eric Tzeng",
"Trevor Darrell",
"Alexei A. Efros"
] | This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data. Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability. The way we accomplish alignment is by learning to perform auxiliary self-supervised task(s) on both domains simultaneously. Each self-supervised task brings the two domains closer together along the direction relevant to that task. Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. The presented objective is straightforward to implement and easy to optimize. We achieve state-of-the-art results on four out of seven standard benchmarks, and competitive results on segmentation adaptation. We also demonstrate that our method composes well with another popular pixel-level adaptation method. | [
"unsupervised domain adaptation"
] | Reject | https://openreview.net/pdf?id=S1lF8xHYwS | https://openreview.net/forum?id=S1lF8xHYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"FWG9OWa4rJ",
"BJxK3M2OsH",
"BkeNnpsdoS",
"rkgvKBsujH",
"HJxNH7jusH",
"Bygt1CBfsH",
"SJx5Q7Ua5B",
"SyxF2CBaqB",
"rklFudcVqB",
"HkxybvFNqH",
"rkxxcB1Rtr",
"H1eZ5xsSKr",
"B1xColtrYr",
"r1x1BXafOr",
"SJxUhmL6vr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798746323,
1573597873258,
1573596587640,
1573594495226,
1573593915855,
1573178848869,
1572852513868,
1572851376587,
1572280432529,
1572275958862,
1571841415847,
1571299465025,
1571291301571,
1570063159050,
1569706926016
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2330/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2330/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2330/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2330/Authors"
],
[
"~Kolo_Toure1"
],
[
"ICLR.cc/2020/Conference/Paper2330/Authors"
],
[
"~Researchers_CV1"
],
[
"ICLR.cc/2020/Conference/Paper2330/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2330/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2330/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2330/Authors"
],
[
"~Researchers_CV1"
],
[
"ICLR.cc/2020/Conference/Paper2330/Authors"
],
[
"~S._Alireza_Golestaneh2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Thanks for your detailed replies to the reviewers, which helped us a lot to clarify several issues.\\nAlthough the paper discusses an interesting topic and contains potentially interesting idea, its novelty is limited.\\nGiven the high competition of ICLR2020, this paper is still below the bar unfortunately.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"The criticisms are based on misunderstandings and lack justification\", \"comment\": \"For the ICCV 2019 paper, please see our reply to reviewer 1 for a thorough comparison of the differences, both algorithmic and conceptual.\\n\\nFor the domain generalization paper [Carlucci et al], we believe that you have misunderstood our words. We say in our paper that \\u201cbecause their problem setting is very challenging, the accuracy is low for both their proposed method and the baseline.\\u201d We are not criticizing Carlucci et al, but illustrating how their setting is different (as well as their method). You seem to think that we do not understand how their setting is different.\\n\\nWe are then asked to perform experiments under the domain generalization setting of Carlucci et al. First, since this is a setting that our paper does not work on, these experiments are irrelevant to our purpose. Second, such experiments are in fact undefined. What does it mean to run a domain adaptation method in the domain generalization setting, where no target data is available in any form?\\n\\nYou claim that our experiments are \\u201cquite sub-par\\u201d, because we could have obtained \\u201clarge improvements by a combination of very small factors, such as data augmentation, network architecture, optimization method, and even hyperparameter tuning,\\u201d citing your own experience as evidence, without a publication, reference or code. First, there is no reason why our improvements are especially prone to such problems, in comparison to previous work on the benchmarks we use, such as DIRT-T published at ICLR 2018. We do not use data augmentation to keep a fair comparison with the baselines, the hyperparameters are set by our selection rule, and we use the default optimization method that comes with our network architecture, which is widely adopted in all of computer vision and without our method performs no better than the source only results of the baselines.\\n\\nSecond, it is ambiguous what \\u201clarge improvements\\u201d are - over source only (no adaptation) or the previous methods? If over source only, then the runs of our method share all the \\u201csmall factors\\u201d mentioned in your comment as the runs of our source only, so the difference cannot be explained by these factors. If over the previous methods, this criticism is in fact undefined. Our method is not based on any of the baselines in Table 2, so does not even share their hyper-parameters; how can we then improve on them by hyper-parameter tuning?\"}",
"{\"title\": \"Thank you and answers to your questions\", \"comment\": \"Thank you for your time giving us feedback. Here we answer your numbered concerns.\\n\\n1. \\u201cThe concept of self-supervision is not first proposed by this paper.\\u201d Since being proposed in the 1990s, self-supervised learning has become a wide and vibrant field of inquiry, with hundreds, if not thousands of papers published in respected venues. So, we are perplexed by the statement: is this arguing that all these papers were published in error? \\n\\u201cThe proposed method is not novel.\\u201d Such statements are unhelpful without references to prior work. We have stated in the introduction what we perceive to be the novelties of our method. Please provide references to previously published papers that render our novelties invalid.\\n\\u201cPerformance is not better than previous results such as DIRT-T.\\u201d Our results are shown in Table 2, and many of them are better than DIRT-T. Our method is also simpler and derived from a different perspective.\\n\\n2. Below are the requested results for R+L+F:\\n\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\nSource\\t\\tMNIST\\t\\tMNIST\\t\\tSVHN\\t\\tMNIST\\t\\tMNIST\\nTarget\\t\\tMNIST-M\\tSVHN\\t\\tMNIST\\t\\tUSPS\\t\\tUSPS\\n\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\nAccuracy (%)\\t98.7\\t\\t 63.2\\t\\t 85.7\\t\\t 95.8\\t\\t 87.0\\n\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\nThere is not much difference between these numbers and the ones for R only.\\n\\n3. \\u201c[The authors] do not provide any way for how to design self-supervision task\\u201d. Please see Section 3 titled \\u201cdesigning self-supervised tasks for adaptation\\u201d.\\n\\n4. Please see results on Office-31 in our reply to R3.\\n\\n5. First, please note that ICCV 2019 papers are considered concurrent work, not prior work, to ICLR 2020 (ICCV\\u201919 happened in November, whereas deadline for ICLR was in September). Second, S4L, which is designed for semi-supervised learning, differs from ours both algorithmically and conceptually. We have already discussed this in the related work section in the context of semi-supervised learning methods, but to make our point clearer, here are the results for our implementation of the algorithm described in their equation (1) and (2) on MNIST -> MNIST-M, where improving upon the source only (no adaptation) baseline should have been very easy:\\n\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\n\\t\\t\\t | Accuracy (%)\\n\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\nSource only\\t\\t| 44.9\\nS4L method\\t\\t| 56.6\\nOur method\\t\\t| 98.9\\n\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\u2014-\\u2014\\u2014\\u2014\\nThe S4L result is barely better than source only, and qualitatively different from ours i.e. 
the difference should not come from merely implementation details. The most important difference between their algorithm and ours is that they train the supervised task on labeled data, and self-supervised task on unlabeled data, while we train the self-supervised task(s) simultaneously on both domains (labeled and unlabeled). Conceptually, training the self-supervised task on both domains is critical for alignment, which is the main objective for adaptation. Because for semi-supervised learning, the labeled and unlabeled data come from the same domain, methods for semi-supervised learning e.g. S4L do not need to consider the alignment problem. These comments are not intended to criticize S4L, as it is solving a different problem. In fact, theoretical analysis for semi-supervised learning [Cohen, Cozman] [Ghifary et al] suggests that training the self-supervised task on both domains is not helpful for semi-supervised learning; it is interesting to see how this picture is different for domain adaptation.\\n\\n\\u201cI think it is an interesting paper, but not enough as a conference paper, maybe a workshop paper.\\u201d We are happy you found the paper interesting. We do ask you to please reconsider your recommendation in light of the arguments presented above. Also, as similar works using self-supervision as a tool, e.g. S4L, were published at respectable conferences instead of workshops, it seems reasonable to argue that this work, too, deserves to be accepted to ICLR.\", \"references\": \"Cohen, I., Cozman, F.G.: Risks of semi-supervised learning: how unlabeled data can degrade performance of generative classifiers. In: Semi-Supervised Learning. MIT Press (2006)\\nMuhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, David Balduzzi, and Wen Li. Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision, pp. 597\\u2013613. Springer, 2016.\"}",
"{\"title\": \"Thank you and answers to your questions\", \"comment\": \"Thank you and we are happy you found our paper thought-provoking. Here we address the cons you wrote:\\n1. It is true that we have no theory backing our approach. On the other hand, it is rarely to see any deep learning paper with theory adequate enough to give \\u201cguarantees\\u201d for datasets we actually care about.\\n2. This connects well with the comment at the end of the review, asking us for \\u201cguidance as to how to choose the set of self-supervised tasks.\\u201d We have in fact given some practical guidance in the paper, which we summarize below as two necessary conditions:\\n- The self-supervised task is well defined and nontrivial on both domains. This rules out the case of rotation prediction on SVHN, since as we explain in the paper, \\u201cthe rotation head learns to look at the periphery and cheat\\u201d.\\n- \\u201cThe labels created by self-supervision should not require capturing information on the very factors where the domains are meaninglessly different.\\u201d as said and explained in section 3. This is rules out tasks such as colorization and autoencoder, for which it is important to learn the low-level details of the image.\\nThese two conditions are easy to reason about in practice. If the \\u201cbattery of self-supervised tasks\\u201d satisfy them, there should be notable improvement on top of the source only baseline as we observe empirically, but there won\\u2019t be a guarantee. In addition, we would like this paper to add to the toolbox of available domain adaptation methods instead of becoming the only tool. When a good self-supervised task satisfying the two conditions cannot be found (SVHN), previous methods have provided different tools to use. When a good self-supervised task naturally exists, our method provides a simple and effective choice.\\nIn the end, this is a valuable question from the reviewer and we plan to be more explicit about those conditions in the next revision.\\n3. Please see results on Office-31 in our reply to R3.\\n\\nYour notes / questions: Thank you very much for pointing out our error with the highlighting. This is an honest typo. In the latest revision, we have improved our results to match that of DIRT-T; the modification we made for the improved results, as well as the original results, can be found in the last paragraph of Appendix B.\"}",
"{\"title\": \"Thank you and answers to your questions\", \"comment\": \"Thank you for your thoughtful review. We have added qualitative comparisons in Appendix G of our latest revision (page 16).\"}",
"{\"title\": \"Regarding novelty and experiments\", \"comment\": \"First off, thanks for the work. I'd like to point out a few things that make me a little skeptical about this paper.\\n\\nAs blind reviewer 1 stated in [5], the method proposed here is quite similar to the ICCV19 paper \\\"S4L: Self-Supervised Semi-Supervised Learning\\\". I can't help but think that it is also similar to the CVPR19 \\\"Domain Generalization by Solving Jigsaw Puzzles\\\" (JiGen), which is basically the same as this paper, but in the domain generalization domain (with a different self-supervised task). I see that the authors have acknowledged this paper in the appendix. In fact, they state that the performance is bad, but you have to understand that it's the domain generalization setting, which makes me wonder - would this method present stronger performance that JiGen if experiments were done on the DG setting?\\n\\nAlso, I think experiements are quite sub-par. I've done some experiments on the \\\"standard domain adaptation\\\" benchmarks such as SVHN-MNIST, STL-CIFAR, and found that you can get large improvements by a combination of very small factors, such as data augmentation, network architecture, optimization method, and even hyperparameter tuning. This is largely due to the fact that SVHN-MNIST and CIFAR->STL is quite a simple task. In one of the comments below, the authors replied that the Office dataset is too \\\"small by the standard of modern deep learning\\\", but I'm not sure if SVHN-MNIST, CIFAR-STL is really any better. Yes, these are quite \\\"large\\\" datasets in terms of image quantity, but is adaptation of B&W digit images really any better for the standards of modern deep learning? I think this paper would be more convincing if experiments were conducted on larger datasets, such as the VisDA-2017 classification dataset. Frankly, I think this is a great dataset that is a much stronger reflection of real-world domain adaptation (much more so than any benchmarks containing MNIST or CIFAR), and there are plenty of previous works that have tested on this dataset to which the authors can compare their work with.\"}",
"{\"title\": \"This comment is based on incorrect readings of table 2\", \"comment\": \"Unlike what the comment said, we perform worse than [Shu et al. 2018] only on the two benchmarks using SVHN. The datasets are standard in domain adaptation and far from \\\"very small\\\". In addition, [Shu et al. 2018] has been a state-of-the-art method. Comparing with this strong method does not make ours \\\"weak\\\". The anonymous commenter also attributes our improvements to hyper-parameter tuning without any evidence. Our method is not based on [Shu et al 2018] or any other baseline in table 2; we cannot make improvements just by tuning hyper-parameters.\"}",
"{\"title\": \"The results of domain adaptation on classification are also weak\", \"comment\": \"From table.2, [Shu et al. 2018] performed better in most settings, including Mnist-->MnistM, Mnist->SVHN, SVHN->Mnist and STL-10 --> Cifar-10; although all these experiments were conducted on very small datasets, weak performance makes the story less convincing (Some hyper-parameters maybe the cause of little improvement).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper describes an approach to domain adaptation that uses\\nself-supervised losses to encourage source/target domain alignment for\\nunsupervised domain adaptation. The authors propose to use four\\nself-supervised tasks (variants of tasks used in the self-supervised\\nrepresentation learning for object recognition literature) that are\\nused with a combined loss including unlabeled source and target\\ntraining samples. The authors also propose an alignment heuristic for\\nguiding early stopping. Experimental results on a standard battery of\\ndomain adaptation problems are given, plus some intriguing baseline\\nresults for semantic segmentation.\\n\\nThe paper is written very well and the technical development and\\nmotivations for each decision are well discussed and argued.\\n\\n1. The experimental evaluation is a bit limited as the object\\n recognition datasets are a bit limited. Results on Office or\\n Office-Home would be nice.\\n\\n2. Using location classification for semantic segmentation seems\\n intuitively to be encouraging the network to learn coarse spatial\\n priors (which should be invariant across the two domains). Have you\\n looked at how alignment is actually happening? More qualitative\\n analysis in this direction would be useful to appreciate the\\n proposed approach.\\n\\n3. Related to the previous point, it would be interesting to see how\\n semgmentations in the unsupervised domain gradually change and\\n improve with increasing alignment.\", \"in_summary\": \"the ideas are simple, intuitive, and well-explained -- I\\nthink the results reported would be easy to reproduce with minimal\\nhead scratching. The experiments are interesting and not overstated.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces an unsupervised domain adaptation method that uses self-supervised tasks to bring the two different domains closer together. It runs experiments on some classic benchmarks.\\n\\nMy score for this paper is weakly rejected because \\n\\n(1) the concept of self-supervision is not first proposed by this paper. The proposed method is not novel. It introduces three simple self-supervision tasks: flip, rotation and location, and the performance is not better than previous results such as DIRT-T; \\n\\n(2) there are 7 benchmarks in Table2, but only 2 of 7 has result on R+L+F. In the paper, it mentioned because the result is not better, but the author should still provide them. \\n\\n(3) it emphasizes the contribution of encouraging more study of self-supervision for unsupervised domain adaptation. It doesn\\u2019t provide any way for how to design self-supervision task or whether more tasks is better. I think it is an interesting paper, but not enough as a conference paper, maybe a workshop paper. \\n\\n(4) there are some classic unsupervised domain adaption benchmarks like Office Dataset, and Bing-Caltech dataset, why not run the method on them?\\n\\n(5) In ICCV 2019, there is a paper \\\"S4L: Self-Supervised Semi-Supervised Learning\\\". The proposed method is almost same. I think the difference is this paper changes the setting and considers the unsupervised data as target domain and supervised data as source domain.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a novel unsupervised domain adaptation framework for neural networks. Similarly to existing approaches, it performs adaptation by aligning representations of the source and the target domains. The main difference is that this alignment is achieved not through explicitly minimizing some distribution discrepancy (this usually leads to challenging minimax optimization problems). Instead, the authors propose to use a battery of auxiliary self-supervised learning (SSL) tasks for both domains simultaneously. Each task is meant to align the source and the target representations along a direction of variation relevant to that task. Assuming that the battery is diverse enough, optimizing the representation for all the tasks leads to matching of the distributions.\", \"pros\": [\"The paper is well-written and easy to read.\", \"I like the simplicity of the idea and the fact that it achieves competitive performance without any adversarial learning (which may be very tricky to deal with).\", \"The paper presents a reasonable procedure for hyper-parameter tuning and early stopping which seems to work well in practice.\"], \"cons\": [\"The paper is purely practical with no theory backing the approach. As a result, the discussion of guarantees and limitations is quite brief.\", \"It\\u2019s unclear how easy it is to come up with a reasonable set of SSL tasks for a particular pair of domains. It seems that it may become a serious problem when the method is applied to something other than benchmarks. Table 2 reveals that there is no consistent improvement over the existing approaches which suggests that the chosen battery of SSL tasks is not universal (as the authors themselves admit). On a related note, it\\u2019s a bit disappointing that the authors mention SVHN results as a failure case but never provide a way to address the issue.\", \"It would be nice to some results for the Office dataset for completeness. The authors could use a pre-trained network as a starting points just like it\\u2019s done in other papers. According to the last paragraph of Section 6 this experiment should be feasible.\", \"Notes/questions:\", \"Table 2, last column: The performance of DIRT-T seems to be better than that of the proposed method and yet the latter is highlighted and not the former.\", \"Overall, I think it\\u2019s a good paper presenting a thought-provoking idea. In my opinion, the weakest point of the work is the lack of any (neither principled nor practical) guidance as to how to choose the set of self-supervised tasks. Despite this I feel that this submission should be accepted but at the same time I\\u2019m curious to see what the authors have to say regarding the concerns I raised in my review.\"]}",
"{\"comment\": [\"We agree with this comment that recent works since 20`18 have better segmentation results. We would also like to emphasize that:\", \"We are only claiming to improve segmentation results when our method is added on top of a prior work. Note that a separate self-supervised head can also be added to the prior works listed in the comment.\", \"Segmentation is not the main result of the paper and only comprises a minor portion of our empirical section, while the methods listed above are explicitly designed for segmentation. In fact, all of our baselines in Table 1 have been accepted to major conferences without any result on segmentation, except CyCADA (which we do compare with on segmentation).\"], \"title\": \"Our point is not to have state-of-the-art segmentation results\"}",
"{\"comment\": \"It seems that the performance (28.9 and 41.2 with off-line transformed images) on semantic segmentation is weak compared to existing works.\\n\\nMany works have more competitive performance;\\n\\nConditional Generative Adversarial Network for Structured Domain Adaptation, CVPR2018 (mIoU=44.5 with vgg19)\\nFully Convolutional Adaptation Networks for Semantic Segmentation, CVPR2018\", \"road\": \"Reality Oriented Adaptation for Semantic Segmentation of Urban Scenes, CVPR2018\", \"dcan\": \"Dual Channel-wise Alignment Networks for Unsupervised Scene Adaptation, ECCV2018\", \"title\": \"The results of domain adaptation on semantic segmentation are weak\"}",
"{\"comment\": \"Office has an average of only 44 images per class per domain. Many other recent works e.g. many of our baselines do not use it because it is considered very small by the standard of modern deep learning.\", \"title\": \"Office very small\"}",
"{\"comment\": \"Interesting work! It would be nice to show the results on a more challenging dataset such as Office-Home as well, the provided datasets are very easy\", \"title\": \"How about the results on Office-Home dataset\"}"
]
} |
H1lOUeSFvB | Improving Gradient Estimation in Evolutionary Strategies With Past Descent Directions | [
"Florian Meier",
"Asier Mujika",
"Marcelo Gauy",
"Angelika Steger"
] | We propose a novel method to optimally incorporate surrogate gradient information. Our approach, unlike previous work, needs no information about the quality of the surrogate gradients and is always guaranteed to find a descent direction that is better than the surrogate gradient. This allows us to iteratively use the previous gradient estimate as the surrogate gradient for the current search point. We theoretically prove that this yields fast convergence to the true gradient for linear functions and show under simplifying assumptions that it significantly improves gradient estimates for general functions. Finally, we evaluate our approach empirically on MNIST and reinforcement learning tasks and show that it considerably improves the gradient estimation of ES at no extra computational cost. | [
"Evolutionary Strategies",
"Surrogate Gradients"
] | Reject | https://openreview.net/pdf?id=H1lOUeSFvB | https://openreview.net/forum?id=H1lOUeSFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Fz0A5YUrRf",
"r1lg_TEnoH",
"SJxrQTVhjB",
"SygPkpE2iB",
"SygTYhEnoS",
"SyxXZ-_0Fr",
"Ske0aL1AKB",
"SkgtRHvaFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746294,
1573829992177,
1573829916895,
1573829854743,
1573829765290,
1571877114855,
1571841733822,
1571808721466
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2329/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2329/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2329/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2329/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2329/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2329/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2329/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose a novel approach to using surrogate gradient information in ES. Unlike previous approaches, their method always finds a descent direction that is better than the surrogate gradient. This allows them to use previous gradient estimates as the surrogate gradient. They prove results for the linear case and under simplifying assumptions that it extends beyond the linear case. Finally, they evaluate on MNIST and RL tasks and show improvements over ES.\\n\\nAfter the revisions, reviewers were concerned about: \\n* The strong (and potentially unrealistic) assumptions for the theorems. They felt that these assumptions trivialized the theorems.\\n* Limited experiments demonstrating advantages in situations where other more effective methods could be used. The performance on the RL tasks shows small gains compared to a vanilla ES approach. Thus, the usefulness of the approach is not clearly demonstrated. \\n\\nI think that the paper has the potential to be a strong submission if the authors can extend their experiments to more complex problems and demonstrate gains. At this time however, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We agree that Theorem one follows very easily from that assumption. Therefore, we renamed it to Proposition 1.\\n\\nWe shortly discuss how reasonable the assumption is that the numerical approximation is equal to the true directional derivative. The assumption can be violated because of two reasons: 1) third order and higher order terms make the approximation imprecise, and 2) function evaluation noise (due to random RL environments or stochastic initialization) make the numerical approximation imprecise.\\n1) We agree that this is an issue. However since the success of momentum based approaches established that higher order terms are often \\u2018well behaved\\u2019 in Deep Learning, we believe that our assumption is reasonable. \\n2) Large function evaluation noise is often an issue (at least in RL). However, in such a scenario the statement of Proposition 1 holds for the expectation of our gradient estimator. That is, E[g_our] is equal to the direction in the aforementioned subspace that is most aligned with the gradient.) \\n\\nWe included the effect of P into our analysis.\\n\\nWe agree that using ES with rank based fitness shaping is not the ideal choice to show that better gradient estimation leads to better performance. There seems to be a bias - variance trade-off here, that favours reducing the variance by fitness shaping. Still, using past descent directions seems to improve gradient estimation quality also in this setting and leads to improved performance. Further, the fitness shaping method is widely used in practise. Therefore, we think that showing that past update directions improve performance of that method (Figure 4) has the highest practical impact.\\n\\nIn the new plots we compared our approach to diagonal approximations of CMA-ES, which are more general than the canonical ES, since the sigma is adapted for every parameter separately.\\n\\nWe clarified the comment about diagonal CMA-ES in the paper and in the general reply to the reviewers above.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"1) We agree that linearity is a big assumption. However, it is desirable that the gradient estimate converges to the true gradient in the linear case (Note that for example standard ES, diagonal CMA-ES do not do so). Furthermore, we think it is interesting how fast it does so. We added to a few lines of explanations why we consider this case and how the speed of convergence relates to the one of orthogonal sampling.\\nFor the second result, we want to clarify that we do not assume orthogonality, since any vector can be partitioned into a parallel and orthogonal part. The assumption is that the direction of the orthogonal part is a random orthogonal direction. This implies that the direction of rotation of the gradient when doing a parameter update step is uncorrelated with the previous gradient estimate zeta. Which is what is actually required in the proof. Though this is of course an assumption that is not true in general, we do not see any reason why the gradient should rotated away from zeta (which would be the scenario in which the gradient estimation is worse that predicted by our theory).\\n\\n2) Yes, we assume differentiability for the analysis. We stated this in the paper (before Proposition 1).\\n\\n3) \\n-The MNIST example serves as a proof of concept and to evaluate the quality gradient approximation, which can be only directly evaluated if the true gradients can be computed.\\n- Thanks for the suggestion of using augmented random search as baseline. Time did not suffice, but we consider adding it to the plots.\\n- The computation time of our approach is exactly the same as the one of ES on any problem size.\"}",
"{\"title\": \"Answer to reviewer 2\", \"comment\": \"-We added comparisons to Guided-ES (Ref. 14) and diagonal-CMA-ES (seperable NES) for a high-dimensional quadratic function, to adress the major concerns.\\n\\n-All samples used the exact same mini-batch (resampled after each ES update). The batch has size 100.\\n-We agree that it is interesting to investigate how this performance gap depends on the number of parameters, but we did not have enough time to create this plot.\\n-We observed that the orthogonal epsilon and the N(0,I) epsilon cases have the same performance. The plots are produced using N(0,I) epsilons.\\n\\n-We will add axis labels for the camera ready version.\\n-We use the ||\\\\epsilon subscript in that Equation to denote the part of the vector that is parallel to \\\\epsilon\"}",
"{\"title\": \"General Response to the Reviewers\", \"comment\": \"We thank the reviewer for their valuable comments that helped us improving our manuscript.\", \"let_us_first_highlight_the_main_changes_of_the_manuscript\": \"-We explained more clearly the purpose of the theorems (first paragraph of Section 3.3).\\n-We included the number of samples P into the analysis. This allows us to validate theoretical predictions empirically, see second paragraph below.\\n-We demonstrate that our approach clearly outperforms other contender approaches like diagonal CMA-ES and Guided-ES at the example of optimizing a high-dimensional quadratic function as done in (14). These plots are available in Appendix.C now. These will be polished and moved to the main text for the final version.\\n\\nWe want to emphasize that the experiments on the quadratic function and MNIST serve as a proof of concept to demonstrate the improved gradient estimation quality, while the improved performance over standard ES for the tested RL enviromnents reveals the practical impact that our approach may have, as standard ES is a commonly used method. Especially, this improvement comes without computational extracost and without increased implementation complexity.\\n\\nFor theorem 2 we assume that the true gradient is modified by independent orthogonal noise. While this may seem like a big assumption at first, there is no particular reason to believe the gradient will change in a way that is adversarial to our algorithm. To test this, we have tracked the change of the true gradient over training (alpha) and computed the expected improvement in gradient estimation, under our assumptions and with the observed alphas. This gave us an expected improvement of 9.2e-7 over taking random orthogonal samples, which would be the square root of P/(N-1) (P=number of samples, N=dimensionality of the problem). \\nHowever, we found that taking random samples in ES gave a worse than expected cosine, by -4.1e-7, which could be explained because of the influence of higher order terms or because our samples are independently sampled and not pairwise orthogonal. Taking this into account, we computed our observed improvement over the expectation to be 5.2e-7, which once we subtract the loss we observed (from higher order terms or non-orthogonality) we get a value of 9.1e-7 which is extremely close to our theoretical predictions.\\n\\nWe believe these experiments shows that are assumptions are very reasonable and our theory closely models the observed behaviour. We will add this experiments and explanation to the final version of the paper.\\n\\nFurther, we want to emphasize that fast convergence to the true gradient for linear function as shown in our theorem is a desirable property (as it increases the quality of the gradient estimation in the case of constant or very slowly changing gradients), that is not satisfied by many other approaches like standard ES, canonical ES, diagonal approximations of CMA-ES. This is clear for standard ES. But also for diagonal approximations of CMA-ES, gradient estimation is not improved if the gradient is not aligned with the coordinate axis, e.g. if the gradient is (1,1,...,1) then all coordinates affect the loss equally and the sampling scheme will not differ from the one of standard ES as all sigmas will be the same. Therefore, in terms of gradient estimation our algorithm clearly outperforms all these approaches.\\n\\nWe address the further concerns of the reviewers in seperate answers.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper provides a new type of gradient estimator that combines an Evolutionary Strategies (ES) style estimate (using function evaluations at perturbed parameters) along with surrogate gradient estimates (gradient estimates that may be biased and/or high variance). The estimator involves computing antithetic ES estimates in two subspaces: along the set of (normalized) surrogate gradients, and along a set of randomly chosen vectors in the orthogonal complement of the span of the surrogate gradients. The paper provides a proof of the optimality of the estimate, that is, the proposed gradient estimate maximizes the cosine of the angle with the true gradient over the vectors in the subspace defined by the set of surrogate gradients and sampled directions. The paper proposes an additional mechanism for generating surrogate gradients by simply using previous gradient estimates as surrogate gradients, and derives a convergence rate for when this iterative estimator will approximate a fixed, true gradient (e.g. for linear functions). Finally, the paper applies the estimate to two tasks: MNIST classification and robotic control via reinforcement learning, demonstrating improvements on both compared to standard ES.\\n\\nI think this is a nice contribution, and I enjoyed reading this paper, with one major caveat regarding some of the experiments. The paper is clearly written.\", \"major_concerns\": [\"The paper is missing critical comparisons to existing work. In particular, the paper cites Ref. 14 as another method for using surrogate gradients in optimization. For both examples (MNIST and RL), it is crucial to add the algorithm from that paper as a baseline.\", \"In addition, it would also be nice to see one of the diagonal approximations of CMA-ES as a baseline.\", \"Other questions/comments:\", \"For the MNIST example, you mentioned that the function is deterministic--how many examples are used for each function/gradient evaluation (the full dataset, or some fixed subset)?\", \"It would be nice to see how the performance gap between the proposed estimate and ES varies with the number of parameters (size of the network).\", \"It would be nice to compare the orthogonal epsilon to the N(0,I) epsilon case. As mentioned in the paper, the N(0,I) will be nearly orthogonal to the surrogate gradients in high dimensions. For a practical problem (e.g. MNIST), is the orthogonalization strictly necessary?\", \"Minor comments/typos:\", \"Fig 2: Add label for the x- and y-axes, and a legend.\", \"Use a more semantically meaningful subscript than `our` for the proposed gradient estimate. Perhaps `orth`, since you utilize orthogonal subspaces?\", \"Typo in eq. (6) (issue with the subscript on f)\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper is about improving the quality of surrogate gradients. Their proposal in guaranteed to find a descent direction. In addition to two results, the authors also provide experimental results.\\n\\nThis paper is presenting a research on a recent topic. Using random search methods or evolutionary strategies in machine learning problems is attracting quite an interest in the last years. However, this particular paper misses several important points and hence, lacks sufficient contribution.\", \"here_are_my_major_comments\": [\"The technical results are somewhat superficial: The first one is with the big assumption of linearity. This assumption does not hold almost for all problems where random search strategies would be of use. The second result, on the other hand, assumes orthogonality, which almost surely never happens. I must add that the authors also acknowledge the severity of these assumptions.\", \"It is important to note that the theory here assumes that the gradient does exist but cannot be computed or too expensive to compute.\", \"I would have expected an experimental study that would properly support the proposed approach. Such a study would have shed light on the computation time and efficiency. The authors solve MNIST problem, which can be quite efficiently solved with a variant of (accelerated) gradient method. Solving it with ES and then improving the result with the proposed approach is not satisfactory. Reinforcement learning experiments could have been noteworthy but the authors have solved quite small problems and did not compare their results extensively against contender approaches like augmented random search. It would have been also nice to see the computation times on large problems since the extensive computation time is a big obstacle for training in reinforcement learning.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper addresses the issue of noisy gradient estimation in a type of evolution strategies popularized by the open AI's reinforcement learning paper. It is a follow-up paper of reference [14], and try to analyze the optimality of the gradient estimation. The goal of the paper is well stated and well motivated. The paper itself is well-organized. However, the novelty of this work is not sufficiently high and its usefulness is questionable.\\n\\nTheorem 1 is trivial under the assumption stated above the theorem---the numerical approximation of the directional derivative admits the true directional derivative. In other words, the assumption is too strong to claim the goodness of the proposed scheme. \\n\\nLet's just sample k+P random normal vectors and orthogonalize and normalize them. Let them denoted by the same symbols hat zeta and hat epsilon. The theorem statements holds for this case. Therefore I failed to understand the essential claim of Theorem 1. \\n\\nAbout Theorem 2 and 3, again, the assumption that he numerical approximation of the directional derivative matches the true directional derivative is too strong to make the claim relevant. Moreover, the effect of P somehow disappear from the analysis. \\n\\nAll the analysis is done assuming the above mentioned strong assumption. However, in one of the experiments and the existing works, ranking based fitness shaping has been applied to make the algorithm robust. This replaces the function value differences in the gradient estimator with some predefined values depending on the ranking of f-values of each trial vector. This definitely violates the assumption, and it may result in some vector far away from the true gradient, yet the algorithm still works well. Therefore, the hypothesis underlying in this paper---better estimation of the gradient will lead to a better performance---may not be true. At least the numerical experiments provided in this paper do justify this hypothesis.\\n\\nThe numerical experiments have been conducted to compare the proposed algorithm with the baseline ES algorithm. In a sense it is reasonable to evaluate the effect of the proposed modification in the baseline ES. However, since the baseline ES algorithm is not really efficient on tasks such as the one conducted in Figure 2, the usefulness of the proposed approach is not tested. At least one should compare with the \\\"canonical\\\" ES, where the learning rate fixed and sigma is adapted. See https://arxiv.org/pdf/1802.08842.pdf. \\n\\nUse of the search history proposed in this paper is not really new in ES community. A sort of momentum terms appears in the standard CMA-ES [18] even in two parts of the algorithm and its effectiveness is well-studied empirically. This paper addresses the theoretical aspect of the momentum and this may be new. However, as mentioned above, the assumption is too strong to describe the reality.\\n\\n\\\"Linear time approximations of CMA-ES like diagonal approximations of the covariance matrix (19) often do not work well.\\\" Please specify in what sense the linear time version of the CMA-ES do not work well and provide the evidence (references).\"}"
]
} |
BJlPLlrFvH | Variable Complexity in the Univariate and Multivariate Structural Causal Model | [
"Tomer Galanti",
"Ofir Nabati",
"Lior Wolf"
] | We show that by comparing the individual complexities of univariate cause and effect in the Structural Causal Model, one can identify the cause and the effect, without considering their interaction at all. The entropy of each variable is ineffective in measuring the complexity, and we propose to capture it by an autoencoder that operates on the list of sorted samples. Comparing the reconstruction errors of the two autoencoders, one for each variable, is shown to perform well on the accepted benchmarks of the field.
In the multivariate case, where one can ensure that the complexities of the cause and effect are balanced, we propose a new method that mimics the disentangled structure of the causal model. We extend the results of~\cite{Zhang:2009:IPC:1795114.1795190} to the multidimensional case, showing that such modeling is only likely in the direction of causality. Furthermore, the learned model is shown theoretically to perform the separation into the causal component and the residual (noise) component. Our multidimensional method obtains a significantly higher accuracy than the literature methods. | [
"effect",
"univariate",
"cause",
"variable",
"variable complexity",
"individual complexities",
"univariante cause",
"structural causal model"
] | Reject | https://openreview.net/pdf?id=BJlPLlrFvH | https://openreview.net/forum?id=BJlPLlrFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"MdDlIalKA",
"B1gq-oWoir",
"rJxRDaBYjS",
"S1e_TnBtjr",
"SJlqghHKiB",
"SJg4mjrYsH",
"SyxF49zXjS",
"Skx3i1gMqr",
"rJgH6p3pFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746265,
1573751553608,
1573637478161,
1573637311761,
1573637106083,
1573636892081,
1573231153289,
1572106148313,
1571831228900
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2327/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2327/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2327/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2327/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2327/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2327/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2327/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2327/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The author response and revisions to the manuscript motivated two reviewers to increase their scores to weak accept. While these revisions increased the quality of the work, the overall assessment is just shy of the threshold for inclusion.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Authors' Additions Warrant Score Increase\", \"comment\": \"It looks like pretty much all of my questions about the univariate case were answered in the general comment. The below quote from the authors seems important:\\n\\n\\\"In order to avoid raising unnecessary antagonism, we mentioned our criticism in a soft manner. We are aware that this point was probably missed by the reviewers and we will make a significant effort to emphasize it more in the next version.\\\"\\n\\nIndeed, I completely missed this. It seems, then, that one of your main contributions should be that you point out that evalutating these methods in the univariate case is bad because of how trivial it is (comparing marginals is enough). If this is a main claim, then I think the paper could benefit from having more emphasis on this. I see that the authors have uploaded an edited version that does exactly this. Because the authors have such an extensive experimental analysis (many different datasets) in the univariate case, I find this claim, which important for those doing research in this area, sufficiently substantiated by the experiments in this paper.\\n\\nBecause of the above, the addition of an evaluation of PNL and CGNN in the multivariate setting, and the addition of MOUS-MEG real-world dataset dataset for the multivariate setting, I am updating my score to a 6.\"}",
"{\"title\": \"Thank you for the insightful comments\", \"comment\": \"Univariate case: please see the general comments.\\n\\n1. The common assumption in the SCM literature is that a cause-effect pair X -> Y can be modeled as Y = g(X,E), where, E is a noise term independent of X (see (cf. Peters et al. 2017, p. 8), Zhang & Hyvarinen (2009), Hoyer et al. (2009), Shimizu et al. (2006)). In theorem 2, we assume that Y is invertible, and therefore, the information of E in encoded within Y. We believe that this is not a strong assumption. Informally, if \\u201cparts\\u201d of E\\u2019s information were not encoded within Y, they could be ignored and we could write Y = g(f(X),E\\u2019), where E\\u2019 is encoded in Y. \\n\\n2. In the related work section, we discuss various methods that apply different independence tests, such as, ANM and LiNGAM. We added PNL and Direct-LiNGAM as well, emphasizing the reliance of these methods on independence tests and how our method differs. \\n\\nIn general, we are aware that independence tests are the obvious thing to do. The novelty of our algorithm stems from applying a discriminator in order to measure and restrict independence.\", \"there_are_several_advantages_in_employing_a_discriminator\": \"a. Our loss is non-parametric. We can learn a random variable E = R(Y) independent of X without any explicit assumption on the densities of X,Y or E.\\nb. Our method does not assume any specific structure as done by LiNGAM, Direct-LiNGAM, PNL, ANM, etc. In fact, we can provably recover the direction and structure of the SCM (up to transformations), under the assumption that g is invertible. In previous publications, this is possible only under the assumption that the SCM is linear/post-linear (PNL, LiNGAM, ANM, etc\\u2019).\\nc. We do not rely on estimating and minimizing the mutual information between arguments. It is often hard to measure and optimize the mutual information directly, especially in higher dimensions.\"}",
"{\"title\": \"Thank you for the insightful comments\", \"comment\": \"1. Univariate case: please see the general comments.\\n\\n2. Regarding the multivariate case, our method differs in several aspects. In PNL, the authors learn a mapping between X and Y (and vice versa) of the form Y = g(f(X) + E), where f(X) and E are restricted to be independent. In order to learn f and g, the algorithm restricts that f(X) and E = g^{-1}(Y) - f(X) to be independent.\", \"our_algorithm_solves_a_few_disadvantages_of_pnl\": \"a. Their model strongly relies on the assumption that Y has the form: g(f(X) + E) and therefore, they cannot treat the general case where Y = g(f(X),E) as we do. \\nb. In addition, we show theoretically in Thms. 1 and 2 that when applying our method one can recover the direction and structure of the SCM (up to transformations) under the assumption that g and f are invertible. In PNL, this is possible only under the assumption that Y = g(f(X)+E).\\nc. In order to restrict the components f(X) and N = g^{-1}(Y) - f(X) (a function of Y and X) to be independent, PNL minimizes the mutual information I(f(X);E). For this purpose, the PNL algorithm computes the gradient of I(f(X);g^{-1}(Y) - f(X)) with respect to the parameters of f and g. This is a strong disadvantage of that method since it is often hard to measure and optimize the mutual information directly, especially in higher dimensions. In most cases, it requires having explicit modeling of the density functions of X and Y (p_X(x) and p_Y(y)). In our method, the independence constraint is applied on the observations rather on explicit modeling of the density functions.\\n\\nWe added a paragraph in the related work section discussing PNL and how our algorithm resolves the above problems.\\n\\nFinally, for completeness, we added an empirical comparison between our method and a multivariate extension of PNL.\"}",
"{\"title\": \"Thank you for the insightful comments\", \"comment\": \"Regarding points 1 + 2 + 3 + 4. Please see the general comments.\\n\\n5. Thanks for pointing out the work of Heinze-Deml et al. (2017). This paper assumes the algorithm is provided with datasets of different environments, each one has a fixed value of E. In our paper, we focus on a vanilla SCM, where the algorithm is only provided with observational samples of X and Y = g(X,E) (i.i.d samples). The samples are not divided into subsets that are invariant w.r.t E. \\n\\nIn addition, the two independence tests are different. In our case, we require that E is independent of X, while in papers, such as, (Heinze-Deml et al. (2017); Zhang et al., 2011) they assume that Y is independent of E given X. This assumption generally fails in our setting. We will note it to the related work section in the next version of the paper.\\n\\nWe made a considerable effort to extend GPI, LiNGAM, Direct-LiNGAM, and CDS to the multivariate case, however, these algorithms and/or their existing implementations are highly dependent on the assumption that the data is univariate. Fortunately, we were successful in extending PNL and added the results to the table. \\n\\nWe also added empirical results on the MOUS-MEG real-world dataset.\\n\\n6. Regarding CGNN, we found a bug in the public implementation of it that disabled the training to run on the GPU. It is now fixed and we have added the results of running CGNN to the table.\"}",
"{\"title\": \"General comments for all of the reviewers\", \"comment\": \"We would like to thank the reviewers for your constructive feedback, we appreciate it. We revised the paper according to the reviews and uploaded it.\\n\\nWe would like to provide a few general comments to questions on the univariate case that were raised by the reviewers.\\n\\nIn the first part of the paper, we provide a critical stand-point to the univariate SCM. We claim that the univariate SCM is too simplistic. To do so, we show empirically that one is able to infer the causal relationship between two random variables X -> Y without checking the relationship between them. The intention to do so is explicitly stated in the abstract, introduction and summary:\", \"abstract\": \"\\u201cWe show that by comparing the individual complexities of univariate cause and effect in the Structural Causal Model, one can identify the cause and the effect, without considering their interaction at all.\\u201d\", \"intro\": \"\\u201cIn this work, we demonstrate that for the 1D case, which is the dominant case in the existing literature, the SCM model leads to an effect that has lower complexity than the cause. Therefore, one can identify the cause and the effect by measuring their individual complexities, with no need to make the inference based on both variables simultaneously. Thus, the decision as to which of the two is the cause and which is the effect may not be based on causality but on complexity.\\u201d\", \"summary\": \"\\u201c...its success in predicting cause and effect indicates an inherent bias in the unidimensional datasets.\\u201d\\n\\nAlmost all of the algorithms in the literature try to compare the success of mapping from X to Y (and vice versa) under various conditions (independence tests, complexity, etc\\u2019). We introduce the heuristic AEQ method and show empirically that one can infer the causal relationship between X and Y, simply by comparing their complexities, without comparing any mappings between them. The complexity of a random variable X is measured by the MSE error produced by an autoencoder that tries to map T(X) to itself. Here, T(X) is the transformation of X into a multivariate random variable (see Sec. 3.1, page 3). \\n\\nIt is important for us to emphasize that we are not advocates of the AEQ method. It is given as an indication that the univariate SCM framework is too simplistic to capture the true notion of causality and that a method that obviously does not check causality between random variables is able to get competitive results on several benchmarks. \\n\\nIn order to avoid raising unnecessary antagonism, we mentioned our criticism in a soft manner. We are aware that this point was probably missed by the reviewers and we will make a significant effort to emphasize it more in the next version.\\n\\nNext, we wanted to ground the AEQ algorithm in a theoretical manner. The intention is to be able to claim that the univariate model is inherently too simplistic. To do so, first, we showed that the reconstruction error of an autoencoder trained on samples of a multivariate r.v U is proportional to the entropy of U (see Lem. 1). \\n\\nThen, we informally said that the entropy of Y is supposed to be smaller than that of X. This claim holds in the discrete case, where Y = f(X) (with no noise involved). We agree with the reviewers that this claim is generally false (when noise is involved and the r.v.s are continuous). 
Note that, in our case, we do not compare the reconstruction errors of X and Y; rather, we compare them for T(X) and T(Y). Therefore, the AEQ does not compare the entropies of X and Y; it compares the entropies of T(X) and T(Y). For a discrete r.v. X, we still have h(T(X)) >= h(T(Y)) for Y = f(X), where f is a monotonic function.\\n\\nSince it is unclear to us whether this inequality holds in the general case, we decided to take down the discussion regarding the data-processing inequality. However, we do think that Lem. 1 is important since it provides a better understanding of what is measured by the AEQ. We are very thankful to the reviewers for pointing out these issues.\\n\\nFinally, we compare the AEQ method to a comparator of the standard Shannon entropies of X and Y and show that the entropy does not indicate the causal direction. Note that this does not contradict the combination of Lem. 1 and the empirical results of the AEQ. That is because the AEQ compares the MSEs of autoencoders trained on T(X) and T(Y). By Lem. 1, the reconstruction errors are proportional to h(T(X)) and h(T(Y)) and not to h(X) and h(Y). It is also worth mentioning that when running our experiments we tried different alternatives to the above T. For other transformations we achieved much worse results. Therefore, we believe that the success of the combination of the quantiles and the autoencoder is not accidental.\", \"multivariate_case\": \"We have added multiple empirical results for the multivariate case, following the reviews. In addition, we added a new real-world dataset we call MOUS-MEG, which is described in the experiments section.\"}",
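For readers who want the gist of the AEQ pipeline described above in code form, here is a toy approximation of ours: an overlapping window of sorted samples stands in for the quantile transformation T, and a narrow MLP trained to reconstruct its input stands in for the autoencoder. The exact T, architecture, and training details in the paper differ, and all names below are illustrative.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def aeq_complexity(samples, q=32):
        # Crude complexity score: embed scalar samples as overlapping windows of
        # sorted values (a stand-in for T(X)), then measure the reconstruction MSE
        # of a narrow-bottleneck network fit to reproduce its own input.
        s = np.sort(np.asarray(samples, dtype=float))  # assumes len(samples) > q
        T = np.stack([s[i:i + q] for i in range(len(s) - q)])
        ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500)
        ae.fit(T, T)
        return float(np.mean((ae.predict(T) - T) ** 2))

    # Heuristic decision rule: predict X -> Y when the effect looks simpler, i.e.
    # when aeq_complexity(y_samples) < aeq_complexity(x_samples).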
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Edit after author rebuttal and author additions:\\n\\nI have updated my score from a weak reject (3) to a weak accept (6).\", \"justification\": \"1. The authors have pointed out that I misunderstood one of their contributions: pointing out that they are demonstrating that the univariate case is an insufficient setting to test causal discovery methods because it can be done without even looking at the conditional distributions (just the marginals). This contribution seems important to orient future work. They have made this more clear in their recent upload and would likely make it even more clear in a camera-ready version.\\n\\n2. The authors have made their contribution to the multivariate setting more substantial by adding evaluation on the MOUS-MEG real-world dataset and have better positioned their work relative to others by adding comparisions to multivariate extensions of PNL and CGNN.\\n\\n====================================================================================================\", \"original_review\": \"\", \"summary\": \"The authors focus on the problem of inferring whether the causal structure X \\u2014> Y or Y \\u2014> X. They first consider the case where both X and Y are scalar (univariate) random variables and then consider the case where X and Y are vector-valued (multivariate) random variables. In the scalar case, motivated by the idea that the effect could be less entropic than the cause (due to data processing inequality), they introduce a method based on comparing reconstruction losses of X and Y and show competitive results in Tables 1 and 2. They establish that this method is not sufficient for the multivariate case in Lemma 2 and move to a new method for the multivariate case. They prove identifiability for this new method in for the multivariate case in Section 4.2 and claim state-of-the-art (SOTA) results in Table 3.\", \"main_contributions\": [\"Presents a causal discovery technique for the univariate cases that only examines the marginal distributions of X and Y and seems fairly competitive (Tables 1 and 2)\", \"Extends the post-nonlinear identifiability analysis of Zhang & Hyv\\u00a8arinen (2009) from scalars to vectors and proved that their method will actually identify the correct causal direction\", \"Demonstrates competitive experimental results for both their univariate method\", \"Claims SOTA results for their multivariate method\"], \"decision\": \"I lean toward rejecting this paper because 1) I have several questions about the univariate case (see below) that would need to be resolved before I lean toward accept, 2) although I am not too familiar with the literature, I believe that this paper may be missing key related work that also uses independence testing for causal discovery (see, e.g., Heinze-Deml et al. (2017)\\u2019s Invariant Causal Prediction for Nonlinear Models), and 3) I am not yet convinced that the comparison done in Table 3 is fair and exhaustive.\", \"sufficient_reason_to_accept\": \"If the theorems in Section 4.2 are found to checkout, and the SOTA results in Table 3 are found to be fair, exhaustive comparisons to the previous SOTA, their contribution to the multivariate case would seem to be sufficient for acceptance. 
I believe more discussion between the authors and reviewers is necessary here.\", \"questions_about_univariate_case\": \"1. The motivation for the first method (entropy decreasing along a Markov chain due to the data processing inequality) seems to only be valid when Y := f(X), but not necessarily when Y := f(X) + E. For example, let f be the identity function and E be independent of X. How did you resolve this argument against the intuition?\\n\\n\\n2. Also, I thought the data processing inequality relates mutual information between variables, not necessarily their entropies. Can you make this connection more clear?\", \"context_for_questions_3_and_4\": \"In Section 3.1, you write, \\u201cestimating the entropy of each random variable from its samples does not present a consistent difference between the entropies h(X) and h(Y). Our method, therefore, computes an alternative complexity score for X and, independently, for Y.\\u201d You then go on to link the entropy to the reconstruction error (your method) in Lemma 1 and show competitive results in Tables 1 and 2.\\n\\n3. Why do you want to link the reconstruction error to entropy if you found a purely entropy-based method did not work?\\n\\n4. Why did the purely entropy-based method not work while your method worked, if the two are linked?\", \"questions_about_multivariate_case\": \"5. Are you certain that BivariateFit and ANM are the only models that you should be comparing against for this multivariate setting?\\n\\n6. What is CGNN\\u2019s runtime? Would you be able to compare against CGNN in time for a potential camera-ready version of this paper?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes to use the autoencoder to measure the complexity for identifying the cause and effect in the univariate case. For the multivariate case, this paper extends the PNL model and use GAN for enforcing the independence between cause and noise.\", \"However, my main concerns are regarding the assumption of this work, seeing that the assumption h(X)>h(Y) in the univariate case is easy to violate. For example, let Y=f(X)+N (a special case of PNL) with some high entropy N, then h(Y) could higher than the h(X).\", \"In the multivariate case, it can be seen as incremental for PNL but does not offer new insights.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Update:\\n\\nThe authors have successfully justified my concerns. Therefore, I have increased my score to 6.\", \"original_comments\": \"In this paper, the authors consider learning causal directions from observational data from both univariate case and multi-dimensional case. In the univariate case, the authors propose a new method to learn causal directions by exploiting the complexities of cause and effect variables. In the multi-dimensional case, where the complexity can be balanced, the authors proposed a method that learns causal direction based on independence loss.\\n\\n1. The independence loss part looks confusing to me. Standard results in SCM yields that the error term E is a function of both the outcome Y and X. How can you learn the term E just from Y itself? In other words, I am not sure if the conditions required in Theorem 2 is feasible. The authors need to provide some examples to justify that the conditions in Theorem 2 are feasible conditions.\\n\\n2. In fact, the novel idea of learning causal directions based on independence test has been extensively studied in the previous literature. I regret that this has not been mentioned in the current manuscript. Examples include:\", \"http\": \"//www.jmlr.org/papers/v12/shimizu11a.html\\n\\nIn conclusion, since the idea of using independence relations for learning the causal directions is not a very new idea and a lot of discussion of the theoretical analysis is still missing. I regret that this work seems not strong enough to be accepted by ICLR.\"}"
]
} |
rygwLgrYPB | Regularizing activations in neural networks via distribution matching with the Wasserstein metric | [
"Taejong Joo",
"Donggu Kang",
"Byunghoon Kim"
] | Regularization and normalization have become indispensable components in training deep neural networks, resulting in faster training and improved generalization performance. We propose the projected error function regularization loss (PER) that encourages activations to follow the standard normal distribution. PER randomly projects activations onto one-dimensional space and computes the regularization loss in the projected space. PER is similar to the Pseudo-Huber loss in the projected space, thus taking advantage of both $L^1$ and $L^2$ regularization losses. Besides, PER can capture the interaction between hidden units by projection vectors drawn from a unit sphere. By doing so, PER minimizes the upper bound of the Wasserstein distance of order one between an empirical distribution of activations and the standard normal distribution. To the best of the authors' knowledge, this is the first work to regularize activations via distribution matching in the probability distribution space. We evaluate the proposed method on the image classification task and the word-level language modeling task.
| [
"regularization",
"Wasserstein metric",
"deep learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rygwLgrYPB | https://openreview.net/forum?id=rygwLgrYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oV9WXWF198",
"S1ejPeevsS",
"rke4QlgvsS",
"HygKZgeDjH",
"HyeFyggvoH",
"BJl1kZ14cr",
"ryl86MI0tS",
"Hkl5pWjptr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746235,
1573482594534,
1573482523525,
1573482497332,
1573482464596,
1572233430638,
1571869374402,
1571824065652
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2326/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2326/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2326/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2326/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2326/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2326/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2326/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents an interesting and novel idea that is likely to be of interest to the community. The most negative reviewer did not acknowledge the author response. The AC recommends acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General statement\", \"comment\": \"We sincerely thank the reviewers for their insightful comments. We have uploaded a revised manuscript and summarize the major changes below.\\n\\n1. In section 2, we added the paragraph illustrating the difference between PER, BN, and decorrelated BN.\\n\\n2. In section 3, we changed the presentation of the proposed method as deriving the PER from the 1-Wasserstein and then explaining its difference with BN.\\n\\n3. In section 4.1, we conducted and added an experiment on the larger dataset with the larger model.\\n\\n4. In section 4.3.1, we added computational complexity analysis.\"}",
"{\"title\": \"Author response to Reviewer 3\", \"comment\": \"We appreciate Reviewer 3 for the carefully reading our work and providing valuable comments. We address your cons as follows:\\n\\n\\n- Difference between PER and Huang et. al. 2018\\nYes, it is true that DBN (Huang et. al., 2018) also captures the interaction between hidden units though whitening. However, there are many cases PER and DBN have different behavior in making activations to follow the standard normal distribution since PER aims to match the distributions and DBN aims to whiten the activations. For instance, DBN cannot make change activations from a skewed distribution or a multimodal distribution having zero mean and the identity covariance matrix, unlike PER. This limitation of DBN can be found in Bilen & Vedaldi (2017) and Deecke et al. (2019) pointing out the inadequacy of normalizing multi-modal distributions by single mean and variance. To clarify this difference, we added a new figure (Fig. 1) and a new paragraph in P. 2-3 (last paragraph of the section 2) in the revised manuscript. \\n\\n-------\\nReference\\n\\nLei Huang, Dawei Yang, Bo Lang, and Jia Deng. Decorrelated batch normalization. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.\\nHakan Bilen and Andrea Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017.\\nLucas Deecke, Iain Murray, and Hakan Bilen. Mode normalization. In International Conference on Learning Representations, 2019.\\n\\n\\n- Computational cost of PER\\nThanks for this great suggestion. As per Reviewer 3 pointed out, PER has non-negligible computational costs in backward pass having O(n d_l s) time complexity while BN has the time complexity of O(n d_l) where b is the size of mini-batch, s is the number of projection, and d_l is the number of hidden units in layer l. In terms of the wall clock running time, a vanilla network, BN, VCL, and PER take 0.071, 0.083, 0.087, and 0.093 seconds for a single forward/backward iteration in 11-layer CNN on a single NVIDIA TITAN X, respectively. The clarification and comparison of computational costs are added in section 4.3.1 of the revised manuscript.\\n\\n\\n- Experiments on larger datasets/models\\nWe appreciate the suggestion from Reviewer 3. As Reviewer 3 suggested, we performed additional experiments on tiny ImageNet (a subset of ImageNet). It has 2x more training samples, 2x more categories, and 2x bigger image size. Besides, following the experiment given in VCL, we used 2x more filters in the experiment, i.e., 2x larger model. As other experiments performed in the original manuscript, we obtained better results than BN, VCL, and a vanilla network, and added the experiment in the revised manuscript.\\n\\n\\n- Typos\\nThanks for carefully reviewing our manuscript. We modified the typos.\"}",
"{\"title\": \"Author response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for your time and efforts to point out the missing details of the original manuscript in computational cost that is an important issue when proposing a new regularizer. We added the benchmarking result with computational cost analysis in section 4.3.1 of the revised manuscript and address your comments as follows:\\n\\n- Computational cost of PER and BN\\nIn terms of time complexity, PER has the complexity of O(b d_l s) for projection operation where b is the size of mini-batch, s is the number of projection, and d_l is the number of hidden units in layer l. On the other hand, BN has O(b d_l) complexities for element-wise arithmetic operations and computations of mean and variance. \\n\\n- Training time for PER and BN in CIFAR\\nIn our wall clock running time measure, each training iteration takes 0.071 seconds for a vanilla network, 0.083 seconds for BN, 0.087 for VCL, and 0.093 seconds for PER on a single NVIDIA TITAN X.\"}",
"{\"title\": \"Author response to Reviewer 1\", \"comment\": \"We appreciate Reviewer 1 for giving constructive feedback. Following your comments, we thoroughly revised the manuscript and believe the comments significantly improve the clarity of the manuscript. We address your three concerns as follows:\\n\\n\\n- Presentation\\nWe sincerely thank Reviewer 1 for this insightful comment. As per Reviewer 1 pointed out, it is true that we obtained PER by applying the Minkowski inequality to 1-Wasserstein. In the original manuscript, we presented PER then derived PER from the 1-Wasserstein for emphasizing the difference between BN and PER, and now we admit that was a mistake. In the revised manuscript, we thoroughly revised the presentation as deriving the PER from the 1-Wasserstein and then explaining its difference with BN. We believe this change significantly improves the presentation of the manuscript and emphasizes the difference with existing methods even better.\\n\\n- Experimental result\\nWe thank for pointing out missing details in the experimental configuration. As Reviewer 1 indicated, the experimental configurations in the manuscript may be sub-optimal. However, to carefully compare PER with existing methods (BN, VCL, and L1 and L2 activation regularizations), we use the default hyperparameters of baseline models given in their papers. To clarify this point, we added explicit comments about this in each experiment and provided the benchmark results of literature in the result tables.\\n\\n\\n- There are numerous places where English is not adequate.\\nIn response to Reviewer 1, we have very carefully proofread the manuscript again and corrected grammatical errors, typos, and inadequate expressions. We hope that Reviewer will notify us if there are still places where English is not adequate such as \\\"new perspective of concerning the target distribution.\\\"\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces \\\"projected error function regularization loss\\\" or PER, an alternative to batch normalization. PER is based on the Wasserstein metric. The experimental results show that PER outperforms batch normalization on CIFAR-10/100 with most activation functions. The authors also test their method on language modeling tasks.\", \"caveat\": \"I'm not an expert in this domain. Hence, please take my rating with a large grain of salt.\\n\\nComments/questions:\\n- What's the computational cost of using PER over batch norm? \\n- Related to my other question: For the CIFAR-10 & CIFAR-100 comparison. What was the training time for BN vs PER?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This submission belongs to the general field of neural networks and sub-field of activation regularisation. In particular, this submission proposes a novel approach for activation regularisation whereby a distribution of activations within minibatch are regularised to have standard normal distribution. The approach, projected error function regularisation (PER), accomplishes that by minimising an upper-bound on 1-Wasserstein distance between empirical and standard normal distributions.\\n\\nI think the idea described in this submission is interesting. Unfortunately, I have issues with 1) presentation, 2) experimental results, 3) English. \\n\\nThe PER is presented as an objective function that minimises an upperbound on 1-Wasserstein. I believe I have seen no evidence to the origin of PER other than it is the upper-bound on 1-Wasserstein. Therefore, I find it strange to see a presentation where first an objective function is introduced, then 1-Wasserstein is described, and after applying standard inequality you obtain an expression that is PER. The current presentation seems to indicate that before this derivation has been done no one new the connection between PER and the upper bound on 1-Wasserstein. I disagree and say that you obtained the upper bound on 1-Wasserstein and called it PER. For unknown reasons you decided to present first PER, then upper bound and finally claim connection. This is a mistake as it is not a connection but merely a consequence. \\n\\nSimply looking up CIFAR-10 best numbers on any search engine I can find significantly better numbers. It is therefore unclear why did you decide to use sub-optimal configuration without commenting on that. The same applies to PTB and possibly to WikiText2. \\n\\nThere are numerous places where English is not adequate. For instance, \\\"new perspective of concerning the target distribution\\\". \\n\\nFollowing the rebuttal stage where the authors have made significant changes to the manuscript I have decided to increase my assessment score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new method to normalize activations in neural networks, based on an upper bound of the sliced Wasserstein distance between the empirical activation distribution and a standard Gaussian distribution. I think this feels like a \\\"borderline\\\" case to me. The paper clearly has merits, at the same time there're some issues to be addressed.\", \"pros\": [\"The idea is clearly presented.\", \"Better performance than BN is achieved in many experiments.\", \"Empirical evidence in Section 4.3 looks good, suggesting the proposed method does do the job as expected. The means and variances stabilize as training progresses.\"], \"cons\": [\"While the method based on sliced Wasserstein distances sounds new, the novelty seems limited since the idea of whitening the activation distribution to unit Gaussian was introduced before as mentioned by the authors. The paper claims the random projection may capture \\u201cinteraction between hidden units\\u201d, but it seems the method proposed in e.g. Huang et. al. 2018 also has projection matrices that might be doing similar things?\", \"I\\u2019m concerned about the actual computation cost of the proposed method. Although the method does not introduce any additional parameter compared to BN or VCL, it seems to require multiple random projections for each layer (s=256 in the experiments)? This could be much slower than the BN. A clarification/comparison of the wall clock running time would be desirable.\", \"In terms of the image experiments, I do expect to see results with larger datasets/models, though not absolutely necessary.\"], \"typos\": [\"Page 5, Eq. 9, x_i should be h_i instead?\", \"Page 9, beta^l_j = 0 and ??^_j = 1\"]}"
]
} |
SJeLIgBKPS | Gradient Descent Maximizes the Margin of Homogeneous Neural Networks | [
"Kaifeng Lyu",
"Jian Li"
] | In this paper, we study the implicit regularization of the gradient descent algorithm in homogeneous neural networks, including fully-connected and convolutional neural networks with ReLU or LeakyReLU activations. In particular, we study the gradient descent or gradient flow (i.e., gradient descent with infinitesimal step size) optimizing the logistic loss or cross-entropy loss of any homogeneous model (possibly non-smooth), and show that if the training loss decreases below a certain threshold, then we can define a smoothed version of the normalized margin which increases over time. We also formulate a natural constrained optimization problem related to margin maximization, and prove that both the normalized margin and its smoothed version converge to the objective value at a KKT point of the optimization problem. Our results generalize the previous results for logistic regression with one-layer or multi-layer linear networks, and provide more quantitative convergence results with weaker assumptions than previous results for homogeneous smooth neural networks. We conduct several experiments to justify our theoretical finding on MNIST and CIFAR-10 datasets. Finally, as margin is closely related to robustness, we discuss potential benefits of training longer for improving the robustness of the model. | [
"margin",
"homogeneous",
"gradient descent"
] | Accept (Talk) | https://openreview.net/pdf?id=SJeLIgBKPS | https://openreview.net/forum?id=SJeLIgBKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SLyRt7Anr",
"ByeXHx5KoS",
"BklZZeqtjB",
"BygiThKFsr",
"BylrCxVAFB",
"SJgkPWhoYH",
"rJegO1tfKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746205,
1573654587051,
1573654521470,
1573653698682,
1571860684661,
1571696983452,
1571094376157
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2324/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2324/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2324/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2324/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2324/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2324/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper studies the implicit regularization of the gradient descent in homogeneous and shows that when the training loss falls below a threshold, then the smoothed. This study generalizes some of the earlier related works by relying on weaker assumptions. Experiments on MNIST and CIFAR-10 are provided to backup the theoretical findings of the paper.\\nR2 had some concern about one of the assumptions in this work (A4). While authors admitted that (A4) may not hold for all neural networks and all datasets, they stressed that this assumptions is reasonable when the network is overparameterized and can perfectly fit the training data. Overall, all reviewers are very positive about this submission and find a valuable step toward understanding implicit regularization.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thanks for your appreciation! We will fix the errors in the bibliography.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thanks for your reviews and for pointing out the typos!\\n\\nWe admit that (A4) may not hold for all neural networks and all datasets. Indeed, the loss of a neural network is highly non-convex and (A4) seems to be a quite strong assumption. However, it is known that sufficiently overparameterized neural networks can fit the training set through (stochastic) gradient descent. As we discussed in the introduction of our paper, state-of-the-art neural networks are typically overparameterized, and they can perfectly fit not only normal data but also randomly labeled data easily in image classification tasks (Zhang et al., 2017). Theoretically, (Allen-Zhu et al., 2019; Du et al. 2018; Zou et al., 2018) showed that gradient descent can achieve 100% training accuracy if the width is large enough. Given the evidence from both theory and practice, we believe (A4) is a reasonable assumption (at least for many DL tasks).\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks for your comments!\\n\\nThe L2-normalization is due to the use of gradient descent (GD is the steepest descent algorithm w.r.t. L2). If we change the optimization algorithm, the normalized margin being optimized should be also changed. Note that this has been studied in the linear case (Gunasekar et al., 2018a): if we run steepest descent with respect to a generic norm $\\\\|\\\\cdot\\\\|$, then the $\\\\|\\\\cdot\\\\|$-normalized margin is maximized. When $\\\\|\\\\cdot\\\\|$ is the L2 norm, it is just the case of gradient descent; when $\\\\|\\\\cdot\\\\|$ is the L1 norm, the corresponding optimization problem is coordinate descent, and it maximizes the L1-normalized margin. Right now, our results only hold for gradient descent and L2 norm. Extending it to more general norm and optimization problems is an interesting future direction.\\n\\nFor robustness, we want to emphasize that the Lipschitz constant is evaluated after normalizing the weight norm. As the weight norm is always $1$ during training, we can expect that the Lipschitz constant does not get extremely large. In our experiments, we admit that the normalized margin and robustness do not grow in the same speed, so the Lipschitz constant may change; however, the normalized margin and robustness do have quite positive correlations, and we think improving robustness by maximizing normalized margin is relevant (it may be able to provide certified robustness). So we will discuss this phenomenon in the next version of our paper to encourage further discussions.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This is a strong deep learning theory paper, and I recommend to accept.\\n\\nThis paper studies the trajectory induced by applying gradient descent/gradient flow for optimizing a homogeneous model with exponential tail loss functions, including logistic and cross-entropy loss in particular. This is an important direction in recent theoretical studies on deep learning as we need to understand which global minimizer the training algorithm picks to analyze the generalization behavior. \\n\\nThis paper makes a significant contribution to this direction. This paper rigorously proves gradient descent / gradient flow can maximize the L2 margin of homogeneous models. Existing works mostly focus on linear models or deep linear networks, and comparing with Nascon et al., 2019a, the assumptions in this paper are significantly weaker. Furthermore, this paper provides convergence rates, which seem to be the first work of this kind for non-linear models.\\n\\nI really like Lemma 5.1. This is not only a technical lemma for proving the main theorem. Lemma 5.1 itself has a nice geometric interpretation. It naturally decomposes the dynamics of the smoothed version into a radial component and a tangential velocity component. I believe this lemma can be useful in other settings as well.\", \"comments\": \"The bibliography should be fixed. Some papers are already published, so they should not be cited as the arXiv version, and author lists in some papers have \\\"et al.\\\"\\n\\n-----------------------------------------------------\\nI have read the rebuttal and I maintain my score.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The goal of the paper is to formally prove that gradient flow / gradient descent performed on homogeneous neural network models maximizes the margin of the learnt function; assuming gradient flow/descent manages to separate the data. This is proved in two steps:\\n 1. Assuming that gradient descent manages to find a set of network parameters that separate the data, thereafter gradient flow/descent monotonically increases the normalized margin (rather an approximation of it).\\n 2. The limit points of optimization are KKT points of the margin maximization optimization problem.\\nWhile the main body of the paper presents a restricted set of results, the appendix generalizes this much further applying it to various kinds of loss functions (logistic/cross-entropy, exponential), to multi-class classification and to multi-homogeneous models. There seem to be many subtleties in the proofs and the paper seems to be quite thorough. (I must say that I'm not expert enough to assess the technical novelty of this paper over prior works.)\", \"recommendation\": \"I recommend \\\"acceptance\\\". The paper takes a significant step by unifying existing results on margin maximization and going beyond them.\", \"technical_comments\": [\"It is clear that in order to define margin meaningfully, some form of normalization is necessary. But a priori, $\\\\|\\\\theta\\\\|_2^L$ is not the *only* choice; $\\\\|\\\\theta\\\\|^L$ could also work for any norm $\\\\|\\\\cdot\\\\|$. But perhaps the choice of $\\\\|\\\\cdot\\\\|_2$ is special (as Thm 4.4 suggests). It will be nice to have some insights/comments on why this choice of $\\\\|\\\\cdot\\\\|_2$ based normalization is the right one.\", \"The paper argues that having a larger margin helps in obtaining better robustness to adversarial perturbations (within $\\\\|\\\\cdot\\\\|$ balls for some choice of $\\\\|\\\\cdot\\\\|$). However note that the notion of \\\"margin\\\" is not just a function of the decision boundary, but instead depends on the specific function computed by the neural network --- this is unlike margin maximization in linear models, where \\\"margin\\\" in determined entirely by the decision boundary. As the paper argues, if we have an upper bound on the Lipschitz constant w.r.t. $\\\\|\\\\cdot\\\\|$ norm, then we get a lower bound on required adversarial perturbations for any training point. However, this does not mean that training longer is necessarily better because by doing so, we might end up with a larger Lipschitz constant (even after normalizing). So even if the \\\"margin\\\" is larger, the actual adversarial perturbations (in $\\\\|\\\\cdot\\\\|$ norm) allowed might get smaller. So I'm not sure how relevant this result is for adversarial robustness.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the implicit regularization phenomenon. More precisely, given separable data the authors ask whether homogenous functions (including neural networks) trained by gradient flow/descent converge to the max-margin solution. The authors show that the limit points of gradient descent are KKT points of a constrained optimization problem.\\n\\n-I think that the topic is important and the authors clearly made some interesting insights.\\n-The main results of this paper (Theorem 4.1 and Theorem 4.4) require that assumption (A4) is satisfied. Assumption (A4) essentially means, that gradient flow/descent is able to reach weights, such that every data x_n is classified correctly. To me this seems to be a quit restrictive assumption as due to the nonconvexity of the neural net there is a priori no reason to assume that such a point is reached. In this sense, the paper only studies the latter part of the training process. \\n\\nI feel that Assumption (A4) clearly weakens the strength of the main results. However, because the topic studied by the paper is interesting and the authors have obtained some interesting insights, I decided to rate the paper as a weak accept.\", \"typos\": \"-p. 4: \\\"Very Recently\\\"\\n-p. 7 and p. 9: \\\"homogenuous\\\" (instead of \\\"homogeneous\\\")\\n\\n----------\\n\\nI want to thank the authors for their response. However, I will stand by me evaluation and will not change it.\\nI agree though that assumption (A4) is indeed reasonable, although of course very strong.\"}"
]
} |
HJe88xBKPr | Mixed Precision Training With 8-bit Floating Point | [
"Naveen Mellempudi",
"Sudarshan Srinivasan",
"Dipankar Das",
"Bharat Kaul"
] | Reduced precision computation is one of the key areas addressing the widening ’compute gap’, driven by an exponential growth in deep learning applications. In recent years, deep neural network training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50, GNMT, and Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point, to improve error propagation. We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline. | [
"8-bit training",
"8-bit floating point",
"low precision training",
"deep learning"
] | Reject | https://openreview.net/pdf?id=HJe88xBKPr | https://openreview.net/forum?id=HJe88xBKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Hz286cLG6d",
"ryxbyEtnsB",
"S1erqEBnor",
"Hyx3iPknoH",
"SJldvfVooB",
"SkxWA0Y9oB",
"rkectR8csS",
"S1g1u7I5sH",
"SyxwlXOjcH",
"SkxP4oloYB",
"Skxg0CjUtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746175,
1573848025322,
1573831820647,
1573808036178,
1573761632039,
1573719752809,
1573707394014,
1573704550987,
1572729582930,
1571650351283,
1571368647636
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2323/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2323/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2323/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2323/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2323/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2323/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2323/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2323/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2323/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2323/AnonReviewer4"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper propose a method to train DNNs using 8-bit floating point numbers, by using an enhanced loss scaling method and stochastic rounding method. However, the proposed method lacks novel and both the paper presentation and experiments need to be improved throughout.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Still need improvement to be publishable in ICLR\", \"comment\": \"Thank you for your quick response.\\n\\nI think the method part of the paper still needs much improvement to clarify the novelty and contribution. Also, the experiments in the paper are not enough to demonstrate the generalizability of the proposed methods across different models and datasets. In its current form, I'm afraid the paper cannot reach the borderline as an ICLR paper. However, I think the paper do has potential. You can submit it to other conferences after polishing the method part and adding more supportive experiments.\"}",
"{\"title\": \"Updated revision of the paper\", \"comment\": \"We have added an updated version of the paper with the following changes:\\n\\n> Description and pseudo code for enhanced loss scaling algorithm (section 3.1)\\n> Fixed typographical and grammatical errors pointed out by the reviewers (section 4, page 8)\\n\\nWe thank all the reviewers for their helpful comments and feedback.\"}",
"{\"title\": \"Thank you for your quick feedback\", \"comment\": \">> What if overflow and underflow happen at the same time? This is an important issue to address\\n\\nYes, this can happen. It happens and more frequently with GNMT in the early epochs. This is the exact issue we are addressing with enhanced loss scaling. \\n\\nWe see more frequent gradient overflows because of a few outliers in the distribution, while a significant chunk of the gradients experience underflow. \\nThis happens more frequently with GNMT because it does not use any normalization layers which lead to more irregular data distributions. Also, RNNs tend to accumulate errors quickly compared to feed-forward networks, this is exacerbated by the additional noise induced by the low precision (FP8). \\n\\nThe existing loss scaling algorithms treat all overflows equally -- if they see an overflow, they drop the scaling factor. This leads to scaling factor dropping very quickly because of these spurious outliers.\\nThe fix we proposed to our algorithm is to ignore a few spurious overflows which are likely a result of the outliers and continue to maintain a higher loss scale value. We accomplish this by setting a \\u2018lower threshold\\u2019 for the loss scale value to prevent it from going below a certain threshold value even when overflows occur \\u2013 and this strategy worked as evidenced by the GNMT result. \\n\\nNow, to automate this process, we will add a new variable \\u2018consecutive_overflow_threshold\\u2019, (=2 or 3 depending on the workload). This will enable the loss scaling algorithm to ignore overflows unless they occur in succession for \\u2018consecutive_overflow_threshold\\u2019 times, which is a more reliable indicator of a true shift in the gradient distribution, and not caused by spurious outliers. We will also reduce the interval between loss scale updates (from 2000 to 500), so there is a better chance to recover from any inadvertent drop in loss scale value.\\n\\n>> Moreover, GNMT is kind of old that I doubt the value of a method that only works for GNMT.\\n\\nGNMT is kind of old, but It is also more difficult to converge at low precision because of the reasons discussed above. This is not the case for feed forward networks that include layer normalization as evidenced by our Transformer result. Based our observations, we believe automatic loss scaling will work for a large percentage of the feed-forward networks. \\n\\nWe have updated the paper with the pseudo code for enhanced loss scaling algorithm.\\n\\n>> it seems that Sec 3.1 is only for GNMT and Sec 3.2 is only for ResNet 50. The motivation now looks confusing. \\nSection 3.1 is mostly addressing loss scaling issues of GNMT because other networks we converged did not have any issues with existing loss scaling method. We think GNMT represents kind of an extreme case for the following reasons: \\n1.\\tit is a recurrent network which tend accumulate gradient errors quickly, which is exacerbated by the noise induced by low-precision. \\n2.\\tIt does not use any kind of normalization layers, leading to more irregular data distributions, which are difficult handle for the standard loss scaling algorithms. \\n\\nThe observations from Section 3.2 are applicable across the workloads \\u2013 we have chosen Resnet-50 as an example to clearly demonstrate the effects of noise on generalization and how that can be addressed with stochastic rounding. 
We have observed similar behavior across all three workloads we have demonstrated \\u2013 and they all use stochastic rounding for the gradients. We have not added additional plots for Transformer and GNMT in the interest of space. \\n\\n>> Please distinguish your stochastic rounding method with reference [1].\\n\\nStochastic rounding is not new, the difference is in how it was implemented. Our implementation is more efficient for the following reasons: \\n1.\\tWe perform stochastic rounding only \\u2018once\\u2019 after the full MatMul operation is complete. Wang et.al perform stochastic rounding on the accumulator after every few (8 to 32) FMA instructions. This incurs a few orders of magnitude higher overhead compared to our implementation depending on the number of FMA instructions required by MatMul . They also need to replicate this capability inside each FMA unit which costs more power and silicon area. \\n2.\\t Our rounding method itself is more efficient because we use 8-bit PRNG (LFSR) for generating the random probability. We also reuse the random numbers quite extensively ( > 256 times). This reduces the cost of stochastic rounding hardware quite significantly. \\nWe contribute the following to the state-of-the art FP8 training. \\n-\\tWe show better coverage across multiple datasets & workloads. As a result, we uncovered issues like gradient noise and loss scaling and propose solutions to handle them. \\n-\\tWe proposed a better and more efficient approach to implementing FP8 hardware compared to the one proposed by Wang et.al. \\n- Previous results from Wang et.al. only show results for Resnet 50.\\nHence we believe there is significant novelty in the work we presented. Hope that addresses your questions.\"}",
"{\"title\": \"Thank You For Your Response\", \"comment\": \"A1. If you have an algorithm for your method, I would like to see it in your paper (e.g., write a pseudocode) instead of in the rebuttal.\\nTechnically, your method is able to address overflow. However, by dividing the loss factor by 2, the method will cause underflow. What if overflow and underflow happen at the same time? This is an important issue to address. If this is impossible for training GNMT, could you show some evidence? Moreover, GNMT is king of old that I doubt the value of a method that only works for GNMT.\\n\\nA2. Please distinguish your stochastic rounding method with reference [1]. I cannot see the novelty of this part.\\n\\nOverall, after reading your response, it seems that Sec 3.1 is only for GNMT and Sec 3.2 is only for ResNet 50. The motivation now looks confusing. The novelty of this paper is still not clarified. My suggestion is that you should polish your paper and include more insights and novelty.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your comments. We will attempt to answer your questions below.\\nQ1. \\nOur intention was to show that both enhanced loss scaling and stochastic rounding are essential for achieving full accuracy with FP8 training. \\nFor example, in section 3.1, our experiments already use \\u201cstochastic rounding\\u201d on gradients (essential for convergence) to study the impact of loss scaling in isolation. \\nSimilarly, in section 3.2 when studying the impact of stochastic rounding, we employed the \\u2018best loss scaling strategy\\u2019 derived from section 3.1. Perhaps this is not clearly described in the paper. We will edit the text for clarity and upload the new version of the paper. In Figure 2a, we have compared multiple experimental results to demonstrated impact of using different loss scaling values on final accuracy of Resnet50. \\n\\nQ2.\", \"on_stochastic_rounding\": \"As we discussed in Section 3.2, rounding plays a significant role for FP8 because the rounding errors are quite large at this precision. It is known that standard rounding methods (up, down, towards zero, away from zero) have a positive or a negative bias to the final distribution. The most popular rounding method used by floating point today is round to nearest even (RNE) \\u2013 although this method is free of positive or a negative bias -- it distorts the data distribution to have more even numbers than odd. (more info here: https://en.wikipedia.org/wiki/Rounding#Floating-point_rounding). It is also known that rounding errors grow with longer accumulation chains (like in Convolution and MatMul). For RNE method, the rounding errors grow proportional to the square root of number of accumulations. This is quite significant at extreme low precisions (like FP8) where \\u2018episilon\\u2019 value is large.\\n\\nStochastic rounding is bias free because it uses random probability term for tie-breaking. It does not impact the overall data distribution of the tensor and the rounding errors are small and evenly distributed. This makes the accumulation of errors during long accumulation chains much less likely. \\n>> On why Resnet-50 demands a large scaling factor? : \\nIn general working with FP8 would require larger scaling factor because FP8 has smaller dynamic range compared to FP16. The smallest number that can be represented by FP16 is 5.96e-8 whereas the smallest number that FP8 can represent is 1.52e-5. This means that a larger percentage of smaller gradients fall \\u2018below\\u2019 the FP8 range. Hence, we need to use a larger scaling factor to push them up into the FP8 range. \\n\\nQ3. \\nWe would like to clarify that we do not use FP32 in any of our training results. For Resnet-50 , all convolution and batchnorm layers use FP8 -- except the first conv and last FC layers which use FP16; we also use FP16 master copy of weight. This configuration identical to what is used by Wang et.al.-- hence the comparison is fair. \\nThe key difference between our implementations is that we use FP32 accumulator (in the ALU) while Wang et.al use a modified FP16 (1-6-9 format) \\u2013 as a result, they need to implement additional hardware in the ALU path to perform stochastic rounding on the accumulator to preserve accuracy. Given the complexity of building stochastic rounding hardware, their implementation will be more expensive to build. We discussed these design trade-offs in Section 1. \\n\\nQ4. 
\\nWe employ the widely disseminated techniques that are used for FP16 mixed precision training, these are implemented in frameworks such as Tensorflow and PyTorch. Our loss scaling methods are modifications on top of these baseline methods.To answer your specific question : Scale (=2) and threshold (min=2, max=2^14) values are hard-coded in in the current implementation of loss scaling algorithm. The dynamic loss scaling algorithm increments the loss scale value by a factor of \\u2018scale\\u2019 every 2000 iteration intervals and reduced the loss scale by a factor \\u2018scale\\u2019 in the case of an occurrence of \\u2018NaN\\u2019 in the during gradient computation. For GNMT training, the enhanced loss scaling method updates the \\u2018min\\u2019 threshold value according to the schedule shown in Figure 2b to prevent the loss scale becoming too small. We will add the description of the algorithm to the paper. \\n\\nQ5. \\nWe have described the loss scaling methods applied to each model in section 3.1 \\nFor Resnet50, we use constant loss scaling of 10K, this is derived empirically through experimentation which are detailed in section 3.1. For GNMT and Transformer, we use dynamic loss scaling implemented by Tensorflow. \\n\\nQ6. \\nFor now, the process of selecting which layers to run at FP8 requires human expertise and intervention. But we expect the future frameworks to automate this process of selecting multiple precision options to maximize performance. Recent work on use of AutoML [1] for mixed-precision quantization is also promising research direction.\\n[1] HAQ: Hardware-Aware Automated Quantization with Mixed Precision, Kuan Wang et.al., CVPR 2019.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thank you for your detailed review and comments.\\nQ1. \\nPlease note that only GNMT required the hand tuned loss scaling schedule. We believe this method can be automated for GNMT as well. We have observed that GNMT saw wider error gradient distributions which often consisted of outliers that are much larger than the mean. When these outliers are scaled with a large scaling factor, they overflow and cause a NaN when gradients for previous layer are computed. The current automatic loss scaling algorithm is ill-equipped to handle these transient NaNs, it over-corrects (reduces) the loss scale value every time it encounters an outlier, resulting in divergence. Our enhanced loss scaling strategy mitigates this by adding a lower threshold to prevent loss scale value from becoming too small. We believe adding a few additional conditions to loss scaling algorithm will handle this case automatically.\", \"the_current_loss_scaling_algorithm_works_like_this\": \"Initial \\u2018loss_scale\\u2019 value is set to \\u2018max_threshold\\u2019.\\nWhen a gradient computation results in a NaN, reduce the loss_scale by a factor of \\u2018scale\\u2019 (=2)\\nIf there is another NaN within the \\u2018interval\\u2019, the loss scale is further reduced by a factor of 2. \\nIf there is no NaN encountered for \\u2018interval\\u2019 (=2000) iterations, the \\u2018loss_scale\\u2019 value is increased by a factor of 2 \\nWhen the gradients have lot of outliers, we would see more of these spurious NaNs and the \\u2018loss_scale\\u2019 value quickly drops. One or more of the following solutions can be applied to solve this. \\n1.\\tReduce \\u2018interval\\u2019 to a smaller iteration count (=200) so the \\u2018loss_scale\\u2019 value can recover to quickly from a previous drop. \\n2.\\tIgnore a few NaNs unless they appear in consecutive iterations. This will address the over-correction (similar to setting a lower threshold) \\n3.\\tA more generic solution is to derive layer-wise scaling factor which is aware of the gradient distribution at each layer [1] \\nAs per your feedback, we will update the paper with a description and/or a flow chart of this algorithm. \\n\\nQ2. On connection between the norm and rounding technique.\\n\\nAs we discussed in Section 3.2, rounding plays a significant role in FP8 training because rounding errors are quite large at this precision. It is known that round to nearest even (RNE) distorts the data distribution to have more even numbers than odd. As a result of this when using RNE, rounding errors grow at the rate proportional to square root of number of accumulations. (more here: https://en.wikipedia.org/wiki/Rounding#Floating-point_rounding) \\nIn Figure 3c, we are showing the result of these accumulated errors on the weight distribution. The overall weight distribution is shifted towards larger numbers resulting in increasing \\u201cL2_loss\\u201d (=sum of squares of the weights). Since l2_loss is used as a \\u2018regularization\\u2019 term ( loss =cross_entropy+l2_loss), the loss increases as the rounding errors keep accumulating. This leads to loss of generalization, as shown in Figure 3a and 3b \\u2013 the training loss keeps going down while validation loss is increasing. \\n\\nTo avoid using l2_loss term, we tried using \\u2018drop out\\u2019 method and trained without any regularization. 
Though the validation error improved in both these cases, there was still a significant gap in final accuracy due to ineffectiveness of these regularization methods. \\n\\nThen then we went back to l2 regularization \\u2013 this time addressing the rounding errors in the gradients using stochastic rounding. This helped keep the accumulation of errors in check and the we achieved SOTA accuracy. \\n\\nQ3. \\nThe single hyper-parameter used for loss scaling indicates whether to use a \\u2018static\\u2019 or a \\u2018dynamic\\u2019 loss scaling method. We will add this detail to experiments section. \\n\\n Q3b. On the relevance of Banner et.al. [3] as an important baseline.\\n\\nIn our case the update is not full precision. We compute weight gradients at FP8 precision and we use FP8 weight gradients and FP16 master weights for the weight update operation. In Figure 1 we are showing FP32 because the internal accumulator in ALU unit is FP32, during weight update the weights are accumulated into FP32 accumulator and are converted to FP16 before they are written out to the master copy, we have described this in Section 3, para 3. \\nIn contrast Banner et.al [3] use a technique called \\u2018gradient bifurcation\\u2019 where they only quantize one of the two convolutions in the backward pass. They maintain two copies of the error gradient one of which is at full precision. The full precision copy is used to compute the error gradients at FP32 precision and passed down to the previous layer. \\nHope that helps clarify your questions. \\n\\n[1] Adaptive Loss Scaling for Mixed Precision Training, Ruizhe Zhao, Brian Vogel, Tanvir Ahmed \\n[2] Wang N, Choi J, Brand D, et al. Training deep neural networks with 8-bit floating point numbers \\n[3] Banner R, Hubara I, Hoffer E, et al. Scalable methods for 8-bit training of neural networks,\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your helpful comments.\", \"q1\": \"The enhanced loss scaling strategy is interesting but the method seems hand-tuning. Is there any automatical way or heuristic deciding way?\\n\\nWe believe this can be automated. We have observed that GNMT saw wider error gradient distributions which often consisted of outliers that are much larger than the mean. This is exacerbated by the additional noise induced as a result of using lower precision (FP8) for error gradients. When these outliers are scaled with a large scaling factor, they overflow and cause a NaN when gradients for previous layer are computed. The current automatic loss scaling algorithm is ill-equipped to handle these transient NaNs, it over-corrects (reduces) the loss scale value every time it encounters an outlier, resulting in divergence. Our enhanced loss scaling strategy mitigates this by adding a 'minimum threshold' to prevent loss scale value from becoming too small. We believe adding a few additional conditions to loss scaling algorithm will handle this case automatically.\", \"the_current_loss_scaling_algorithm_works_like_this\": \"Initial \\u2018loss_scale\\u2019 value is set to \\u2018max_threshold\\u2019.\\nWhen a gradient computation results in a NaN, reduce the loss_scale by a factor of \\u2018scale\\u2019 (=2)\\nIf there is another NaN within the \\u2018interval\\u2019, the loss scale is further reduced by a factor of 2. \\nIf there is no NaN encountered for \\u2018interval\\u2019 (=2000) iterations, the \\u2018loss_scale\\u2019 value is increased by a factor of 2 \\n\\nWhen the gradients have lot of outliers, we would see more of these spurious NaNs and the \\u2018loss_scale\\u2019 value quickly drops. One or more of the following enhancements can be applied to automatic loss scaling algorithm to address this:\\n \\n1.\\tReduce \\u2018interval\\u2019 to a smaller iteration count (=200) so the \\u2018loss_scale\\u2019 value can recover to quickly from a previous drop. \\n2.\\tIgnore a few NaNs unless they appear in consecutive iterations. This will address the over-correction (similar to setting a lower threshold) \\n3.\\tA more generic solution is to derive layer-wise scaling factor which is aware of the gradient distribution at each layer [1]\", \"q2\": \"The stochastic rounding method is very intuitive. How do you choose the value of \\\"r\\\" in the equation? Is it a sensitive hyper-parameter or not?\\n\\nWe appreciate the positive feedback. \\nThe value of \\u201cr\\u201d is an 8-bit random number generated using LFSR random number generator. We also reuse these random numbers (for about 256 times) to save on the overheads to generate these numbers. \\n\\nWe will fix the typos and grammatical errors you pointed out and update the paper. \\n\\nHope this clarifies your questions. \\n\\n[1] Adaptive Loss Scaling for Mixed Precision Training, Ruizhe Zhao, Brian Vogel, Tanvir Ahmed (https://arxiv.org/pdf/1910.12385)\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Originality: The paper proposed a new scaling loss strategy for mixed-precision (8-bit mainly) training and verified the importance of rounding (quantization) error issue for low-precision training.\", \"quality\": \"The authors clearly illustrated the benefit of their proposed loss strategy and the importance of quantization error for two different tasks (image classification and NMT). The experiments are very clear and easy to follow.\", \"clarity\": \"The paper is clearly written with some visualizations for readers to understand the 8-bit training.\", \"significance\": \"1. The enhanced loss scaling strategy is interesting but the method seems hand-tuning. Is there any automatical way or heuristic deciding way?\\n2. The stochastic rounding method is very intuitive. How do you choose the value of \\\"r\\\" in the equation? Is it a sensitive hyper-parameter or not?\", \"typos\": \"\", \"page_7\": \"with with roughly 200M -> with roughly 200M\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. They use enhanced loss scale, quantization and stochastic rounding techniques to balance the numerical accuracy and computational efficiency. Finally, they get a slightly better validation accuracy compared to full precision baseline. Overall, this paper focuses on engineering techniques about mixed precision training with 8-bit floating point, and state-of-the-art accuracy across multiple data sets shows the effectiveness of their work.\\n\\nHowever, there are some problems to be clarified.\\n1. The authors apply several techniques to improve the precision for training with 8-bit floating point, but they do not show the gain for each individual. For example, how much improvement can this work achieve when just using enhanced loss scaling method or a stochastic rounding technique? This should be clearly presented and more experimental comparison is expected.\\n\\n2. The paper should present a bit more background knowledge and discussion on the adopted techniques. For instance, why the stochastic rounding method proposed in this article by adding a random value in probability can regulate quantization noise in the gradients? And why Resnet-50 demands a large scaling factor?\\n\\n3. On Table 3, in comparison with Wang et al. (2018), the authors use layers with FP32 (not FP16 in Wang). Thus, it is hard to say the improvement comes from the proposed 8-bit training. This should be clarified.\\n\\n4. How to set the hyper-parameters, such as scale, thresholds and so on, is not clear in the paper. There are no guidelines for readers to use these techniques.\\n\\n5. The authors did not give a clear description of the implement for the enhanced loss scaling. They apply different loss scaling methods for different networks. This should be explained in detail.\\n\\n6. In the experiment, for a single model, some layers are 8-bit, some layers are 32-bit and some layers are 16-bit. Is the 8-bit training only applicable for a part of the model? How do we know which layer is suitable for 8-bit training?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper is about training deep models with 8-bit floating point numbers. The authors use an enhanced loss scaling method and stochastic rounding method to stabilize training. They do experiments on image classification and NLP tasks.\\n\\nThe paper is clearly written. However, I don\\u2019t think this paper passes the bar of ICLR. This paper lacks innovation and insightful analysis.\\n\\n1.Sec. 3.1 proposes enhanced loss scaling. Loss scaling is a heuristic to train low-precision neural networks. The authors train 8-bit GNMT with a changing scaling factor. However, this looks like some manually tuned result for GNMT only. I doubt if this generalizes to other models. Besides, there is no equation or algorithm flowchart to demonstrate their method. It\\u2019s not very readable.\\n\\n2.The logic of Sec. 3.2 is quite confusing. The authors first empirically show that the performance of ResNet-50 significantly drops with 8-bit training. Then they show the sum of the square of the weights in ResNet-50 is high at the beginning. With this observation, they claim it demonstrates the drawback of \\u2018rounding-to-nearest-even\\u2019. I cannot see the connection between the norm of weights and the rounding technique. Moreover, the stochastic rounding has already been used in 8-bit training.[1]\\n\\n3.The setting in the experiment section is not stated clearly. For example, what\\u2019s the hyper-parameter for loss scaling? Another question is the gradient. In Sec. 3, just above Fig. 1, the authors claim the weight update is performed in full-precision. In contrast, they claim the gradient is 8-bit in table 3. If the update is full-precision, [2] is an important baseline.\", \"small_suggestions\": \"1.For Fig. 6, I suggest the authors to smooth the loss curves to avoid overlap of two curves. \\n2.There are two \\u2018with\\u2019s in the last paragraph of page 7.\", \"reference\": \"[1]Wang N, Choi J, Brand D, et al. Training deep neural networks with 8-bit floating point numbers[C]//Advances in neural information processing systems. 2018: 7675-7684.\\n[2]Banner R, Hubara I, Hoffer E, et al. Scalable methods for 8-bit training of neural networks[C]//Advances in Neural Information Processing Systems. 2018: 5145-5153.\"}"
]
} |
SygBIxSFDS | An Empirical and Comparative Analysis of Data Valuation with Scalable Algorithms | [
"Ruoxi Jia",
"Xuehui Sun",
"Jiacen Xu",
"Ce Zhang",
"Bo Li",
"Dawn Song"
] | This paper focuses on valuating training data for supervised learning tasks and studies the Shapley value, a data value notion that originated in cooperative game theory. The Shapley value defines a unique value distribution scheme that satisfies a set of appealing properties desired by a data value notion. However, the Shapley value requires exponential complexity to calculate exactly. Existing approximation algorithms, although achieving great improvement over the exact algorithm, rely on retraining models multiple times, thus remaining limited when applied to larger-scale learning tasks and real-world datasets. In this work, we develop a simple and efficient algorithm to estimate the Shapley value with complexity independent of the model size. The key idea is to approximate the model via a $K$-nearest neighbor ($K$NN) classifier, which has a locality structure that can lead to efficient Shapley value calculation. We evaluate the utility of the values produced by the $K$NN proxies in various settings, including label noise correction, watermark detection, data summarization, active data acquisition, and domain adaptation. Extensive experiments demonstrate that our algorithm achieves at least comparable utility to the values produced by existing algorithms while achieving significant efficiency improvement. Moreover, we theoretically analyze the Shapley value and justify its advantage over the leave-one-out error as a data value measure. | [
"Data valuation",
"machine learning"
] | Reject | https://openreview.net/pdf?id=SygBIxSFDS | https://openreview.net/forum?id=SygBIxSFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"f3lzF5O0Hh",
"Bye_QnZ2sB",
"Hye3O9ZhoH",
"Bkg6LTc5sS",
"rJgr1KZ9jr",
"Byxxvhn8iH",
"BJx713hLsr",
"rJg_8sn8or",
"rJgIEi38iB",
"SJePyo38iH",
"HJle4GUAKr",
"H1ejzNb0YS",
"ryxX9xEpFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746146,
1573817375744,
1573816947615,
1573723476930,
1573685469319,
1573469271559,
1573469146702,
1573469007984,
1573468973667,
1573468895261,
1571869224326,
1571849234553,
1571795082700
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2322/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2322/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2322/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2322/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2322/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2322/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2322/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2322/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2322/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2322/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2322/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2322/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"There is insufficient support to recommend accepting this paper. The authors provided detailed responses to the reviewer comments, but the reviewers did not raise their evaluation of the significance and novelty of the contributions as a result. The feedback provided should help the authors improve their paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updated the very first two responses\", \"comment\": \"We updated the very first two responses to incorporate the actual changes we made to the paper for each review comment.\\n\\nWe want to thank the reviewer for the helpful comments, which greatly help us improve the manuscript.\"}",
"{\"title\": \"Response to Reviewer #1 (Part 2)\", \"comment\": \"Q: Note that the assumption in machine learning is that you do not have access to the test set and it is something you won\\u2019t see until you deployed your method. I assume the authors meant validation set.\", \"a\": \"Thank you for pointing it out. Indeed, C=1 in the Shapley value definition. Since we only care about the relative value, we introduced a constant before the Shapley value definition. We can see from the comments that such constant causes unnecessary confusion, so we present the classical Shapley value definition with C=1 in the revised version. Please see Section 2.2. Thanks for the suggestion!\\n\\nWe have also polished the writing of the paper and fixed the inconsistent notations.\", \"q\": \"Constant C is introduced in Equation (2) but it is not well justified.\"}",
"{\"title\": \"Response to Reviewer #3's additional comments\", \"comment\": \"Thanks a lot for your prompt reply and helpful comments.\", \"q\": \"These results could at least provide empirical evidence that the heuristic is approximating a value which behaves similar to that of the Shapley value.\", \"a\": \"We have completed the experiment for comparing the ground truth Shapley value of raw data and the KNN Shapley value of deep features. The ground truth Shapley value is computed using the group testing algorithm in [1], which can approximate the Shapley value with provable error bounds. We used a fully-connected neural network with three hidden layers as the target model. The rank correlation between deep-feature-KNN-Shapley and ground truth Shapley value is 0.08 with p-value 0.0046. It shows that the deep-feature-KNN-Shapley may not be able to preserve the exact rank of the ground truth Shapley value. We further applied some local smoothing to the two values and see whether data groups with large Shapley value also has large deep-feature-KNN-Shapley value. We computed 1-100 percentiles of Shapley values, found the group of data points within each percentile interval (say, between 1st and 2nd percentile), and computed the average Shapley value as well as the average deep-feature-KNN-Shapley value for each group. The rank correlation between average deep-feature-KNN-Shapley and average ground truth Shapley value for these data groups is 0.22 with p-value 0.0293. We can see that deep-feature-KNN-Shapley can preserve the rank of the Shapley value to some extent in a macroscopic level.\\n\\nWe are computing the results for TMC and G-Shapley. Because their speed is slow, the code is still running now. We will compare their rank correlation coefficient with ours in the revised version later.\\n\\n[1] https://arxiv.org/abs/1902.10275\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"\\\"we will rename our method as deep-feature-KNN-Shapley based on the suggestion. \\\"\\nAs mentioned before, the use of the term \\\"heuristic\\\" for the given algorithm is technically false. Methods like TMC-Shapley (although heuristics), are seeking to approximate the actual \\\"Shapley value\\\" of the data: the Shapley value of the collaborative game among data points as players. The given algorithm is missing the feature learning aspect and therefore cannot be referred to as a heuristic for approximating \\\"Shapley\\\" values of data points. Any use of the concept \\\"Shapley values\\\" for the given heuristic would be misleading as the heuristic is not considering the collaborative game of \\\"supervised learning\\\".\\n\\n\\\"Thanks for the interesting question. The accuracy decrease speed of removing points from most valuable to least valuable using our deep-feature-KNN-Shapley heuristic is similar to TMC-Shapley but slightly worse than G-Shapley (See this anonymous link for the result on UCI census dataset https://ibb.co/cQ9NkJX). The result on Tiny ImageNet is still running and we will add them into the revised version. \\\"\\nThese results should be added to the main text as they have become one of the gold-standards in the literature; \\\"valuable points are actually valuable\\\"\\n\\n\\\"We are currently performing an experiment which compares the Shapley value estimates of a neural network using the permutation sampling method in [1], which gives provable error guarantees, with the KNN-Shapley value computed on the feature extracted by the first layer. We will update the result later into the revised version. \\\"\\nThese results could at least provide empirical evidence that the heuristic is approximating a value which behaves similar to that of the Shapley value.\"}",
"{\"title\": \"Response to Reviewer #3 (part 2)\", \"comment\": \"Q: One of the most striking results from the Data-Shapley works were removing points from most valuable to least valuable and looking at the accuracy drop speed. It would be very necessary and also very convincing if the introduced method is good at detecting very positive points as well as very negative points.\", \"a\": \"Thanks for pointing it out. Our main goal is to compare a simple heuristic for computing data value with the existing, often more computationally expensive heuristics. Therefore, we use the same set of tasks considered in the existing papers (Ghorbani & Zou). We do not aim to outperform state-of-the-art methods for each task; Instead, we hope to put our work in the context of current efforts in understanding the relationships between different notions of data value and the performance on these tasks. We will make this clear in the revised version.\\n\\n[1] https://arxiv.org/pdf/1902.10275.pdf\", \"q\": \"For almost all of the cases of comparison where previous methods are present for comparison, there seems to be no meaningful advantage. This makes interpreting Figures like 4b, 4c, 5b and most importantly Fig 6b. It would be necessary to add other benchmarks that are not Shapley based. For instance, for data summarization, there has been a line of work the methods of which could be used as a measure of comparison.\"}",
"{\"title\": \"Response to Reviewer #3 (Part 1)\", \"comment\": \"We would like to thank the reviewer for the insightful reviews.\", \"q\": \"\\u201cThe valuation methods often serve as a preprocessing step to filter out low-quality data,\\nsuch as mislabeled or noisy data, in a given dataset\\\" is not a correct statement.\", \"a\": \"Thanks for pointing it out, and we agree this sentence is confusing. The sentence \\u201cThe valuation methods often serve as a preprocessing step to filter out low-quality data, such as mislabeled or noisy data, in a given dataset\\\" appears in our theory section 4.2. By this sentence, we really mean that existing works tend to use the experiments, including mislabeled or noisy data identification, to demonstrate that the Shapley value can distinguish data quality and reflect data value in practice. We would like to give some theoretical justification for this empirical observation. We have revised the sentence to eliminate the confusion. Please see Section 4.1.\"}",
"{\"title\": \"Response to Reviewer #2 (Part 2)\", \"comment\": \"Q: In Definition 3, the definition for the dummy point. This definition requires that U(S \\\\union {z_i}) = U(S) for any S \\\\subseteq D, and in particular it should hold for S=\\\\emptyset. Does U({z_i}) = U(\\\\emptyset) make sense in most practical problems?\", \"a\": \"Thank you for pointing it out. Indeed, our result in Theorem 3 does not require the existence of dummy points in the training set. We introduce the concept of dummy points only to better illustrate the implication of Theorem 3. Because both the Shapley value and the LOO value are zero at dummy points, Theorem 3 can be restated as follows:\\n\\u2014\\nFor a learning algorithm A(\\u00b7) that achieves (\\\\epsilon(N), \\\\delta(N))-DP when training on N data points. Let the performance measure be U(S) = \\u2212 1/M \\\\sum_{i=1}^M E_{h~A(S)} l(h, z_{test,i}) for S \\\\subseteq D. Let \\\\epsilon\\u2019(N) = e^{c(N)} \\u2212 1 + ce^{c\\\\epsilon(N)}\\\\delta(N). Then, it holds that\\n\\\\max_{z_i\\\\in D} \\\\nu_{loo}(z_i) \\\\leq \\\\epsilon\\u2019(N-1)\\n\\\\max_{z_i\\\\in D} \\\\nu_{shap}(z_i) \\\\leq \\\\frac{1}{N-1} \\\\sum_{i=1}^{N-1} \\\\epsilon\\u2019(i)\\n\\u2014\\nEssentially, our theorem wants to show that for differentially private learning algorithms, the values of both bad and good points both converge to zero when the training size is large. However, compared with the LOO value, the convergence is slower for the Shapley value; therefore, the Shapley value provides better chance to differentiate good points from the bad. We have revised Section 4.2 to make it clear.\\n\\nWe have also polished writing and fixed the typos. \\n\\n[1] https://arxiv.org/abs/1902.10275\\n[2] https://arxiv.org/abs/1908.08619\\n[3] https://arxiv.org/abs/1904.02868\"}",
"{\"title\": \"Response to Reviewer #2 (Part 1)\", \"comment\": \"We would like to thank the reviewer for the comments.\", \"q\": \"Section 3 and Section 4.1 focus on U defined in equation (3), in which the testing set is a singleton. It seems to be a major limitation of the paper and it is not clear to me whether or not it is easy to generalize the results in these two sections to the general testing set with multiple points. Please explain!\", \"a\": \"Thanks for the question! The result in Section 3 is generalizable to test set with multiple points due to the decomposability property of the Shapley value. That is, the Shapley value of a training point with respect to multiple test instances is the sum of the Shapley values with respect to each test instance. The result in Section 4 can also be generalized to multiple test point setting using the decomposability property. Specifically, for any two training points, the $K$NN Shapley value with respect to multiple validation points is order-preserving when the order remains the same on each validation point, while the $K$NN LOO value with respect to multiple validation points is order-preserving when the two points are within the $K$-nearest neighbors of all validation points and the order remains the same on each validation point. We can see that similar to the single-validation-point setting, the condition for the $K$NN LOO value with respect to multiple validation points to be order-preserving is more stringent than that for the KNN Shapley value. We have incorporated the discussion of the extension to multiple test points in the Section 3.1 and 4.1.\"}",
"{\"title\": \"Response to Reviewer #1 (Part 1)\", \"comment\": \"We thank the reviewer for the comments.\", \"q\": \"The introduction/title of the paper claims this is a general approach for any model but the authors' focus is only on DNN. This should be corrected.\", \"a\": \"We apologize for not making the experiment details as clear as we intended. Actually, in our experiments, we also examine the models like Naive Bayes and Logistic Regression, which do not enjoy efficient Shapley value calculation methods. Therefore, we use the KNN Shapley value as a surrogate and our experiments show that in general, this heuristic is simple but effective. We clarified in Section 3.2 that for non-DNNs, we directly compute the Shapley value on the raw data as a surrogate for the true Shapley value. We also highlighted the models that we used in the experiments (including both DNNs and non-DNNs) in the title of each figure.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors have developed an algorithm to estimate Shapley value with complexity independent of the model size, based on the KNN classifier. Although the paper is interesting in general, and the experiment results are strong, I still feel that the current version of the paper has not quite met the (very high) standard of ICLR, for the following reasons:\\n\\n1) The authors need to better motivate the advantages of using Shapley value as a data valuation metric. It is not completely clear to me why Shapley value is a good data valuation metric, compared with other options. The authors argue that it is both fair and decomposable (linear in U). However, based on Section 2.2, it is only fair under two extreme cases (identical points and zero marginal contributions). Also, it seems that a lot of other metrics will also satisfy the decomposability condition. Please explain!\\n\\n2) Section 3 and Section 4.1 focus on U defined in equation (3), in which the testing set is a singleton. It seems to be a major limitation of the paper and it is not clear to me whether or not it is easy to generalize the results in these two sections to the general testing set with multiple points. Please explain!\\n\\n3) In Definition 3, the definition for the dummy point. This definition requires that U(S \\\\union {z_i}) = U(S) for any S \\\\subseteq D, and in particular it should hold for S=\\\\emptyset. Does U({z_i}) = U(\\\\emptyset) make sense in most practical problems?\", \"a_typo\": \"in Definition 2, n and N should be the same.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In short, the paper reports improved results on a few applications of Shapley Value of data points using an introduced approximation method that is orders of magnitude faster compared to existing methods.\\n\\nI vote for rejection of this paper mainly because of two reasons. First, the contributions are not enough for this venue. The paper uses an already existing method (Jia et al. (2019a)) with the only difference being that they use it on top of learned features and therefore the main contribution seems to be the discussions in Sec4. Secondly, the paper makes technically \\\"false\\\" claims (as will be dicussed below).\", \"the_positive_aspects_of_the_work_are_as_follows\": \"First, the elephant in the room for data valuation methods, which is assessing how good or bad a data point is, is against privacy and this work addresses this question for the first time in the (very small) literature. Secondly, the experimental results are very comprehensive and make a very good case for usefulness of the introduced algorithm. Thirdly, new useful terms for the emerging community of data valuation are introduced through the clear and well-written definitions of the paper.\\n\\nThe paper mentions that the previously introduced KNN-Shapley method applied to the learned features of a deep neural network could be used as an approximation of data points' Shapley values. This is false. All data points contribute value to the feature learning part and the approximation simply ignores this crucial fact. The whole point of using Shapley values a measure of data value is its properties which are not satisfied for the \\\"collaborative game of ML model training\\\" by this approximation; the approximation can be heavily biased due to the fact that ignores the contributions to the feature learning (let's not forget what made deep network's desirable in the first place is their feature extracting power). One cannot use the training data to learn the feature extractor and then ignore the contributions by definition of the Shapley value being the average contribution to a random subset of data points (which means the rest of the data points are removed from the game). Or, one can do such a thing but the method cannot be called an approximation for the true Shapley Value of data points. The G-Shapley heuristic mentioned in the paper from previous work also seems to suffer from the same drawback as it is not playing the same collaborative game of training the ML model (unless one assumes that simply taking one step of the gradient for every data points would be a good approximation for a complete training!)\\n\\nThe experimental results are comprehensive and convincing. The main issue is that the work discusses these experiments as if the goal of computing data value is performing such tasks (for each of which there exist simpler methods not related to data valuation). \\\" The valuation methods often serve as a preprocessing step to filter out low-quality data,\\nsuch as mislabeled or noisy data, in a given dataset\\\" is not a correct statement. 
The valuation methods serve as valuation methods; the introduced method, although providing \\\"a valuation method\\\", does not provide an unbiased estimate of the equitable Shapley value. For many of the provided tasks in the experiments section, previous works (Ghorbani & Zou, Jia et al. 2019b) report the same experiments as further inspections into the Shapley value for data and not as goals of computing these computationally expensive values. \\n\\nAll in all, although the paper's experimental and theoretical results are useful and interesting as the use case of \\\"a valuation method\\\", calling it an approximation of Data Shapley values is technically incorrect, which makes it not publishable. My score is subject to drastic change if a major rework is done to make this point clear.\", \"a_few_questions_and_suggestions\": [\"One of the most striking results from the Data-Shapley works was removing points from most valuable to least valuable and looking at the accuracy drop speed. It would be very necessary and also very convincing if the introduced method is good at detecting very positive points as well as very negative points.\", \"An interesting empirical experiment would be to look at the rank correlation between the introduced approximation and other unbiased Shapley value approximations. If the correlation is high, it means that empirically the data points contribute equally to feature learning and their value can actually be approximated just by looking at the accuracy on extracted features.\", \"Is any unbiased estimator of true Shapley values \\\"order-preserving\\\" by definition?\", \"In Sec 2 it would be more useful for the general audience to include the third Shapley property too.\", \"For almost all of the cases of comparison where previous methods are present for comparison, there seems to be no meaningful advantage. This makes interpreting figures like 4b, 4c, 5b, and most importantly Fig. 6b difficult. It would be necessary to add other benchmarks that are not Shapley based. For instance, for data summarization, there has been a line of work whose methods could be used as baselines for comparison.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Given that computing the Shapely value for data valuation is very expensive and existing approximate methods are not scalable either, the authors introduce a new approach based on K-NN approximation of the model to scale it specifically for DNN. The authors propose to use the final features produced in the last feature extractor layer of DNN as features for KNN and choose K such that the performance of KNN is closest to the performance of DNN. My main problem with this approach is that the authors still need to do this for any trained DNN in order to compute a good value for Equation (2). If the claim here is that the features extractor layers of deep neural network does not change by changing the training set (which is a huge claim), then why one should use K-NN. We can simply use the feature extractor part of DNN (almost all the trainable parameters except the last layer) once and then fix it and only learn the soft-max layer parameter for different subsets. Overall, I believe even though this paper aims to address an important problem, the approach is taken is not well-justified and lacks value. Below are some other minor problems:\\n\\nThe introduction/title of the paper claims this is a general approach for any model but the authors' focus is only on DNN. This should be corrected.\", \"inconsistent_notation\": \"Beginning of Section 2. The training and test set is first denoted by D and D_{test} and then later by S and S_{test}.\\nEquation (2): the authors are using U in a different forms that the ones introduced earlier in Section 2. I recommend the authors only introduce one notation for U and stick with it throughout the paper.\", \"writing_problems\": \"\", \"section_2\": \"\\u201cFor each training data z_i, our goal is to assign a score to each training point, denoted by \\u2026 \\u201d \\u2192Our goal is to assign a score to each training data z_i denoted by \\u2026\\n\\nNote that the assumption in machine learning is that you do not have access to the test set and it is something you won\\u2019t see until you deployed your method. I assume the authors meant validation set.\\n\\nConstant C is introduced in Equation (2) but it is not well justified.\"}"
]
} |
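The closed-form recursion at the heart of the KNN Shapley method debated in the record above (Jia et al., 2019a) can be stated compactly: sort the training points by distance to the test point, then sweep from the farthest point to the nearest. The sketch below is a minimal NumPy illustration for a single test point; the function name, the Euclidean metric, and the feature inputs are illustrative assumptions rather than the authors' released code. By the decomposability property cited in the rebuttal, values with respect to a full test set are obtained by summing the per-test-point values.

```python
import numpy as np

def knn_shapley(X_train, y_train, x_test, y_test, K=5):
    """Exact Shapley values for an unweighted K-NN classifier w.r.t. a
    single test point, via the O(N log N) recursion of Jia et al. (2019a).
    X_train: (N, d) features (e.g., raw data or learned representations)."""
    N = len(y_train)
    # Rank training points by increasing Euclidean distance to the test point.
    idx = np.argsort(np.linalg.norm(X_train - x_test, axis=1))
    s = np.zeros(N)
    # Base case: the farthest point.
    s[idx[N - 1]] = float(y_train[idx[N - 1]] == y_test) / N
    # Sweep from the second-farthest point to the nearest one.
    for j in range(N - 2, -1, -1):
        i = j + 1                      # 1-based rank of the current point
        cur, nxt = idx[j], idx[j + 1]
        s[cur] = s[nxt] + (
            float(y_train[cur] == y_test) - float(y_train[nxt] == y_test)
        ) / K * min(K, i) / i
    return s
```

Whether running this recursion on features extracted by a network trained on the same data yields a faithful approximation of the true Data Shapley value is precisely the point of contention between Reviewer #3 and the authors above.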
SygSLlStwS | Consistent Meta-Reinforcement Learning via Model Identification and Experience Relabeling | [
"Russell Mendonca",
"Xinyang Geng",
"Chelsea Finn",
"Sergey Levine"
] | Reinforcement learning algorithms can acquire policies for complex tasks automatically; however, the number of samples required to learn a diverse set of skills can be prohibitively large. While meta-reinforcement learning has enabled agents to leverage prior experience to adapt quickly to new tasks, the performance of these methods depends crucially on how close the new task is to the previously experienced tasks. Current approaches are either not able to extrapolate well, or can do so at the expense of requiring extremely large amounts of data due to on-policy training. In this work, we present model identification and experience relabeling (MIER), a meta-reinforcement learning algorithm that is both efficient and extrapolates well when faced with out-of-distribution tasks at test time based on a simple insight: we recognize that dynamics models can be adapted efficiently and consistently with off-policy data, even if policies and value functions cannot. These dynamics models can then be used to continue training policies for out-of-distribution tasks without using meta-reinforcement learning at all, by generating synthetic experience for the new task. | [
"Meta-Reinforcement Learning",
"Reinforcement Learning",
"Off-Policy",
"Model Based"
] | Reject | https://openreview.net/pdf?id=SygSLlStwS | https://openreview.net/forum?id=SygSLlStwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"K2Ul_Iq2kQ",
"BkxcUP53jH",
"Bkl6149noS",
"Hyx0Nmujsr",
"rJe75CPjjB",
"Sylec8rosr",
"HkxKVIHjiH",
"HJlJS_WoiS",
"HkeZ_CytiS",
"HJlNvGfaFr",
"S1lwHLn2tB",
"HJgRKlV0OH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746116,
1573853010404,
1573852132567,
1573778230363,
1573777034770,
1573766792232,
1573766704882,
1573750838826,
1573613161096,
1571787356237,
1571763775189,
1570812038489
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2321/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2321/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2321/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2321/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2321/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2321/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2321/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2321/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2321/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2321/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2321/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose an algorithm for meta-rl which reduces the problem to one of model identification. The main idea is to meta-train a fast-adapting model of the environment and a shared policy, both conditioned on task-specific context variables. At meta-testing, only the model is adapted using environment data, while the policy simply requires simulated experience. Finally, the authors show experimentally that this procedure better generalizes to out-of-distribution tasks than similar methods.\\n\\nThe reviewers agree that the paper has a few significant shortcomings. It's unclear how hyper-parameters are selected in the experimental section; the algorithm does not allow for continual adaptation; all policy learning is done through data relabelled by the model. \\n\\nOverall, the problem the paper addresses is very important, but we do not deem the paper publishable in its current form.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author Reply for Official Blind Review #1\", \"comment\": \"The goal of our paper is to develop a meta-RL method that is both consistent and sample-efficient. To achieve the sample efficiency, our method must be able to utilize off-policy data during training time. While we can indeed meta-learn the full initialization instead of a context vector for the reward and dynamics model using off-policy data, we cannot do the same for policy. Meta-learning the full initialization of the policy requires the use on-policy data during training time and therefore significantly sacrifice sample efficiency. Hence, in order to enable us to meta-train the policy with off-policy data, we need the adapted context vector from the reward and dynamics model to identify specific tasks.\"}",
"{\"title\": \"Why modify a context vector as opposed to meta-learn model/policy initializations\", \"comment\": \"What is the motivation behind meta-learning a context-vector based task identification mechanism as opposed to just meta-learning initialization as done by Nagabandi et al?\\n\\nI initially assumed the reason is that this mechanism would enable continuous adaptation of a single model which model initialization doesn't (without storing a mixture over models, at least); however, after the authors clarified that they are, in fact, modifying all the parameters of the model at adaptation time, I don't see any benefit of context-vector based approach over what Nagabandi et al did. Am I missing something?\"}",
"{\"title\": \"Author Reply for Official Blind Review #1\", \"comment\": \"That's correct. Continuing to adapt the model would indeed prevent the model from rapidly adapting to other tasks. Therefore whenever we are testing on a new task, we always start from an unadapted model. However this is consistent with the standard meta-RL problem setup, where the model should not have seen any other test tasks during test time. A different problem setup which requires the model to be continually adaptable to different tasks is the online meta-learning problem [1], which we are not tackling in this paper.\\n\\nWe also want to note that the continual adaptation approach that we use is common in other gradient based meta learning algorithms. For example, in the original MAML paper [2], the authors presented the continual adaptation results in Figure 3.\\n\\n\\n[1] Finn, Chelsea, et al. \\\"Online meta-learning.\\\" arXiv preprint arXiv:1902.08438 (2019).\\n\\n[2] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\"}",
"{\"title\": \"Another question\", \"comment\": \"\\\"However, as described in Algorithm 2 in the paper, during test time, we only adapt the context once but continue to adapt to the dynamics and reward model by its full parameters.\\\"\\n\\nIf you update all the parameters of the model, wouldn't that destroy the meta-learned knowledge? i.e. the model wouldn't be able to 'quickly' adapt to further changes in the environment.\"}",
"{\"title\": \"Author Reply for Official Blind Review #2\", \"comment\": \"First we want to thank reviewer #2 for the informative comments. We have updated the paper to incorporate the reviewer\\u2019s suggestions.\\n\\nRegarding point #1, we agree with the reviewer that current experiments are insufficient, and thus we have conducted additional experiments in the ant-direction mujoco environment used in other meta-RL papers. The results can be found at https://imgur.com/KPzosyd . Regarding the in-distribution performance difference between MIER and PEARL, we found it to be highly task-dependent. It is expected that in some in-distribution environments MIER would be slightly worse than PEARL because for a in-distribution test task, probabilistic inference is the optimal thing to do. However, we do want to make it clear that the focus of this paper is to improve the out-of-distribution task performance while maintaining similar performance on in-distribution tasks compared to existing methods. As suggested by the reviewer, we have also conducted additional experiments for out-of-distribution tasks in the humanoid direction environment, and the results can be found at https://imgur.com/Ob8K5Im . The x-axis corresponds to different test tasks, where task 0 is the easiest and -5 and 5 are the most out-of-distribution. We see that for almost all tasks, our methods outperforms PEARL. For all the additional experiments, we will incorporate the results in the final version of the paper.\\n\\nFor point #2, we have modified the paper to include all the hyperparameter configurations in Appendix A.\\n\\nRegarding point #3, we want to thank the reviewers for pointing out the missing related works. We have modified the paper to include them in the related work section.\", \"answers_for_questions\": \"\", \"q1\": \"In the introduction: \\\"Effective model training requires the validation batch to contain data corresponding to optimal behavior for the tasks...\\\". Why? In principle we could train a good model of the environment by running a sufficiently-explorative policy.\", \"a1\": \"We will modify the paper to make this more clear. We aim at suggesting that it is important to include data collected from the adapted policy in the validation batch, because the adapted policy might visit states that has never been visited by the unadapted policy. We agree with the reiwer that the same result could also be achieved with a sufficiently good exploration policy.\", \"q2\": \"In the related works: \\\"Our method does not suffer from this problem since we use our model to train a model-free policy\\\". It is not clear why (though it becomes later) since simulating long trajectories from a learned model could lead to the usual divergence issues.\", \"a2\": \"Thanks for pointing this out. We have modified the paper to clarify this.\\n\\nQ3,4: In Sec. 3.2: r in \\\\hat{p} should not be bold. Also, \\\"f\\\" in the subscript of the expectation was not defined (is it \\\\hat{p}?). In Sec 3.4: there is a minimization over \\\\phi which however does not appear in the objective.\\n\\nA3,4: These are indeed typos and we have fixed them. Thanks for pointing out.\", \"q5\": \"In the optimization problem at page 5: \\\\phi_{\\\\Tau} should probably be \\\\phi.\", \"a5\": \"It should be \\\\phi_{\\\\Tau}, since \\\\phi_{\\\\Tau} is the adapted context and it is a function of \\\\phi. We are evaluating the final loss on the validation batch using adapted context.\", \"q6\": \"In Fig. 
2, why is MIER run for less steps than the other algorithms?\", \"a6\": \"We will re-run the experiments with more steps in the final version of the paper.\"}",
"{\"title\": \"Author Reply for Official Blind Review #1\", \"comment\": \"First we want to thank reviewer #1 for the constructive comments. We have updated the paper to incorporate the reviewer\\u2019s suggestions.\\n\\nRegarding point #1, as suggested by the reviewer, we have conducted experiments for an ablation study of the algorithm performance vs the number of gradient steps in the half cheetah velocity environments, and the results can be found at https://imgur.com/a/e6qUNCH . We see that there is a trade off between performance improvement and stability when increasing the number of fast adaptation steps. We will incorporate these results in the final version of the paper.\\n\\nRegarding point #2, as suggested by the reviewer, we will modify the paper to include a point mass toy examples and visualization of the policy behavior.\\n\\nAs for point #3, as suggested by the reviewer, we have conducted experiments for an analysis of model prediction error, and the results can be found at https://imgur.com/a/5UdqLyp . We see that the model loss does decrease during training, which matches the improvement in average return. We will incorporate these results in the final version of the paper.\\n\\nFor point #4, we have modified the paper to include all the hyperparameter configurations in Appendix A.\\n\\nFor point #5, as suggested by the reviewer, we have conducted additional in-distribution experiments in the ant-direction environment used in other meta-RL papers, and the results can be found at https://imgur.com/a/F8W37TD . We have also conducted additional experiments for out-of-distribution tasks in the humanoid direction environment, and the results can be found at https://imgur.com/a/ZxRRTmd . The x-axis corresponds to different test tasks, where task 0 is the easiest and -5 and 5 are the most out-of-distribution. We see that for almost all tasks, our methods outperforms PEARL. We will incorporate these results in the final version of the paper.\", \"answers_to_questions\": \"\", \"q1\": \"Section 3.2: I assume the expectation should be taken w.r.t. p\\u2019 rather than f?\", \"a1\": \"That\\u2019s indeed a typo. Thanks for pointing it out!\", \"q2\": \"In Algorithm 1 & 2, how was the adapted context \\\\phi_T used to update policy \\\\psi? Was it as input to the model parametrized by \\\\psi? It might be useful to make it clearer.\", \"a2\": \"We have 2 phases of improving the policy with the adapted model context. For the first phase, we direct feed the updated model context as part of the input to the policy. For continual improvement, we use data generated by the model conditioned on the adapted context to continue training the policy.\"}",
"{\"title\": \"Acknowledging the response\", \"comment\": \"Thank you for the detailed response.\\n\\nI've read it and I'm looking at the updated paper. I'll update my review soon (In ~6 hours)\"}",
"{\"title\": \"Author Reply for Official Blind Review #3\", \"comment\": \"First we want to thank reviewer #3 for the constructive comments. We have updated the paper to incorporate the reviewer\\u2019s suggestions.\\n\\nRegarding point #1, as the reviewer points out, indeed it is impossible for the model to accurately adapt to a new task if the task is too out-of-distribution. We will modify the paper to make this clear. However, the main advantage of our algorithm is that the adaptation is consistent, meaning that given enough data during test time and a model with large enough capacity, the adaptation process would eventually perform well for the new task. This is crucial for adapting to out-of-distribution tasks, since there will be no guarantee on how the dynamics and reward function would change for the new tasks.\\n\\nRegarding point #2, as the reviewer points out, the proposed algorithm is not the first to formulate meta-RL problem into a meta-supervised learning problem. We agree with the reviewer that Nagabandi et al. (2018) should be included as a baseline. However, we weren\\u2019t able to reproduce the authors\\u2019 results using the open source code released by the authors, and we are actively communicating with them to resolve the problem. For now we ran our method on the HalfCheetah environment in Nagabandi et al. (2018), and the comparison can be found here: https://i.imgur.com/5bSCSgD.png. We see that our method achieves superior performance. We will modify the paper to include comparison to Nagabandi et al. (2018) in more environments once we resolve the problem.\\n\\nHowever, we do want to clarify that our proposed method is not merely a variation of planning methods on top of the model based approach described in Nagabandi et al. (2018). First of all, the off-policy relabeling method is not a planning algorithm, as it uses policy iteration to improve the policy instead of optimizing actions with respect to the dynamics and reward model\\u2019s prediction. Furthermore, the use of relabeling is essential for our method because it enables cross-task data reuse, which is a unique advantage only applicable in the meta-RL setting. \\n\\nAs for point #3, as suggested by the reviewer, we have modified the paper to include all the hyperparameter configurations in Appendix A. We also want to make it clear that we are not making the assumption that the context vector alone is sufficient to capture changes in MDPs. On the contrary, we are assuming that the context vector is often insufficient to capture a new task, especially for out of distribution tasks. This is precisely the reason why we need off-policy data relabeling to continue improving the policy during test time. The ablation study on the right side of Figure 4 in our paper demonstrates the importance of relabeling using data from other tasks.\", \"answers_for_questions\": \"\", \"q1\": \"It's not clear to me why the validation batch must contain data corresponding to the optimal behavior.\", \"a1\": \"We will modify the paper to make this more clear. We aim at suggesting that it is important to include data collected from the adapted policy in the validation batch, because the adapted policy might visit states that has never been visited by the unadapted policy.\", \"q2\": \"Is the proposed framework really consistent? At adaptation, only the context vector is being updated whereas model parameters (theta) are fixed. 
Why is a context vector alone sufficient to adapt the model to drastic changes in the MDP?\", \"a2\": \"We want to clarify that by consistency, we mean that the proposed algorithm would converge to the optimal policy asymptotically given enough data. As the reviewer points out, merely adapting the context would not guarantee consistency. However, as described in Algorithm 2 in the paper, during test time, we only adapt the context once but continue to adapt the dynamics and reward model via its full parameters. Therefore, if the model has large enough capacity to capture the ground truth dynamics and reward of the MDP, the continued adaptation is consistent.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"### Summary\\n1. The paper proposes an algorithm capable of off-policy meta-training (Similar to PEARL) as well as off-policy policy adaptation (By relabelling previous data using the adapted model and reward function). \\n\\n2. The basic idea is to meta-learn a model that can adapt to different MDPs using a small amount of data. Moreover, the adaptation is done by only changing the latent context vector (Similar to CAVIA or CAML). The remaining parameters of the model (theta) are fixed after meta-training. \\n\\n3. The paper also proposes learning a universal policy that, when given the context vector of a task, can maximize the reward for that task. This means that for with-in distribution meta-testing tasks, the policy can be used as it is (by giving it the right context vector which can be computed by adapting the model). For out-of-distribution tasks, however, it is important to update this policy. \\n\\n4. To update the policy, the paper proposes combining previously stored data (for example data used in meta-training) with the adapted model to do off-policy learning (Using SAC). \\n\\n### Decision with reasons\", \"i_vote_for_rejecting_the_paper_in_its_current_form_for_the_following_reasons\": \"1- The paper assumes that it is possible to learn models for out-of-distribution tasks with a few samples that are accurate on all the previously stored data. This is fundamentally incorrect. If the MDP changes in a significant way, it is not reasonable to expect that we can adapt a model from a few samples. Moreover, even if we can adapt the model using a lot of new experience, it is not reasonable to expect that we can use this model to accurately label all previous data. The authors do acknowledge this when describing results in Figure 3, however they seem to underplay this limitation. \\n\\n2- Turning the meta-RL problem into a supervised learning problem has already been explored. For instance, Nagabandi et al. (2018)[1] showed that it is possible to quickly adapt models to changes using meta-learning. They, however, used decision time planning for the control policy (By random shooting method). This paper, on the other hand, uses Dyna style planning with an off-policy learning algorithm on previously stored data. The only difference is the choice of off-the-shelf planning algorithm which is not a significant contribution (There are some other small differences, such as learning a context vector and not model initialization, learning a universal policy etc, however, I don't see how they are essential for the proposed approach; maybe the authors can clarify why those choices are essential) \\n\\n3- The paper assumes a context vector alone is sufficient to capture changes in MDPs (It keeps the rest of the model fixed at adaptation). This might be reasonable if the context vector is sufficiently large, but the paper does not even mention the size of the context vector. It also skips other important details. For example, it does not mention any details about hyper-parameter selection, how the context-vector used in the model, etc. It's hard to judge the importance of the experimental results because of this. 
\\n\\n### Questions \\n\\n1- \\\"Effective model training requires the validation batch to contain data corresponding to optimal behavior for the tasks, which we obtain by training a universal policy conditioned on the context descriptor\\\"\\n\\nIt's not clear to me why the validation batch must contain data corresponding to the optimal behavior. \\n\\n2- Is the proposed framework really consistent? At adaptation, only the context vector is being updated whereas model parameters (theta) are fixed. Why is a context vector alone sufficient to adapt the model to drastic changes in the MDP? \\n\\n\\n[1] https://arxiv.org/abs/1812.07671\\n\\n### UPDATE\\n\\nThe authors gave a detailed response to the reviews and answered some of my main concerns. However, I'm still not convinced that the paper, in its current form, can be accepted. My issues are: \\n\\nThe paper combines some existing ideas in a new way but falls short of justifying the choices it made. The proposed contribution is that it is consistent (meta-learning methods that learn a network initialization are also consistent), can do off-line meta-training (So can PEARL) and can use old meta-training data at meta-test time (This is novel to this paper). However, the proposed methodology also has some downfalls. For example: \\n\\nIt does not allow continual adaptation. This is an important limitation of existing consistent meta-learning methods and this paper does not address it. Nagabandi et al. (2018) [1], on the other hand, propose a similar solution that is also capable of continual adaptation. \\n\\nMOST IMPORTANTLY, the empirical evaluation in the paper is very unsatisfactory. Even though the authors have included hyper-parameters in the appendix in the updated version of the paper, they still do not specify how these parameters were selected. Were the parameters selected to maximize the performance of their method and then copied for the baselines? This would not be a fair comparison. \\n\\nGiven the above-mentioned issues, I don't think the paper in its current form can be accepted and I'm maintaining my initial score. I think the authors should do a more thorough empirical investigation and tune the baselines and their method separately (using a comparable compute budget). They should also report results on multiple environments using the same parameters (i.e. tune hyper-parameters on one or a few environments and report results on some other environments, as commonly done for Atari). \\n\\n[1] Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n-------------\\nThe authors propose an algorithm for meta-rl which reduces the problem to one of model identification. The main idea is to meta-train a fast-adapting model of the environment and a shared policy, both conditioned on task-specific context variables. At meta-testing, only the model is adapted using environment data, while the policy simply requires simulated experience. Finally, the authors show experimentally that this procedure better generalizes to out-of-distribution tasks than similar methods.\\n\\nMajor comments\\n--------------\\nMaking meta-rl algorithm generalize better outside of the meta-training distribution is a relevant open problem, and this work proposes nice ideas towards its solution. The paper is well-organized and easy to read. The idea of reducing meta-rl to a task identification problem is not completely novel since some recent works have been proposed in this direction (see later). Anyway, the proposed approach is interesting and seems (at least from the proposed experiments) effective. My main concerns follow.\\n\\n1. Though they attempt to address all relevant questions about the proposed approach, I found the experiments quite weak. Only two Mujoco domains are used for the standard meta-rl experiment, and only one of them (HalfCheetah) is used to test the out-of-distribution capabilities. Regarding the first experiment, MIER always performs comparably or worse than PEARL. What is the intuition behind this result? Does it suggest that MIER is paying additional sample complexity in \\\"in-distribution\\\" tasks in order to be more robust to out-of-distribution ones? On the other hand, the generalization experiments seem much more promising, but I would like to see more (at least the humanoid robot as well) to confirm that this result is not only a specific case of this domain. Furthermore, from Figure 3 it seems that MIER improves over PEARL even on in-distribution tasks, while it performed significantly worse in Figure 2. Why does this happen?\\n\\n2. Related to the previous point, I did not find any description of the parameters adopted in all experiments (learning rates, batch sizes, etc.). I do not believe I would be able to reproduce the results at the present time.\\n\\n3. The proposed method is somewhat related to other recent works [1,2]. In particular, [2] presents similar ideas, where the authors meta-learn a fast-adapting model (actually, a task encoder) and a shared universal policy conditioned on the task representation. The main focus is still to improve the generalization to out-of-distribution tasks. Can the authors better discuss the relations to these works?\\n\\nMinor comments\\n--------------\\n- In the introduction: \\\"Effective model training requires the validation batch to contain data corresponding to optimal behavior for the tasks...\\\". Why? In principle we could train a good model of the environment by running a sufficiently-explorative policy.\\n- In the related works: \\\"Our method does not suffer from this problem since we use our model to train a model-free policy\\\". 
It is not clear why (though it becomes clear later) since simulating long trajectories from a learned model could lead to the usual divergence issues.\\n- In Sec. 3.2: r in \\\\hat{p} should not be bold. Also, \\\"f\\\" in the subscript of the expectation was not defined (is it \\\\hat{p}?).\\n- In Sec 3.4: there is a minimization over \\\\phi which however does not appear in the objective.\\n- In the optimization problem on page 5: \\\\phi_{\\\\Tau} should probably be \\\\phi.\\n- In Fig. 2, why is MIER run for fewer steps than the other algorithms?\\n\\n[1] Humplik, J., Galashov, A., Hasenclever, L., Ortega, P. A., Teh, Y. W., & Heess, N. (2019). Meta reinforcement learning as task inference. arXiv preprint arXiv:1905.06424.\\n[2] Lan, L., Li, Z., Guan, X., & Wang, P. (2019). Meta Reinforcement Learning with Task Embedding and Shared Policy. IJCAI 2019.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a novel approach to reformulate the meta-RL problem as model identification with a gradient descent based algorithm. Such innovation not only allowed us to perform meta-learning via supervised learning of the model, which is more stable and sample efficient, but also allowed us to leverage off-policy RL to learn the policy instead of meta RL.\", \"pros\": \"1. Paper clarity. Although the submission had a few typos (I am not a native English speaker, but I'd encourage the authors to polish the writing of the paper), it's a very well-written paper overall. The flow and logic of this paper was clean, and the authors stroke a good balance between being focused about the core contribution of the paper, and reviewing related work and introducing sufficient preliminaries. As a result, I think this paper was accessible to both domain experts and the broader ICLR community. \\n\\n2. Novelty. This paper proposed a novel approach to reformulate the meta-RL problem as model identification with a gradient descent based algorithm. To the best of my knowledge, this was the first paper broke the meta-RL problem into a simpler meta supervised learning problem and an off-policy RL learning problem. Although each component of the proposed solution was not new, e.g., \\\"relabel\\\" was used in Dyna, MAML was first introduced in 2017, the combination of each component to address the meta-RL problem seemed to the novel to me. And I think the idea could be interesting to the ICLR community.\", \"cons\": \"It's weak accept rather than accept from me because of how the empirical evaluation were conducted in the paper, and I think the experiments conducted in the paper were a little bit weak (common for most ICLR submissions). Examples:\\n\\n1. Number of gradient steps is an important tuning parameter for MAML, it would be interesting to discuss number of gradient steps within the context of MIER.\\n2. It might be useful to conduct some qualitative results to understand the model learned with MIER against the baselines, e.g., how well MIER adapt to the out-of-distribution tasks with simulated data points (examples o such qualitative studies could be found, say, in Finn et al., 2017).\\n3. Given the fact that one major contribution of this paper was reformulating the meta-RL problem as model identification, it would be useful to conduct some quantitative study to help the readers understand the effectiveness of learning the environment model p(s\\u2019, r|s,a) compared to ground-truth, and how the quality of the learned environment model made an impact on the overall performance of the model.\\n4. Some implementation details of MIER were missing, I don\\u2019t feel confident about how reproducible this research would be. For example, the specification of both environment and policy models were not discussed in the paper.\\n5. In general, it would be useful to conduct more experiment results on more diverse data sets, say, in a supplement material.\", \"a_few_questions_to_the_authors\": \"1. Section 3.2: I assume the expectation should be taken w.r.t. p\\u2019 rather than f?\\n2. In Algorithm 1 & 2, how was the adapted context \\\\phi_T used to update policy \\\\psi? Was it as input to the model parametrized by \\\\psi? 
It might be useful to make it clearer.\"}"
]
} |
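The cross-task experience relabeling that the rebuttals above identify as MIER's key ingredient amounts to keeping (s, a) pairs from other tasks' replay data while regenerating the next state and reward with the model adapted to the new task; the relabeled transitions can then feed any off-policy learner such as SAC. The sketch below is a hypothetical PyTorch illustration, assuming a model that maps (state, action, context) to (next state, reward); the function name and signature are assumptions, not the authors' implementation.

```python
import torch

def relabel_experience(model, adapted_context, states, actions):
    """Cross-task experience relabeling (illustrative sketch): reuse
    state-action pairs collected on other tasks, but replace the next
    state and reward with predictions from the dynamics/reward model
    adapted to the new task."""
    with torch.no_grad():
        next_states, rewards = model(states, actions, adapted_context)
    return states, actions, rewards, next_states
```

Because the policy is then improved purely by off-policy RL on these model-generated transitions, the procedure remains consistent as long as the model itself keeps being trained on real data from the new task, which is the point the authors stress in their reply on consistency.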
S1gEIerYwH | Transferring Optimality Across Data Distributions via Homotopy Methods | [
"Matilde Gargiani",
"Andrea Zanelli",
"Quoc Tran Dinh",
"Moritz Diehl",
"Frank Hutter"
] | Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available.
In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with empirical evaluations in regression and classification scenarios, where we show that superior numerical performance can be achieved on popular deep learning benchmarks, i.e., FashionMNIST and CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insight into the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a general local theoretical analysis in a simplified setting. | [
"deep learning",
"numerical optimization",
"transfer learning"
] | Accept (Poster) | https://openreview.net/pdf?id=S1gEIerYwH | https://openreview.net/forum?id=S1gEIerYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"FzBfpIGEnK",
"r1xes1UjoH",
"H1ebq869iH",
"rygvEN6qir",
"Hyx8jGXXjS",
"BylA50cmqB",
"HJl5vl0pKB",
"Byl6u9ZhtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746087,
1573769112432,
1573734025209,
1573733423023,
1573233310443,
1572216469663,
1571836001698,
1571719796806
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2320/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2320/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2320/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2320/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2320/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2320/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2320/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a theoretically motivated method based on homotopy continuation for transfer learning and demonstrates encouraging results on FashionMNIST and CIFAR-10. The authors draw a connection between this approach and the widely used fine-tuning heuristic. Reviewers find principled approaches to transfer learning in deep neural networks an important direction, and find the contributions of this paper an encouraging step in that direction. Alongside with the reviewers, I think homotopy continuation is a great numerical tool with a lot of untapped potentials for ML applications, and I am happy to see an instantiation of this approach for transfer learning. Reviewers had some concerns about experimental evaluations (reporting test performance in addition to training), and the writing of the draft. The authors addressed these in the revised version by including test performance in the appendix and rewriting the first parts of the paper. Two out of three reviewers recommend accept. I also find the homotopy analysis interesting and alongside with majority of reviewers, recommend accept. However, please try to iterate at least once more over the writing; simply long sentences and make sure the writing and flow are, for the camera ready version.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"thank you for your reply\", \"comment\": \"This reviewer agrees that further experimentation can also be conducted in follow up work.\"}",
"{\"title\": \"Thank you for your insightful comments\", \"comment\": \"Thank you very much for taking the time to review the paper and for your comments. We are glad to read that you find the method and our theoretical contribution promising.\\n\\nPlease see our reply to Reviewer #3 regarding a more extensive experimental evaluation of the method.\\n\\nThank you for pointing out the typo in the citation of the VGG paper. \\nWe have also included the citations that you suggested for meta-learning and in the context of speeding up the training of deep networks.\"}",
"{\"title\": \"Thanks for your insightful comments\", \"comment\": \"Thank you very much for your comments and for taking the time to carefully read the paper. We are really glad that you like our work.\\n\\nWe have refined and polished the non-convex local theoretical analysis that previously was only in the appendix, and moved it to the main text. As you pointed out, this local analysis, since it relies on more realistic assumptions, is closer to the experimental evaluations that we conducted. Moreover, it removes the need for a toy convex evaluation that should have bridged the gap that was previously present between theoretical analysis and experimental evaluations.\", \"regarding_your_comment_on_the_notation\": \"'' Minor point: Seems that there is a notation error in proposition G.1 and its proof ($i$ instead of $i+1$). ''\\nif we understood correctly what the reviewer refers to, we confirm that the indices are used correctly and the mismatch is due to the shift in the homotopy problems, i.e. the change of parameter $\\\\lambda_i$ to $\\\\lambda_{i+1}$. \\n\\nRegarding the experimental evaluations, we neglected a discussion on the test performances since no theoretical guarantees on the generalization properties are provided. For completeness of the evaluations, as you suggested, we now included this information in the appendix. We did not observe any special trend, but our method seems competitive with the considered baselines also in terms of test performance. We want to underline once again though that we can not formally make any conclusion regarding generalization, since the theoretical analysis does not address this matter.\\n\\nRegarding a more extensive experimental evaluation of the method, that was also suggested by Reviewer #2, we agree that this might give more insight on the method. However, we believe that that this goes beyond the scope of the paper, whose main focus is to propose a new method, study it theoretically and conduct preliminary numerical evaluations to show the potential of the proposed approach and confirm the theoretical results.\"}",
"{\"title\": \"updating abstract+intro+related-work\", \"comment\": \"Thank you for taking the time to review the paper and for your comments. We are glad to hear that you find the proposed idea interesting.\\n\\nHowever, we respectfully disagree with your assessment that our paper lacks focus. We introduce *a single novel idea*, which we substantiate with both theoretical and empirical results: to use the homotopy method to track the optimum of a neural network from one data distribution to another. In particular, after introduction and related work, we derive theoretical results for combining SGD with the homotopy method (Section 3), come up with a homotopy function to gradually deform a source task distribution into a target one, for both regression and classification tasks (Section 4), and show experimental results for the performance of the method, again for both regression and classification tasks (Section 5). Our homotopy-based method generalizes the widely-used heuristic of fine-tuning networks that were pretrained on a different dataset and is therefore very relevant to the field of deep learning.\\n\\nUnfortunately, we apparently failed to convey this simple overarching idea in our submission to you. In the new version we just uploaded, we have therefore reworked the abstract, split the introduction into two sections (intro + related work) to allow each of these to be more focused, broke some lengthy sentences into two, and added a list of contributions at the end of the introduction in order to point them out more clearly. We sincerely hope that this new structure of the first part of the paper allows a better overview of our contributions.\\n\\nRegarding the accessibility of the content, we find the theoretical derivations to be well introduced and structured, and the other reviewers appear to agree (Reviewer #2 mentioned the concepts to be \\u201csimple and elegant, and well motivated, and also well introduced\\u201d, and Reviewer #3 stated \\u201cOverall, the paper is well written, well motivated and well structured. The technical content is also very clear and excellent.\\u201d). Unfortunately, given space limit, we had to relegate parts of the derivations to the appendix. As pointed out by Reviewer #3, it would have been nice to move the non-convex local analysis into the main text. If you have any suggestions regarding this issue we would be delighted to take them into consideration. In case you have any doubts on the theoretical derivations and/or the experimental evaluations, we are very happy to discuss these. We would also be very happy to hear more details about your comments in order to help us improve the paper to address them better.\\n\\nWe hope that the rewritten first part of the paper is clearer now, and that based on this, you will reconsider your assessment of the paper. We thank you for your time and effort!\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Based on homotopy,, the paper describes a more rigorous approach to transfer learning than the so called \\u2018fine-tuning\\u2019 heuristic. Progress in the direction of more principled approaches for transfer learning would be tremendously impactful, since one of the core promises of deep learning is the learning of features, which can be used in different downstream tasks.\\nEssentially, (if this reviewer understood this correctly) the idea behind this paper works by interpolation between the original task of interest and a potentially easier to optimize surrogate task. Overall, this reviewer found the concept simple and elegant, and well motivated, and also well introduced. However, since this reviewer does not have a formal background in mathematics, they cannot assess the soundness of the proofs.\\n\\n\\nThe paper tests the hypothesis by a simple function approximation regression task, and a classification task to learn to transfer from MNIST to fashion MNIST and MNIST to CIFAR, with promising results. One might argue that a more thorough evaluation would have been desirable, since the claims made by the paper are quite general, and it would have been in the authors\\u2019 best interest to present more thorough evidence that their concept works on wider scale of problems, ideally on an NLP task, given the current hype on pre-training with Transformer-based models.\\n\\n\\n\\n\\nPrevious work & citations:\\n\\nI would recommend to cite Schmidhuber 1987 (Evolutionary principles in self-referential learning) and Hochreiter et al 2001 (Learning to Learn with gradient descent) in the context of Meta learning. \\nIt would be nice to cite Klambauer et al (Self normalizing Networks) in the context of speeding up deep neural network training. \\nThe citations of the VGG paper is currently referenced by first names of the authors, not their last names, I am not sure if this was intended.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Contribution\\nThis paper proposes algorithm for transferring knowledge from easy -to-solved to complex tasks or from already solve to new tasks. It relies on homotopy functions and sequentially solves a sequence of optimization problems where the task distribution is gradually deformed from a source task to the target task. Theoretical guarantees are provided and proven in a strongly convex setting. The main results from the theory show that the distance between the final solution and its optimal are less or equal to relative to the distance of the initial source solution to its optimum. So a near optimal solution for the source task will lead to near optimal solution for the target task. Regression and Classification experimentations show competitive results compared to random and warm-start initialization schemes.\\n\\nClarity\\nOverall, the paper is well written, well motivated and well structured. The technical content is also very clear and excellent.\", \"minor_point\": \"Seems that there is a notation error in proposition G.1 and its proof (i instead of i+1).\\n\\n\\nNovelty\\nThe novelty in this work seems to be the application of homotopy methods to the transfer learning settings. The mathematical guarantees are also new and may even offer new ways to interpret fine tuning methods that have been so successful in recent literature. \\n\\nHowever, given the non-convexity of DNNs, it seems like the analysis in the non-convex settings and its implications should be part of the main text.\", \"experiments\": \"Overall, the experiments are very insightful but limited since you only show the training loss and the validation performance is not evaluated at all. Other things that would could be beneficial in better assessing the quality of your method are: comparison to Curriculum learning methods, more in depth analysis of the impact of k, and gamma in both regression and classification settings, and solving toy convex optimization problems to bridge the gap between theory and application.\", \"preliminary_rating\": [\"Accept *\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors propose a very general framework of Homotopy to the deep learning set up and explores a few relevant theoretical issues.\\n\\nThough the proposed idea is interesting, the depth and breadth of authors' presentation are simply lacking. The entire paper lacks focus and I suggest authors consider focusing on 1-2 well thought-out ideas. There are many 3-4 line long sentences that are hard to decipher. Please also consider making the presentation more accessible.\\n\\nOverall, this paper does not meet the bar for ICLR.\"}"
]
} |
SJxE8erKDH | Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings | [
"Shweta Mahajan",
"Iryna Gurevych",
"Stefan Roth"
] | Learned joint representations of images and text form the backbone of several important cross-domain tasks such as image captioning. Prior work mostly maps both domains into a common latent representation in a purely supervised fashion. This is rather restrictive, however, as the two domains follow distinct generative processes. Therefore, we propose a novel semi-supervised framework, which models shared information between domains and domain-specific information separately.
The information shared between the domains is aligned with an invertible neural network. Our model integrates normalizing flow-based priors for the domain-specific information, which allows us to learn diverse many-to-many mappings between the two domains. We demonstrate the effectiveness of our model on diverse tasks, including image captioning and text-to-image synthesis. | [
"domains",
"information",
"mappings",
"image captioning",
"model",
"latent normalizing",
"mappings latent normalizing",
"joint representations",
"images",
"text form"
] | Accept (Poster) | https://openreview.net/pdf?id=SJxE8erKDH | https://openreview.net/forum?id=SJxE8erKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"xIu5RwzH6s",
"HJgp5ud3or",
"Bkl77Lg2ir",
"H1xfnrHPsr",
"BygTZBrwsB",
"HkxD0NSDir",
"BkeOsFKksH",
"Skei1JsY9H",
"BJgSA2X89r",
"rJed5iPm5S",
"SkesJ76W5H",
"HkxBYSQ0KB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798746059,
1573845141391,
1573811738782,
1573504426028,
1573504261419,
1573504207298,
1572997535771,
1572609763324,
1572383949033,
1572203407933,
1572094691081,
1571857789180
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2319/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2319/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2319/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2319/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2319/Authors"
],
[
"~Rahul_Mehta1"
],
[
"ICLR.cc/2020/Conference/Paper2319/Authors"
],
[
"~Rahul_Mehta1"
],
[
"ICLR.cc/2020/Conference/Paper2319/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2319/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2319/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper addresses the problem of many-to-many cross-domain mapping tasks with a double variational auto-encoder architecture, making use of the normalizing flow-based priors.\\n\\nReviewers and AC unanimously agree that it is a well written paper with a solid approach to a complicated real problem supported by good experimental results. There are still some concerns with confusing notations, and with human study to further validate their approach, which should be addressed in a future version.\\n\\nI recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": [\"Thank you for your careful check!\", \"Eq. 7: Following your suggestion, we have written the concatenation of the $f_i$ the other way around, to also be consistent with previous work such as Glow. This implies the sign change you mentioned.\", \"Eq. 8: Yes, thank you.\", \"Eq. 9: We fixed the typos. Thank you.\", \"Sec. 3.3: We have tried to make the derivation of the objective clearer. If you point out specific steps that remain unclear, we are happy to clarify these in the final version.\"]}",
"{\"title\": \"comments on your answer\", \"comment\": \"Thanks for your corrections and answers.\\nI checked the new version and apparently there are still errors in the formulas.\\nEq. (7). 2nd line is wrong. Your notation for the Jacobians is not correct or at least confusing, as such this is not the chain rule. Then if I understand correctly there is a sign error - instead of + in the 2nd row). It would help the reader to write f the other way around, f=f^Ko...of^1.\\nEq (8) sign error\\nEq. (9) wrong\\n\\nsection 3.3 remains unclear for me.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you very much for the constructive feedback and the concrete suggestions for improving the manuscript. We very much appreciate that you found our work innovative with interesting new ideas, and commented on the novelty of our approach. We next address the raised concerns in detail.\\n\\n* Shared component and the global model should be carefully checked:\\nThank you for pointing out these typos/imprecisions in the formalization. We have made a careful pass of Sec. 3 in the revised paper and addressed these issues. We believe the notation to be significantly improved.\\n\\n* A better description of the baselines characteristics and of the model variants (MSE, TXT):\\nWe have added indications in the manuscript to denote that we implemented the baseline (CVAE) ourselves. Other results are taken directly from the literature (previous state-of-the-art approaches, i.e. DIV-BS, AG-CVAE, POS, Seq-CVAE).\\nWe also updated the manuscript with more detailed descriptions of the baselines and our the model variants (see also comments to Review #1). \\n \\n* MSE variant is not used anymore in the other comparisons:\\nIn Tab. 1 we include the MSE baseline when we report the oracle performance of different baselines and the state of the art regarding different performance metrics. For accuracy, in line with previous work, we take our model with the highest CIDEr score (LNFMM) and evaluate it for consensus re-ranking (Tab. 2) and diversity metrics (Tab. 3). We used the same best-performing model for text-to-image generation experiments. \\nFor completeness, we have included the MSE baseline for the experiments with limited labeled data (Tab. 2), i.e. LNFMM-MSE (semi-supervised, 30% labeled). We again observe a considerable advantage of our complete LNFMM model over LNFMM-MSE, showing the advantage of our supervised flow component $f_{\\\\phi_s}$.\\n\\nWe would be happy to answer any additional questions.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thanks a lot for your very encouraging feedback. We really appreciate it and wanted to give brief responses to your two concerns.\\n\\n* Above Eqn 5: K- divergence --> KL divergence:\\nThank you. We have fixed this.\\n\\n* The code could have been cleaned up and better organized, for easier reproducibility and reuse:\\nThank you for the comment. We have already made some improvements to aid re-use. We will make a complete, cleaned version available soon.\\n\\nWe would be happy to answer any additional questions.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you very much for the constructive and very detailed comments. We are glad that you appreciated novelty, clarity, and experiments and happily comment on the raised concerns.\\n\\n* VQ-VAE related work:\\nThank you for the pointer. We have added the reference in the related work section. We agree that VQ-VAE shows high quality image generation results. In contrast to our LNFMM framework and other prior work, e.g. Seq-CVAE (Aneja et al. 2019), M3D-GAN (Ma et al. 2019), which models images and/or text as continuous distributions in the latent space, the VQ-VAE framework relies on discrete latent variables. Extending this to joint distributions, e.g. images and texts, is certainly an interesting direction for future work.\\n\\n* Human study:\\nThank you for the suggestion. We agree that existing metrics are only a proxy to human judgement. We will definitely consider including a human study when extending the manuscript.\\n\\n* Insights what the domain specific representation has learnt and what the cross-domain representation learnt:\\nWe have included visualizations for domain-specific and cross-domain information in the Appendix A.4 (Visualization of Latent Spaces) of the revised paper. \\n\\n* Can the model benefit from training on unaligned image and textual data to learn better domain specific representations?\\nUnaligned image and text data can be included during training for better domain-specific representations. The closest experiments in this setting are in Tab. 2, where we include only 30% of the training data as aligned image-text pairs and the remaining 70% are included as unaligned image & text data. We observe that LNFMM (semi-supervised, 30% labeled) with domain-specific components for both images and texts performs better than the baseline LNFMM-TXT (semi-supervised, 30% labeled) for which the domain-specific component is present only for the texts. This shows that the model can benefit from the domain-specific components of each domain. Furthermore, comparing the results of LNFMM with LNFMM (semi-supervised, 30% labeled) we observe that the performance of the model does not drop considerably given limited amount of paired data for supervision. Thus the model benefits from the unaligned data of images and texts for learning latent representations with good performance on various evaluation metrics in Tab. 2. \\n\\n* Ablations not clear:\\nOur main contributions are the domain-specific multimodal priors for images $(p_{\\\\phi_v})$ and texts $(p_{\\\\phi_t})$ for modeling domain-specific information in the latent space and an invertible neural network $f_{\\\\phi_s}$ for transforming data points from one domain to the other. Our complete model is denoted by LNFMM and we show the benefits of the different components by including the following ablations:\\n\\n- A CVAE baseline is included to show the advantage of learning multimodal priors in latent space over a standard Gaussian prior, which cannot capture the multimodality of the data in the latent space.\\n- We include LNFMM-MSE, where we perform supervision by directly minimizing the mean squared error along the shared d' dimensions of text and image encodings, i.e. $||(z_t)_d'-(z_v)_d'||^2$. We remove the invertible neural network $f_{\\\\phi_s}$ for this ablation. We include this baseline to show the effect of the invertible neural network $f_{\\\\phi_s}$ for aligning the latent representations of images and texts for cross-modal tasks (e.g. 
image captioning).\\n-In LNFMM-TXT, we remove the domain-specific information component from the image pipeline, i.e. $p_{\\\\phi_v}$. Here, we have $z_v = z_s$. The domain-specific component is included only for texts, i.e. $p_{\\\\phi_t}$. This ablation is included to show the benefits/effects of domain-specific components for learning latent representations on cross-modal tasks (e.g. image captioning).\\n- LNFMM (semi-supervised, 30% labeled) includes the results of our model in a semi-supervised setting, in which we used 30% of the training data for supervision, i.e. image-caption pairs for 30% of the training data. The remaining images and captions are included in an unpaired fashion. \\n\\nWe have updated the manuscript to make these ablation settings clearer. \\n\\n* It's also not clear why the authors call the approach a semi supervised setup:\\nThe approach is semi-supervised since unpaired images and texts can also be included during training in addition to paired images and captions (texts). We show the results for the semi-supervised setup in Tab. 2, where from the COCO dataset we used 30% of the training data for supervision (through pairs) and the remaining data is included as unpaired (unaligned) images and texts (see also Reviewer 3).\\n\\nWe would be happy to answer any additional questions.\"}",
"{\"title\": \"Code Reproduction\", \"comment\": \"Thanks for the responses!\\n\\nJust to clarify I am able to use your code to reproduce the results, just not if I follow the paper itself. Can you please clarify why you update the Latent Parameters every other iteration (if that is happening in the training code)? You update the discriminator at an odd iteration and at an even iteration you update your model.\\n\\nFor the affine architecture, in NICE the hidden layers are a function of the partition of the input (x1 and x2), but yours is a function of the hidden layers along with the condition. Can you please explain why it is necessary to concatenate the condition at every layer? (lines 135-148 , latent align modules). Based on other conditioning based VAEs many of them concatenate the condition only on the first layer.\"}",
"{\"title\": \"Clarifications Provided\", \"comment\": \"Thankyou for your interest in our work.\\n\\n*Choice of lambda values*\\nThe regularization parameters were chosen to maximise accuracy on the validation set (4000 points) of the MSCOCO dataset consistent with previous work. We identified a parameter range by grid search and then performed random sampling in that range. \\n\\n*Autoencode_image*\\nIn Autoencode_image we return the reconstruction and the loss. However, that particular loss was a left over from debugging. However, we have since cleaned the code and a newer version is available at the same link with better readability. Please note, that the reconstruction is not a function of the negative log likelihood.\\n\\n*Affine coupling layer of latent affine module*\\nWe use affine coupling layers based on NICE (Dinh et al. 2015) where the scale parameter is not conditioned on the input. Please note that there is a typo in the Appendix (we cited 2017 instead of 2015 paper), we will fix this. \\n\\nPlease note that we could reproduce our results from the publically available code.\"}",
"{\"title\": \"Code Reproduction\", \"comment\": \"I am unable to follow parts of the code and reproduce the results shared in the paper.\\n\\nIn the main training file how are the lambda values chosen ? Currently they are set as 20, 0.6 (lambda2), 25, 500, and 1.2 . \\n\\nIn the function autoencode_image, why is the reconstruction computed as a function of the negative log likelihood (rec_loss + 10*nll) .\\n\\nIn latent_align_modules the shared dimension is modeled as a conditional affine coupling layer, what is the architecture of this layer ? If this follows RealNVP, usually the scale is a function of MLP and the input, but in this case it is an individual parameter.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper introduces a variational model for text to image and image to text mappings. The novelty consists in separating the modeling of text and image latent representations on one hand and the modeling of a shared content representation on the other hand. Priors for text, image and shared representations are generated through an invertible \\u2013 flow model. The motivation for this is to allow for complex priors. Training for the shared component is supervised using aligned text and image data, while training for the residual text and image components is unsupervised. Experiments are performed for text and image generation, using training data from the COCO dataset.\", \"the_proposed_model_presents_several_innovations\": \"separate unsupervised modeling of text and image and joint supervised modeling of shared latent variables, the use of three normalizing flows for the priors respectively associated to these variables. The intuition behind the model is well introduced. However, the technical description of the model itself is somewhat imprecise. Particularly section 3.3 describing the shared component and the global model should be carefully checked. Both descriptions are too imprecise e.g. the d\\u2019 dimensional component of z_v, z_t are not introduced; the derivation or explanation of eq. (10) is not provided, J_phi in eq (9) not defined, etc. There are some typos or erros, check eq (7), (8), q_phi I instead of q_theta in \\u00a73.3.\\nThe experiments compare the model with several different baselines and are quite extensive. Please indicate whether you performed all the tests yourself or picked the numbers in the literature. A better description of the baselines characteristics and of the model variants (MSE, TXT) and their relations with the proposed model, in this paragraph, would help appreciate the results.\\nThe proposed model seems to compare well with different baselines, but the presentation of the experiments is not that clear. For example, the ablation study in Table 1 basically shows that the Phi_s flow component behaves similarly to the complete flow model. This MSE variant is not used anymore in the other comparisons, why? Same remark for the other baselines, why some are used and some not in the different tests? The same remark hold for the text to image experiments.\\n Overall, there are interesting new ideas, a new model, insufficient model description and experiments details. \\n\\n\\n----- After rebuttal -----------\\n\\nThe authors made an effort to clarify and correct the technical errors. I still have some concerns with confusing notations. But I keep my score.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThe paper proposes a model for joint image-text representations \\n\\nThe paper proposes a model fro cross-domain generative tasks, specifically image captioning and text-to-image synthesis. The proposed Latent Normalizing Flows for Many-to-Many Mappings uses normalizing flows to model complex joint distributions. The latent representation consist of domain-specific representation and cross-domain information shared across image and text using invertible metrics.\", \"novelty\": [\"The paper explores an interesting area of learning representations for cross-domain tasks such as image captioning and text-to-image synthesis.\", \"The model is well explained. Section 3 explains the model formulation nicely, has consistent notation and slowly builds up to the final formulation by explaining each component concisely.\", \"Recent methods like VQ-VAE [1] have shown promising results for image generation. The related work doesn't provide any discussion regarding that.\", \"Experiments / Analysis:\", \"The model contains exhaustive experiments for both image-captioning and image generation. The model performs on-par or beat state of the art methods on both perceptual and diversity metrics. On diversity metrics, the model performs much better than other recent methods like Seq-CVAE (which arrived on ArXiv only a few weeks prior to the submission deadline) and POS.\", \"The model also shows results on text-to-image synthesis comparing with multiple baselines and various diversity metrics and inception score.\", \"While the proposed methods beats existing diversity and perceptual metrics, it'd be good to also run a human study since these metrics are only a proxy to human judgement.\", \"Apart from showing empirical result, the paper can benefit from providing Insights what the domain specific representation has learnt and what the cross-domain representation learnt.\", \"Can the model benefit from training on unaligned image and textual data to learn better domain specific representations?\"], \"clarity\": \"- Ablations not clear: It's not a 100% clear from the paper what LNFMM-MSE and LNFMM-TXT mean. \\\"LNFMM-TXT contains unsupervised dimensions only for the text distribution and\\nall encoded image features are used for supervision, i.e.without f\\u03c6v\\\" What does this sentence mean? Similarly, it's not clear what \\\"LNFMM (semi-supervised, 30% labeled)\\\" mean?\\n- It's also not clear why the authors call the approach a semi supervised setup? For instance, the paper relies on supervision from paired image-caption data to train the model. \\n\\n[1] Neural Discrete Representation Learning; Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu\\n\\n\\n**Update after rebuttal**\\nThank you for clarifications to my questions regarding the ablations and using unaligned image and textual data to learn better domain specific representations. The visualizations in Appendix A.4 are also somewhat helpful in understanding what the representations have learnt. After reading the rebuttal, and considering that they will run the human study to further validate their approach in the final manuscript, I am happy to raise the score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper addresses the problem of many-to-many cross domain mapping tasks (such as captioning or text-to-image synthesis). It proposes a double variational auto-encoder architecture mapping data to a factored latent representation with both shared and domain-specific components. The proposed model makes use of normalizing flow-based priors to enrich the latent representation and of an invertible network for ensuring the consistency of the shared component across the two autoencoders. Experiments are thorough and demonstrate results that are competitive or better than the state-of-the-art.\", \"decision\": \"This work is a good example of meticulous and well-executed neural network engineering. It combines well-known ideas (variational auto-encoders, normalizing flow priors, invertible networks) into an effective and working solution for a complicated problem. The model is shown to bring improvements in the state-of-the-art over several metrics and benchmarks. The manuscript is well written and easy to follow (provided some technical familiarity with variational inference). It includes all the necessary details for understanding the method. Related works appear to have been discussed and compared properly, although I cannot assess if important works on cross-domain mapping are missing. For these reasons, I recommend this work for acceptance without reservation.\", \"additional_feedback\": [\"Above Eqn 5: K- divergence --> KL divergence\", \"The code could have been cleaned up and better organized, for easier reproducibility and reuse.\", \"---\"], \"post_rebuttal_update\": \"Thank you for your answers. I still recommend your work for acceptance.\"}"
]
} |
SJem8lSFwB | Dynamic Model Pruning with Feedback | [
"Tao Lin",
"Sebastian U. Stich",
"Luis Barba",
"Daniil Dmitriev",
"Martin Jaggi"
] | Deep neural networks often have millions of parameters. This can hinder their deployment to low-end devices, not only due to high memory requirements but also because of increased latency at inference. We propose a novel model compression method that generates a sparse trained model without additional overhead: by allowing (i) dynamic allocation of the sparsity pattern and (ii) incorporating feedback signal to reactivate prematurely pruned weights we obtain a performant sparse model in one single training pass (retraining is not needed, but can further improve the performance). We evaluate the method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models and further that their performance surpasses all previously proposed pruning schemes (that come without feedback mechanisms). | [
"network pruning",
"dynamic reparameterization",
"model compression"
] | Accept (Poster) | https://openreview.net/pdf?id=SJem8lSFwB | https://openreview.net/forum?id=SJem8lSFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"XrZHOGwyal",
"zBufjTfpaf",
"HCVhUOGSE",
"S1gYqRc9oS",
"SklAJuquiS",
"H1xetvcdjH",
"rJefXD9dsB",
"SJl7CI5_jB",
"BJenAkApFH",
"S1la1ROTYB",
"H1ePQhnstS",
"HyxogpQvtB",
"rkebs7-rtB",
"S1e6DmR2OS",
"rkeUA7QH_S",
"HkgY5-Of_S"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1579195314188,
1577673075068,
1576798746029,
1573723792707,
1573591013901,
1573590904374,
1573590809643,
1573590730899,
1571835860390,
1571814884532,
1571699742912,
1571400947448,
1571259288997,
1570722661468,
1570218957866,
1570042256518
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2317/Authors"
],
[
"~Shangyu_Chen1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2317/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2317/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2317/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2317/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2317/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2317/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2317/AnonReviewer3"
],
[
"~Erich_K_Elsen1"
],
[
"ICLR.cc/2020/Conference/Paper2317/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2317/Authors"
],
[
"~Utku_Evci1"
],
[
"ICLR.cc/2020/Conference/Paper2317/Authors"
],
[
"~Utku_Evci1"
]
],
"structured_content_str": [
"{\"title\": \"Clarify\", \"comment\": [\"Thank you for your interest in our paper.\", \"We would like to point out that Dynamic Network Surgery (DNS) prunes after the training while our DPF performs dynamic reparameterization during the training. More precisely, DNS requires to first train a dense model from scratch through a standard training procedure. Its pruning is only performed on a well-trained model. This pattern is illustrated in Figure 2 and Table 1 of their paper, and thus DNS should be classified into the category of \\\"pruning after the training\\\".\", \"The general way of updating the model is similar in DNS and DPF. However, it differs in the following points:\", \"DPF generalizes the idea of DNS and provides a general convergence analysis. Our improved solution (DPF) can achieve SOTA model compression results even without extra fine-tuning.\", \"We simplify the masking function and avoid introducing two extra hyper-parameters (their equation 3) as in DNS.\", \"Our training is end-to-end. DNS requires to prune the convolutional layers and fully connected layers separately to avoid vanishing gradient problem.\", \"DNS reparameterizes the model around the local minima (pruning after the training) while DPF dramatically reparameterizes the model during the training (from scratch). The reparameterization differences in different phases are illustrated in our paper (Figure 4 and Figure 7).\", \"The intuition of dynamic reparameterization in DNS and DPF is different:\", \"The mask update scheme in DNS is triggered stochastically. For convergence reasons, the triggering probability is monotonically non-increasing and towards 0 in terms of update steps. BTW, up to my understanding, the pruning sparsity in DNS is not incrementally increased.\", \"DPF uses a constant reparameterization step over the whole training procedure where the mask will automatically converge to a stable state. We avoid the careful triggering function design used in DNS.\"]}",
"{\"title\": \"Connection with Dynamic Network Surgery\", \"comment\": \"Hi !\\n\\nCongratulation on your acceptance!\", \"i_got_a_small_question_when_reading_your_paper\": \"I found the proposed algorithm (if I understand correctly) is very similar to Dynamic Network Surgery (NIPS2016), as: 1) Both of the papers update the mask during training (Eq.DPF is similarly used in both papers). 2) Full-precision weights are updated even pruned, so that pruned weights can be recovered. 3) Both of the papers use iterative (dynamic) pruning to incrementally increase sparsity.\\n\\nI notice you also cite Dynamic Network Surgery, but is is in \\\"Pruning after training\\\". In my understanding, Dynamic Network Surgery should be classified into \\\"Pruning during training\\\" since it continuously prunes according to the update of network (thus similar with the framework employed in this paper).\\n\\nI may make some misunderstanding in several points. Thanks in advance if you can correct and solve my concerns.\\n\\nBest regards,\\nShangyu\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a new, simple method for sparsifying deep neural networks.\\nIt use as temporary, pruned model to improve pruning masks via SGD, and eventually \\napplying the SGD steps to the dense model. \\nThe paper is well written and shows SOTA results compared to prior work. \\n \\nThe authors unanimously recommend to accept this work, based on simplicity of \\nthe proposed method and experimental results. \\n \\nI recommend to accept this paper, it seems to make a simple, yet effective \\ncontribution to compressing large-scale models.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for addressing my comments. There is a minor typo in the revised version page 7, section 6, second sentence \\\"grantees\\\" -> \\\"guarantees\\\".\\n\\nI will leave the score unchanged and vote for accepting this work.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"Thank you for your review.\\n\\nWe have updated the draft to address your concerns. \\nIn particular, we explain the connection to the error feedback framework better (see footnote on page 4 which makes the connection very explicit), fixed the typo, updated the colors in Figure 6 and added a short discussion of the relation to STE.\", \"we_did_further_clarify_why_we_think_that_dpf_can_profit_from_fine_tuning\": \"Whist Figure 4 shows that a large fraction of the mask elements converge, however, a few elements are still fluctuating even at the end of training (approximately 5% (depending on dataset/model) of the active (non-pruned) weights). Thus, after fixing the final mask, fine-tuning of the weights to the chosen mask can provide additional benefits. A similar behavior (Figure 7) can be found for ResNet-20 on CIFAR-10.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"Thank you for your review. We have fixed the typo (1.) in our revision. We hope we can clarify your concerns (2.) on the performance of DPF:\\n\\n[Superior performance]\\nThe superior performance of our method originates in the flexible error feedback scheme.\\nOur scheme can be incorporated with different pruning criteria with less hyper-parameter tuning than other, more specialized, approaches. We believe that the generality and simplicity of our scheme enable good performance across all tasks. Consequently, we expect algorithms that are fine-tuned to specific architectures or tasks could perform better, though we did not observe this in the experiments so far.\\n\\nThe superior performance is not due to the implementation details. All our evaluations are performed under a fair experimental setup by using a similar pruning configuration, the released codes and recommended hyper-parameters for the competitor methods. A side note for Table 2: for pruning model with limited capacity in dense space (e.g. ResNet-20) and high target sparsity ratio (e.g. 95%), our method sometimes cannot find a much better sparse model than Incremental (ZG, 2017) if no fine-tuning is involved.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for your review. We have updated the draft and answer below to your specific questions:\\n\\n[Connection to You et al]\\nThank you for pointing out the recent parallel work You et al. We have cited this work in the related work section. We explain the key differences below:\\n1. The Tick-Tock framework introduced in You et al. is only validated on filter pruning while our current submission focuses on unstructured weight pruning (mainly) and filter pruning.\\n2. The Tick-Tock pruning framework (Figure 2 in You et al.) requires a pre-trained model while our method allows to (1) train a compressed model from scratch with trivial extra cost, and (2) pruning a pre-trained model (we will add additional new experimental results confirming this application to the appendix).\\n3. Our method is simpler and easier to implement than Tick-Tock. Tick-Tock involves multiple phases of pruning and finetuning: the Tick phase learns the filter importance with the subset of data samples, and the Tock phase fine-tunes the sparse model on the full data samples. Instead, our method reparametrizes the sparse model via a standard single training pass.\\n4. The Tick-Tock framework is more close to ZG17 than to DPF. They finetune/tock the sparse model while we update the model on the dense space via the error-feedback scheme.\\n\\n[Without \\u2018forcing\\u2019 the sparsity]\\nWe agree with the reviewer that our method does not use l1-regularization to `force` the original weight w to be sparse. Instead, we directly prune weights by increasing order of importance, until reaching our specified target sparsity (magnitude-based pruning) and our error feedback training scheme allows the weight to be flipped back to recover the damage from improper pruning. Even though the initial weights are not sparse, our method will always reach the expected target sparsity (w.r.t. the considered layers) after training.\\n\\nIn comparison to L1-based methods, our approach has no additional pruning-specific hyperparameters, and thus simplifies usage while still reaching SOTA results. We directly use the hyperparameters from the original training scheme of the dense model.\\n\\n[The training efficiency]\\nAs demonstrated in Figure 12 in the Appendix, our proposed method enables to train a sparse model from scratch with trivial computational overhead for task on the scale of Imagenet.\\n\\nThe current submission focuses on verifying the effectiveness of the proposed method (in terms of test performance). A more efficient implementation can further improve the training efficiency for better speedup, for instance, (1) get the gradients at the sparse model (mentioned in the footnote at page 3), (2) automatically control the reparameterization space by using the runtime information (e.g. as shown in Figure 4). We leave such specific improvements for future work.\"}",
"{\"title\": \"Revision 1\", \"comment\": \"We would like to thank all reviewers for their valuable comments and questions. We have prepared an updated version of the manuscript (fixing typos mentioned by the reviewers and added few clarifications as mentioned in the other comments).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2317\", \"review\": [\"Main contribution of the paper\", \"The paper proposes a new pruning method that dynamically updates the sparse mask and the network weight.\", \"Different from the other works, the proposed method does not require post-tuning.\", \"A theoretical explanation of the method is provided.\", \"Methods\", \"In this method, the weight of the baseline network is updated not by the gradient from the original weight but pruned weight.\", \"Here, pruning can be conducted by (arbitrary) a pruning technique given the network weight (Here, the author uses the magnitude-of-the-weight given method from Han.et.al).\", \"Questions\", \"See the Concerns\", \"Strongpoints\", \"The author provides the simple and effective pruning method and verifies the performance with a sufficient amount of experiments.\", \"The author argues that the method is applicable to various pruning techniques.\", \"Concerns\", \"It seems that the paper omits the existing work (You.et.al - https://arxiv.org/pdf/1909.08174.pdf), which seems to share some contribution. The reviewer wants the author to clarify the differences and the strongpoints compared to the work.\", \"The main pruning&update equation (DPF) does not seem to force the original network w to become sparse, such as by l1-regularization. So, the reviewer worried that the method might not produce sparsity if the initial weights are not that sparse.\", \"If the reviewer missed the explanation about this, clarify this.\", \"Regarding the above concern, what if we add regularization term in training the original network w?\", \"As far as the reviewer knows, the proposed method improves the sparsity of the network, but most works choosing the strategy actually cannot meaningfuly enhance the operation time and just enhances the sparsity. Does the author think that the proposed method can enhance the latency? If so, a detailed explanation or experiment will be required.\", \"Conclusion\", \"The author proposes a simple but effective dynamic pruning method.\", \"The reviewer has some concerns regarding the novelty, real speed up, and guarantee of the sparsity.\", \"However, the reviewer thinks that this work has meaningful observations for this field with a sufficient amount of verification, assuming that the author's answers for the concerns do not have much problem.\", \"Inquiries\", \"See the Concerns parts.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors proposed a novel model compression method that uses error feedbacks to dynamically allocates sparsity patterns during training. The authors provided a systematic overview of a good number of existing model compression algorithms depending on the relative order of pruning and training processes. The effectiveness of the proposed algorithm is illustrated by comparing its generalization performance with 6 existing algorithms (and their variants) with two standard datasets and various networks of standard structures. The authors also showed the convergence rate and the fundamental limit of the proposed algorithm with two theorems.\\n\\nThis paper is well-written and very pleasant to read. I would like to accept this paper. But since I have never actually done research in model compression, I would say this is my 'educated guess'.\", \"some_quick_comments\": \"1. I did not go through the proofs of the two theorems. But it seems that there is a typo in the definition of strong convexity on Page 4: '\\\\Delta f(w)' should be '\\\\Delta f(v)'. I assume that this is just a typo. \\n2. Sorry again for not knowing the details of the baseline algorithms. According to Table 1 and Table 2, the proposed method (DPF) outperforms all the baseline algorithms, without a single exception, which looks suspicious for me. After reading the paper, I still don't understand why this should be the case. Is this due to some implementation details? Can you think of some scenarios that the proposed algorithm may not be the one to go with? In other words, when the experiment seems to show that one algorithm absolutely outperforms all the other existing algorithms, there should be some take-home message on why, or some known limitations of the proposed method.\"}",
"{\"title\": \"Some Thoughts\", \"comment\": \"For what it's worth, I've preferred to stop updating the mask when the final sparsity is reached because it combines well with EMA, which is useful in some problems (like RNNs for TTS). It also (anecdotally) hasn't seemed to help to let it continue updating.\\n\\nAs someone who would very much like a better pruning technique for training models to deploy it would be very helpful to understand if the increased performance is due to increasing the FLOPs required for inference. If the accuracy can be increased under the same FLOP budget that would be more useful (to me anyway) in practice than a constant parameter budget. I look forward to the ablation.\\n\\nFor the models that you are comparing with (80% sparse, original or extended training), the only non-uniformity in the models from Gale19 is that the first layer is dense and all remaining layers have a uniform sparsity. For the higher sparsity extended training models the final fully connected layer remains at 80% and the sparsity of the intermediate layers is adjusted higher to compensate. All intermediate layers still have the same sparsity.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work proposes a simple pruning method that dynamically sparsifies the network during training. This is achieved by performing at fixed intervals magnitude based pruning for either individual weights or entire neurons. While similar methods have been explored before, this work proposes a slight twist; instead of updating the weights of the model by following the gradient of the parameters of the dense model, they update the parameters of the dense model according to the gradients of the sparse model. Essentially, this corresponds to a variant of the straight-through estimator [1], where in the forward pass we evaluate the compressed model, but in the backward pass we update the model as if the compression didn\\u2019t take place. The authors argue that this process allows for ``feedback\\u201d in the pruning mechanism, as the pruned weights still receive gradient updates hence they can be ``re-activated\\u201d at later stages of training. They then provide a convergence analysis about the optimization procedure with such a gradient, and show that for strongly convex functions the method converges in the vicinity of the global optimum, whereas for non-convex functions it converges to the neighbourhood of a stationary point. Finally, the authors perform extensive experimental evaluation and show that their method is better than the baselines that they considered.\\n\\nThis work is in general well written and conveys the main idea in an effective manner. It is also a timely contribution as sparse models / compression are important topics for the deep learning community. The overall method seems simple to implement, doesn\\u2019t introduce too many hyper-parameters and seem to work very well. For this reason I tend towards recommending for acceptance, provided that the authors address /comment on a couple of issues I found in the draft.\", \"more_specifically\": [\"The connection to the error feedback is kind of loose and not well explained. After skimming Karimireddy et al. I noticed that 1. it evaluates the gradient at a point (i.e. the current estimate of the parameters), 2. compresses said gradient, 3. updates the parameters while maintaining the difference of the original w.r.t. the compressed gradient. In this sense, it seems a bit different that DPF, as your notation at the first equation of page 4 implies that you take the gradient of a different point, i.e. w_t + e_t instead of w_t. I believe that expanding a bit more about the connection would help in making the manuscript more clear.\", \"There seems to be a typo / error on your definition of an m-strongly convex function at the \\u201cconvergence of Convex functions\\u201d paragraph. I believe it should be <\\\\nabla f(v), w-v> <= f(w) - f(v) - 0.5 m ||w - v||^2, instead of <\\\\nabla f(w), w-v> <= f(w) - f(v) - 0.5 m ||w - v||^2.\", \"The proposed gradient estimator seems to be an instance of the STE [1] estimator, that, as the authors mention, has been using at the Binary Connect algorithm. 
It would be interesting to see some more discussion about this similarity perhaps also expanding upon recent work that discusses the STE gradient as a form of coarse gradient [2].\", \"At section 5.2 the authors mention that \\u201cdynamic pruning methods, and in particular DPF, work on a different paradigm, and can still heavily benefit from fine-tuning\\u201d. This claim seems to contradict the results at Figure 4; there it seems that the masks have \\u201cconverged\\u201d in the later stages of training, hence one could argue that the fine-tuning already happens thus it wouldn\\u2019t benefit DPF. I believe it would be interesting if the authors provide a similar plot as the one in Figure 4 but rather for the ResNet-20 network on CIFAR 10 (which seems to benefit heavily from FT). Do the masks still settle at the end of training (as it was the case for WideResNet-28-2) and if they do, why is fine-tuning still increasing the accuracy?\", \"Minor: Try to use consistent coloring at Figure 6 as while (a), (b) share the same color-coding, (c) is using a different one hence could be confusing.\", \"[1] Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, Yoshua Bengio, Nicholas L\\u00e9onard, Aaron Courville, 2013\", \"[2] Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets, Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin, 2019\"]}",
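For reference, the corrected inequality proposed in the review above is the standard first-order characterization of an m-strongly convex function f:

```latex
\langle \nabla f(v),\, w - v \rangle \;\le\; f(w) - f(v) - \tfrac{m}{2}\,\|w - v\|^2
\quad\Longleftrightarrow\quad
f(w) \;\ge\; f(v) + \langle \nabla f(v),\, w - v \rangle + \tfrac{m}{2}\,\|w - v\|^2 .
```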
"{\"comment\": \"Thank you for your questions!\\n\\n[Answer to Q1 and Q3: Understanding and Grouping of Methods]\\nBoth ZG17 and Gale19 terminate the mask update when reaching the target sparsity. The ability to update the mask after reaching the target sparsity is implemented, but not used in ZG17 (up to our understanding). Also, Gale19 does not use this feature (the same Tensorflow API and do intensive hyper-parameters tuning; [1]). Our previous statements are on top of these observations and thus argue that ZG17 might not recover from premature pruning. \\n\\nAs mentioned in our previous answer, we will carefully polish the related work section and the grouping of the methods as by your suggestion. However, please note that the statement in Figure 1 only applies to 'incremental methods' that can by definition (see also Figure 1) not recover from pruning errors. \\n\\n[Answer to Q2: Model Pruning Baseline]\\nWe performed a fair comparison with our baselines (by using their codes and recommended hyper-parameters) in our current submission; we reached similar sparsity (same sparsity evaluation function w.r.t. whole model) under the same target sparsity, where we ignored the pruning for the bias, bn, and fully connected layers as most of the papers. Note that most papers only report the reached sparsity for the pruned layers while our reported \\u2018reached sparsity\\u2019 is in terms of the whole model.\\n\\nWe agree it is difficult to directly compare our previous ImageNet results with Gale19 due to different sparsity ratios and different type prune layers. Thus, we present our new results below (only ignore bias and bn layers as in Gale19), where we train for the same 90 epochs and reach the same sparsity (w.r.t. the whole model).\\n * top-1 acc drop (0.80 target model sparsity)\\n * DPF: -0.82\\n * Gale19: -1.10\\n\\n[Answer to Q4: Uniform vs Non-Uniform Layer Sparsities]\\nThis is an interesting question. We will try to provide an ablation study about how the sparsity ratio over layers (uniform/non-uniform layer sparsities) will impact the total FLOPs. Also, w.r.t. your comment about Gale19 the results provided by Gale19 also use non-uniform sparsities across layers [2].\\n\\n-----\\nReference.\\n[1] Looking at the code from line 276 to line 289 at https://github.com/google-research/google-research/blob/master/state_of_sparsity/sparse_rn50/imagenet_train_eval.py, we can witness that the used `end_pruning_step` is equivalent to `sparsity_function_end_step`, indicating that the mask will not be updated after reaching the target sparsity.\\n[2] https://github.com/google-research/google-research/tree/master/state_of_sparsity#trained-checkpoints\", \"title\": \"Response to \\\"Additional Comments and Questions\\\"\"}",
"{\"comment\": \"Thanks for your reply, it helps clear some things up, but I still have some questions.\\n\\n1) Similarities to the Model Pruning:\\n(i) I missed the dense gradient update part. That is indeed different than the model pruning. Thanks for the clarification.\\n(ii) There are few other places in the paper where it is implied that the current pruning algorithms are unable to recover from their mistakes, i.e. Page-4: 'in contrast to incremental pruning approaches that have to stick to sub-optimal decisions' and Figure-1: 'Cannot recover from premature pruning.' It might be accurate to update those, too. \\n(iii) With ZG17's library, It is possible to update connections after the target sparsity is reached using 'end_pruning_step' and 'sparsity_function_end_step' arguments. Default value for end_pruning_step seems to be -1, which enables mask updates until the end of the training. \\n\\n2) Model Pruning Baseline\\nI agree comparing baselines across different settings is a bit tricky. The comparison of models with 73.5% (yours) and 80% sparsity (Gale19), and then also 82.6% sparsity (yours) and 90% sparsity (Gale19) seems inaccurate. \\nGiven the numbers available to us, it seems like the best comparison we can make is between models with 80% sparsity (Gale19) and 82.6% sparsity (yours). \\nIn this case, I note that Gale19 achieves -1.10 accuracy drop while your technique achieves -1.44. When allowing for longer training Gale19 achieves -0.16 accuracy drop.\\n\\n(3) Grouping of Methods\\nDynamic-incremental pruning separation is not clear to me. Is the main difference that the sparsity of the model oscillates around a level (mid, low, high) (see Fig:1)? It is also not clear to me why the sparsity oscillates during training for DPF. \\nBoth ZG17 and your proposed method require training to be dense, while the methods of SM and DSR do not, perhaps they should be categorized differently. They are more close to the 'prune before training' methods than to the 'dynamic' ones. \\n\\n(4) Uniform vs Non-Uniform Layer Sparsities\\nGale19 and ZG17 uses uniform sparsity across layers which makes total FLOPs to scale with (1-sparsity) directly. However your method employs global pruning and allows redistribution of sparsity across layers. We know that global methods tend to sparsify earlier layers less aggressively [Liu17] increasing total FLOPs. It would be nice to see how resulting sparsity distribution looks like with DPF and its effect to the total FLOPs.\", \"title\": \"Additional Comments and Questions\"}",
"{\"comment\": \"Thanks for your interest! Our method is different from ZG17 in three main points illustrated below, and we did compare with (and outperform) fine-tuned ZG17 (our ZG17 baseline has similar performance as the one in Gale19 under a fair comparison).\\n\\n[Answer to Q1: Key differences between ZG17 and our scheme]\\n(i) ZG17 does not update weights in the dense model that are currently masked/pruned; in our scheme we apply the gradient updates to *all* weights in the dense model.\\n(ii) ZG17 updates the mask only when the sparsity is changed (according to a prescribed schedule) while our scheme updates the mask periodically (independent of the current sparsity ratio). An ablation study of the reparameterization period is illustrated in Figure 9a (page 16) in our Appendix. In both schemes, pruned weights can \\u2018flip back\\u2019. We will update this inaccurate statement in the next revision.\\n(iii) In ZG17, once the model achieves the target sparsity, the weight masks are no longer updated. In contrast, we perform dynamic reparameterization (i.e. changing the mask) over the whole training procedure, even when the target sparsity is reached. An illustration of the mask flipping behavior during the training can be found in Figure 4 (page 8). \\n\\nFor comparison, we derived a similar plot for ZG17 that we will include in the next revision. The data shows that our scheme changes up to extra ~40% more of the mask elements than ZG17, and thus explores a larger space.\\n\\n[Answer to Q2: Comparing to fine-tuned ZG17 implementation in Gale19]\\nWe did compare to ZG17. Our implementation of ZG17 had a similar quality drop (for ResNet50 on ImageNet) as in Gale19 (under a fair comparison). The detailed explanations are below:\\n(1) In Gale19, they trained ResNet50 on ImageNet by increasing the number of training steps (1.5x), in terms of extending the region when performing the gradual pruning scheme. Thus, the total number of training epochs (to achieve the best performance) in Gale19 is increased from (the standard one) 90 epochs to 105 epochs. The increased training epochs/flops are significant in terms of ImageNet scale experiments.\\n(2) For ImageNet experiments (and other experiments in our paper), we focused on performing a fair comparison (in Table 3 on page 7) where every method uses the same and standard training scheme (e.g. the number of epochs, learning rate schedule) and thus in the paper we only compared with ZG17 (we fixed the gradual pruning scheme). We also carefully checked the results in Gale19 and below are the differences (by default all methods use the same training epochs):\\n* Top-1 acc drop (0.80 target model sparsity)\\n * Our reimplementation of ZG17: -1.70\\n * Gale 19 implementation of ZG17: -1.10\\n * Our scheme DPF: -0.47\\n* Top-1 acc drop (0.90 target model sparsity)\\n * Our reimplementation of ZG17: -2.59\\n * Gale 19 implementation of ZG17: -2.80\\n * Gale 19 implementation of ZG17 + extra 15 epochs: -1.60\\n * Our scheme DPF: -1.44\\nNote that we picked the best performance from [1] for each sparsity level and calculated the quality loss compared to the baseline performance. Also, note that unlike Gale19, all methods evaluated in our paper DO NOT use label smoothing, which is known as a powerful trick to (potentially) improve the performance of ImageNet training.\\n(3) We would like to emphasize that adding more training tricks (e.g. 
label smoothing, mixup) to improve the performance is orthogonal to our work, as it can improve the performance for both dense baselines and pruned models. Also, as pointed out by Table 2 (page 7), DFP+finetuning (FT) can further (significantly) improve the performance, which is much better than ZG17 + FT.\\n\\n----\\nReferences\\n[1] https://github.com/google-research/google-research/blob/master/state_of_sparsity/results/sparse_rn50/technique_comparison/rn50_magnitude_pruning.csv\", \"title\": \"Response to \\\"Connections to Model Pruning\\\"\"}",
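For context on the ZG17 gradual pruning schedule referenced throughout this thread: Zhu & Gupta (2017) increase the sparsity from an initial value $s_i$ to the final target $s_f$ over $n$ pruning steps of spacing $\Delta t$, starting at step $t_0$:

```latex
s_t = s_f + (s_i - s_f)\left(1 - \frac{t - t_0}{n\,\Delta t}\right)^{3},
\qquad t \in \{t_0,\; t_0 + \Delta t,\; \dots,\; t_0 + n\,\Delta t\}.
```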
"{\"comment\": \"Hi,\\n\\nI just read your paper. It is well written and has comprehensive experiments. I'd liked to ask a few questions:\\n\\n1) Looking at your method, I am not sure I understand the difference between your method and Zhu, 2017 (ZG17)'s. At related work it says: '...where pruned weights are not allowed to flip back.' I am afraid this is not true. Their code can be found here: https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/contrib/model_pruning/python/pruning.py . They change the mask without setting the pruned connections to 0, so that they can become alive again in the next pruning iteration. Looking at their code your algorithm and theirs seem quite similar, except the global pruning criteria. I might be missing something. Would you mind clarifying the differences between your work and ZG17's? \\n\\n2) I think you should compare your results with Gale, 2019 (https://arxiv.org/abs/1902.09574) since they use ZG17's code and do proper hyper-param tuning. Their results, as far as I know, SOTA for Resnet-50 pruning and it is the same exact method as ZG17. Their results can be found here: https://github.com/google-research/google-research/tree/master/state_of_sparsity .\\n\\nThank you\", \"title\": \"Connections to Model Pruning\"}"
]
} |
H1lQIgrFDS | $\ell_1$ Adversarial Robustness Certificates: a Randomized Smoothing Approach | [
"Jiaye Teng",
"Guang-He Lee",
"Yang Yuan"
] | Robustness is an important property to guarantee the security of machine learning models. It has recently been demonstrated that strong robustness certificates can be obtained on ensemble classifiers generated by input randomization. However, tight robustness certificates are only known for symmetric norms including $\ell_0$ and $\ell_2$, while for asymmetric norms like $\ell_1$, the existing techniques do not apply. By converting the likelihood ratio into a one-dimensional mixed random variable, we derive the first tight $\ell_1$ robustness certificate under isotropic Laplace distributions. Empirically, the deep networks smoothed by Laplace distributions yield the state-of-the-art certified robustness in $\ell_1$ norm on CIFAR-10 and ImageNet. | [
"adversarial robustness certificates",
"randomized smoothing",
"robustness",
"important property",
"security",
"machine learning models",
"strong robustness certificates",
"ensemble classifiers",
"input randomization",
"tight robustness certificates"
] | Reject | https://openreview.net/pdf?id=H1lQIgrFDS | https://openreview.net/forum?id=H1lQIgrFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"34y-FdHzqx",
"rJgvdv2isB",
"rkgObP2isS",
"HyxgtI3soS",
"H1xWW82jiH",
"rkgZIzOmqS",
"rJxnt1oTtB",
"SklWazkntr",
"HkeDpYAGYH",
"rJgZs-Waur",
"SJl2_f2QuH",
"rJlGE2gQur",
"ryxTo1HWOB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798746000,
1573795695184,
1573795583585,
1573795448347,
1573795321252,
1572205129284,
1571823491665,
1571709625235,
1571117503353,
1570734488952,
1570124403774,
1570077737551,
1569963940561
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2316/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2316/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2316/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2316/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2316/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2316/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2316/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2316/Authors"
],
[
"~Anthony_Wittmer1"
],
[
"~Bai_Li1"
],
[
"ICLR.cc/2020/Conference/Paper2316/Authors"
],
[
"~Bai_Li1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"After reading the author's response, all the reviewers agree that this paper is an incremental work. The presentation need to be polished before publish.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We thank the reviewer for the insightful comments and questions. Please also see our general response above.\", \"q1\": \"Thanks for the suggestions. We will revise the abstract in later revision.\", \"q2\": \"Sorry for the confusion. M is T(x), which is a mixed random variable.\", \"q3\": \"Fig. 3 shows an example of CDF of a mixed random variable M to better understand T(x). Mixed random variables are neither discrete random variables nor continuous random variables (e.g., the sum of a geometric random variable and a Gaussian random variable).\\n\\nIn Fig. 3, M=X \\\\mathbb{I}(X !\\\\in [0.95,2.95])+Pr(X \\\\in [0.95, 0.95+2/3])*\\\\delta(x;a=0.95+2/3) +Pr(X \\\\in [0.95+2/3, 0.95+4/3])*\\\\delta(x; a=0.95+4/3)+Pr(X \\\\in [0.95+4/3, 0.95+2])*\\\\delta(x; a=2.95). (\\\\delta(x;a) is a dirac delta function, X ~ Exponential(1)). Similarly, T(x) is a mixed random variable, and follows a similar CDF.\", \"minor_measure_theoretic_clarification\": \"by definition, a mixed random variable does not admit a probability density function, although a mixed random variable can still have continuous range.\", \"q4\": \"Yes, the multi-class setting is developed in Theorem 1. Note that Pa and Pb in Theorem 1 denote the prediction probabilities for the most probable and the second most probable classes, respectively, in a multi-class setting.\", \"q5\": \"Wang et al. developed a theoretically motivated approach to improve ResNet models. However, such improvement cannot be practically certified: it relies on an attack algorithm (e.g., PGD) to show robustness. In contrast, we can compute a robustness certificate, which *proves* that no adversary exists within the certified region. We have updated our paper and made a clarification. Thank you for your reference!\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We thank the reviewer for the insightful comments and questions. Please also see our general response above.\\n\\nRe \\u201cafter noting that the radius can be deduced from the work of Li et al.\\u201d: \\n\\nThis seems unfair for evaluating this work. After the first proof of tight results with Gaussian distribution by Cohen et al. (2019), Levine et al. (2019) and Salman et al. (2019) find simpler ways to derive the certificate by other approaches. These follow-up works do not invalidate Cohen et al. (2019). Similarly, Li et al. did not have the Laplace result before the dissemination of this work, and thus their deduction does not invalidate this work (let alone the fact that we even prove the tightness).\\n\\nRe \\u201ca justification for why would one prefer a Laplacian noise of a Gaussian noise\\u201d:\\n\\nOne justification is that, since Laplace distributions puts more weight on the center than Gaussian distributions, Laplace noises are less prone to (negatively) impacting the prediction of the base classifier f than Gaussian noises. Indeed, taking a ResNet110 model on CIFAR-10 (trained without smoothing), we can obtain 24.8% accuracy by using a Laplace noise (variance = 0.12), while the Gaussian noise with the same variance would yield 23.7% accuracy. Here the accuracy is computed with respect to predictions of the base classifier instead of the labels (to illustrate how the smoothing impacts the predictions). \\n\\nWe formalize the intuition in terms of the sensitivity of the noise distributions with respect to their hyperparameters (\\\\lambda and \\\\sigma), and prove that the Laplace noise is less sensitive than the Gaussian noise in terms of negatively impacting the base classifier f. This implies that it is easier to set the hyperparameter for the Laplace noises than Gaussian noises. For a detailed justification, please see Appendix D in the updated version. \\nWe do acknowledge that the two distributions are equally competent since the certifiable range are both [0, \\\\infty). However, the resulting certificates of the two distributions are quite different in practice. An analogy would be architecture design in deep learning research. While existing architectures already exhibit universal approximation / turing completeness, new architectures with suitable inductive bias still improve the empirical performance quite a lot. Here Laplace noises can be regarded as an infinite mixture of L1 balls, which may be a suitable inductive bias for L1 robustness. Empirically, we indeed found that Laplace noises are much better than Gaussian noises for L1 robustness.\", \"technical_clarification\": \"the L1 and L2 certificates of the Gaussian noise are exactly the same (there is no \\\\sqrt{n} scaling). The reason is that given any L1/L2 radius r, we can show that the perturbation [r, 0, 0, \\u2026, 0] will be a theoretical worst case for Gaussian noises. As a result, the worst cases in L1 and L2 coincide, so the resulting certificates are exactly the same. One may prove the result by the fact that all the points within an L2 sphere have the same worst case prediction value under Gaussian noise (see Cohen et al.).\\n\\nQ1-Q4. Sorry for the confusion. M is T(x). We have corrected some figures and rearranged our writing. Thank you for the suggestions.\", \"q5\": \"Inconsistency between Cohen et. al. and Lecuyer et. al. in Figure 6 with Figure 5 of Cohen et al.\", \"a\": \"Cohen et. al. 
shows the result of \\\\ell_2 norm radius, while ours shows the \\\\ell_1 norm radius. Lecuyer et al. have two certificates (for \\\\ell_1 and \\\\ell_2, respectively). In our paper we use the \\\\ell_1 version while in Cohen et al. they use \\\\ell_2 version, so they look different.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We thank the reviewer for the insightful comments and questions. Please also see our general response above.\", \"q\": \"Importance of L1 perturbation\", \"a\": \"Our explanation is that L1 distance is easier to interpret than L2 distance since L1 is simply the summation of absolute values without a nonlinear square root. Also, L1 has been widely studied in literature for measuring sparsity (thus connecting to sparse adversarial perturbations).\"}",
"{\"title\": \"General response\", \"comment\": \"We thank the reviewers for the insightful comments and questions.\\n\\nWe would like to clarify the novelty and significance of this paper. \\n\\nThis is an initial but important attempt towards tight results for general Lp norm and other distributions that inevitably involve mixed random variable analysis (cf. Gaussian and discrete) and asymmetric norms (cf. L2 and L0). While existing approaches fails in these challenging cases, our approach illustrates that tight results are still viable, and shines light on how these challenging cases can be tackled in general.\\n\\nThis work should be treated as an (Laplace, L1) analogy to the (Gaussian, L2) case proved by (Cohen et al.), which also improves (Lecuyer et al.) and proves that their result is tight.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper provides a random smoothing technique for L1 perturbation and proves the tightness results for binary classification case. Overall, there are some new results in this paper -- establishing a new certificate bounds for L1 perturbation model. However, I have several concerns about whether this contribution is significant enough:\\n\\nRandom smoothing has been studied extensively recently and the proof technique in this paper is not so different from previous papers (Cohen et al, Li et al). Also, there were L0 perturbation bounds proposed by (Leet et al). Therefore, although I agree that a tighter certified bound compared to (Lecuyer et al) is new, the paper seems to be a bit incremental. It will be more interesting to see if the proposed technique/theorem can be used for a wider range of norms. \\n\\nAlso, it may be more interesting to add some discussions about why L1 perturbation is important for image classification (is it more human-imperceptible?)\\n\\n=======\\n\\nI have checked the rebuttal and other reviewers' comments. Although there are interesting components in this paper, I do agree that the paper is incremental given that many random smoothing methods have been proposed recently for L2, L_infty norms. Therefore I think this is a borderline case and will be ok with rejection.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary.\\n\\nThe authors propose a new certified classifier in \\\\ell_1 norm that is tight. That is to say, upon smoothing a given classifier f with Laplacian noise, a smoothed version of that classifier (probabilistic maximum majority vote) is certified with a radius measured in \\\\ell_1 norm. The authors show that this bound is tight for binary classifiers. These results are complementary to Cohen et al. results.\\n\\nMajor comments.\\n\\n1) The major contribution of this paper is the tightness under the \\\\ell_1 norm for a binary classifier. I do not find this particularly significant. The question is of what value is such a result other than a mathematical exercise. For instance a good justification that the paper is lacking could be one where authors show that their radius is indeed tighter than all other works. The paper still lacks this (I will elaborate on this later), although, their bounds are indeed tighter than Lecure's et al. Since it is not clear whether or not the new certified smoothed classifier has indeed the largest radius among all other works, then at least a justification for why would one prefer a Laplacian noise of a Gaussian noise. Why is Gaussian smoothing sufficient for this purpose given that we do not know for sure that the radius is larger? What value/advantages does this add? The authors motivate their work by saying deriving the tightest \\\\ell_1 is difficult due to the \\\"asymmetry\\\" of the norm. While I do agree on this; however, this is not enough motivation as we we are doing doing abstract maths here.\\n\\nThe new derived radius is not really comparable to the Gaussian radius with \\\\ell_2 radius and this is my major concern. By norm equivalence, we have that \\\\ell_2 \\\\leq \\\\ell_1 \\\\leq \\\\sqrt{n} \\\\ell_2 where n is the dimension. That is to say that the radius computed with \\\\ell_1 is larger than the \\\\ell_2 in some cases by a square root of dimension. The authors can correct me on this if I'm wrong, but for a fair comparison in worst case sense the radius of Cohen et al. should be scaled by \\\\sqrt{n}. In such a scenario, it is really difficult to understand when does it make sense to tackle such a smoothing technique as opposed to Gaussian smoothing.\\n\\nI would not have asked the authors about such a question if the authors derived generic radius under \\\\ell_p smoothing (which is difficult of course). To this end, I believe since the motivation is not clear nor the results are generic enough, I find the work incremental specifically after noting that the radius can be deduced from the work of Li et al. where the main contribution here is the tightness of the radius for a binary classifier.\\n\\n\\nMoreover, I believe the paper still requires some polishing in terms of writing and presentation.\\n\\nSome more comments.\\n\\nI believe the paper can benefit from some rewriting. Here is a list of things the authors can do to improve the paper.\\n\\n1) Define what M is, page 3 \\\"and it is easy to see that M is a mixed random variable\\\". I believe the authors meant T(x).\\n2) The figures are hardly readable. For instance, authors can perhaps increase the legend's font size in figures 4. 
Also the chosen colors are suboptimal (perhaps the line width of the plots) should be increased. \\n3) The section below Theorem 3 should be moved up to before Theorem 3 as this discusses the proof of Theorem 2. Once a Theorem is presented, the proof sketch should follow.\\n4) Experiments on the undefended classifier has to be in Figures 6 7 and 8.\\n5) Lastly, why are comparison between Cohen et. al. and Lecuyer et. al. in Figure 6 inconsistent with Figure 5 of Cohen et al.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the author derived a tight ell_1, which is not the symmetric norm, robustness certificates under isotropic Laplace distributions. Experimentally, the authors showed that the deep networks smoothed\\nby Laplace distributions yield the state-of-the-art certified robustness in ell_1 norm on the CIFAR-10 \\nand ImageNet. To find the ell_1 certificate, the authors first identified the tight robustness certificate, for attacking the model in one particular direction, say the first direction. To show that any other perturbation directions cannot lead to a worse result, the authors convert the d dimensional likelihood function into a one-dimensional function, and the authors used relaxation for different perturbations and show that the worst-case result is bounded by the previously identified direction. However, I have the following concerns about this work:\\n\\n1. Theoretically, the authors only showed the certificate is tight for binary classification. I would suggest\\nthe author change their claim in the abstract.\\n\\n2. What is M on page 3 which is used without definition after definition 1?\\n\\n3. Can you give a concrete continuous probability distribution that leads to the scenario in Fig.~3?\\n\\n4. Can you extend the analysis to a multi-class classification scenario?\\n\\n5. Besides randomized smoothing on the input images, recently Wang et al showed that randomize the deep nets can\\nalso improve the deep nets and they gave it a nice theoretical interpretation. Here is the reference: Bao Wang, Binjie Yuan, Zuoqiang Shi, Stanley J. Osher. ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies, arXiv:1811.10745, NeurIPS, 2019\\n\\nOverall, since this work is a straightforward integration of some existing work, I think this\\npaper lack novelty. Please address the above questions in rebuttal.\"}",
"{\"comment\": \"Dear Anthony,\\n\\nThank you for the reference! The results in Pinot et al. (2019) are very great. The work addressed adversarial robustness for risk (expectation over data distribution). We will definitely add a discussion paragraph in the next revision. \\n\\nHowever, the paper (Pinot et al., 2019) is fundamentally different from ours. We work on robustness certificates for any (x_i, y_i) pairs, while Pinot et al. (2019) work on robustness guarantee for risk. Specifically, for every given x_i, we can really compute a radius R, such that for any perturbation \\\\delta s.t. \\\\|\\\\delta\\\\|_1 < R cannot alter the prediction (i.e., we guarantee g(x_i) = g(x_i + \\\\delta)). Pinot et al. (2019) gave robustness guarantee for risk (including generalization gap), and they did not provide robustness certificates. \\n\\nAs a side node, we would like to point out that we also proved the *tightness* of our L1 certificates, including both upper and lower bounds, which is new and non-trivial. Pinot et al. (2019) did not show tightness results. \\n\\nBest Regards, \\nAuthors\", \"title\": \"responses to \\\"A closely related paper\\\"\"}",
"{\"comment\": \"Great work and I really enjoy reading it.\\n\\nHowever, previous work has studied the robustness theory of randomization techniques on the general family of exponential distributions. Please check out this paper [1], where the randomized models with the Laplace distributions are also considered.\\n\\n In my opinion, a discussion/comparison seems due.\\n\\n[1] Theoretical evidence for adversarial robustness through randomization. NeurIPS 2019\", \"title\": \"A closely related paper\"}",
"{\"comment\": \"Thank you for the responses!\\n\\nI definitely agree that proving the tightness of the L1 certificates is an important contribution. I am not aware of such a tightness, although I have had the same bound. We are going to update our paper and acknowledge your results for the comprehensiveness.\\n\\nBest,\\nBai\", \"title\": \"responses\"}",
"{\"comment\": \"Dear Bai,\\n\\nThank you for the comment! \\n\\nIn our paper, we have made it clear that the first part in our upper bound theorem is the same as Lecuyer et al. (2019). It is very nice to know that the second part can be derived using your framework based on Renyi divergence. We will definitely acknowledge that it is possible to derive that bound under your framework in the paper through later revision. It is indeed a great addition to our paper to diversify the methods for deriving robustness certificates. Thank you!\\n\\nHowever, in order to make it clear (for the reviewers), we want to emphasize that this work is the first work to establish the certificate (Eq. (1)), no matter how it can be derived. Moreover, your previous paper does not subsume our results for the following reasons:\\n\\n1. One of our main contributions is proving the *tightness* of the L1 certificates, including both upper and lower bounds. Similarly, Cohen et al. (2019) use the same algorithm as yours (i.e., Gaussian perturbation), but their results are still very interesting, because they were able to prove that their certificate is tight on L2. In our case, while the L1 bound may be derived in different ways (which is not established in the literature, though), we can further prove that the bound is tight, which is new and non-trivial. \\n\\n2. In your paper, you were analyzing Gaussian perturbations on L2, but our paper uses Laplace distribution on L1. To use your framework to prove our upper bound results, one needs to rewrite the proof for your theorem 2 on Laplace distribution, and pick alpha->\\\\infty for Lemma 1. In other words, although your proof framework is handy, our upper bound is not a trivial corollary of your theorem. \\n\\nBest Regards,\\nAuthors\", \"title\": \"responses to \\\"a connection to existing work\\\"\"}",
"{\"comment\": \"Thank you for the interesting work.\\n\\nI would like to point out that in equation (1), while the first part \\u03bb/2*log(PA/PB) is equivalent to the bound from Lecuyer et al. (2019), the second part \\u2212\\u03bb log(1 \\u2212 PA + PB) can be derived from Li et al. (2019) in Lemma 1 when alpha->\\u221e, noticing the Renyi divergence of Laplacian distributions is 1/(\\u03b1\\u22121)log(\\u03b1/(2\\u03b1\\u22121)exp((\\u03b1\\u22121)*R/\\u03bb)+(\\u03b1\\u22121)/(2\\u03b1\\u22121)exp(\\u2212\\u03b1*R/\\u03bb) which converges to R/\\u03bb when alpha->\\u221e. It also gives the same tight bound in the binary case where R = \\u2212\\u03bb log[2(1 \\u2212 PA)].\\n\\nIt would be a great addition to your paper if you can make this connection clear. Thank you!\\n\\n[1] Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J Hsu, and Suman Jana. Certified\\nrobustness to adversarial examples with differential privacy. ieee symposium on security and\\nprivacy, 2019.\\n\\n[2] Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Second-order adversarial attack and\\ncertifiable robustness. arXiv: Learning, 2018.\", \"title\": \"a connection to existing work\"}"
]
} |
rJxGLlBtwH | On the interaction between supervision and self-play in emergent communication | [
"Ryan Lowe*",
"Abhinav Gupta*",
"Jakob Foerster",
"Douwe Kiela",
"Joelle Pineau"
] | A promising approach for teaching artificial agents to use natural language involves using human-in-the-loop training. However, recent work suggests that current machine learning methods are too data inefficient to be trained in this way from scratch. In this paper, we investigate the relationship between two categories of learning signals with the ultimate goal of improving sample efficiency: imitating human language data via supervised learning, and maximizing reward in a simulated multi-agent environment via self-play (as done in emergent communication), and introduce the term supervised self-play (S2P) for algorithms using both of these signals. We find that first training agents via supervised learning on human data followed by self-play outperforms the converse, suggesting that it is not beneficial to emerge languages from scratch. We then empirically investigate various S2P schedules that begin with supervised learning in two environments: a Lewis signaling game with symbolic inputs, and an image-based referential game with natural language descriptions. Lastly, we introduce population based approaches to S2P, which further improves the performance over single-agent methods. | [
"multi-agent communication",
"self-play",
"emergent languages"
] | Accept (Poster) | https://openreview.net/pdf?id=rJxGLlBtwH | https://openreview.net/forum?id=rJxGLlBtwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vOTurDFmk",
"ryxFUWSwoB",
"r1g_7brwsB",
"H1xXheHvoH",
"r1gUKlHviB",
"SyltwyrDsS",
"S1eg12kTcS",
"BJxoajDCtS",
"HJgVjdDRFB",
"SkeTV81RKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745969,
1573503313399,
1573503263754,
1573503147195,
1573503101815,
1573502816556,
1572826072504,
1571875779201,
1571874972371,
1571841589096
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2315/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2315/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2315/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2315/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2315/Authors"
],
[
"~Yuchen_Lu1"
],
[
"ICLR.cc/2020/Conference/Paper2315/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2315/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2315/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper investigates how two means of learning natural language - supervised learning from labeled data and reward-maximizing self-play - can be combined. The paper empirically investigates this question, showing in two grounded visual language games that supervision followed by self-play works better than the reverse.\\n\\nThe reviewers found this paper interesting and well executed, though not especially novel. The last is a reasonable criticism but in this case I think a little beside the point. In any case, since all the reviewers are in agreement I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answers re: OR game\", \"comment\": \"Thanks for your interest in our work!\\n\\n1) By assigning each object a unique word, we simulate a perfectly compositional language. One population of speaker and listener are trained on a fixed order of words. Different populations are trained on different permutations of the language. So there is no varying permutation during training. \\n\\n2) During self-play, the message received by the listener is the same as the one sent by the sender.\\n\\nHope this helps!\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for the kind and insightful review.\\n\\n> Naming not consistent\\n\\nYes, we indeed mean ec2supervised as sp2sup and sched as sup2sp in the first paragraph of Section 5. Thanks for pointing this out, we will correct all the naming errors and ensure consistency throughout in the final version.\\n\\n> Adding error bars\\n\\nYes, the current values are the mean over runs from 5 different seeds. We will add error bars to show variance in the final version.\\n\\n> The conclusion in Section 7 that \\u2018S2P performs much worse than the other options\\u2019 is contrary to previous results.\\n\\nIn this Section, we are referring to the \\u201cpoor performance of the *sup2sp* method\\u201d. The sup2sp method is defined in Figure 2 and Section 3.3, and refers to one of the many instantiations of the (more general) S2P framework. So what we are saying is not that S2P performs worse, but that sup2sp performs worse than other methods of S2P. This is shown in Figure 7a (not Figure 6, as we mistakenly wrote in the text). The reason for this is that, since self-play is a form of regularizer (see Figure 7b), doing self-play updates often leads to worse performance on the task (until more supervised updates are done). So, if you finish training with self-play updates without performing supervised updates, your performance will be quite low. We will clarify this in the final version. \\n\\n> Add more hyperparameter details\\n\\nThank you for pointing this out, we will indeed update and clarify this in the final version. \\n\\n> Figure 3b, colour is representative of performance. Is this mean accumulated reward? Please clarify to increase how informative this visualisation is, as currently it is unclear if yellow or blue is the desired value.\\n\\nYes, it shows the performance (mean accumulated reward) over 50 pairs of speakers and listeners, so a higher value (yellow) is desirable. It shows that the performance of both the speakers and listeners when paired randomly is found to be quite variable, although we do observe a slight preference towards their own partner (yellow diagonal). Due to space constraints, we chose to omit this but we will clarify this in the final version.\\n\\n> if all these methods are \\\"well known ways to combine self-play and supervised learning\\\" can all be supported by an (or preferably multiple) exemplar publications that used these method previously. \\n\\nWe do refer to Lewis et al. 2017 published at EMNLP\\u201917 (oral) and Lazaridou et al. 2016 published at ICLR\\u201917 (oral) for two S2P methods, sched_frz and rand respectively. The sup2sp and sp2sup are the baseline models which we use for comparing other sophisticated approaches, and which we compare to show that supervised learning before self-play generally improves performance (Section 5). The sched_rand_frz is a novel extension to the sched_frz method which we found was more stable and sometimes performed slightly better. \\n\\n> Reference errors, fig refs/naming errors \\nThanks for pointing out these, we will add references to the published versions of these papers and fix the references/naming to the figures in the text.\"}",
"{\"title\": \"Response to Reviewer #2 (part 2/2)\", \"comment\": \"Clarification questions\\n\\n> What exactly does Figure 4c compare? Are both methods distilled from ensembles or is the blue line normal S2P while the other is distilled from an ensemble of compositional languages? It's not clear since point (3) in section 5 refers to the S2P result (not Pop-S2P) in that plot. I'm also assuming that PB-S2P means the same thing as Pop-S2P, but that's not made clear anywhere. Does PB stand for Population Based?\\n\\nFigure 4c compares two distilled policies. One is distilled from S2P populations (trained with X samples), and one is distilled from \\u2018perfect emergent communication languages\\u2019 (defined in the text) and fine-tuned on X samples. So both are population-based. We apologize for the naming error, by PB-S2P we indeed mean Pop-S2P. \\n\\n> In the rand setting how is convergence defined? Do both objectives need to converge or just one?\\n\\nFor the rand setting, both the objectives need to converge since we define convergence based on the performance of the listener on $\\\\mathcal{D}_{val}$. Fig 9 in Appendix shows that they indeed converge after certain number of train steps.\\n\\n> In the sched_rand_frz setting what is r?\\n\\nWe define r as the probability of freezing the speaker parameters as mentioned in Section 3.3. The actual number was mistakenly commented out in the submitted version. For reference, we use l=50 and m=50 for sched, r=0.5 for sched_rand_frz, and q=0.75 for rand. We will update this in the final version.\\n\\n> In the IBR how are the distractor images picked?\\n\\nThey are picked using a uniform random distribution over all the images available in the dataset.\\n\\n> Can't both self-play and supervision be used at the same time (just use a weighted combination of the two objectives)? I don't think the paper ever did this but it seems like a very useful variation to consider.\\n\\nYes this can indeed be done, by mixing gradient updates from both self-play and supervision in a single batch. This is quite close to the \\u2018random\\u2019 schedule (which alternates every example), and we don\\u2019t expect to see much difference, although it could indeed be tried.\"}",
"{\"title\": \"Response to Reviewer #2 (part 1/2)\", \"comment\": \"We thank the reviewer for the kind and insightful review.\\n\\n> Point 3 of section 5 isn\\u2019t very surprising, since most emergent comm doesn\\u2019t do fine-tuning to transfer to human language. The point would be stronger if \\u2018translation layers\\u2019 were considered.\\n\\nIn point 3 of section 5, we provide evidence for the claim that doing supervised learning before self-play is better than the converse. Indeed, while the question of how to bridge the gap from emergent communication to natural language is of significant interest to the emergent communication community (see e.g. the upcoming NeurIPS Workshop on \\u2018bridging the gap from emergent communication to natural language\\u2019 https://sites.google.com/view/emecom2019/), you are right that there is no consensus as to how this should be done, and very few papers (Lazaridou et al 2016, Havrylov et al 2017) have examined this explicitly (although there are many papers that examine how to make emergent language more compositional to be closer to human language). Two natural ways to do this are: (1) to fine-tune an emergent language using human data (similar to our single-agent sp2sup), or (2) to use a population of emergent languages as a kind of \\u2018prior\\u2019 that can then be fine-tuned with human data (our population-based sp2sup is inspired by this). You are correct that there is also some work (Andreas et al., 2017) on translating (e.g. computing an alignment) between the emergent language and natural language, and we think an interesting next step would be to compare the task performance of these models to various S2P schedules. \\n\\n\\n> Unclear if the results in figure 7a are significant\\n\\nWe will add error bars to the final version of our paper, which will clarify these concerns. Indeed, the difference between some of the schedules are not significant (we found that the standard error was in the range of 1-2%). Our analysis shows that there is not one S2P method that is clearly superior to all others, but slight improvements could be gained from switching schedules (which may be task dependent). \\n\\n> Skepticism about whether these trends will generalize to other tasks / models\\n\\nWe are assuming that by \\u2018trends\\u2019, the reviewer is referring to the order of performance of different S2P schedules. Our belief is that there are some trends that will hold when moving to other tasks / models (the lower performance of sp2sup and sup2sp). However, as mentioned above, our analysis showed that while there is indeed some variation between different S2P methods, there is no method that is clearly superior to all others. We suspect that using a schedule (potentially with parameter freezing) will perform better than the random schedule, however this may be task-dependent. We will clarify this view in the paper. \\n\\n> Minor weaknesses\\n\\nThank you for pointing these out, we will fix them in the final copy.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your kind and insightful review!\\n\\n> Considering other natural language tasks, e.g. negotiation or recommendation\\n\\nWe agree that there are other tasks in NLP that have more direct applications than the OR and image-based referential (IBR) games. The reason we selected the IBR game is because it\\u2019s the most common game with natural language that has been used in previous work on emergent communication (e.g. [1, 2, 3]). Thus, it makes sense to compare various supervised and self-play schedules on this task. We agree that a strong step for future work would be expanding to other complex natural language tasks such as negotiation (note that the suggested \\u2018language recommendation\\u2019 paper came out on arXiv only two weeks before the ICLR submission deadline, and was accepted only after the deadline). \\n\\n> Adding non-task related metrics to study the language. \\n\\nThis is an interesting suggestion. To help understand the difference in language generation policies for different S2P schedules, we will add qualitative examples to the Appendix, and investigate which other metrics we could add to compare the generated languages (most likely perplexity on the validation set, as it\\u2019s unclear how one would automatically measure \\u2018fluency\\u2019 and \\u2018consistency\\u2019). In our experience, adding self-play usually results in a decrease in perplexity (because you are adding an objective that\\u2019s not maximum likelihood), in exchange for better performance on the task. \\n\\n> Population-based S2P is incremental and unrelated\\n\\nYes, we agree that introducing population-based S2P is somewhat orthogonal to the main theme of our paper. We think population-based approaches have a lot of potential for improving grounded language learning with self-play especially when language tasks become more complex. This is because, for more complex language tasks, we hypothesize that self-play will result in larger deviations from natural language, and population-based approaches can help alleviate this. While our current distilled Pop-S2P result doesn\\u2019t yet reach the ensemble result, the S2P ensemble results in Figure 6 have an improvement at least as large over the single-agent S2P result (8.8% for 1k samples, and 4.1% for 10k samples), than single-agent S2P has over the supervised learning baseline without self-play (7.8% for 1k samples, and 4.4% for 10k samples). Also, population-based methods help with some parts of our analysis (for example, generating Figure 4c: the \\u2018perfect emcomm baseline\\u2019 wouldn\\u2019t be as intuitive without having the distiller trained on a population of such languages \\u2014 see our response to Reviewer #2). With this being said, we agree that the presentation of this result could be changed in our paper to de-emphasize it, and will work on this in the final version. We\\u2019d love to hear if the reviewer has recommendations for this. \\n\\n> Test on more variations of data size for better visualization\\n\\nWe did run experiments with 2k and 5k samples and show the training curves for different S2P methods in the Appendix. Due to space constraints, we chose to show results on only 1k and 10k in the main text of the paper. However, we can go ahead and add another graph showing performance vs. # samples to the final version of the paper. 
\\n\\n> Figures and captions need to be improved\\n\\nWe thank the reviewer for these observations, and will make the changes in our final version.\", \"references\": \"[1] Multi-agent communication and the emergence of (natural) language, Lazaridou et al, 2016. \\n[2] Compositional obverter communication learning from raw visual input, Choi et al, 2018.\\n[3] Emergent communication in a multi-step, multi-modal referential game, Evtimova et al, 2017.\"}",
"{\"title\": \"About Implementation of the OR Game\", \"comment\": \"Dear Authors,\\n\\nThanks for the interesting work. I am currently trying to build upon your Obect Reconstruction settings. I realize that you mentioned the target language is in arbitaray order. E.g. \\\"blue triangle shaded\\\" would be the same as \\\"blue shaded triangle\\\". My question is\\n1. During pretraining/supervised learning, is speaker/listener trained on different permutation of the language\\n2. During selfplay, will the communication channel send the permuted message from speaker to listener?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper investigated how two conflicting learning objectives; supervised and self-play updates could be combined with a focus on visual-grounded language tasks. With a different set of their combinations, the authors empirically found that alternating two learning updates may result in the best equilibrium state; consistency with samples in the supervised dataset and optimal state with high rewards in the task environment.\\n\\nThe paper is very well-written, and I really enjoyed reading it overall. There are some typos, presentation issues, and minor format issues (e.g., wrong naming) though. I do like this kind of simple but insightful result with enough empirical observations and discussions. Even though there is not that novel method proposed, the overall message found from the experiments, their interpretation by the authors, and meaningful comparisons to the past works in emergent communication are fair enough to learn high scientific values from it. The design of the experiment is again very simple (e.g., changing the size of data, switching two setups in different ways) but clear to understand. This work is a good example of how well-designed hypotheses and their empirical validation could contribute to the field. I also appreciate the large spectrum of literature surveys including from the recent advances (Lewis et al., 2017, Lee et al., 17) to the past literature in emergent communications such as Littman (1994) and (Farrell & Rabin, 1996). \\n\\nOne of my concerns is the lack of applications, especially on the tasks using more natural language. The two tasks; OR and IBR, seem to be very limited settings to evaluate how self-play operates with data supervision. As pointed out by the authors, supervision from the training data itself may include most of the unexplored cases of the task, leading a less chance to learn policies from the high rewards. I think more realistic tasks using natural language need to be considered: negotiation (e.g., Lewi\\u2019s task, \\u201cDecoupling strategy and generation in negotiation dialogues\\u201d), recommendation (e.g., \\u201cRecommendation as a Communication Game: Self-Supervised Bot-Play for Goal-oriented Dialogue\\u201d), and more. I agree with the point made by the authors that this work mainly focuses on investigation rather than exploitation. But, then it would be adding another emergent task where the self-play can learn many more policies than one in the supervised dataset. \\n\\nAdding to the point, I was expecting to see non-task related metrics to measure the effectiveness of their appropriate combinations. For example, it would be better to add language-side metrics (e.g., perplexity, fluency, consistency) to measure how language degeneration varies by the different combinations. This issue is not addressed in the paper, and I guess this is mainly because of the limited usage of language in the two limited tasks. If the paper is only focusing on emergent language which is related to specific tasks, it would be better to tone-down a little bit and state the major difference of it with natural language. \\n\\nThe population-based S2P seems to be a bit incremental and unrelated to the main theme of the paper. 
To me, the motivation of adding POP into S2P based on the policy variability is somewhat different from the original claim about the combination of supervised and selfplay. Also, the improvements on IBR in Figure 7 are incremental, making the major claim of this work little divergent. \\n\\nIn terms of presentation, if you like to show how performance changes over the different sizes of data, it would be better to show it by graphs over different variations instead of the bar charts only with 10k and 50k sizes. In addition, the figures and captions need to be improved for better interpretation. I think they are written in a hurry or changed a lot in the last minutes. Please see some minor formatting issues below.\", \"minor_comments\": \"Duplicate reference of (Lewis et al., 2017)\\nSome names defined in Section 3.3 and Section 5 are not exactly matched.\\nFigures and fonts in Figures 4 and 7 are a little difficult to understand. Especially, I can\\u2019t understand the two upper figures in Figure 4a\\nCaptions in Figure 4 are not matched with the sub-figures. \\nFigure r4b -> Figure 4b\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n---\\n\\n(motivation)\\nTo develop language speaking agents we can teach them to mimic human language\\nor to solve tasks that require communication. The latter is efficient, but\\nthe former enables interpretability. Thus we combine the two in an attempt\\nto take advantage of both advantages. This paper studies a variety of ways to\\ncombine these approaches to inform future work that needs to make this tradeoff.\\n\\n(approach)\\nThe trade-off is studied using reference games between a speaker and a\\nlistener. Goal oriented _self-play_ and human _supervision_ are considered two contraints one\\ncan put on a network during learning. This work considers algorithms that vary\\nwhen self-play and supervision are used (e.g., training with self-play then supervision,\\nor supervision then self-play, or alternating back and forth between the two).\\nAdditional variations freeze the speaker or distill an ensemble of agents into one agent.\\n\\n(experiments)\\nA synthetic Object Reference game (OR) and a Image-Base Reference game (IBR) with real images are used for evaluation. Performance is accuracy at image/object guessing.\\n1. (OR) Like previous work, this work finds that emergent languages are imperfect at supporting their goals and cannot be understood by agents that only understand a human language like English.\\n2. (OR) Pre-training with supervision then fine-tuning with self-play is superior to pre-training with self-play then fine-tuning with supervision. This is presented as surprising from the perspective of language emergence literature, which is though of as pre-training with self-play.\\n3. (IBR) Distilling an agent from an ensemble of 50 independently trained agents outperforms training single agents from scratch, but is still not as good as the whole ensemble.\", \"self_play_vs_supervision_schedules\": \"4. (IBR) Supervision (using image captions) followed by self-play performs much worse than all other approaches.\\n5. (IBR) Alternating between supervision and self play (e.g., randomly choosing supervision or self-play every iteration) performs best.\\n\\n\\n\\nStrengths\\n---\\n\\nThe curricula considered by this paper seem to have a sigificant impact on performance. These are new and could be important for future work on language learning, which may have considered the sup2sp setting from figure 7a without considering the sched setting.\\n\\nThe diversity of experiments provided and the analysis help the reader get a better sense for how emergent communication models work.\\n\\nIt's nice to see experiments on both a toy setting and a setting with realistic images.\\n\\nFuture directions suggested throughout the paper are interesting.\\n\\n\\nWeaknesses\\n---\\n\\n\\n* The 3rd point of section 5 is presented as a major conclusion of this paper, but it is not very surprising and I don't see how it's very useful. The perspective of language emergence literature is presented a bit strangely. The self-play to supervision baseline seems to be presented as an approach from the language emergence literature. I don't think this is what any of that literature promotes exactly, though it is close. 
Generally, I (and likely others) don't think it's too surprising that trying to fine-tune a self-play model with language supervision data doesn't work very well, for the same reasons cited in this paper (point 3 of section 5). I think the general strategy when trying to gain practical benefits from self-play pre-training is a translation approach where the learned language is translated into a known language like English rather than trying to directly align it to English as does the supervision approach in this paper. This particular baseline would be more useful if the paper considered learning some kind of translation layer on top of the self-play pre-trained model.\\n\\n* How significant are the performance differences in figure 7a, especially those between the frozen and non-frozen models? Is the frozen model really better or this performance difference just due to noise?\\n\\n* I'm somewhat skeptical that these trends will generalize to other tasks/models. The main goal of this paper is to inform future work. That makes it even more important than normal that the trends identified here are likely to generalize well. Are these trends likely to generalize well? Does the paper address when these trends are expected to hold anywhere?\", \"minor_presentation_weaknesses\": [\"Figure 4: I think the sub-figures are mis-labeled in the caption.\", \"In the related work I'm not sure the concept of generations is right. I think it should refer to different languages of different agents across time rather than different languages of the same agent across time.\", \"Missing details / clarification questions:\", \"What exactly does Figure 4c compare? Are both methods distilled from ensembles or is the blue line normal S2P while the other is distilled from an ensemble of compositional languages? It's not clear since point (3) in section 5 refers to the S2P result (not Pop-S2P) in that plot. I'm also assuming that PB-S2P means the same thing as Pop-S2P, but that's not made clear anywhere. Does PB stand for Population Based?\", \"In the rand setting how is convergence defined? Do both objectives need to converge or just one?\", \"In the sched_rand_frz setting what is r?\", \"In the IBR how are the distractor images picked?\"], \"suggestions\": \"* Can't both self-play and supervision be used at the same time (just use a weighted combination of the two objectives)? I don't think the paper ever did this but it seems like a very useful variation to consider.\\n\\n\\nPreliminary Evaluation\\n---\", \"clarity\": \"The writing is fairly clear, though some details are lacking.\", \"significance\": \"This work could help inspire some future models in the language emergence literature.\", \"quality\": \"Experiments are aligned with the paper's goals and support its conclusions.\", \"originality\": \"The distillation approach and curricula are novel.\\n\\nOverall the work could prove to be an interesting and useful reference point inside the language emergence literature so I recommend it for acceptance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper explores the effect of ordering supervised learning and self-play on the resultant language learnt between agents. The topic is of high relevance to the ICLR community and makes several interesting insights useful to anyone learning control of a multi-agent system where communication amongst agents is applicable. I have several suggestions for improvements below, but all I believe are feasible to make within the time period of the rebuttal with the most necessary being:\\n\\n1) The naming of methods in Section 5 is not consistent with those introduced in section 3.3. For example, in the first paragraph ec2supervised is presumably sp2sup and sched is presumably sup2sp? Similarly, on page 7 (S2P and Pop-S2P) and Figure 4. Please revise and ensure consistency throughout.\\n\\n2) Figures 4b, 6 and 7 only present single values. Are these average values from repeated runs? If so please quantify variance.\\n\\n3) The conclusion in Section 7 at the bottom of page 8 that \\\"S2P performs much worse than the other options\\\" is contrary to previous results. Can the authors please comment on what features of the environment caused this difference?\\n\\n4) Appendix A includes details of hyperparameters, but some details remain unclear. Specifically, hyperparameter ranges swept over are shown but how were they then chosen from? Are they optimised for each environment and algorithm? What does the bold text in the table represent? If it is chosen values, why do only some parameters have chosen values? These are important details to enable reproduction of the paper.\", \"minor_comments\": \"In Section 3.3, if all these methods are \\\"well known ways to combine self-play and supervised learning\\\" can all be supported by an (or preferably multiple) exemplar publications that used these method previously. Directly linking each to the previous work will further clarify the contribution this specific paper makes and help readers new to the area gain insight across the multiple papers this work builds upon.\\n\\nFigure 3b, colour is representative of performance. Is this mean accumulated reward? Please clarify to increase how informative this visualisation is, as currently it is unclear if yellow or blue is the desired value.\\n\\nPage 5, small typo \\\"introduced in the Lee et al. (2017)\\\" should be \\\"introduced in Lee et al. (2017)\\\".\\n\\nFigure 4b, the legend is blocking two bars and their corresponding value. It looks like moving to the bottom right may help, or placing above the plot.\\n\\nFigure 4 caption refers to a subfigure (d) that is not included.\\n\\nOn Page 6, the reference to babbling equilibrium should include a citation for interested readers to learn more about this well established concept.\\n\\nOn Page 7 there is a reference to Figure r4b, is this intended to be a reference to Figure 4b right?\\n\\nFigure 6 appears after Figure 7. Maintaining ordered numbering would be preferable.\\n\\nOn Page 9 it is noted some experimental results are in the Appendix but as there is a page and a half of space remaining before the 10 page limit, I would encourage to include all results in the main body of the paper.\\n\\nMultiple references do not list a publication venue (e.g. Evtimova et al., Lazaridou et al. 2018, Tieleman et al. 
2018) or cite Arxiv versions when the work has been later published (e.g. Jacques et al. 2018 was published at ICML 2018). \\n\\nFigure 9 caption should state the environment.\"}"
]
} |
rklfIeSFwS | CNAS: Channel-Level Neural Architecture Search | [
"Heechul Lim",
"Min-Soo Kim",
"Jinjun Xiong"
] | There is growing interest in automating designing good neural network architectures. The NAS methods proposed recently have significantly reduced architecture search cost by sharing parameters, but there is still a challenging problem of designing search space. We consider search space is typically defined with its shape and a set of operations and propose a channel-level architecture search (CNAS) method using only a fixed type of operation. The resulting architecture is sparse in terms of channel and has different topology at different cell. The experimental results for CIFAR-10 and ImageNet show that a fine-granular and sparse model searched by CNAS achieves very competitive performance with dense models searched by the existing methods. | [
"Neural architecture search"
] | Reject | https://openreview.net/pdf?id=rklfIeSFwS | https://openreview.net/forum?id=rklfIeSFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DNRVVjHdjt",
"SJlMlAIp5B",
"ryx2wOgP9H",
"rJxkVRfTFB",
"HJgFd7gpKB"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745939,
1572855273937,
1572436067887,
1571790374834,
1571779440901
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2314/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2314/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2314/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2314/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a channel pruning approach based one-shot neural architecture search (NAS). As agreed by all reviewers, it has limited novelty, and the method can be viewed as a straightforward combination of NAS and pruning. Experimental results are not convincing. The proposed method is not better than STOA on the accuracy or number of parameters. The setup is not fair, as the proposed method uses autoaugment while the other baselines do not. The authors should also compare with related methods such as Bayesnas, and other pruning techniques. Finally, the paper is poorly written, and many related works are missing.\", \"title\": \"Paper Decision\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper aims to propose a novel framework for neural architecture search. Although there have been many solutions in the literature, the authors try to build a NAS model that is sparse in structure while being similarly effective as conventional dense models. The method is straightforward - they select a single fixed operation as edges, and channels as vertices, and the problem of NAS can be directly solved by a gradient descent method. The sparsity can also be achieved on the level of channels.\\n\\nI have three major concerns, including a lack of novelty, unconvincing experiments, and poor presentation of the work. First, the proposed method is quite straightforward and can be viewed as a quick extension of existing structures. Simplifying the selection of operations make it easy for computation, while it also constrains the applications of the proposed framework. Second, the reported results do not seem to be promising since the improvement was marginal. It is also very difficult to tell whether the contribution is brought by the proposed sparse structure or the adoption of autoaugment since the baseline methods are not applied with it. Third, the paper has not been well written and there are grammatical mistakes throughout the manuscript. I attached the original abstract of the paper and my corrected version below.\\n\\nThere is growing interest in automating designing good neural network architectures. The NAS methods proposed recently have significantly reduced architecture search cost by sharing parameters, but there is still a challenging problem of designing search space. We consider search space is typically defined with its shape and a set of operations and propose a channel-level architecture search (CNAS) method using only a fixed type of operation. The resulting architecture is sparse in terms of channel and has different topology at different cell. The experimental results for CIFAR-10 and ImageNet show that a fine-granular and sparse model searched by CNAS achieves very competitive performance with dense models searched by the existing methods.\\n\\nThere is a growing interest in automating designing good neural network architectures. The NAS methods proposed recently have significantly reduced costs of architecture search by sharing parameters, but there is still a challenging problem of designing search space. \\nConsidering that existing search space is typically defined with its shape and a set of operations, we propose a channel-level architecture search (CNAS) method using only a fixed type of operation. \\nThe resultant architecture is sparse in terms of channels and it has different topologies at different cells. \\nThe experimental results for CIFAR-10 and ImageNet show that a fine-granular and sparse model searched by CNAS achieves very competitive performance with dense models searched by existing methods.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper aims to search a sparse but competitive architecture with using a single fixed type of operation by proposing a channel-level neural architecture search (CNAS). Different from most previous NAS works, this paper conducts NAS process on channel-level such that different cell has different topology. CNAS provides a heuristic algorithm to calculate the saliency vector and zero out the channels iteratively until satisfying a given sparsity. This paper performs CNAS on Cifar-10 and ImageNet, and analyzes the topological properties of the final model. The results of experiment demonstrate CNAS can reach a competitive model with dense models searched by baselines.\\n\\nThis paper provides us with a novel insight that searching neural architecture on the channel level instead of operation and connection level. However, it just combines NAS and pruning parts together, which lacks of novelty in the algorithm level.\", \"i_lean_to_reject_this_paper_because\": \"(1) it lacks of novelty, (2) the experiment result is not convincing, (3) some related works are missed, (4) the expression of the paper is not clear.\\n\\nMain argument\\n\\nCNAS is a straightforward combination of NAS and pruning. As the author said in the section 2.1, CNAS method can be seen as two separate processes: training a supernet like one-shot NAS methods and then conducting pruning on the found supernet using Taylor expansion criteria. Both parts are the same as previous works almost and there is no innovation and improvements.\\n\\nMany related works are missed in the paper. One important step in CNAS is pruning, which uses Taylor expansion technic as previous work. However, it only introduces NAS in introduction section and related works section, ignoring the pruning process. From my view, the pruning part is more important than NAS part.\\n\\nFrom the results of the experiment, the improvement of CNAS is not convincing. I think the main focus of the paper is the sparsity, but in Table 2, the number of parameters of model is still larger (4.6*10^6) compared with some baselines like DARTS (2.9*10^6). Besides that, much space in the experiment section is devoted to the relationship between supernet and the final model, which is not so important. Because in other methods, supernet is just an intermedia. Therefore, The comparison between them is not so meaningful.\\n\\nThe paper is hard to understand because of unclear writing. For example, in algorithm part, the author doesn't make DimensionReduction function and its inputs clear. The author mentions the first input in the paragraph but how to combine with the second input? Also, the representation in Figure 1(b) is confused. It's hard to figure out \\\"the thick edges\\\", \\\"the solid thick ones\\\" and \\\"the dotted solid thick ones\\\".\\n\\nQuestions\\n\\n1. As we all known, operation set is important to the search space. Have you tried more types of operations? From my view, using only one fixed operation is unfair for CNAS compared with other methods.\\n\\n2. One of your focus is sparsity of the model. Can you explain the reason that you set the number of parameters to a large value (4.6*10^6)? Have you tried to use a larger sparsity value? 
What's the performance of CNAS when the model is sparser?\\n \\n3. In Table 2, there are some different tricks (cutout, autoaugmented) applied on some methods. Can you explain how you guarantee the fair comparison between different methods? If we just compare CNAS-R or CNAS-W, they are not better than baselines.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper models the neural architecture search (NAS) as a network pruning problem, and propose a method to sparsify the super-net during the search of architectures.\\n\\nOverall, the novelty in this paper is not strong and their experimental performance is weak compared with recently published papers. I do not see a need to have such a new algorithm in the NAS literature. Please see the question below:\\n\\nQ1. \\\"Bayesnas: a bayesian approach for neural architecture search\\\". ICML 2019\\n- This paper also takes a pruning's perspective for NAS, but it is much more efficient than the proposed one. Would the authors have some discussion and experimental comparison with this paper? Specifically, Bayesnas considers more complex sparse patterns then the submission.\\n\\nQ2. \\\"adaptive stochastic natural gradient method for one-shot neural architecture search\\\". ICML 2019\\n- Could the authors have some discussion with this paper? This paper has comparable performance, but it is also much faster.\\n\\nQ3. What are the benefits of the proposed method? \\n- From Tables 2 & 3, the proposed method is not better than STOA on the accuracy or number of parameters. \\n- CNAS + autoaugmented can offer better accuracy, but the comparison is not fair as different pre-processing method is used.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a channel pruning approach based one-shot neural architecture search (NAS). Unlike other NAS works that mostly search for operations/connections and topologies, this paper focuses on pruning channels for a fixed network.\\n\\nIn general, the idea of channel pruning has been extensively studied in previous works, and the channel pruning search algorithm is very similar previous one-shot NAS framework. The results on CIFAR-10 are reasonably good, but the results on ImageNet are not competitive to other NAS works.\", \"here_are_some_more_comments\": \"1. This paper is more like a new automated pruning technique rather than a new NAS technique. Therefore, I recommend the authors compare this technique with other pruning techniques, such as NetAdapt (https://arxiv.org/abs/1804.03230 ) and AMC (https://arxiv.org/abs/1802.03494).\\n\\n2. The baseline model described in Figure 1 is quite limited. It would be helpful if the authors can also apply this pruning technique to other types of models (such as NASNet-A/MNASNet-92 from your Table 3, or mobilenets used in NetAdapt/AMC papers).\\n\\n3. Section 2.2 and Algorithm 1 is difficult to follow. It is not clear how Taylor expansion is carried out, and how saliency vector S is used. I recommend the authors expanding Algorithm 1 to include more details.\\n\\n4. Figure 2(b) shows random pruning leads to better results than no-pruning. This is kind of counter-intuitive, could you give more details about your settings and explanation?\\n\\n5. There are some minor errors: (1) Figure 1 [y1, y2] should be [z1, z2], and [y3, y4] should be [z3, z4]; (2) At the end of section 2.1, the number of weights in node 1 should be reduced by 4/8 instead of 3/8.\"}"
]
} |
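A note on the pruning criterion referenced throughout the CNAS record above: the reviews describe CNAS as training a one-shot supernet and then zeroing out channels according to a Taylor-expansion saliency score. The sketch below illustrates what a first-order Taylor channel-saliency criterion of that kind typically looks like (in the spirit of the pruning literature the reviews allude to, e.g. Molchanov et al., 2017); it is an assumption-laden illustration, not the paper's Algorithm 1, and all names are hypothetical.

```python
import torch

def taylor_channel_saliency(activations, gradients):
    """First-order Taylor estimate of the loss change from zeroing each channel.

    For activations a of shape (batch, channels, H, W) and gradients g = dL/da,
    the per-channel saliency is |E[a * g]|: the absolute first-order term of
    L(a with channel c zeroed) - L(a).
    """
    return (activations * gradients).mean(dim=(0, 2, 3)).abs()

# Usage sketch: score channels from a saved forward/backward pass, then mark
# the lowest-saliency half for removal (a 50% channel-sparsity target).
acts = torch.randn(8, 16, 4, 4)
grads = torch.randn_like(acts)  # stand-in for dL/d(acts), e.g. from a backward hook
scores = taylor_channel_saliency(acts, grads)
to_prune = scores.argsort()[: scores.numel() // 2]
```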
SkgWIxSFvr | FLAT MANIFOLD VAES | [
"Nutan Chen",
"Alexej Klushyn",
"Francesco Ferroni",
"Justin Bayer",
"Patrick van der Smagt"
] | Latent-variable models represent observed data by mapping a prior distribution over some latent space to an observed space. Often, the prior distribution is specified by the user to be very simple, effectively shifting the burden of a learning algorithm to the estimation of a highly non-linear likelihood function. This poses a problem for the calculation of a popular distance function, the geodesic between data points in the latent space, as this is often solved iteratively via numerical methods. These are less effective if the problem at hand is not well captured by first or second-order approximations. In this work, we propose less complex likelihood functions by allowing complex distributions and explicitly penalising the curvature of the decoder. This results in geodesics which are approximated well by the Euclidean distance in latent space, decreasing the runtime by a factor of 1,000 with little loss in accuracy.
| [
"latent space",
"prior distribution",
"problem",
"models",
"data",
"observed space",
"user",
"simple",
"burden"
] | Reject | https://openreview.net/pdf?id=SkgWIxSFvr | https://openreview.net/forum?id=SkgWIxSFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PEIapz3c80",
"S1guDwFhir",
"rJlmU9O3iB",
"rkekuz-3sB",
"HkeKVmyjiS",
"rJg9E72qoH",
"HJlnqgn9sB",
"BylaTCjqjr",
"rkekX8OYiS",
"HJxzlWLFjS",
"ryl1AlLKiS",
"r1xw1pSKjS",
"SJeN-ZNl5r",
"HJlXO5WwYr",
"rJxI645i_r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745909,
1573848928104,
1573845579188,
1573814886983,
1573741361059,
1573729074259,
1573728404277,
1573727940881,
1573647894633,
1573638378126,
1573638343351,
1573637342639,
1571991803986,
1571392107158,
1570641085800
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2313/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2313/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2313/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2313/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2313/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2313/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2313/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2313/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2313/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2313/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2313/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2313/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2313/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2313/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes to regularize the decoder of the VAE to have a flat pull-back metric, with the goal of making Euclidean distances in the latent space correspond to geodesic distances. This, in turn, results in faster geodesic distance computation. I share the concern of R2 that this regularization towards a flat metric could result in \\\"biased\\\" geodesic distances in regions where data is scarce. I suggest the authors discuss in the next version of the paper if there are situations where this regularization might have drawbacks and if possible, conduct experiments (perhaps on toy data) to either rule out or highlight these points, particularly about scarce data regions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Misunderstanding\", \"comment\": \"To be clear, my argument is not based on the mentioned \\\"Only Bayes...\\\" paper; I point to this reference as it is the clearest exposition of the problem that I have seen. The fundamental problem is trivial: if you regularize towards smooth manifolds, then distances along the manifold will, by definition, be shorter in regions where the regularization dominates. This bias imply that geodesic distances are not only short when connecting similar points, but may also be short when connecting data points that belong to different components of the manifold.\\n\\nMy concern is that you choose to ignore this bias. If you choose to solve the problem using Bayesian methods, SVD regularization or some other means, is from my perspective irrelevant. My issue is that as long as the bias is ignored, then the entire proposed model suffers the consequences.\\n\\n(as a historical side-remark: the original paper from Tosi et al that introduced the idea of Riemannian metrics in latent spaces allude to this bias and point out that the Bayesian solution removes the problem; the argument that \\\"Only Bayes should learn a manifold\\\", thus, actually goes back to the initial paper of the field)\"}",
"{\"title\": \"Thanks for reviewing\", \"comment\": \"The reviewer\\u2019s arguments are mainly based on the above mentioned paper \\u201conly Bayes should learn a manifold\\u201d (which is not peer reviewed). We have doubts about some points in that paper. There are alternative solutions (not \\u201conly\\u201d Bayes), as we mentioned above, to learn a manifold for a deterministic decoder Jacobian.\\n\\nWe have argued that our contribution is a fast, geodesics-inspired distance function based on generative models (we declared in the paper that a bit of geodesic accuracy is sacrificed). It is also demonstrated that it works empirically in relevant settings. We regret that this is not enough reason to refrain from a \\u201cclear reject\\u201d of the paper in your eyes.\"}",
"{\"title\": \"Re: geodesics\", \"comment\": \"Thanks for being explicit.\\n\\nYou are saying that you want the geodesic distance to be short for similar data points. Given that you are working with geodesics under the pull-back metric, I take this to mean that you want short geodesic distances between points that are nearby in data space (measured along the manifold), and long for points that are far away from each other in data space.\\n\\nWhen you regularize towards flatness, you (by the very definition of the pull-back) do generally not have this requested property as the geodesic curves must take \\\"shortcuts\\\" through regions where the regularizer is the most active (i.e. where data is lacking). I do not agree with the argument that mixup can be used to avoid this problem (by filling out regions of space with limited data): for example, in motion capture (the example of the authors), period motion (walking, ...) must topologically result in a circular latent space. Filling in regions of space where data is lacking is a topological impossibility.\\n\\nWhile I think the proposed regularization may have value, I have to judge the work by the geodesic aspect as that dominates the paper. Here I see fundamental mistakes (as pointed to in my initial review), and I have not seen a convincing rebuttal from the authors. Hence I retain my score.\"}",
"{\"title\": \"Response about geodesics\", \"comment\": \"We expect that short (approximate) geodesics under the learned model indicate similarity of data points in question. We will add this to the last paragraph of the introduction.\"}",
"{\"title\": \"Right\", \"comment\": \"That part I understand. I think it seems sensible to regularize towards flatter manifolds (that's what most regularization sets out to achieve). To formalize a notion of flatness the authors go far near-constant volume measures (magnification factors, MF). This aspect of the paper seem sensible to me.\\n\\nBut none of this seem to be related to the notion of geodesic curves, which constitute a significant part of the paper. All I'm trying to understand is where the geodesics fit into all of this. To understand this, I feel I must first understand exactly which properties the authors expect a geodesic to have.\"}",
"{\"title\": \"Motivation\", \"comment\": \"You don't have to like the paper, but I'll try to provide my understanding of the motivation.\\n\\nI think that they want to regularize p(x | z) to be \\\"flatter\\\", in the sense that the gradients of x with respect to z are more constant as one moves around z. They accomplish this with an objective that's related to mixup. \\n\\nThe paper's argument is then that this regularization allows the choice of a more complex q(z|x), for example a hierarchical model.\"}",
"{\"title\": \"Quick question\", \"comment\": \"Hi,\\n\\nThanks for the response; I've had a quick look at the updated manuscript, but I am not able to find the mentioned motivation of why you want to compute geodesics. Can you point to a specific paragraph in case I missed something obvious?\", \"to_be_clear\": \"I am not asking why you want the geodesics to be fast to compute, I'm asking why you want to use geodesic distances in the first place, and which properties you expect a geodesic to posses (it must have some desirable properties since you opt to compute it).\\n\\nThanks!\"}",
"{\"title\": \"Replies to Reviewer #3\", \"comment\": \"We appreciate all reviewers\\u2019 opinions and suggestions very much and it enabled us to substantially improve our manuscript.\\n\\n2. Answer: If we understand correctly, the question is why we interpolate to extend the data in the entire latent space and input the data into the Jacobian regularisation. When we measure the distance between two points in the latent space, the region (latent space) in between of these points has to be regularised/smooth. To obtain such a regularised latent space, the regulariser needs data at these regions---otherwise the latent space might be folded/unsmooth. However, data is not always available (e.g., data between two clusters). Therefore, we augment the data for the entire latent space using mixup (a powerful and simple method based on interpolation). To constrain the latent space between any two points to be smooth (approximate constant MF), we use interpolated data as input for the Jacobian regularisation term in Eq (9).\\n\\n3. Answer: We do not put the Lipschitz continuity constraint on the decoder, but we prove that our decoder satisfies Lipschitz continuity. We have clarified it in the updated version.\\n\\n4. Answer: Thanks for pointing this out. We have modified the title to \\u201cLearning flat latent manifolds with VAEs\\u201d.\\n\\n5. Answer: We agree that this would be interesting but consider it beyond the scope of this work. \\n\\n6. Answer: This experiment shows how our method is applied to a state-of-the-art algorithm in terms of measuring the distance. We have revised the tracking experiment to improve readability. \\n\\n7. Answer: The model can be relatively smooth without Jacobian normalisation, but the distance in the latent space cannot reflect the truth distance. We take Jogging and walking as an example. Jogging is a larger movement than walking in terms of the joints. Without the Jacobian normalisation, the distribution of walking in the latent space is still larger than that of jogging, which is in conflict with the true distance.\"}",
"{\"title\": \"Replies to Reviewer #2. Part 2\", \"comment\": \"5)\\n5.1 Question: \\u2026 What does it mean to be \\\"more invariant\\\" ? And how is invariance (to what) related to the condition number of the metric?\", \"answer\": \"As shown in the paper [Chen et al., 2018a; Arvanitidis et al., 2018], the expensive geodesic method has not been developed for latent spaces with more than two dimensions. Therefore, we select the graph-based method for comparison. In addition, the number of graph nodes and neighbours influence the accuracy of the approximated geodesic. Similarly, the accuracy of ODE- and NN-based geodesic approximations depends on the step length as well.\"}",
"{\"title\": \"Title: Replies to Reviewer #2. Part 1\", \"comment\": \"We thank the reviewer for the valuable comments and suggestions.\\n\\n1) Answer: Thanks for pointing this out. We have added a clearer motivation. The main aim of the paper is approximating the geodesic sufficiently (sacrificing a bit of accuracy) but maintain high computational speed to enable useful applications. We use the Riemannian distance (with the geodesic as the shortest path) as an inspiration to develop a distance function which can be computed rapidly (1000 times faster than previous methods, see Sec. 4.2). This is of crucial importance in certain scenarios such as autonomous driving.\\n\\n2) Answer: We agree that the geodesic should follow the trend of the data and the variance of the decoder can improve the results. However, our goal is to achieve a homogeneous MF, and consequently Euclidean distance approximates to Riemannian distance. We use data augmentation (mixup) to fill the data into the missing data regions, so that there is no low density regions. While this certainly is a further inductive bias on the data distribution, e.g. it makes low density regions less likely to emerge during training, it is a heuristic that helps the experimental results.\\n\\n3) Answer: We agree that moving towards a fully stochastic decoder and using the appropriate geodesics framework is a promising direction. However, we are mainly interested in a distance function. Further, all experiments are done with a Gaussian likelihood with homoscedastic variance. Hence, the stochasticity of the decoder in this work is only used to explain the noise of the data, which certainly is not an interesting thing to reflect for estimating distances. We have cited the paper mentioned above [Hauberg, 2019] and extend the paper regarding this direction. \\n\\nAs shown in Fig. 3b (green lines), the geodesics do NOT tend to be straight lines. This is only the case because of our contribution (see experiments in Fig. 3a). Therefore, our results are different to [Hauberg, 2019]. Regions in the latent space, which have a high MF are \\u201cstretched\\u201d by the Jacobian regularisation, and the distance between points is thus established in a different way. \\n\\nIn [Hauberg, 2019], the variance of the Jacobian is used to constrain the geodesic to follow the data manifold. However, there are alternative solutions for deterministic approaches like e.g. regularising the singular term of the SVD decomposition [Chen et al., 2018a].\\n\\n\\n4) Answer: We have added the difference in the related work. The main difference is that the stochastic methods work for the regions without data, because the RBF layer generates high MF for those. However, it is less general and post hoc. The uncertainty does not emerge from a principled way (such as in a Bayesian model) but is instead driven by certain assumptions. The deterministic method requires other strategies to guarantee that the geodesic is within the data manifold. In our proposed method, we regularise the latent space to have the same metric, so that we do not need to consider the region without data.\"}",
"{\"title\": \"Replies to Reviewer #1\", \"comment\": \"We would like to thank the reviewer for the thorough and helpful reviews.\\n\\n\\n -This paper extends VHP-VAE with jacobian regularization, which is approximated (the paper doesn't say so but I think it's a first order taylor expansion).\", \"answer\": \"Table 2 shows both supervised learning and unsupervised learning methods. Usually supervised learning methods require labeled data, which is often not possible. Tasks such as autonomous driving are very data hungry and it is expensive to label the required data. Our unsupervised method (no labels required) is close to the supervised learning method (DeepSORT) and outperforms other unsupervised learning models. In four out of 16 metrics, our method outperforms the supervised model; In 11 out of 16 metrics, our model outperforms other non-supervised learning models. We have revised Table 2 to highlight the results.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"1.\\tThe idea of explicitly forcing the encoding space to be flat by putting constraint on metric tensor is simple but neat.\\n2.\\tThe use of Jacobi regularization in Eq. (9) is effective but the choice of using interpolation to extend this in the entire decoding space is kind of adhoc. Can authors please justify?\\n3.\\tNot sure how authors put the Lipschitz continuity constraint on f. Please explain. \\n4.\\tThe title of \\u201cFlat manifold VAEs\\u201d is misleading as it potentially means VAEs for flat manifold \\n5.\\tI wonder what will happen if you put an unfolding constraint in the encoding space like LLE, ISOMAP etc.. The loss function is data driven so this should give atleast similar behavior.\\n6.\\tOverall I like the experimental setup, but the tracking experiment is kind of distracting. The authors may want to remove this experiment.\\n7.\\tIn Fig. 7, the authors have shown with and without Jacobi normalization which I am really not convinced with, need better explanation.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"\", \"i_have_several_concerns_with_the_paper\": \"1) Geodesics are never motivated:\\nThe paper provides no motivation for why geodesics are interesting objects in the first place, so it is not clear to me what the authors are even trying to approximate.\\n\\n2) Under the usual motivation, the work is flawed:\\nThe usual motivation for geodesics is that they should follow the trend of the data (e.g. go through regions of high density). Since no other motivation is provided, I will assume this to be the motivation of the paper as well. The paper propose to use a flexible prior and then approximate geodesics by straight lines. Beyond the most simple linear models, then this cannot work. If the prior is flexible, then straight lines will hardly ever constitute paths through regions of high density. The core idea of the work, thus, seem to be in conflict with itself.\\n\\n3) A substantial bias is ignored:\\nThe paper consider the Riemannian metric associated with the *mean* decoder. Due to regularization, holes in the data manifold will be smoothly interpolated by the mean decoder, such that geodesics under the associated metric will systematically be attracted to holes in the data manifold. Hauberg discuss this issue in great length here:\", \"https\": \"//arxiv.org/abs/1806.04994\\n\\nHere it is also demonstrated that geodesics under the mean decoder tend to be straight lines (which is also what the authors observe). Taking the stochasticity of the VAE decoder into account drastically change the behavior of geodesics to naturally follow the trend of the data.\\n\\n4) Related work is mischaracterized:\", \"previous_work_on_the_geometry_of_latent_spaces_largely_fall_into_two_categories\": \"those that treat the decoder as deterministic and those that treat it as being stochastic. In the cited papers Arvanitidis et al and Tosi et al consider stochastic decoders, while the other consider deterministic decoders. Given that geodesics have significantly different behavior in the two cases, it is odd that the difference is never discussed in the paper.\\n\\n5) It is not clear to me what the experiments actually show:\\n\\n-- I did not understand the sentence (page 5): \\\"The model is more invariant if the condition number is smaller...\\\" What does it mean to be \\\"more invariant\\\" ? And how is invariance (to what) related to the condition number of the metric?\\n\\n-- Figure 3 show example geodesics, but only geodesics going between clusters (I have no idea how such geodesics should look). If I look at the yellow cluster of Fig3a, then it seems clear to me that geodesics really should be circular arcs, yet this is being approximated with straight lines. Are the ground truth geodesics circular? At the end, it seems like the shown examples are the least informative ones, and that intra-cluster geodesics would carry much more meaning.\\n\\n-- What am I supposed to learn from the \\\"Smoothness\\\" experiment (page 7) ? My only take-away is currently that the proposed regularization does what it is asked to do. It is not clear to me if what it aims to do is desirable? 
Does the experiment shed light on the desirability of the regularizer or is it more of a \\\"unit test\\\" that show that the regularizer is correctly implemented?\\n\\n-- In the \\\"Geodesic\\\" experiment (page 7) I don't agree with the choice of baseline. If I understand correctly, the baseline approximate geodesics with shortest paths over the neighbor graph (akin to Isomap). However, there is no reason to believe that the resulting paths bare any resemblance to geodesics under the studied Riemannian metric. The above-mentioned paper by Hauberg provide significant evidence that these baseline geodesics are not at all related to the actual geodesics of the studied metric. The only sensible baseline I can think of is the expensive optimization-based geodesics.\\n\\n== rebuttal ==\\nI have read the rebuttal and discussed with the authors, and I retain my original score.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers augmenting the hierarchical VHP-VAE with a criteria in which the jacobian is approximately regularized at interpolations and extrapolations between different points in z space. The experiments suggest this is an important problem with VHP-VAE and also that it's successfully addressed.\", \"comments\": \"-This paper cites Mixup but there are two more papers to consider here: Manifold Mixup (ICML 2019) and Adversarial Mixup Resynthesis (Neurips 2019) which both considered mixing in a latent space. AMR considered in an autoencoder, and Manifold Mixup is also relevant because its theoretical analysis explicitly considers flattening although in a somewhat difference sense (and both are different from what's done here). \\n\\n -The object tracking experiments don't seem very convincing to me (just looking at table 2 at least).\"}"
]
} |
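The FLAT MANIFOLD VAES record above repeatedly invokes the decoder's pull-back metric G(z) = J(z)^T J(z), the magnification factor sqrt(det G(z)), and a Jacobian penalty evaluated at mixup-style interpolations of latent codes. The PyTorch sketch below illustrates these quantities under simplifying assumptions; it is not the paper's Eq. (9), and the concrete penalty form (pulling G towards a scaled identity) is an illustrative choice.

```python
import torch
from torch.autograd.functional import jacobian

def pullback_metric(decoder, z):
    """G(z) = J(z)^T J(z), with J the decoder Jacobian at one latent point z.
    The magnification factor discussed in the reviews is sqrt(det(G))."""
    J = jacobian(decoder, z, create_graph=True)  # shape: (obs_dim, latent_dim)
    return J.T @ J

def flatness_penalty(decoder, z1, z2, c=1.0):
    """Penalise deviation of the metric from c * I at a random mixup-style
    interpolation of two latent codes, so that Euclidean distances in latent
    space track geodesic distances."""
    alpha = torch.rand(())
    z = alpha * z1 + (1.0 - alpha) * z2
    G = pullback_metric(decoder, z)
    return ((G - c * torch.eye(G.shape[0])) ** 2).sum()

# Usage sketch with a toy decoder; in the paper's setting such a term would be
# added, with some weight, to the (VHP-)VAE training objective.
decoder = torch.nn.Linear(2, 5)
z1, z2 = torch.randn(2), torch.randn(2)
penalty = flatness_penalty(decoder, z1, z2)  # differentiable w.r.t. decoder weights
```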
ryxWIgBFPS | A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms | [
"Yoshua Bengio",
"Tristan Deleu",
"Nasim Rahaman",
"Nan Rosemary Ke",
"Sebastien Lachapelle",
"Olexa Bilaniuk",
"Anirudh Goyal",
"Christopher Pal"
] | We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge. In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions. We explain when this can work, using the assumption that the changes in distributions are localized (e.g. to one of the marginals, for example due to an intervention on one of the variables). We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e. that need to be adapted (those of the modified variables). We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs. Finally, motivated by the AI agent point of view (e.g. of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning. Experiments in the two-variable case validate the proposed ideas and theoretical results. | [
"meta-learning",
"transfer learning",
"structure learning",
"modularity",
"causality"
] | Accept (Poster) | https://openreview.net/pdf?id=ryxWIgBFPS | https://openreview.net/forum?id=ryxWIgBFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"zYdRZ5AMXn",
"r1xwKpbhoH",
"BJeG83E5jH",
"BJxSWlOYor",
"HkgdEnCOoS",
"H1gIz2Ruir",
"rJlLJh0OiH",
"BkxGniA_sH",
"SyxcGsCusB",
"SygYFAKF9S",
"r1lca6O85S",
"rkl6smfDtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745879,
1573817726960,
1573698634113,
1573646332821,
1573608496229,
1573608461790,
1573608414033,
1573608361812,
1573608209798,
1572605568991,
1572404673595,
1571394469375
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2311/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2311/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2311/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2311/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2311/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2311/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2311/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2311/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2311/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2311/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2311/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes to discover causal mechanisms through meta-learning, and suggests an approach for doing so. The reviewers raised concerns about the key hypothesis (that the right causal model implies higher expected online likelihood) not being sufficiently backed up through theory or through experiments on real data. The authors pointed to a recent paper that builds upon this work and tests on a more realistic problem setting. However, the newer paper measures not the online likelihood of adaptation, but just the training error during adaptation, suggesting that the approach in this paper may be worse. Despite the concerns, the reviewers generally agreed that the paper included novel and interesting ideas, and addressed a number of the reviewers' other concerns about the clarity, references, and experiments. Hence, it makes a worthwhile contribution to ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Acknowledgment\", \"comment\": \"Thank you for the clarifications and updated paper, and I applaud the inclusion of a negative result in the Appendix for the p(B|A) scenario. My score remains unchanged.\"}",
"{\"title\": \"Response to the questions on parameter counting\", \"comment\": \"1) It is true that the online likelihood itself is averaged during meta-training and not the gradient of the online likelihood. Nonetheless, the online likelihoods are computed using parameters that have been updated using stochastic gradient descent. Adapting on only a small dataset of intervention data indeed means that we might not reach zero gradient, but we will have a noisy estimate of this gradient, which in turn might induce some noise in our measure of adaptation. The fact that we can repeat this multiple times on different transfer distributions over the course of meta-training means that the effect of this noise is mitigated. In practice, we did observe that the gradient of the meta-transfer objective with respect to the structural parameters (which is driven by the difference in online likelihoods, from Proposition 2) could sometimes point in the \\u201cwrong direction\\u201d (as is the case in SGD in general), but points on average in the direction leading to convergence towards the correct causal model.\\n\\n2) We believe that the effect of the parameter counting argument depends more on the difference in the order of magnitude between the two causal models (10 vs 110 in the case of N=10 and p(A) changing, as given by Table D.1), rather than actual counts (100 vs 110 for N=10 and p(B | A) changing). This would explain why its effect is stronger using the tabular representation as N increases (Figure 2, left), and why the difference is less clear when increasing N in the experiment with MLP representation (Figure 2, right).\\n\\n3) In all experiments, we made sure to control for as many sources of spurious bias (which would steer the decision towards one of the two hypotheses) as possible, notably choosing models with the same capacity to parameterize both causal models. It is true that even optimization could affect how the online likelihood is computed, and therefore induce enough bias to have the structure converge towards the wrong causal model. However across all our experiments with a interventions on the cause A, we did not encounter this situation on both simple models (eg. tabular representation for discrete variables) and more complex models (Mixture Density Networks for continuous multimodal variables). We also addressed this in part in the first point of our initial response to Reviewer 2, for an additional perspective.\\n\\nFinally we have updated Figure D.4 in our latest revision of the paper. The learning curves in the first revision were only averaged over 5 runs, where the line without intervention (in orange) was indeed leaning towards the right causal model. In the latest revision of the paper, we now average both curves over 40 runs, with a clearer conclusion: on average, the belief that A -> B is the correct causal model remains at 0.5. And even though some runs deviate from an equal belief (likely due to noise induced by SGD), using interventions proves to be more reliable in this case where the model is not identifiable.\"}",
"{\"title\": \"Further questions regarding parameter counting\", \"comment\": \"Thank you for the extensive comment and the additions to the paper. I have some remaining questions regarding your key hypothesis: fewer non-zero expected parameter gradients implies better online likelihood.\\n\\n1) I don\\u2019t quite follow the statement in your comment \\u201cOn average over meta-training, the parameter gradients of the unmodified modules will be zero, having a smaller contribution during the computation of the online likelihood for the correct causal model.\\u201d What you average during the meta-train iterations is not the gradients to the online likelihood, but the online likelihood itself. Could you please clarify this?\\n\\n2) Your \\u201cfailure case\\u201d also shows that the parameter counting argument does not always imply better online likelihood (for finite N) and that the opposite can be true also. In this case, the right causal model with N^2 nonzero expected gradients has lower expected online likelihood than the wrong causal model with N^2+N nonzero expected gradients. Do you have an hypothesis why this is the case?\\n\\n3) As you don\\u2019t transfer until convergence on the interventional training set (Fig 1), I find it plausible that for complicated models the training dynamics might affect the online likelihood in sufficiently to change the ordering of the causal graphs. Could you elaborate on this?\", \"furthermore\": \"could you please explain why in the new Figure D.4 without interventions, the line still appears to tend significantly towards the right causal model?\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": [\"We would like to thank you for your kind words about the paper, and we share your enthusiasm regarding this research direction. We want to answer your questions here:\", \"In our experiments, we carefully made sure that the two models A -> B and B -> A have the same capacity, as shown in Tables D.1-4, to ensure that there was no bias induced by the modeling choice we could not control for. However it is indeed very interesting to ask what would happen if this assumption does not hold. There are several factors that can contribute to the \\u201cfast adaptation\\u201d, but in general one would expect that models more faithful to reality lead to faster adaptation; causality is one such aspect that we are investigating in this paper, but other aspects might exist as well. Nonetheless, as is the case in Machine Learning in general, a good but difficult to optimize model may be rejected over a model which is easier to train from an optimization point of view. In that case, it might be impossible to recover the causal direction alone, independently of all other factors of variations (such as different model structures). As an extreme example, even the loss landscape involved in the computation of the online likelihood (Equation (3)) could have an impact on the conclusions about the causal directions; we did not encounter this issue in our experiments though.\", \"Following this suggestion, we included an experiment where the conditional distribution p(B | A) changes in Appendix D.5 in the first revision. In the absence of the parameter counting argument, we found in our initial experiments that the conclusions may not hold anymore. To summarize our findings so far: our method sometimes fails and sometimes works (albeit with a measure of adaptation performance different from the online likelihood used throughout the paper). We would also like to note that even though the last part of Section 2.2 does not hold, the result of Proposition 1 still applies in this situation. The conclusion of the paper remains that when the counting argument works (which here means an intervention on the cause), our experiments show successful recovery of the cause-effect relationship.\", \"Drawing a new $D_{int}$ for each step of gradient descent might seem prohibitive. However, the purpose of the meta-transfer objective is to leverage information from only small datasets $D_{int}$, meaning that there is not much of a burden when it comes to the amount of data required, using as little as T=20 datapoints for each intervention in our experiments. This burden is comparable to the one required by any meta-learning algorithm in general.\", \"Some clarification may be needed on the meaning of N. N here refers to the number of possible values of the categorical variables A and B can take, as mentioned at the end of Section 2.2 and in Section 3.3, and not to the amount of data (denoted in the paper as either m for $D_{obs}$ or T for $D_{int}$). A larger value of N puts the learning further in the \\u201csmall-sample regime\\u201d where we have seen that the causal signal is stronger in Section 2.2 (and shown clearly in the tabular case in Figure 2, left). 
However, the difference is less clear for MLPs, most likely because unlike the tabular representation, the number of parameters in the MLP representation does not scale quadratically with N anymore, and MLPs are known to be robust to over-parametrization (i.e., they don't overfit easily even when the number of parameters is much larger than the number of examples).\"]}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We would like to thank you for your interest in the paper, and for carefully going through our theory and experimental results. We want to address your three concerns here in more detail:\\n\\n1) There might be a small confusion here about our assumption of \\u201csmall changes\\u201d. This assumption applies to the intervention on the distribution p (throughout the paper) over causal variables, and not to the encoder (which is in Section 4 only). More precisely, we assume that the transfer distribution $\\\\tilde{p}$ is the result of an intervention on only a single variable (in the paper, the cause A); the fact that we only intervene on a single variable characterizes the \\u201clocalized\\u201d (or \\u201csmall\\u201d) nature of the change. This assumption is detailed in Section 2.1, and mathematically defined in Proposition 1. This change is small only in the right representation space (e.g. in the latent space in our experiment in Section 4), as mentioned in Section 1. On the other hand, we do not make any assumption on the magnitude of the angle for the decoder; in our experiment, we used $-\\\\pi/4$ (see Appendix E in the first revision for details).\\n\\n2) Following the suggestion of the reviewer, we conducted an experiment where the conditional distribution p(B | A) changes while the marginal on the cause p(A) remains unchanged during intervention. Our experiment results are available in Appendix D.5 in the first revision. Moreover, even if the change is on p(B | A), our analysis of the number of dimensions for each model in Table D.1 would still be valid. The learner does not know that p(A) is unchanged, and thus we would still have to model the marginal of A, similar to how we still had to model the conditional of B given A in the experiment of Appendix D.1. This ensures that both models have the same capacity (inducing any spurious bias). That being said, as pointed out by Reviewer 2, the parameter counting argument at the end of Section 2.2 would not hold if the intervention is on the effect B.\\n\\n3) We share the sentiment that future work should include discussions on the application of the meta-transfer objective to real-data tasks. Following up on this work, [1] showed empirical success on both graphs with multiple variables, as well as standard datasets from the Bayesian Network Repository [2].\\n\\n[1] Ke, Nan et al., Learning Neural Causal Models from Unknown Interventions (2019).\\n[2] Bayesian Network Repository: http://www.bnlearn.com/bnrepository/.\"}",
"{\"title\": \"Response to Reviewer 5 (2)\", \"comment\": \"(continuing from previous comment)\\n\\n2.B)\\n> Perhaps I am misunderstanding, but it seems to me that p(x) = \\\\prod_i p(x_i | x_{Pa_i}) only makes sense for a DAG. Hence, how would the online likelihood be computed for cyclic models?\\n\\nIt is true that the factorization in the definition of the online likelihood only makes sense if the candidate graph contains no cycle. To be more precise, if the graph contains a cycle, we can use the pseudolikelihood in place of the likelihood here (leaving the definition unchanged). Although the pseudolikelihood is only an approximation of the joint likelihood, it has been shown to be a reasonable estimate for maximization (which is performed here for adaptation), instead of maximizing the joint likelihood [4]. We have updated Appendix E (Appendix F in the revision) to clarify this. Also note that what we have is a weighted form of log-pseudolikelihood (with each log-prob weighted by the edge probabilities) so that when learning converges (with appropriate regularization) these edge probabilities define a DAG and the weighted log-pseudolikelihood becomes a proper log-likelihood.\", \"we_would_like_to_also_address_your_suggestions\": \"> Could you explain in what realistic settings we would have access to data from a large number of different interventional distributions?\\n\\nThe setting described here is similar to the few-shot episodic setting frequently encountered in meta-learning, where small datasets (here $D_{int}$) of different tasks (here different interventions) are used for meta-training. We expect these kinds of settings to become especially relevant in RL and multi-agent scenarios (where agents' actions are interventions).\\n\\n> Could you show a plot similar to Fig 1, but with online likelihoods? Such a plot may be more indicative of the ideal episode length than Fig 1.\\n\\nIndeed -- please refer to Figure G.1 in Appendix G in the revision. \\n\\n> Could you provide details on the representation learning experiment? In particular: (1) is \\\\theta_D different in the train dataset and each interventional dataset? (2) How do the gradients of theta flow through the meta-update steps?\\n\\nWe have included Appendix E in the first revision to provide experimental details. To answer your questions: \\n(1): \\\\theta_D is constant throughout. \\n(2): The gradients to theta flow through the optimization process of the transfer episode (similar to MAML [5]). \\n\\n> A reference to Appendix B appears missing in main text.\\n\\nThis has been fixed -- thank you for the pointer!\", \"in_conclusion\": \"thank you again for your detailed review. We hope our response and the revision addresses your concerns; please feel invited to respond to our comment if anything remains unclear.\\n\\n[1] Peters, Jonas et al., Elements of Causal Inference.\\n[2] Ke, Nan et al., Learning Neural Causal Models from Unknown Interventions (2019).\\n[3] Ng, Andrew, On feature selection: learning with exponentially many irrelevant features as training examples (1998).\\n[4] Koller, Daphne and Friedman, Nir, Probabilistic Graphical Models: Principles and Techniques.\\n[5] Finn, Chelsea et al., Model-Agnostic Meta Learning for Fast Adaptation of Deep Networks (2017).\"}",
"{\"title\": \"Response to Reviewer 5\", \"comment\": \"We would like to thank you for the time you have invested in reviewing our work. We are glad that you find our contribution novel and our paper well written, and your suggestions have helped us significantly improve the paper. Our detailed response to your comments follows.\\n\\n1.1) \\n> Are these experiments then good benchmarks for causal discovery based on intervention when the causal model can already be inferred from the non-interventional training data?\\n\\nThis is indeed a valid point, and we provide an additional supporting experiment where causal discovery fails for non-interventional training data (a linear-gaussian model, which is not identifiable), but succeeds with interventional data. This can be found in Appendix D.3.\\n\\n1.2)\\n> The simplicity of the representation learning setup doesn\\u2019t convince that the method is applicable to more real-world settings with more complicated encoders. Additionally, some important details on this experiment are missing (see below).\\n\\nWe agree with your assessment that this is only a first step -- a proof of concept in a minimal setting -- and that much more work is required to explore and understand this very important, but challenging, direction. More complex experimental settings with the encoder are therefore left for future work. \\n\\nRegarding the missing important details -- thank you for pointing this out. We have added them in Appendix E of the revision to address this.\\n\\n1.3)\\n> All experiments show the effect of intervention on the cause p(A). No experiments are given for intervention on p(B|A). Does the method then still work?\\n\\nThank you for the important suggestion. We have good reasons to believe that the cause intervention will generally work best based on the parameter counting argument. We have conducted additional experiments in a setting where the conditional distribution p(B|A) changes while the marginal distribution p(A) remains unchanged, and the results are indeed mixed. To summarize our findings so far: in some cases our method did not work, in others our method can work, with a measure of adaptation performance different from the online likelihood used throughout the paper. The results, together with one failure case, can be found in Appendix D.5.\\n\\n1.4)\\n> No experiments for more than two random variables are performed in this paper.\\n\\nThe focus of the current work is placed on the important class of bivariate causal graphs (cf. chapters 1-5 of Peters et al. [1]). Our objective is to introduce the meta-transfer objective in the two-variable case, thereby laying down the foundations for future work combining causal structure learning with deep learning on larger graphs. While we include theoretical results in Appendix E (Appendix F in the revision), further results on general multivariate graphs is left to future work by the community. For instance, more recent work [2] builds on our insights to show positive experimental results on graphs with up to 8 variables. \\n\\n2.A) \\n> However, the zero-expectation gradient may still be non-zero and even large on the small intervention sample. It is unclear to me why they therefore can be excluded in the number of parameters.\\n\\nWhile the adaptation is performed on the small intervention dataset $D_{int}$, the structural parameter $\\\\gamma$ is updated by SGD over many intervention distributions $\\\\tilde{p}$, justifying our result on expectation over $\\\\tilde{p}$ in Proposition 1. 
On average over meta-training, the parameter gradients of the unmodified modules will be zero, having a smaller contribution during the computation of the online likelihood for the correct causal model. We think that it would be possible to exploit the kind of theoretical analysis which has been made for sparse regression (where only a few of the inputs need to have non-zero weights, while the expected gradients on the weights from the other inputs would be zero if those weights are set to zero). In that case it can be shown [3] that the capacity scales linearly with the number of truly dependent variables (the number of non-zero true weights), and thus the number examples needed also only need to scale with that number.\\n\\n(continuing in next comment)\"}",
"{\"title\": \"General response to all reviewers\", \"comment\": \"We would like to thank the three reviewers for their insightful comments. We really appreciated the feedback and suggestions to improve the paper. We would like to address here some points relevant to all three reviews, and add details for each individual reviewer in separate comments.\\n\\nIn this paper, we only considered the case where the true causal graph has two nodes (i.e. the cause-effect setting). This both makes the presentation clearer, and allows us to lay the foundations for general causal graphs. Although we discuss possible extensions to multiple variables in Appendix E (Appendix F in the first revision), we would like to stress that extensive experimentations on general causal are left for future work. We included this opening to multiple variables in the Appendix, as part of our initial release of this paper, to encourage other researchers to build on the ideas presented in the paper. For example, [1] builds on from our initial paper and showed empirical success on graphs with multiple variables using the formulation presented in this Appendix; we referenced this work in the ICLR version of the paper.\\n\\nWhile we chose to focus our attention on a setting where we could intervene on the cause to get a stronger intuition, some reviewers are rightfully asking if the proposed ideas still hold if p(B | A) changes instead of p(A). In that case, our parameter counting argument from Section 2.2 would indeed not hold. Following the suggestion of the reviewers, we experimented with settings where the interventions are the results of changes on p(B | A) (leaving p(A) invariant, meaning that our change in distribution is still localised). Our initial experiments are mixed, depending on the experimental setup, which reinforces our argument based on parameter counting. These new results will be included in Appendix D.5 of the revision (together with one failure case), but would require further investigation. However, these remarks from the reviewers we will also help us clarify in the paper that we only claim that causal graphs have been recovered in the case where it is the cause which has been the subject of intervention, because this is the case where the counting argument applies.\", \"we_have_uploaded_a_first_revision_of_the_paper_with_the_following_updates\": [\"We added a missing reference to Appendix B in Section 2, as mentioned by Reviewer 5.\", \"We have updated the proof of Proposition 1 (in Appendix C.1) which had a small mistake making it incorrect. We would like to thank a researcher that privately reached out to us about this proof.\", \"We have included results for changes on p(B | A) in Appendix D.5, as suggested by the reviewers.\", \"We have added a figure similar to Figure 1 with the online likelihood instead of the likelihood on an external validation set in Appendix G, as suggested by Reviewer 5.\", \"We have added a sanity check in Appendix D.3 where we apply our continuous bivariate setting to an SCM which is known to be not identifiable from a single distribution, as requested by Reviewer 5.\", \"We have added the Appendix E, containing details about our experiment on Representation Learning in Section 4. 
In particular, we mention how the decoder is fixed for all datasets, and we are detailing how the gradient with respect to the encoder\\u2019s parameters is computed as asked by Reviewer 5.\", \"We have clarified the use of a form of pseudolikelihood (rather than likelihood) in the definition of the online likelihood in Appendix E (Appendix F in the revision), as suggested by Reviewer 5.\", \"[1] Ke, Nan et al., Learning Neural Causal Models from Unknown Interventions (2019).\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #5\", \"review\": \"The paper proposes a method of discovering causal mechanisms through meta-learning, by assuming that models transfer faster if their causal graph is correct.\\n\\nFor a possible causal graph, for adaptation to one interventional dataset, the samples are iteratively revealed and the log likelihood of the next sample is measured, after which the parameters are updated using one step of gradient descent on that sample. The sum of these log likelihoods is the \\u2018online likelihood\\u2019 and a measure of speed of adaptation. The parameters are initialised using maximum likelihood estimation on the train dataset.\\nThe meta learning procedure then at each episode samples an interventional distribution and an interventional dataset from that distribution. It then performs a gradient based update of the belief over graphs based on the difference between the online likelihoods of each graph on that dataset.\\n\\nThe meta-learning approach appears to be a novel contribution. The authors provide a theoretical argument by counting \\u2018effective parameters\\u2019 to suggest why models using the right causal model obtain a higher online likelihood. Additionally, they prove that the gradient updates to the graph belief are easy to compute and converge. The method is validated with several synthetic experiments which discover the direction of the arrow between two random variables, each either continuous or discrete. Furthermore, they successfully experiment with the combination of learning a representation of a raw data to two random variables with learning the causal direction. The paper is very well written and most claims are carefully proven.\\n\\nI recommend a weak rejection for this paper, because:\\n1) The empirical validation is not strong enough, as no real dataset is used, only toy datasets. The toy experiments themselves could also be more extensive.\\n2) I am unconvinced of two of the theoretical claims made: (A) the fact that the expected gradients in Prop 1 are 0, implies that the right causal graph has better online likelihood and (B) that the method is easily extensible to more than two random variables [Appendix E].\", \"supporting_arguments\": \"1.1) Fig D.1 suggests that, in the experiments using continuous random variables, the training dataset alone is sufficient to discover the true causal model, under the assumption of independent additive noise, as is done in e.g. [1]. I find it plausible to believe that the training curve on the training dataset alone already makes it possible to disambiguate the causal from anti-causal model. A similar pattern is shown by the authors themselves in Appendix B on discrete variables.\\nFor finite training data, the models are distinguishable, while for infinite training data they are not [Fig B.1]. The exact same holds for finite and infinite interventional data [Fig 1].\\nAre these experiments then good benchmarks for causal discovery based on intervention when the causal model can already be inferred from the non-interventional training data?\\n1.2) The simplicity of the representation learning setup doesn\\u2019t convince that the method is applicable to more real-world settings with more complicated encoders. 
Additionally, some important details on this experiment are missing (see below).\\n1.3) All experiments show the effect of intervention on the cause p(A). No experiments are given for intervention on p(B|A). Does the method then still work?\\n1.4) No experiments for more than two random variables are performed in this paper.\\n2.A) Prop 1 shows that the expected maximum likelihood gradient for one conditional probability distribution is zero if the graph is correct and that CPD is not intervened on. Subsequently a claim is made that this effectively reduces the number of parameters and thus that adaptation is faster. However, the zero-expectation gradient may still be non-zero and even large on the small intervention sample. It is unclear to me why they can therefore be excluded from the number of parameters. Furthermore, whether the online likelihood is large will depend not only on generalisation, but also on the training convergence, since empirical risk minimization is not used, but rather SGD with a fixed number of steps. Thus, even though the authors prove that the method will converge to the causal graph with lowest online likelihood, it is unclear why this is necessarily the correct causal graph.\\n2.B) In appendix E it is mentioned that cycles can occur in causal models. However, it is unclear why factorization (76) is still correct in the cyclic case. Perhaps I am misunderstanding, but it seems to me that p(x) = \\\\prod_i p(x_i | x_{Pa_i}) only makes sense for a DAG. Hence, how would the online likelihood be computed for cyclic models?\", \"suggestions_for_improvement\": [\"Could you explain in what realistic settings we would have access to data from a large number of different interventional distributions?\", \"Could you show a plot similar to Fig 1, but with online likelihoods? Such a plot may be more indicative of the ideal episode length than Fig 1.\", \"Could you provide details on the representation learning experiment? In particular: (1) is \\\\theta_D different in the train dataset and each interventional dataset? (2) How do the gradients of theta flow through the meta-update steps?\", \"A reference to Appendix B appears to be missing in the main text.\", \"[1] Nonlinear causal discovery with additive noise models. Hoyer et al 2009\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this work, the authors proposed a general and systematic framework of meta-transfer objective incorporating the causal structure learning under unknown interventions. Under the assumption of small change (out-of-distribution data), the work mainly focuses on the theoretical and empirical analysis of relations on two random variables in causal graphs (causal and anti-causal directions), so that a differentiable regret function using the joint distribution of the small \\\"intervention\\\" dataset can be built.\\n \\nThe motivation is to adapt or transfer quickly by discovering the correct causal direction and learning representation based on it. The idea of disentangling the marginal and conditional factors to reduce the sample complexity and thus achieve fast adaptation is novel and insightful. Proposition 1 and its proof provide the theoretical supports on this point very well. The structure causal model is parametrized and then optimized in a meta-learning procedure. Experiments on simulated data under categorical or continuous distributions can verify the efficiency of inferring causal graphs.\", \"here_are_some_concerns_about_the_proposed_algorithm\": \"1). When the authors discussed small change, there is no formal (mathematical) definition on it. For instance, an invertible function could be one of the properties given for the out-of-distribution data. The example (rotation) in Fig. 3 works to some extent because the small transformation is invertible. Also, the intervention seems simple in the work, for example, the rotate angle (a value) in Fig. 3 only involves one parameter dimension. In this case, learning an encoder to infer the correct causal relation is not that difficult. Is there possible that the encoder cannot learn a good enough theta to find the correct causal direction? It would be nice if the limitations of using causal graphs are discussed.\\n \\n2). Given a direction A causes B, the experiments are conducted by performing interventions on the cause A. How about to put an intervention on the effect B? According to the algorithm analysis (Table D.1), for the discrete bivariate model, the parameter dimension of a correct structure becomes N^2, while the one of an incorrect structure becomes N + N^2. Compared to intervention on cause, the reduction of sample complexity here is not that obvious. A general discussion on the effect intervention for bivariate models would be helpful. \\n \\n3). The work opens a new direction of inferring causal relationships together with representation learning, which has the potential for more out-of-distribution scenarios. While the authors claim that it is the first step, the current empirical studies for structure models use synthetic data with relatively constraint assumptions. It is highly recommended for the authors to provide discussions about real-data tasks with neural causal models in future work.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"\", \"summary\": \"The paper first shows that, in a very simple two-variable task, the model with the correct underlying structure will adapt faster to a causal intervention than the model with the incorrect structure. This idea is used to develop a \\u201cmeta-transfer\\u201d objective function for which gradient ascent on a continuous representation of the model structure allows learning of that structure. The paper shows that optimizing with respect to this objective with a simple model is guaranteed to converge to the correct structure, and also presents experimental results on toy problems to demonstrate.\", \"overall\": \"Accept.\\nI really enjoyed reading this paper. It is clear, well-motivated, well-written, does a good job of connecting to related work, and presents an interesting method for structure learning. While the experiments are quite toy and questions about how well this will work in more complex models with many variables remain largely unaddressed, these do not detract much from the paper for me. Instead, the paper does a good job of motivating its contribution and exploring its effect in simple intelligible tasks, and I feel I got more out of this paper than most SOTA papers.\", \"clarity\": \"Very clear.\", \"significance\": \"Potentially quite significant as this is starting to bring causal structure learning into the realm of tensorflow and pytorch.\", \"questions_and_comments\": [\"All else being equal, the speed of adaptation between two very similar models will serve as a good proxy, as shown in this paper. However, I can easily imagine scenarios where the two models one wants to differentiate between are quite different, and have very different optimization landscapes. Here, the speed of adaptation will be quite dependent on these landscapes and not just on the underlying model structure. Do you have thoughts about how this can be extended to such a scenario?\", \"The parameter counting argument is not nearly so strong if what actually changes is the conditional p(A|B). In that case, the sample complexity for the correct model would be N^2 = O(N^2) and for the incorrect model would be N + N^2 = O(N^2). Does the objective still work here? Would be great to add an additional experiment showing the results in this case.\", \"Doing an intervention and drawing a new D_int for each step of gradient descent seems quite prohibitive in a lot of domains. Are there ways to decrease this burden?\", \"In Figure 2, can you speak to why the N=100 curve for the MLP parameterization converges more slowly than the N=10 curve? I would still expect more data to be beneficial here.\"]}"
]
} |
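The "online likelihood" that the reviews above revolve around can be made concrete with a short, self-contained sketch: score each newly revealed sample before adapting to it with one SGD step, and sum the scores. The univariate Gaussian model, learning rate, and seed below are illustrative assumptions, not the paper's actual setup.

```python
import math
import random

def online_log_likelihood(mu, data, lr=0.05, sigma=1.0):
    """Sum of next-sample log likelihoods under one-step SGD adaptation."""
    total = 0.0
    for x in data:                                  # samples revealed one at a time
        # Score the next sample *before* adapting to it.
        total += -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))
        # One SGD step on that sample's negative log likelihood.
        mu += lr * (x - mu) / sigma ** 2
    return total

# A model initialized closer to the shifted ("interventional") distribution
# adapts faster and therefore accumulates a higher score.
random.seed(0)
shifted = [random.gauss(2.0, 1.0) for _ in range(50)]
print(online_log_likelihood(mu=1.5, data=shifted))   # higher score
print(online_log_likelihood(mu=-1.5, data=shifted))  # lower score
```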
ByglLlHFDS | Expected Information Maximization: Using the I-Projection for Mixture Density Estimation | [
"Philipp Becker",
"Oleg Arenz",
"Gerhard Neumann"
] | Modelling highly multi-modal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the M(oment)-projection of the data distribution to the model distribution.
The M-projection forces the model to average over modes it cannot represent. In contrast, the I(nformation)-projection ignores such modes in the data and concentrates on the modes the model can represent. Such behavior is appealing whenever we deal with highly multi-modal data where modelling single modes correctly is more important than covering all the modes. Despite this advantage, the I-projection is rarely used in practice due to the lack of algorithms that can efficiently optimize it based on data. In this work, we present a new algorithm called Expected Information Maximization (EIM) for computing the I-projection solely based on samples for general latent variable models, where we focus on Gaussian mixture models and Gaussian mixtures of experts. Our approach applies a variational upper bound to the I-projection objective which decomposes the original objective into single objectives for each mixture component as well as for the coefficients, allowing for efficient optimization. Similar to GANs, our approach employs discriminators but uses a more stable optimization procedure, using a tight upper bound. We show that our algorithm is much more effective in computing the I-projection than recent GAN approaches and we illustrate the effectiveness of our approach for modelling multi-modal behavior on two pedestrian and traffic prediction datasets. | [
"density estimation",
"information projection",
"mixture models",
"generative learning",
"multimodal modeling"
] | Accept (Poster) | https://openreview.net/pdf?id=ByglLlHFDS | https://openreview.net/forum?id=ByglLlHFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"lqmh0Ir-dV",
"S1xHvissjr",
"BkxpEGGQor",
"rJlXs-G7sH",
"BJevwbfXoB",
"SyxDIDyp5S",
"rkxrfrIfcB",
"B1er3Km0KH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745848,
1573792605488,
1573229109146,
1573228955178,
1573228895003,
1572824911132,
1572132108537,
1571858861123
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2310/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2310/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2310/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2310/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2310/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2310/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2310/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a new algorithm called Expected Information Maximization (EIM) for learning latent variable models while computing the I-projection solely based on samples. The reviewers had several questions, which the authors sufficiently answered. The reviewers agree that the paper should be accepted. The authors should carefully read the reviewer questions and comments and use them to improve their final manuscript.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"--\", \"comment\": \"I have read the authors' answers and appreciate the time spent writing the rebuttal. I will maintain my initial assessment.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewers for their time and valuable feedback. Besides fixing small typos, ambiguities and unclearities we elaborated on the relation and differences to previously existing GAN and VI methods (see in particular section 2).\\n\\n Also, we uploaded an implementation of EIM, which can be found at: https://github.com/eimAuthors/EIM\", \"we_are_now_going_to_answer_your_specific_questions\": \"\\u201cMy main concerns focus on the novelty\\u201d / \\u201cFor the former, reverse KL has been exploited before, both in the marginal space [1] and the joint one [2]. Other detailed comments are listed below.\\u201d / \\u201cIn Sec 4.4, it seems EIM is highly overlapped with VIPS. So what're the advantages of EIM here?\\u201d -\\n\\nOur approach is, to the best of our knowledge, the first approach allowing non-adversarial computation of the I-Projection based solely on samples of the target distribution. \\n\\nThe difference to VIPS and [1] is that both assume access to the unnormalized (log) density of the target distribution, i.e. are applicable for variational inference. EIM on the other hand assumes access to samples of the target distribution, i.e. is applicable for density estimation.\\n\\nThe difference to [1] and [2] is that EIM is not adversarial as pointed out in section 4.3. \\n\\nWe reworked the related work section to make these important distinctions clearer. \\n\\nFurthermore, [2] introduces a bound for the symmetric KL between the joints over x and z (where x is the random variable underlying the target samples and z the latent variable). The learned discriminator thus needs both x and z as inputs. In order to infer the latent variable for the target samples an additional variational distribution q(z|x) (i.e. an encoder) needs to be learned. \\nEIM does not use the symmetric KL, but the reverse KL. Furthermore it works with a bound for the KL between the marginals over x, not the joint of x and z. Thus the discriminator only needs to be given x and the latent variable z does not need to be inferred for the training data, i.e., no \\u201cencoder\\u201d is necessary.\\n\\n\\u201cIn Figure 2 (b), the experimental settings for adversarial learning are not fair, as the discriminator is not fixed there. \\u201c\\n\\nThe purpose of this figure (and section 4.3 in general) is to provide an illustrative example of the immediate effects and benefits of avoiding the adversarial forumulation \\n\\n\\u201cIn Figure 3, how many steps for Generator and Discriminator are used for f-GAN? Does f-GAN finally converge?[...]\\u201d -\", \"figure_3\": \"As suggested in the f-GAN paper we alternate single generator and discriminator steps. We also evaluated training the discriminator longer without notable changes to the final performance. The f-GANs do eventually converge and we report the best value achieved on a test set for each run, averaged over 20 runs. A list of all hyperparameters can be found in the appendix, we added the parameters for the f-GAN training to this.\\n\\n\\u201cIn Eq. 9, adding the denominator q(z_i) will change the optimal solution. Why only add it to the first term?\\u201d - The notation in this equation was a bit unclear. The denominator is in fact added to both terms, which scales the optimal value but does not change the optimal solution. We apologize for the unclear notation and adapted the equation to make it clearer. \\n\\n\\u201cIn Section 5.3 and Figure 5, \\u201cSSD\\u201d might be a typo. 
\\u201d - Indeed a typo; thanks for pointing it out, we fixed it. \\n\\nWe hope we could clarify and remove some of the remaining doubts about our approach. We invite you to ask additional questions and engage in further discussion if this is not the case.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewers for their time and valuable feedback. Besides fixing small typos, ambiguities and unclearities we elaborated on the relation and differences to previously existing GAN and VI methods (see in particular section 2).\\n\\n Also, we uploaded an implementation of EIM, which can be found at: https://github.com/eimAuthors/EIM\", \"we_are_now_going_to_answer_your_specific_questions\": \"\\u201cI would have liked the authors to spend some more time on the right way to evaluate\\u201d/ \\u201cmore discussion about the evaluation metric\\u201d\\n\\nWe believe that the reverse KL is the right metric for the applications that we consider, which motivates the formulation of our optimization problem.\\nSadly it is not possible to compute the reverse KL if the true density underlying the data is not known (which is the case in all but the first experiment, in which we evaluated the reverse KL). This is also the key reason for why minimizing the reverse KL is much harder than minimizing the forward KL (i.e. maximizing the likelihood).\\n\\nThus, we had to resort to auxillary metrics. The likelihood is an obvious choice here since it is a standard evaluation criterion for generative models and easy to compute. Additionally we evaluate our models on metrics, meaningful to the task at hand. By combining the likelihood with the auxiliary metric, we can evaluate both whether the data distribution is covered and whether unrealistic samples are generated by the learned model. The same is typically done for GAN approaches which suffer from the same problem of a non-computable objective for non-toy tasks.\\n\\n\\u201c- would have liked to see an explicit algorithm for the optimization procedure\\u201d - There is pseudocode for the GMM case in the appendix. As previously mentioned, we also released the real code by now.\\n\\n\\u201c- small lack of clarity in the presentation of Section 4.1---notation q_t is not introduced for example\\u201d - We kindly ask you to elaborate on this lack of clarity so we might clarify. We already clarified the introduction of q_t. \\n\\n\\u201c- linking it more to prior work\\u201d - the related work section has been reworked.\\n\\n\\nWe hope we could clarify and remove some of the remaining doubts in our approach. We invite you to ask additional questions and engage in further discussion if this is not the case.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewers for their time and valuable feedback. Besides fixing small typos, ambiguities and unclearities we elaborated on the relation and differences to previously existing GAN and VI methods (see in particular section 2).\\n\\n Also, we uploaded an implementation of EIM, which can be found at https://github.com/eimAuthors/EIM\", \"we_are_now_going_to_answer_your_specific_questions\": \"\\u201cHow this can be applicable to more realistic and complex models where training requires millions of gradient steps?\\u201d - The density ratio estimator does not need to be trained from scratch every iteration. It can be warm started using the density ratio estimator from the previous iteration. We use early-stopping for regularization which will typically end training after a few iterations. The change in density ratio during each iteration is rather small since the model updates are constraint. \\n\\n \\u201cI am a little confused about Sec 4.3. It seems that the latent variable z is not necessary for the proposed EIM?\\u201d - In Sec 4.3 we consider only the simplest possible model, a single univariate Gaussian (i.e. a Gaussian Mixture with one component) for illustrative purposes. Like for any latent variable approach the latent variable can be \\u201comitted\\u201d by choosing q(z) to be a deterministic distribution.\\n\\n\\u201cAlso, the typical practise of training GAN [...] training gets more stable than standard GAN?\\u201d - We in fact update the discriminator for multiple steps with early stopping. The difference to GANs is that our approach is not adversarial, this removes a key reason for the instability of GAN training. Our derivations for this non-adversarial optimization are based on the additional KL term. Hence, we also believe that it is a major reason for the improved stability of EIM. \\n\\n\\u201cWill the same algorithm can be applied on more general latent variable models\\u201d -\\nThe general approach derived in section 4 can be applied to general latent variable models. \\n(For discussion on the KL term see the next bullet point)\\n\\n\\u201c or even implicit models like GAN does?\\u201d / \\u201c[...] , how to compute the regularization term KL(q(x) || q_t(x)) in EIM for normal generator which is typically implicit?\\u201d - \\nIf the KL term can not be computed in closed form it can still be approximated using samples as long as the model density is tractable. Note that we do not need to compute the KL divergence between the marginals KL(q(x) || q_t(x)) but only the KL between the conditionals KL(q(x|z) || q_t(x|z)) and the latent distribution KL(q(z)||q_t(z)). Even if the Marginal q(x) is intractable (as for GANs) the density of the conditionals is usually not. \\nNethertheless, investigating how to use EIM for typical GAN scenarios, e.g., generating image data, is an interesting direction for future work.\\n\\nWe hope we could clarify and remove some of the remaining doubts in our approach. We invite you to ask additional questions and engage in further discussion if this is not the case.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors proposed a new algorithm -- expected information maximization (EIM) -- for computing the I-projection of the data distribution to the model distribution, solely based on samples for general latent variable models, where the paper only focus on Gaussian mixtures models and experts. The proposed method applies a variational upper bound to the I-projection objective which is decomposable for each mixture components and the coefficients. Overall, I think the proposed technique quite sound and results are convincing. However, I do have some questions:\", \"questions\": \"-- The proposed EIM algorithm in Sec 4.1 seems to require \\u201cre-training\\u201d the discriminator every time the q function is updated. How this can be applicable to more realistic and complex models where training requires millions of gradient steps? Will the same algorithm can be applied on more general latent variable models or even implicit models like GAN does? As the paper has pointed out, the vanilla f-GAN itself can be seen as optimizing some forms of the I-Projection (reverse Kullback-Leibler divergence) objective.\\n-- I am a little confused about Sec 4.3. It seems that the latent variable z is not necessary for the proposed EIM? \\n-- Also, the typical practise of training GAN is also iterative between the generator and the discriminator, we sometimes need to update the discriminator with more steps than the generator? Shouldn\\u2019t it be the exactly same as the proposed EIM except we have an additional regularization term of KL(q(x) || q_t(x)) which might be the true reason why training gets more stable than standard GAN?\\n-- Similar to the previous two questions, how to compute the regularization term KL(q(x) || q_t(x)) in EIM for normal generator which is typically implicit?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper propose EIM an analog to EM but to perform the I-projection (i.e. reverse-KL) instead of the usual M-projection for EM. The motivation is that the reverse-KL is mode-seeking in contrast to the forward-KL which is mode-covering. The authors argue that in the case that the model is mis-specified, I-projection is sometimes desired as to avoid putting mass on very unlikely regions of the space under the target p.\", \"The authors propose an iterative procedure that alternates between estimating likelihood ratios and proposal distribution by minimizing an upper bound on the reverse-KL. The derivations seem correct. There are some experiments, majoritarily in the robotics domain. As the author point out, likelihood shouldn't be the right metric since you are now minimizing the reverse-KL---I would have liked the authors to spend some more time on the right way to evaluate---and actually use that new metric. Finally, there has been plethora of work on different objectives and distance between distributions as well as a zoo of lower/upper bounds on how to evaluate them---it would be interesting to have more connections to prior work.\", \"[Pros]\", \"clearly written\", \"clear motivation\", \"correct derivations\", \"interesting algorithm\", \"[Cons]\", \"experiments are a little weak (and focus on a single domain)\", \"would have liked to see an explicit algorithm for the optimization procedure\", \"small lack of clarity in the presentation of Section 4.1---notation q_t is not introduced for example\", \"more discussion about the evaluation metric\", \"linking it more to prior work\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents an algorithm to match two distributions with latent variables, named expected information maximization (EIM). Specifically, EIM is based on the I-Projection, which basically is equivalent to minimizing the reverse KL divergence (i.e. min KL[p_model || p_data]); to handle latent variables, an upper-bound is derived, which is the corresponding reverse KL divergence in the joint space. To minimize that joint reverse KL, a specific procedure is developed, leading to the presented EIM. EIM variants for different applications are discussed. Fancy robot-related experiments are used to evaluate the presented algorithm.\\n\\nOverall, the paper is in good shape wrt the logic and the writing. My main concerns focus on the novelty (compared to existing methods that are similar but not discussed) and the experiments. For the former, reverse KL has been exploited before, both in the marginal space [1] and the joint one [2]. Other detailed comments are listed below.\\n\\nAs Eq 4 is for matching two joint distributions, discussions/comparisons should be made to reveal the novelty of the presented EIM over existing methods such as [2], etc.\\n\\nIn Figure 2 (b), the experimental settings for adversarial learning are not fair, as the discriminator is not fixed there. \\n \\nIn Sec 4.4, it seems EIM is highly overlapped with VIPS. So what're the advantages of EIM here?\\n\\nIn Figure 3, how many steps for Generator and Discriminator are used for f-GAN? Does f-GAN finally converge? It would be helpful if some results are given to demonstrate the final state of each method.\\n\\nIn Eq. 9, adding the denominator q(z_i) will change the optimal solution. Why only add it to the first term?\\n\\nIn Section 5.3 and Figure 5, \\u201cSSD\\u201d might be a typo. \\n\\n[1] Adversarial Learning of a Sampler Based on an Unnormalized Distribution. AISTATS 2018.\\n[2] Symmetric Variational Autoencoder and Connections to Adversarial Learning. AISTATS 2019.\"}"
]
} |
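The M-/I-projection distinction on which this record's abstract and reviews turn can be written out explicitly. A standard textbook formulation (not copied from the paper), with p the data distribution and q the model:

```latex
% M(oment)-projection: forward KL, i.e. maximum likelihood; the model q
% must spread mass over every mode of the data distribution p.
q_M^{*} = \arg\min_{q} \mathrm{KL}(p \,\|\, q)
        = \arg\min_{q} \mathbb{E}_{x \sim p}\!\left[\log \tfrac{p(x)}{q(x)}\right]

% I(nformation)-projection: reverse KL; q may ignore modes it cannot
% represent and concentrate on the ones it can.
q_I^{*} = \arg\min_{q} \mathrm{KL}(q \,\|\, p)
        = \arg\min_{q} \mathbb{E}_{x \sim q}\!\left[\log \tfrac{q(x)}{p(x)}\right]
```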
HJlyLgrFvB | All Simulations Are Not Equal: Simulation Reweighing for Imperfect Information Games | [
"Qucheng Gong",
"Yuandong Tian"
] | Imperfect information games are challenging benchmarks for artificially intelligent systems. Reasoning and planning under uncertainty is a key step towards general AI. Traditionally, large numbers of simulations are used in imperfect information games, and they sometimes perform sub-optimally due to large state and action spaces. In this work, we propose a simulation reweighing mechanism using neural networks. It performs backwards verification against previous public actions and assigns proper belief weights to the simulations from the information set of the current observation, using an incomplete state solver network (ISSN). We use simulation reweighing in the playing phase of the game contract bridge, and show that it outperforms previous state-of-the-art Monte Carlo simulation based methods, and achieves better play per decision. | [
"Contract Bridge",
"Simulation",
"Imperfect Information Games",
"Reweigh",
"Belief Modeling"
] | Reject | https://openreview.net/pdf?id=HJlyLgrFvB | https://openreview.net/forum?id=HJlyLgrFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DWWN_6LZOi",
"BJeVPJj3ir",
"rJgyHy5_jr",
"BJxPbAYOjS",
"ByxET2cDsr",
"BJeru3k2Yr",
"BJx2XMoiFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745808,
1573855068283,
1573588790951,
1573588479042,
1573526716161,
1571712108840,
1571693092221
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2308/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2308/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2308/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2308/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2308/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2308/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"A method is introduced to estimate the hidden state in imperfect information in multiplayer games, in particular Bridge. This is interesting, but the paper falls short in various ways. Several reviewers complained about the readability of the paper, and also about the quality and presentation of the interesting results.\\n\\nIt seems that this paper represents an interesting idea, but is not yet ready for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper revision\", \"comment\": \"Thanks the reviewer for the insightful feedbacks.\\n\\nWe sincerely apologize for the grammar errors and have updated a revision to correct them. We have also cited the mentioned work in the new revision. For the experiments we use trick losses compared with optimal play assuming perfect information as the evaluation metric. \\n\\nHere is a summarization of our contributions. \\n\\nExisting methods, such as Counterfactual Regret Minimization, can handle incomplete information games with small belief space, e.g. Two-player Poker [1] (51*52/2 = 1326 possible card configuration), Hanabi [2] (only your own cards are not visible. #hidden states is around 10 millions and quickly drops when more cards are played) and Avalon the Resistance [3] (belief space is 60 for 5 players with different hidden roles). In contrast, in Contract Bridge, the hidden information space is large, since 2 out of 4 players' cards are unknown, which is about C(26, ``13) = 10 millions possibilities and unlike Hanabi, this number will not drop quickly over playing. \\n\\nIn this paper we want to address this issue by handling exponentially large hidden information space in Contract Bridge via a hybrid approach of sampling and neural network. We show that a pre-trained neural network on millions of data with ground truth score obtained by complete information (from DDS solver) can predict the next best action given incomplete information fairly accurately, without intensive belief sampling. Furthermore, by reweighing the samples using past action history, we show the performance can be further improved, compared to multiple baselines. In summary, with a tight computation budget we can reach decent performance without a single call to the expensive DDS solver. We also perform a number of ablation studies. \\n\\n[1] Superhuman AI for heads-up no-limit poker: Libratus beats top professionals, N. Brown and T. Sandholm. Science, 2017.\\n\\n[2] The Hanabi Challenge: A New Frontier for AI Research, N. Bard et al, arXiv 2018\\n\\n[3] Finding Friend and Foe in Multi-Agent Games, J. Serrino et al, NeurIPS 2019\"}",
"{\"title\": \"Reply to R3\", \"comment\": \"We thank the reviewers for the insightful feedbacks.\\nFirst, We sincerely apologize the grammatical errors and will make a revision to correct all of them.\", \"explanation_of_reweighing_results\": \"Q1 The understanding of ISSN reweighing is correct, the performance improves. The rows across \\u201cReweighing DDS\\u201d and \\u201cReweighing SSN\\u201d are not meant to be directly compared. Since the cost is small enough, we are trading accuracy for speed, and with a very tight computational budget we are still able to get decent results. We are doing more analysis on which situations the method performs better.\\n\\nQ2 We have tried to use Bayesian Action Decoder as a baseline method. However BAD requires to track the whole possible state space (millions in the beginning of Hanabi, but quickly reduces to hundreds ~ thousands, because cards are played and hinted). However, during the bidding phase the number of card combinations for an agent is C(52, 13), and does not decrease. This makes BAD intractable, and is one of the reasons that we choose to use simulation based Bayesian method. We have also tested BAD on a mini-version of bridge such that each player only holds 2 suits and 5 cards. We enumerate all the possible states and make BAD belief updates, but it cannot converge to the optimal solution. To the best of the author\\u2019s knowledge there is no other similar baseline work done for imperfect information games. \\n\\nQ3 To the best of the author\\u2019s knowledge GIB, Jack and Wbridge5 all use a similar engine for the playing phase, except for the human designed heuristic part. Some of the heuristics include information gathered from the bidding phase. Also, Jack and Wbridge5 are close sourced commercial software so we are not sure what heuristics are used exactly. Thus we compared against the simplest baseline.\", \"other\": \"Yes, all the cost is average trick loss from optimal play, and optimal play is always assumed perfect information\\n\\nWhy playing \\u201cQ\\u201d from AQ3 is unusual: When it goes J-K to a player, if you play A you will win the current trick, and since AK are already played, Q is very likely to win the next trick as well (if it is not trumped). Playing Q will lose this trick, although A will win sometime in the future, the agent basically throws away a trick. That\\u2019s why playing Q with AQ3 is very unlikely. We will put this explanation paragraph in the next revision. \\n\\nWhen \\u201chands\\u201d are referring to all 4 hands, we are switching to call it \\u201cdeal\\u201d\", \"data_augmentation\": \"we stated in the main text (end of 4.4.1) that we augment the data by shuffling suit.\"}",
"{\"title\": \"Reply to R2\", \"comment\": \"We thank the reviewers for the insightful feedbacks.\\nFirst, We sincerely apologize the grammatical errors and will make a revision to correct all of them.\", \"we_list_the_number_of_confused_terms_here\": \"1. Target contract (Explained in Section 2.2): number of tricks (winning rounds) required to make the contract. If the team makes the contract then receives positive rewards (if the contract is higher, e.g., 7spade, then the reward is higher), otherwise receives negative rewards. \\n2. Trump suit/card (Explained in Section 2.2): The suit/card that could be used to beat cards in a normal suit where no cards is available.\\n\\nThere might be some unclarity in the original text and we are working on rephrasing it.\", \"for_the_figures\": \"We do not expect readers to read into specific hand and cards, but show an overall idea of the training process. More detailed explanation for the figure are in the main text, for example, how ISSN is trained and what is the process of simulation reweighing.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper introduces two networks that are trained to predict DDS. While one is trained with perfect information, the other one (ISSN) with imperfect information.\\nThe ISSN is then used to compute posterior probability distribution (based on a history leading to the current state). The ideas is that such posterior distribution should perform better compared to uniform distribution when used in determinization process.\\n\\nI like the idea / motivation of the paper, but the authors could do a better job of explaining the motivation to people less familiar with techniques based on the determinization framework.\\nI also like the baselines that they chose to compare against - but the resulting comparison is far from perfect (see Issues section).\", \"minor_issues\": [\"Please do a careful language check - the grammar is wrong in many places (most notably plural/singular nouns).\", \"While this does not hurt the semantics, it makes it sometimes cumbersome to read.\", \"Since this work is mostly about using non-uniform distribution during the determinization process, I think it's worthwhile to also mention [Whitehouse, Daniel. Monte carlo tree search for games with hidden information and uncertainty. Diss. University of York, 2014.] as reference point.\"], \"issues\": [\"My biggest issue is the experimental and evaluation section. The reported improvements seem small, but most importantly - it is impossible to asses the relevance of the results.\", \"There are no confidence intervals or variance reported. Given the seemingly small improvements, this could easily be noise?\", \"While I am not certain, I assume that your numbers in Table 2 come from the 'test' split of the data - one would guess you used that split to stop the training (select the best model)?\", \"If that is the case, I don't think you can use the same split during the evaluation (even though you evaluate differently) - the reported numbers will be biased.\", \"Improvement Suggestions\", \"Please see my issues with the evaluation.\", \"You say you will release the data and code - that is great, do it!\", \"Your figures are way too large for what they do. I think you should make them much more compact and use the resulting space to improve and expand the experimental section. Please add lot more details about the evaluation.\"], \"summary\": \"I think the paper is looking into an interesting problem and is going in the right direction, but the experimental section is at this point no good enough to suggest an acceptance.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents an approach to playing imperfect information games, an \\u201cIncomplete State Solver Network\\u201d (ISSN) within the domain of contact bridge. The paper\\u2019s primary technical contributions are the network, and a large dataset of contact bridge games, which the authors make publicly available.\\n\\nI believe that the work that the authors did in regards to this paper is valuable. However, I had a very hard time following along with the paper. The major issue is the language, which I mean both in terms of particular phrases or terms going unexplained, and the grammar and phrasing of the sentences. \\n\\nThe most clear early example of terms and phrases going unexplained is the second section describing contract bridge. Many terms are used without any explanation. For example, what a target contract is or what a trump means in the case of contract bridge. This made the remainder of the paper, including the results difficult to understand.\", \"as_an_indication_of_the_grammar_and_phrasing_issues_in_the_paper_i_have_below_included_issues_from_just_the_first_page_of_the_paper\": [\"\\u201cIn real world\\u201d-> \\u201cIn the real world\\u201d\", \"\\u201chave to made decision\\u201d -> \\u201chave to make decisions\\u201d\", \"\\u201cresearchers steers towards\\u201d -> \\u201cresearchers have focused on\\u201d\", \"\\u201cChess, best action\\u201d -> \\u201cChess, the best action\\u201d\", \"\\u201cit is independent of opponent and action history\\u201d -> \\u201cit is independent of the opponent or action history\\u201d\", \"\\u201chistory actions\\u201d -> \\u201caction histories\\u201d\", \"\\u201cIn early ages there are heuristic based system to assign different prior\\u201d -> \\u201cEarly approaches employ heuristic based systems to assign different priors\\u201d\", \"\\u201ctry to convert the problem to a perfect\\u201d -> \\u201ctry to convert the problem into a perfect\\u201d\", \"\\u201cexplicitly with neural networks, and try to optimize it\\u201d -> \\u201cexplicitly with neural networks, trying\\u201d\", \"(not a phrasing issue but I would have appreciated a citation for the claim at the end of the third paragraph of the intro)\", \"\\u201cSimulation based approach usually requires large\\u201d -> \\u201cSimulation based approaches usually require a large\\u201d\", \"\\u201csequences of existing player and opponent\\u201d -> \\u201csequences of the existing player and opponent\\u201d\", \"\\u201ca few underlying complete information state\\u201d -> \\u201ca few underlying complete information states\\u201d\", \"This issue of readability came up in the figures as well. The figures in the paper are dense and very difficult to parse. There is little explanation in the text, and I found them difficult to glean anything from without an understanding of contact bridge.\", \"From what I can gather from the paper this is good and valuable work. However, I think the paper is not yet ready for publication as a communication of this work.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper deals imperfect information games, and builds a Bayesian method to model the unknown part of the current state, making use of the past moves (which constrain the game, here Contract Bridge).\\nThis new Bayesian method is compared to Monte Carlo style techniques, which are much more computationally expensive (they draw random samples of the unknown part of the information and then solve the perfect-information version of the game, for each simulated possible full-state).\\nThe work also introduces a Neural Network (NN) to estimate the best moves in the perfect-information version of the game (instead of making the full tree search of the optimal moves to play).\\nThe final model proposed (ISSN+SSN) uses a NN combined with Bayesian computation, using the NN at each time step of the past to update the belief about the current missing information.\", \"overall_the_paper_is_written_clearly\": \"as a non-expert in RL, I was able to follow rather easily what is done.\\n\\n\\nHowever, the results are not convincingly good. Maybe it is just the interpretation/contextualization that is insufficient. I have 3 important remarks:\\n\\n1. It is stated in the abstract \\\"that it [the new method] outperforms previous state-of-the-art Monte Carlo simulation based methods, and achieves better play per decision.\\\" However, the costs in the 'reweighing DDS' section (table 2) are lower (better) than those of the 'reweighing SSN' section. The results show improvement upon using some reweighing, as ISSN+sthg is better than without ISSN (although by a very small relative decrease in cost).\\n1.a. I understand that the NN-based methods (last three lines of table 2) are incomparably faster than the DDS-based approaches (baseline). And the total rate of tricks loss per initial state remains low (6% seems small as an absolute value). But, they are almost double of the DDS-based loss rates ! It is not clear how this can be considered 'outperforming' or 'better play per decision'. If you can explain in which sense those are good results for the SSN vs DDS, please do so. If they are not, please do acknowledge it, and eventually contextualize (maybe a 2-fold increase of loss rate is okay given the large speedup obtained?).\\n1.b. in the same way, it is not clear how ISSN outperforms its no-memory counterparts, given the loss decreases are essentially negligible. Maybe one needs to increase the value of T to make ISSN's success more obvious?\\n2. In addition, why not compare this Bayesian method with other Bayesian methods (quoted at the very end of section 3)? Here the paper focuses on comparison with 'deterministic' methods, i.e. methods which sample complete states (simulations) to then solve the complete-information version of the game (via exhaustive tree search). Those are what I would call brute-force methods.\\nHowever, although simulations may be done (expensively) for contract bridge, in some other cases this kind of sampling may become prohibitively expensive, so that only Bayesian methods are left. If this kind of argument is the justification for your work, please make it more explicitly. 
Otherwise please correct me and explain the context (the role of Bayesian methods, what has been done and what hasn't been) more clearly.\\nIf you are to situate the work within the Bayesian-based approaches, the question remains: is ISSN+SSN better than other Bayesian-inspired, \\\"information-completing\\\" methods ? \\nTo summarize, the paper convincingly shows that Bayesian-NN methods can compete with expensive brute force methods, but it would be very nice to see how the method introduced compares with other recent Bayesian approaches. (Or if no such comparison can be done, explain why).\\n\\n3. In addition, here it seems your baseline is based on GIB, which was winning tournaments ~20 years ago: why not compare with Wbridge5 or JackBridge ? I think you need to at least explain your choice.\\n\\nBecause of these weak points in terms of experimental results, I lean to reject the paper. However, depending on the authors answers and clarifications on my 3 important remarks above, I am ready to change my rating.\\n\\n\\n\\nAlso, I have a couple of more or less minor remarks to improve the paper:\", \"the_definition_of_the_cost_is_not_explicitly_given\": \"\\\"We set up the evaluation metric by tricks loss per deal. The ground truth play is DDS result for the original deal and all simulation deals. We compare all the following algorithmS with the ground truth to get costs of the policy.\\\"\\nYou should rephrase this to make the definition of 'cost' very explicit. Is it simply the average rate of lost tricks (per given set of 13 cards in the agent hands) ?\\nThis is quite crucial and make the reading of results a bit complicated (especially since results fo not match the conclusion announced in the abstract).\\n\\n\\\"DDS) 3 computes the maximum tricks each side can get if all the plays are optimal\\\"\\nIn this place and a couple others, you should explicitly recall whether you mean 'assuming perfect information', or not. Sometimes it can get confusing, and a bit of repetition won't hurt. I think I understood correctly that DDS solves (perfectly) the perfect information game, but at times I thought other methods also made use of the full information (?)\\nFigure 4c is nicely explained and this part really illustrates well the idea of the method, I like it. Although, the notation AQ3 was not obvious for me at first, and I think it is worth improving this figure, making use of the right-hand-side space, to make it an autonomously explanatory figure.\\n'position': the term is not defined. I would guess it means the current state of the game (the agent's hand and the cards played in the past or during the current trick). This should be said explicitly.\\nAlso \\\"hands\\\" seem to refer to the 4 hands (1 for each player) (is that correct?)\\n\\nThere are wrong singular/plurals ('s'/no 's') in several places. This is simple to correct and should be corrected.\\n\\nin section 7, ablation studies. This is a very nice study, but you should precise how much you augment the data here (or recall by how much, if you say it earlier).\", \"this_passage_is_unclear_and_should_be_rephrased_for_clarity\": \"\\\"For simplicity, in this work we just use discarding information.\\\"\", \"this_passage_is_slightly_unclear_and_should_be_rephrased_for_clarity\": \"\\\"For each simulation, the moves with the optimal number of\\ntricks are marked with optimal moves. We sum up the optimal moves counter in these k simulations.\\nThis results in a counter for each legal move and we treat this as the training target. 
The process is\\ndescribed in Figure 1.\\\"\\\"\"}"
]
} |
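The simulation-reweighing idea debated throughout this record reduces to weighting sampled complete states by how well they explain the public action history. A schematic sketch, assuming per-action probabilities from the network; issn_action_prob is a hypothetical stand-in for the paper's ISSN, not its actual interface:

```python
def reweigh_simulations(simulations, history, issn_action_prob):
    """Assign belief weights to sampled complete states ("simulations").

    Backwards verification: a candidate deal is plausible in proportion to
    the probability the network assigns to each observed past action
    under that deal.
    """
    weights = []
    for deal in simulations:
        w = 1.0
        for observation, action in history:
            w *= issn_action_prob(deal, observation, action)
        weights.append(w)
    total = sum(weights)
    if total == 0.0:                       # degenerate case: fall back to uniform
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in weights]    # normalized belief over the information set
```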
HyxyIgHFvr | Truth or backpropaganda? An empirical investigation of deep learning theory | [
"Micah Goldblum",
"Jonas Geiping",
"Avi Schwarzschild",
"Michael Moeller",
"Tom Goldstein"
] | We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting. | [
"Deep learning",
"generalization",
"loss landscape",
"robustness"
] | Accept (Spotlight) | https://openreview.net/pdf?id=HyxyIgHFvr | https://openreview.net/forum?id=HyxyIgHFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Yvguv1SCw",
"HJxg3bwisS",
"S1l7D-voiB",
"ryxZZWwsiS",
"SJgN-L5mcH",
"Bygm_Tz0KS",
"H1x75kATFr",
"HJeQuWDTtr",
"SygoTRPiFB",
"r1xCi-IjKr",
"SJlPOVxutB",
"BJgFUN6OB",
"ryeEsMcLOS",
"B1xvkIhNOB",
"H1gFcHcmur"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798745779,
1573773736306,
1573773658625,
1573773560835,
1572214267686,
1571855723454,
1571835787071,
1571807595188,
1571679939219,
1571672486199,
1571452014656,
1570748023516,
1570312859633,
1570190814891,
1570117008535
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2307/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2307/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2307/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2307/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2307/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2307/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2307/AnonReviewer3"
],
[
"~Amartya_Sanyal2"
],
[
"ICLR.cc/2020/Conference/Paper2307/Authors"
],
[
"~Chulhee_Yun1"
],
[
"ICLR.cc/2020/Conference/Paper2307/Authors"
],
[
"~Alex_Matthew_Lamb1"
],
[
"ICLR.cc/2020/Conference/Paper2307/Authors"
],
[
"~Pedro_Tabacof1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The authors take a closer look at widely held beliefs about neural networks. Using a mix of analysis and experiment, they shed some light on the ways these assumptions break down. The paper contributes to our understanding of various phenomena and their connection to generalization, and should be a useful paper for theoreticians searching for predictive theories.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to reviewer #3\", \"comment\": \"We appreciate the positive feedback, and we thank the reviewer for the thoughtful comments. We have added results from further suboptimal minima experiments to the appendix.\"}",
"{\"title\": \"Reply to reviewer #2\", \"comment\": [\"We thank the reviewer for the time and effort spent on our paper. We agree about the note concerning the breadth of our experiments and have made the following additions to the paper.\", \"Experiments have been run on CIFAR-100 data and the results, which agree with our previous findings, are in the appendix.\", \"Our study of suboptimal minima had included experiments with ResNet-18. Results are in the appendix. As mentioned above, we have since added these experiments on CIFAR-100 for diversity of data sets.\", \"The section on rank has been updated to reflect further experiments with new architectures. Specifically, we tested ResNet-18 without skip connections and MLP. See the updated appendix for full details and results.\"]}",
"{\"title\": \"Reply to reviewer #1\", \"comment\": [\"Thank you for your thoughtful input on our work. We address your comments in order:\", \"Our work here is focused on finding suboptimal minima, and we show that certain poor initializations motivated by theory can lead to this. We agree that suboptimal local minima which arise from bad initializations in standard practice would be interesting to study in future work.\", \"We have changed the conclusion of the NTK section to more clearly discuss and conceptualize our findings, and we have added additional plots.\", \"The order of the topics has been fixed, thank you for bringing this to our attention.\", \"We have added details regarding the confidence intervals.\", \"The constant \\\\mu is chosen heuristically by studying the norm of parameter vectors that result from standard weight decay, and setting \\\\mu to be higher to make sure that networks trained with norm-bias indeed have a higher norm than those trained with weight decay. This explanation is now included in the section on weight norms.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors seek to examine carefully some assumptions investigated in the theory of deep neural networks. The paper attempts to answer the following theoretical assumptions: the existence of local minima in loss landscapes, the relevance of weight decay with small L2-norm solutions, the connection between deep neural networks to kernel-based learning theory, and the generalization ability of networks with low-rank layers.\\n\\nWe think that this work is timely and of significant interest, since theoretical work on deep learning has made significant progress in recent years.\\n\\nSince this paper seeks to provide an empirical study on the assumptions in deep learning theory, we think that the results are somehow weak as the paper is missing extensive analysis, using several well-known datasets and several deep architectures and settings. For example, only the CIFAR-10 dataset is considered in the paper, and it is not clear whether the obtained results will generalize to other datasets. This also goes to the neural network architecture, as only MLP is considered to answer the assumption about the existence of suboptimal minima, while only ResNet is considered to study the generalization abilities with low-rank layers. We think that this is not enough for a paper that tries to provide an empirical study.\\n\\n------- \\nReply to rebuttal\\n\\nWe thank the authors for taking into consideration our previous comments and suggestions, including going beyond MLP and adding experiments on other datasets. For this reason, we have increased the rating from \\\"Weak Accept\\\" to \\\"Accept\\\".\"}",
"{\"comment\": \"Thank you for letting us know about your paper. The phenomenon of low-rank hidden states is an interesting direction for research.\", \"title\": \"Interesting research direction\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"The authors look at empirical properties of deep neural networks and discuss their connection to past theoretical work on the following issues:\", \"Local minima: they give an example of setting where bad local minima (far from the global minimum) are obtained. More specifically, they show such minima can be obtained by initializing with large random biases for MLPs with ReLU activation. They also provide a theoretical result that can be used to find a small set of such minima. I believe this is a useful incremental step towards a better understanding of local minima in deep learning, although it is not clear how many practical implications this has. One question that would ideally be answered is: in practical settings, to what degree does bad initialization cause bad performance specifically due to bad minima? (as opposed to, say, slow convergence or bad generalization performance).\", \"Weight decay: the authors penalize the size of the norm of the weights as it diverges from a constant, as opposed to when it diverges from 0 as is normally done for weight decay. They show that this works as well or better than normal weight decay in a number of settings. This seem to put into question the belief sometimes held that solutions with smaller norms will generalize better.\", \"Kernel theory: the authors try to reproduce some of the empirical properties predicted in the Neural Tangent Kernel paper (Jacot et al., 2018) in particular by using more realistic architectures. The results, however, do not appear very conclusive. This might be the weakest part of the paper, as it is hard to draw anything conclusive from their empirical results.\", \"Rank: The authors challenge the common belief that low rank provides better generalization and more robustness towards adversarial attacks. When enforcing a low or high rank weight matrices during training on ResNet-18 trained on CIFAR-10, the two settings have similar performance and are similarly robust to adversarial attacks, showing at least one counter example.\", \"I think overall this is a useful although somewhat incremental paper, that makes progress in the understanding of the behavior of neural networks in practice, and can help guide further theoretical work and the development of new and improved training techniques and initialization regimes for deep learning.\", \"Other comments/notes:\", \"minor: the order of the last 2 sub topics covered (rank and NTK) is flipped in the introduction, compared to the abstract and the order of the chapters\", \"in the table confidence intervals are given, it would be nice to have more details on how they are computed, (e.g. +- 1.96 * std error)\", \"how is the constant \\\\mu in the norm-bias chosen?\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors seek to challenge some presumptions about training deep neural networks, such as the robustness of low rank linear layers and the existence of suboptimal local minima. They provide analytical insight as well as a few experiments.\\n\\nI give this paper an accept. They analytically explore four relevant topics of deep learning, and provide experimental insight. In particular, they provide solid analytical reasoning behind their claims that suboptimal local minima exist and that their lack of prevalence is due to improvements in other aspects of deep networks, such as initialization and optimizers. In addition, they present a norm-bias regularizer generalization that consistently increases accuracy. I am especially pleased with this, as the results are averaged over several runs (a practice that seems to be not so widespread these days). \\n\\nIf I were to have one thing on my wish list for this paper, it would be the small issue of having some multiple experiment version of the local minima experiments (I understand why it is not all that necessary for the rank and stability experiments).\\n\\nNevertheless, I think this paper gives useful insight as to the behavior of deep neural networks that can help advance the field on a foundational level.\"}",
"{\"title\": \"Low Rank Representations\", \"comment\": \"1. Given the discussion about low rank hidden states, I thought I would point out our work on low rank representations and its effect on adversarial robustness.\", \"https\": \"//arxiv.org/abs/1804.07090\\n\\n2. While Linear operators can make the pre-activations low rank, (and the skip connection may or may not increase its rank), the non-linear activation function often increases it to give high rank hidden states (eg. Appendix A in the paper above).\"}",
"{\"title\": \"Interesting previous work\", \"comment\": \"Hello Charlie,\\n\\nThank you for bringing your work to our attention. We agree that it is highly relevant, and we are eager to discuss and contextualize these results in the next version of this submission. We agree that previous work on the existence of local minima was limited in comparison to [1] and find that your work bridges the gap between these and our work.\\n\\nIn terms of differences, Theorem 1 from [1], to our understanding, applies to networks with a single hidden layer and squared error, whereas Theorem 1 in our work applies to networks of arbitrary depth and any continuous loss function. Furthermore, we do not assume that all data points are unique and that output is one-dimensional.\\n\\nAside from these more technical terms, we think it is crucial to note that even if the data can be fitted with a linear classifier of dimension m, our work shows that any network with a smaller width n still contains spurious local minima, corresponding to linear classifiers with rank <= n. What we further find interesting is that our result can be recursively extended to local minima at which a network behaves like a shallower subnetwork on the training data. This extension may not follow directly from [1] since outputs are univariate. Our proof also applies to networks with convolutional layers since they can form the identity necessary for our construction. \\n\\nWe further like the idea of generalizing to other activation functions. We chose ReLUs for simplicity and their wide use, but any activation functions which are affine-linear with nonzero slope on some open interval are equally suitable under our proof technique. Such a corollary inspired by your variant would be a good fit for the next version.\\n\\nBest Regards,\\nThe Authors\\n\\n[1] Small nonlinearities in activation functions create bad local minima in neural networks, ICLR 2019\"}",
"{\"comment\": \"Dear authors,\\n\\nI enjoyed reading your submission. Thanks for the interesting paper! \\n\\nAfter reading the paper, I wanted to bring to your attention a paper of ours on the existence of bad local minima that seems quite relevant:\\n[1] Small nonlinearities in activation functions create bad local minima in neural networks, ICLR 2019, https://openreview.net/forum?id=rke_YiRct7\\n\\nIn particular, Theorem 1 of [1] constructs local minima of neural networks whose predictions perform just as well as the linear predictor, and shows that for general datasets that these local minima are not globally optimal. As far as I understand, the key idea of the proof of Theorem 1 in this submission looks very similar to [1]: pushing the bias high enough so that the network becomes linear. In my opinion, the theoretical results in this submission and [1] are highly relevant, so it would be very helpful if the authors could compare them in the paper.\\n\\nI\\u2019d also like to note that Theorem 1 of [1] also implies that even with slightest nonlinearity (slope 1+\\\\epsilon on positive side and slope 1 on negative side) and for general datasets, there exist bad local minima. Furthermore, I believe the assumptions in [1] are milder than the other previous results cited in Section 2 of this submission.\\n\\nOverall, I believe [1] is highly relevant to this submission. Thus, we would appreciate it if the authors could cite our paper as well as contextualize their results with ours. We hope that the authors will be able to bring out the differences and potential subtleties, if any.\\n\\nThank you!\\nCharlie Yun\", \"title\": \"Relevant work on the existence of bad local minima\"}",
"{\"comment\": \"Thank you for the insightful comments:\\n1) We only consider low-rank linear operators since this topic is studied in the generalization and robustness works we discuss such as Neyshabur et al. (2017) and Langenberg et al. (2019). However, we agree that low-rank hidden states may also be an interesting topic for empirical work.\\n2) The rank of linear operators in ResNets indeed contributes differently to the behavior of the network than the rank of linear operators in MLPs. In the case of linear ResNets, for example, if the applied weight matrix is the negative identity, then after a skip connection, the combined layer would be rank-0. In fact, the combined layer which includes a skip connection may be a low or high rank affine transformation. However, in the non-linear case, it is not clear what the analogous rank measurements would be since there are nonlinearities between affine transformations and skip connections, and thus, we cannot collapse layers and skip connections into one combined affine transformation.\\nWe chose ResNet-18 in order to determine if intuitions developed by theory transfer to a realistic architecture. Even so, we have also tested these claims in the context of MLPs and found that the same results hold as do for ResNets in the case of generalization, and more notably, a naturally trained MLP with RankMax achieves higher robust accuracy than the same MLP trained with RankMin. For example, an MLP with RankMin achieved 49.94% robust accuracy on CIFAR-10 against the small-radius PGD attack from the paper, while the same MLP with RankMax achieved 51.45% robust accuracy. We may include these MLP results in the next version of our paper if there is interest.\", \"title\": \"Rank in residual networks\"}",
"{\"comment\": \"1. As far as I can tell, this section discusses the issue of the weight matrices in a deep network being low-rank. However the discussion in the literature focuses on both the rank of the weight matrices as well as the rank of the hidden states. \\n\\nIt's not clear to me how these issues are related to each other, especially in non-linear networks. Do you think the results in your paper also have some relevance for the study of low-rank hidden states or would you consider it to be separate issue? \\n\\n2. Do you think it's worth analyzing the resnet and non-resnet cases separately here? If I think about a linear neural network, the residual network variant will always amount to a full-rank affine transformation on each layer as a result of the skip connection, even if the applied weight matrix W is low-rank.\", \"title\": \"Comments about Section 5 on Rank\"}",
"{\"comment\": \"Hi Pedro,\\nThank you for pointing out this paper. We agree that the relationship between explicit regularizers, like weight decay and norm-bias, during training and Bayesian priors at inference may be an interesting direction for future work.\", \"title\": \"Interesting connection\"}",
"{\"comment\": \"The work Bayesian Neural Network Ensembles by Pearce et al (https://arxiv.org/abs/1811.12188) proposes to use an ensemble of neural networks that are each trained with L2 regularization from a normal distribution sample. They show this is form of approximate Bayesian inference.\\n\\nThat idea is somewhat similar to the \\\"norm-bias\\\" regularizer proposed in this paper, with the difference that the weights attracted to a normal distribution sample rather than a fixed value.\\n\\nI just wanted to point out this connection, which may be relevant to explain why norm-bias works.\", \"title\": \"Bayesian Neural Network Ensembles connection\"}"
]
} |
BJxAHgSYDB | Learning to Rank Learning Curves | [
"Martin Wistuba",
"Tejaswini Pedapati"
] | Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations. In this work, we present a new method that saves computational budget by terminating poor configurations early on in the training. In contrast to existing methods, we consider this task as a ranking and transfer learning problem. We qualitatively show that by optimizing a pairwise ranking loss and leveraging learning curves from other data sets, our model is able to effectively rank learning curves without having to observe many or very long learning curves. We further demonstrate that our method can be used to accelerate a neural architecture search by a factor of up to 100 without a significant performance degradation of the discovered architecture. In further experiments we analyze the quality of ranking, the influence of different model components as well as the predictive behavior of the model. | [
"curves",
"ranking",
"model",
"curves many",
"machine learning methods",
"hyperparameter",
"neural architecture optimization",
"expensive",
"work"
] | Reject | https://openreview.net/pdf?id=BJxAHgSYDB | https://openreview.net/forum?id=BJxAHgSYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"xlZALUMl5D",
"ryx1Vnf5jB",
"BJedSBmDoB",
"B1lInjy7jH",
"rklD0QAzjH",
"rylmAApfsS",
"HylXcz8GcB",
"rJlhVTqTKS",
"S1xmXOVhYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745748,
1573690407429,
1573496127994,
1573219245709,
1573213134900,
1573211850709,
1572131467182,
1571822899681,
1571731483274
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2306/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2306/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2306/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2306/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2306/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2306/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2306/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2306/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Authors propose a new way of early stopping for neural architecture search. In contrast to making keep or kill decisions based on extrapolating the learning curves then making decisions between alternatives, this work learns a model on pairwise comparisons between learning curves directly. Reviewers were concerned with over-claiming of novelty since the original version of this paper overlooked significant hyperparameter tuning works. In a revision, additional experiments were performed using some of the suggested methods but reviewers remained skeptical that the empirical experiments provided enough justification that this work was ready for prime time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the modifications\", \"comment\": \"I have read the updated draft and I think the modifications are great. But as my concerns are mostly about improvement (but not correctness or contribution), I'd like to maintain my current rating for weak accept. For the learning-by-learning algorithms using LBFGS paper, I'm actually just generally referring to the series of papers on gradient-based hyper-parameter optimization [1,2] and learning-to-learn approaches for optimization algorithm learning [3]. Thanks!\\n\\n[1] Maclaurin, Dougal, David Duvenaud, and Ryan Adams. \\\"Gradient-based hyperparameter optimization through reversible learning.\\\" International Conference on Machine Learning. 2015.\\n[2] Pedregosa, Fabian. \\\"Hyperparameter optimization with approximate gradient.\\\" arXiv preprint arXiv:1602.02355 (2016).\\n[3] Andrychowicz, Marcin, et al. \\\"Learning to learn by gradient descent by gradient descent.\\\" Advances in neural information processing systems. 2016.\"}",
"{\"title\": \"DARTS results\", \"comment\": \"We trained the architecture discovered with random search + LCRankNet using an equivalent training scheme for comparison to the DARTS outcome. This network achieves a classification error of 2.99% after searching for 20 hours. The authors report two results for DARTS. The first order version needs 36 hours and results in 3.00% error and the second order version needs 96 hours and results in 2.76% error. This result indicates that our method requires less time. More importantly, we reveal that early termination of training jobs can be as efficient as the parameter sharing scheme and hence identifying a new direction for efficient NAS. We are not aware of any NAS work claiming efficiency (by e.g. parameter sharing) that considers early termination (e.g. by using Hyperband or other related work) as a baseline. We strongly believe that this is a discussion worth to be held.\\n\\nWe added this result and a short paragraph to the related work.\\n\\nAs requested earlier, would you mind to elaborate on Q4? We would like to plot what you are asking for but as explained earlier we do not fully understand what you are asking for.\"}",
"{\"title\": \"Addressing your comments in the revised version\", \"comment\": \"Thank you very much for your feedback.\\n\\nIn the uploaded revised version we hope to have addressed all of your comments. We discuss the listed related work and conducted additional experiments to compare to Successive Halving and Hyperband. We further added the last observed value baseline.\\n\\nThe model is retrained whenever new data is collected.\\n\\nIn our experiment the training protocol is not part of the search. This is a common setup in the NAS community. In order to consider this as well, we require the learning rate to be another input for the model and have training data with different learning rates. Fundamentally different learning rates due to different learning rates will be a challenge for our method. However, there is no existing work we are aware of that would not fail completely if the learning rate schedule is suddenly changed. Lets assume we use a learning rate schedule with restarts and different runs can have restarts at different intervals. All methods based on Successive Halving will fail because it will terminate those runs which have been restarted most recently. Also Domhan et al. assume a monotonously increasing learning curve and would not be able to fit such learning curves. All other works would only work if sufficiently many learning curves with different settings are observed. But this is also the scenario where our method will work. We hope to follow up on this discussion if you do not agree with us.\\n\\nBayesian optimization is able to manage the trade-off between exploring and exploiting the search space. A random search can be considered a search that only explores. In our experiment we sampled 200 architectures from more than 10^11 candidates at random. We believe that in this scenario even a more advanced search algorithm will be still in full exploration mode and not behave different to random search. However, the experiments for SVHN give you an idea what will happen if most learning curves are very similar which is probably the scenario you have in mind. As you see, more time is spend because it is harder to rule out specific learning curves early on. This is a general problem for all learning curve prediction methods. If really the case happens that BO is in full exploitation mode, one would probably consider to stop the search soon.\\n\\nWe believe that we can freely decide how our algorithm operates. We will consider to create a version which runs in a Successive Halving setting in the future. We have no influence on how the baselines work. To have a fair comparison to our initial baselines, we followed the very same setup. Based on the reviewers requests, we added Successive Halving and Hyperband. However, they don't perform outstanding such that we see no evidence that our method would perform better in this setup.\\n\\nDelta allows to trade-off precision and recall. It allows to decrease the training time at the risk of increasing the regret. Therefore you cannot really consider it sensitive it is more some lever you can use to change the behavior of the algorithm. We added a more elaborate discussion to the paper.\\n\\nWe are happy to answer any follow-up questions.\"}",
"{\"title\": \"Addressing your questions and results for DARTS\", \"comment\": \"We are very grateful that you took your time to review our work.\\n\\nQ1. We think that our method is applicable for both NAS and general ML but we admit that we only conducted experiments for NAS. Therefore, it is fair to assume that this paper focuses on NAS. In this short time we are not able to conduct an experiment for general ML. However, we are currently preparing a comparison against state-of-the-art NAS methods that leverage parameter sharing. We hope to finish this experiment over the weekend.\\n\\nQ2. Assuming that the learning curve is monotonous increasing just the maximum is exactly what all methods based on successive halving use. Furthermore, the last seen value is a strong baseline as pointed out by reviewer 3. For these two reasons alone this kind of modelling makes a lot of sense to us. However, we additionally added convolutional layers. They allow the model to learn to use first and second order gradients of the learning curve if this is considered useful. The resulting embedding vector is then used in another layer which may decide how important the features are. What ablation study would you have in mind? The problem with using all features is that the model will overfit. What we could do is use max pooling with strides. Since the architecture as modelled worked fine we did not consider changing it. As our ablation study shows, the learning curve component turns out to be important in its current form. We acknowledge that there might be scope for further improvement.\\n\\nQ3. A change in the search space will lead to a change in the representation. If this representation is again a sequence of tokens, our model will not change at all. If you refer to whether this model can transfer to new search spaces, then likely not. However, all learning curve prediction methods that are data-driven face this problem. That is a general property of machine learning algorithms. We refer at this point to the discussion with reviewer 3.\\n\\nQ4. We think this was addressed with figures 4 and 5 and their discussion. Can you outline in more detail what experiment you expect?\\n\\nQ5. We agree that there is always some uncertainty and that's why we investigated this issue and the probabilistic estimation of our model in section 4.4. We further compared our method to the work by Baker and empirically demonstrated to perform better. Can you outline in which case Baker would be able to outperform our method?\\n\\nWe are looking forward to answer your follow-up questions. We will provide the results for the comparison against DARTS as soon as possible.\"}",
"{\"title\": \"Clarification of the method and addition of further baselines\", \"comment\": \"Thank you very much for your time to prepare this review.\\n\\nWe rewrote section 3 to make things a little bit more clear. The choice of the logistic function is common practice in the learning-to-rank community. We clarified that in the text and provided a reference. We elaborated the calculation of p_{i,j} as well. Basically, whenever model i is better than model j, p_{i,j} = 1, if they are equally good the value is 0.5 and if model j is better the value is 0. Whether one model is better than the other is defined by the given learning curves. With posterior we refer to p_{i,j}. See also the provided reference. \\\\mathcal{D} is the training data for f, that is correct. Since we actually did not use this notation anywhere, we removed it to avoid confusion. Thanks for spotting the typo.\\n\\nWe added a discussion Successive Halving, Hyperband and BOHB as suggested by the reviewers. We further added Successive Halving and Hyperband as additional baselines. Could you point us to the papers on learning-by-learning algorithms using LBFGS? We are happy to discuss this work as well. At the moment we are preparing a comparison to alternative NAS methods. This is still work in progress and we'll hopefully finish this over the weekend.\\n\\nPlease let us know if you have further comments or ideas.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper considers the problem of automatic early termination in hyper-parameter search and neural architecture search. The authors propose to form this problem as learning curve ranking and transfer learning. Unlike most previous approaches for learning curve prediction, which estimates the probability of whether the current model is better than the current best model or not by first extrapolating the learning curve and then invoke a heuristic measure, this paper proposes to predict the probability directly. The pairwise comparison probability is modeled as the logistic function of the difference of a scoring function f, where f is modeled as a neural network with the learning curve data as the input, and constructed with three components, including a learning curve component, an architecture component and a data set component. The neural network f is trained with some meta dataset, and then the early termination is then decided based on this pairwise comparison probability. The paper applied the proposed early termination approach to the neural architecture search of five image classification data sets, and evaluated the performance in terms of the Spearman correlation of the learning curve ranking and the regret and time for the architecture search. It also analyzes the learning curve prediction characteristics through a few concrete examples, followed by some ablation analysis.\\n\\nThe proposed approach is novel to my knowledge, and the numerical performance is also satisfying and convincing. However, there are a few issues that this paper should better improve upon. \\n\\nFirstly, the methodology description at the beginning of section 3 is not clear enough. In particular, the motivation of choosing logistic function in (1) is not explained, and how the calculation of p_{i,j} from the final learning curves is done is also not elaborated. Some of the terminologies are also not clearly defined or specified. For example, what does \\\"posteriors\\\" mean (cf. the line before (2))? What is the meta-knowledge \\\\mathcal{D}? Is it the one used to train the neural network of f? There is also a small typo in line 2 of Algorithm 1, where d should be D probably. \\n\\nSecondly, although the numerical experiments are convincing within the framework of learning curves prediction, they are not sufficiently convincing when it comes to the scope of the neural architecture search (NAS) or hyper-parameter optimization (HPO). Comparisons with state-of-the-art NAS and HPO algorithms that do not use learning curves prediction (e.g., HyperBand, learning-by-learning algorithms using LBFGS, etc.) are not mentioned or compared with. The authors may either want to add comparisons with those algorithms, or provide some more applications of learning curve prediction to showcase the flexibility of the proposed approach.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Learning curve prediction methods try to predict the final performance of a candidate model before it gets fully trained. In this way, learning curve prediction can act as a fast method for performance measurements in AutoML. Compared with previous approaches, the proposed method allows for transferring useful knowledge among different datasets.\\n\\nWhile the proposed method is simple, the authors have shown promising results. However, I have some concerns about the motivations and usage of the proposed method. Please see the questions below:\\n\\nQ1. Learning curve prediction is a general approach that can be combined with many zero-order optimization methods. Does this paper focus on learning curve prediction on general AutoML problems or just NAS?\\n- If the authors want to target general AutoML problems, they should perform experiments with (1) more datasets (come from various domains) and consider more (2) search algorithms. \\n- For (1), the authors can follow Auto-sklearn and Auto-weka, or some experiments on graph (e.g., GCN) or some experiments on language modeling. For (2), they can combine the proposed approach with Bayesian optimization or genetic algorithms. CNN has nice transfer learning ability, to show the wide applicability of the proposed approach is important.\\n- If authors want to target at NAS, then the proposed method is not useful. Parameter sharing is a better method in NAS for fast performance evaluation. The authors need to compare with some recent NAS papers in CV (e.g., DARTS).\\n\\nQ2. \\\"Learning curve component\\\". Learning curves are naturally a sequence of data points. The proposed method simply does a maximum pooling over all positions, which ignores the sequence of data. Could the authors give some explanations? Is it better to carry an ablation study on this point?\\n\\nQ3. \\\"Architecture component\\\". How will the changes in the search space affect the proposed method? \\n\\nQ4. Could the authors random draw many architectures from \\\"NASNet search space\\\" and then plot their learning curve? It is better to show what kinds of curves will the proposed method likely to give early stops.\\n\\nQ5. Another main pitfall of the proposed method cannot offer a probabilistic estimation, which can be done by Baker et al. (2018). Since the model is early stopped, it is naturally that there are some uncertainty in the estimated ranks.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a new method to rank learning curves of neural networks which can be used to speed up neural architecture search.\\nCompared to previous work, the learning curve model not only takes hyperparameter configurations into account, but by training it on offline generated data, it is able to model learning curves across different datasets.\\nApplied to a simple neural architecture search strategy, the proposed method achieves higher speed-ups on image classification tasks than other methods from the literature.\\n\\nWhile the method seems interesting, I don't think the paper is ready for acceptance yet, since it a) misses some important details and b) the empirical evaluation is not sufficient.\", \"more_precisely_the_following_points_need_to_be_addressed\": [\"In section 3, the paper says that all automated methods follow broadly the same principal that the first model is trained to completion. This is not correct, commonly used methods, such as Hyperband (Li et al.) or BOHB (Falkner et al.), use successive halving (Jamieson et al.) which trains a batch of configurations for just a minimum budget and then already discards poorly performing configurations. I also miss a discussion of these methods in the related work section.\"], \"hyperband\": \"A Novel Bandit-Based Approach to Hyperparameter Optimization\\nLi, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., and Talwalkar, A.\\nJournal of Machine Learning Research 2018\", \"bohb\": \"Robust and efficient hyperparameter optimization at scale\\nS Falkner, A Klein, F Hutter\\nProceedings of the 35th International Conference on Machine Learning\\n\\nNon-stochastic best arm identification and hyperparameter optimization\\nK Jamieson, A Talwalkar\\nArtificial Intelligence and Statistics, 240-248\\n\\n\\n - Is the learning curve model updated during the optimization process with the new observed data? If not, is there any way the model can adapt to new data? Also, what happens if learning curves are fundamentally different to the training data, e.g if different learning rate schedules are used than for generating the offline data?\\n\\n - A simple baseline that is missing, is to use the last observed value as approximation for the final performance which often works competitively to more powerful learning curve extrapolation methods (e.g see Klein et al.).\\n\\n - The method is only applied for a simple random search strategy, however, in practice, one would use more sophisticated methods, such as for example Bayesian optimization. The question is, how effective is the proposed method in this setting, since the distribution of learning curves might be dramatically different and biased towards more well-performing configurations with almost identical learning curves.\\n\\n- I think the experiments would be more convincing, if the method shows strong performance when deploy in commonly used NAS/HPO methods, such as Hyperband or successive halving. This should be straight forward, since decisions which configuration is promoted to a higher budget can be made based on the model instead of just the last observed value.\\n\\n- What is delta in the experiments? How sensitive is this hyperparameter in practice?\", \"minor_comments\": \"- Related to this paper is the work by Gargiani et al. 
which also models learning curves across datasets on offline generated data\\n Probabilistic Rollouts for Learning Curve Extrapolation Across Hyperparameter Settings\\n M Gargiani, A Klein, S Falkner, F Hutter\", \"arxiv_preprint_arxiv\": \"1910.04522\\n\\n- Figure 5: visually it seems that all learning curves are almost identical, maybe it would be better if the plot could zoom in at least for the final stage of training.\\n\\n\\n\\nafter the rebuttal\\n-----------------------\\n\\nI thank the authors for answering my questions. In the rebuttal the authors addressed my concerned about the insufficient empirical evaluation and included other relevant baselines. I will increase my score.\"}"
]
} |
ByxCrerKvS | Set Functions for Time Series | [
"Max Horn",
"Michael Moor",
"Christian Bock",
"Bastian Rieck",
"Karsten Borgwardt"
] | Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that occur in many real-world datasets, such as healthcare applications. This paper proposes a novel framework for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency.
Our method SeFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, is extremely parallelizable, and scales well to very large datasets and online monitoring scenarios.
We extensively compare our method to competitors on multiple healthcare time series datasets and show that it performs competitively whilst significantly reducing runtime. | [
"Time Series",
"Set functions",
"Irregularly sampling",
"Medical Time series",
"Dynamical Systems",
"Time series classification"
] | Reject | https://openreview.net/pdf?id=ByxCrerKvS | https://openreview.net/forum?id=ByxCrerKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YGt9E1n9u",
"rJe5FdAqjS",
"ryx5adhtjr",
"ryxt2xmwjB",
"HkxFegXPjS",
"rkeqAk7wir",
"H1xKiyXvsr",
"Hygu7HYM9B",
"BJeBhVSAYS",
"H1l8MVt9FS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745718,
1573738625856,
1573664961927,
1573494960784,
1573494769202,
1573494738010,
1573494689299,
1572144416160,
1571865772852,
1571619853997
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2305/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2305/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2305/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2305/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2305/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2305/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2305/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2305/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2305/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper investigates a new approach to classification of irregularly sampled and unaligned multi-modal time series via set function mapping. Experiment results on health care datasets are reported to demonstrate the effectiveness of the proposed approach.\\n\\nThe idea of extending set functions to address missing value in time series is interesting and novel. The paper does a good job at motivating the methods and describing the proposed solution. The authors did a good job at addressing the concerns of the reviewers. \\n\\nDuring the discussion, some reviewers are still concerned about the empirical results, which do not match well with published results (even though the authors provided an explanation for it). In addition, the proposed method is only tested on the health care datasets, but the improvement is limited. Therefore it would be worthwhile investigating other time series datasets, and most important answering the important question in terms of what datasets/applications the proposed method works well. \\n\\nThe paper is one step away for being a strong publication. We hope the reviews can help improve the paper for a strong publication in the future.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Further updates to the paper\", \"comment\": \"Thank you for the further pointers. We now explicitly mention that Transformers are also set functions and refer to literature where they have been used as such.\\n\\n> Transformers also embed the observations individually.\\n\\nWe totally agree, the key difference is that the embedding of the transformer also incorporates information from other elements being encoded.\\n\\n> Further details on the structure of Transformers used\\n\\nWe now explicitly refer to the Transformer model architecture used for the experiments and add exact details in the section \\u201cTraining, Model Architectures and Hyperparameter Search\\u201d of the appendix.\\n\\nPlease let us know if there are any further questions!\"}",
"{\"title\": \"Relationship with Transformers\", \"comment\": \"Thank you for clarification on the relationship between this work and the Transformers and including them in the experiments. Transformers also embed the observations individually. Would you please explicitly mention in the paper that the Transformers are indeed \\\"set functions\\\" too?\\n\\nCan you provide further details on the structure of the Transformers that you have used as the baseline?\"}",
"{\"title\": \"Thanks for your reviews\", \"comment\": \"We thank the reviewers for their thoughtful reviews. We have used them to update the paper and run several additional experiments. You may find the responses to your review as comments, containing further clarifications. Please let us know if there are any other questions.\"}",
"{\"title\": \"Our comments to your review\", \"comment\": \"> Decoupling \\u201ctime encoding step\\u201d and \\u201cattention-based aggregation\\u201d\\n\\nThanks for this suggestion! We added an ablation study to the appendix (Table A.4) and discuss it in the results section. Both factors (time encoding and attention) are shown to contribute to the overall performance. For time series with relatively comparable lengths (such as M3-Mortality), the time encoding improves performance only marginally, whereas for datasets with highly-varying lengths (such as M3-Phenotyping), time encoding leads to larger performance increases.\\n\\n> Key motivation for attention formulation\\n\\nThe projection into the dot product space includes both the individual set elements and a global representation. It is thus possible to determine the importance of an individual observation using both global information about the whole time series, as well as the value and time point of this particular observation. The mechanism was designed this way to allow the model to have the possibility of selecting observations based on time, value, or modality alone. For example, the global representation could encode an average value of a channel so that individual observations would be attended to if they significantly deviate from this value.\\n\\nWe have now extended and rewritten this section.\\n\\n> Is a_{j,i} shared across instances? [...]\\n\\n$a_{j,i}$ is calculated for *each* time series individually. We now clarify this in Section 3.3. Thanks for the suggestion!\\n\\n> It would be useful to provide how exactly a label is inferred for a *new* test instance.\\n\\nThanks for the suggestion; we have updated Section 3.2 accordingly.\", \"concerning_your_minor_comments\": \"We improved the introduction and added a citation for our claim.\\nThis has been fixed, thank you very much!\\nWe clarify this now in the paper in Section 4.3.\\nWe changed the notation now in all the sections; for Section 3.3, we continue to use $i$ and $j$ for reasons of simplicity; we motivate their choice now.\\n\\nPlease let us know if there are any other questions!\"}",
"{\"title\": \"Our comments to your review\", \"comment\": \">[...] But this loses the information [...]\\n\\nWe understand this concern. However, we are convinced that no information about the ordering is lost. Since we encode the times into the set elements, it is completely possible for the model to account the ordering of time points in the time series. Nevertheless, we agree that (similar to the statement of R1), this approach does not contain a strong inductive bias for sequences. This would have stronger implications in forecasting or generative modelling scenarios, while in a pure classification setting, the indispensability of sequential orderings can be challenged (see \\u201cOrder Matters\\u201d, Vinyals et al. 2016, where the authors show that RNNs achieve higher performance when processing sequences without their inherent ordering).\\n\\n> [...] long history dependence properties \\n\\nOur experiments did not specifically focus on long-term dependencies (we are convinced that medical time series could feature those dependencies, though). However, we would expect that our approach can exploit predictive long-term dependencies easier, because it has access to all information simultaneously. By contrast, RNNs have to propagate long-term dependencies through time, which remains challenging for very long time series. For this purpose, it would be interesting to run all methods on such time series with known long-term dependencies. We will tackle this large-scale problem in future work. \\n\\n> [...] Memetracker datasets of web postings and limit-order books datasets. [...]\\n\\nWe want to emphasize that we focus on time series classification (TSC), and not time series modelling (forecasting, generative modelling etc.). Our empirically-supported claim is that SeFT is useful and scalable for TSC. Whether SeFT is meaningfully applicable to time series modelling (such as the meme tracker task) is an interesting question (to be considered in future work), albeit not essential to our current work.\\n\\n> Hawkes processes\\n\\nWe now discuss Hawkes processes in the related work section of the paper. However, we would be grateful if the reviewer could point us towards literature that employs Hawkes processes in a classification scenario; to our knowledge, Hawkes processes are mostly used as generative models, and an extension to classification scenarios appears to be non-trivial.\\n\\n> The discussion about complexity (order m and m\\\\log m) at the bottom of page 1 is weird -- what does this complexity refer to?\\n\\nWe removed this sentence as it is indeed too confusing, especially as GP adapters and MGP adapters work with inducing points. Instead, we created a runtime plot in Figures A.1 and A.2 to support our scalability claims. Originally, we aimed to compare to GP adapter approaches. However, due to significant memory issues (on most tasks!) we were not able to include the GP adapter baseline (specifically, its multivariate extension MGP adapter Futoma et al. 2017).\\n\\n> The loss function in formula (5) is not specified later in the paper (at least hard to find)\\n\\nThank you for pointing this out, it seems it was missed in the initial manuscript of the paper. For the multi-class classification we applied categorical cross-entropy loss where the final layer was set to utilize the softmax activation function. 
For multi-label and binary classification tasks sigmoid activation functions in combination with binary cross entropy loss were utilized.\\nWe updated the manuscript accordingly.\\n\\n> Table 1 details: Performance criteria for the first two and last two datasets?\\n\\nWe require different metrics as the types of tasks are different. The first two datasets contain multi-class (H-MNIST) and multi-labels (Phenotyping) targets, whereas the last two datasets are binary classification tasks.\\n\\n> Table 1 details: MICRO, MACRO, WEIGHTED AUC\\n\\nFor this, we follow the sklearn convention as found in \\u2018sklearn.metrics.roc_auc_score\\u2019:\\n'Micro': Calculate ROC metric globally by considering each element of the label indicator matrix as a label.\\n'Macro': Calculate ROC metric for each label, and find their unweighted mean. This does not take label imbalance into account.\\n'Weighted': Calculate ROC metric for each label, and find their average, weighted by support (the number of true instances for each label).\\n\\nWe clarified these details in the caption of Table 1.\\n\\n> SEFT-ATTN for H-MNIST? [...]\\n\\nAs H-MNIST merely includes 10 time steps, with missingness being randomly induced, we figured that the attention extension of our method is not meaningful for this specific setting. We added this clarification in the footnote of Table 1.\\n\\n> Minor comments\\n\\nThanks for the suggestions! We fixed the typos and re-formulated as suggested and explained the meaning of channels. Moreover, concerning Eq. 3 and Eq. 4: we select our wavelengths between $2\\\\pi$ and $\\\\text{max\\\\_ts} \\\\cdot 2\\\\pi$, so no additional $\\\\pi$ parameter is necessary.\\n\\nPlease let us know if there are any other questions!\"}",
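The three averaging modes map directly onto the `average` argument of sklearn's `roc_auc_score`; a small runnable example with placeholder labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])
y_score = np.random.default_rng(0).random((4, 3))  # stand-in classifier scores

for average in ("micro", "macro", "weighted"):
    print(average, roc_auc_score(y_true, y_score, average=average))
```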
"{\"title\": \"Our comments to your review\", \"comment\": \"> Sequentialness prior\\n\\nOur approach is exhibiting competitive performance with respect to standard measures. To better showcase this, we added a visualisation (Figures A.1 and A.2 in the appendix) depicting the trade-offs in performance and runtime.\\n\\n> The proposed idea in this paper can be considered a simplified version of the Transformers architecture [...]\\n\\nInterestingly, in the initial phases of the project we found that the naive application of transformer architectures leads to low generalization performance and overfitting, despite the use of regularisation. We found that the independent encoding of individual observations *improves* the generalization performance. Hence, in contrast to transformers, we do not include interactions between set elements during the computation of the embedding, as this strategy regularises the network. This is in line with the observation that for medical time series simple machine learning methods with summary statistics can already achieve very good performance (see Harutyunyan et al. (2019), for more details). Using an independent encoding of the individual set elements thus corresponds to computing multivariate data set specific summary statistics which optimize the classification performance.\\n\\nWe completely agree that including transformers as an additional baseline makes a lot of sense; we ran new experiments and updated the paper accordingly. In general, we observe that the performance of the transformer architecture exhibits competitive runtimes, but non-competitive classification performance. It was thus necessary to change parts of the results and discussion section to maintain a consistent story of the paper.\\n\\n> However, the MIMIC-III Mortality benchmark [...]\\n\\nIn order to stay comparable with other research in the field, we decided to use the same dataset as Harutyunyan et al. (2019) and only remove the imputation and resampling steps to obtain a more raw, irregularly sampled version of the dataset. The original publication by Harutyunyan et al. (2019) consists of 21,139 patients (in their paper, see Page 10, Fig. 7). We discard 32 of these patients as they showed a drastically higher sampling rate, which lead to memory issues in the baselines. For further details, please refer to Section A.1 in the appendix.\\n\\nPlease let us know if there are any other questions!\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper considers the problem of supervised classification of time-series data that are irregularly sampled and asynchronous, with a special focus on the healthcare applications in the experiments. Inspired by the recent progress on differentiable set function learning, the paper proposes an approach called Set Functions for Time Series (SEFT), which views the time series as sets, and use a parametrized sum-decomposing function f as the model for representing the probabilities of different classes, with the sets as the inputs. The problem then reduces to learning the finite dimensional parametrization of the function f under a given loss, which is a differentiable optimization problem that can be learned via standard optimization methods. Together with a positional embedding of the timestamps and an attention-based aggregation, the paper reports improved performance of the proposed approach on a few healthcare time series with asynchronous and irregularly sampled data. In particular, the runtime is largely shortened, while the final accuracy remains competitive to other methods compared in the paper.\\n\\nThe idea of SEFT is novel and the results are also showing its promise. In addition, the interpretability shown in section 4.3 is also attractive. However, there are several issues that limit the contribution and maturity of this paper. \\n\\nFirstly, the paper proposes to model time series as a set. But this loses the information of the order of the time series, which can be extremely important in those datasets with long history dependence. In such cases, I'm not convinced that the set modeling would work. The authors should double check the characteristics of the datasets that are used, and see if they lack long history dependence properties in intuition. If so, this should be mentioned clearly. The authors should also make a more fair comparison with other approaches (like those based on RNN) on datasets with strong history dependence, e.g., Memetracker datasets of web postings and limit-order books datasets. Otherwise, it would be not clear whether this set modeling is generally applicable for general time series data.\\n\\nSecondly, the authors missed a large amount of related literature for approaching asynchronous and irregularly sampled time series, namely (marked) point-process based approaches. See papers like [1, 2, 3], to name just a few. The authors should at least include some of the recent approaches in this direction for comparison before claiming the superiority of SEFT.\\n\\nThirdly, there are a few parts that are not very clear. 1) The discussion about complexity (order m and m\\\\log m) at the bottom of page 1 is weird -- what does this complexity refer to? Does it include the learning of the unknown parameters in the models (like training of the neural networks in this paper)? 2) The loss function in formula (5) is not specified later in the paper (at least hard to find). 3) The Table 1 should be explained in much more details. In particular, why don't we include SEFT-ATTN for H-MNIST? The comment after * is also not clear to me -- is it relevant to why SEFT-ATTN is not included? And what are MICRO/MACRO/WEIGHTED AUC? 
And why are we using different sets of performance criteria for the first two and last two datasets?\\n\\nFinally, some minor comments: 1) On page 2, \\\"the following methods\\\" should be \\\"the above methods\\\"; 2) on page 3, the meaning of \\\"channels\\\" should be specified clearer; 3) on page 4, in formulae (3) and (4), should there be \\\\pi or 2\\\\pi in the formula?\\n\\n[1] Mei, Hongyuan, and Jason M. Eisner. \\\"The neural hawkes process: A neurally self-modulating multivariate point process.\\\" Advances in Neural Information Processing Systems. 2017.\\n[2] Xiao, Shuai, et al. \\\"Joint modeling of event sequence and time series with attentional twin recurrent neural networks.\\\" arXiv preprint arXiv:1703.08524 (2017).\\n[3] Yang, Yingxiang, et al. \\\"Online learning for multivariate Hawkes processes.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n############## post rebuttal ###############\\nAfter reading the authors' rebuttal, I decide to improve the rating to 5 (reflected as 6 due to the ICLR rating system limitation this year).\"}",
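For context on the point-process alternatives cited in [1-3], the conditional intensity of a univariate exponential Hawkes process can be written in a few lines (a textbook formulation, not code from any of the cited papers):

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    # lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)):
    # past events transiently excite the arrival rate of new events,
    # which is how such models handle asynchronous event sequences.
    past = np.asarray([ti for ti in events if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()
```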
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThe work is focused on classification of irregularly sampled and unaligned multi-modal time series. Prior work has primarily focused on imputation methods, either end-to-end or otherwise. This paper approaches the problem as a set function mapping between the time-series tuples to the class label. The proposed method is uses a set encoding of a multi-modal time series input, followed by mode-specific encoding of the tuples which are then aggregated in multiple ways prior to classification. An attention mechanism is attached in order for the model to automatically weigh the relevance of tuples for the classification. The model is compared to imputation based baselines on clinical ICU time series classification tasks. The performance mostly appears comparable across baselines but the proposed method has much better run-times. \\n\\nThe paper is for the most part well written, and related work well characterized. The formulation is interesting and clinically relevant as well so the choice of data-sets makes some sense. I have a few concerns about the architecture formulation and lack of clarification and intuition in what appears to be the main contribution of the paper (Sec 3.2 and 3.3) which I will detail below:\\n\\na. In the evaluation, I really want to see a decoupling between the \\\"time encoding step\\\" and \\\"attention based aggregation\\\" on the performance to figure out to isolate different sources of performance improvements. That is can there be a SEFT without time encoding? If not, why not? I encourage more ablation like studies that look at different sources of performance gains and demonstrate them in experiments.\\n\\nb. The description of Sec 3.3. is really missing key motivation for the choices made around how the attention formulation is designed. For example why does the dot produce include the set elements? What if it doesn't? What is Q supposed to capture? \\n\\nc. Is a_{j,i} shared across instances? Then irrespective of the number of observations per instance, the $j^{th}$ tuple gets similar weights? If not appropriate indexing will help clarify this.\\n\\nd. It would be useful to provide how exactly a label is inferred for a *new* test instance. \\n\\nI have some minor additional feedback (just for presentation and motivation purposes):\\n\\n1. Authors make a claim in the introduction which should likely be qualified with a citation - \\\"Furthermore, even though a decoupled imputation scheme followed by classification is generally more scalable, it may lose information that is relevant for prediction tasks\\\". How does decoupled imputation imply loss of relevant information? By losing information about which observations are missing and relying on that for prediction? Does this clinically make sense? Or even generally? \\n\\n2. In Sec 3.3, you probably mean $W_i \\\\in R^{(im(f') + |s_j|) \\\\times d}$. That is parenthesis are missing?\\n\\n3. What are the +- std errors indicating? Is it cross validation error on a held-out test set? \\n\\n4. Initially $i$ is indexing samples and by equation (3), (4) $i$ indexes time(?) and in Sec 3.3 $i$ indexes observations? How are observations defined here? is it measurement of specific modality at a specific time instance? Can you clear this in the introduction itself? 
\\n\\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nI have read the authors updated draft and response. The experiments section looks much better now. \\n\\n1. The overall contribution has less clinical utility in my opinion as generally a patient likely deteriorates over time before an adverse outcome and therefore -- to give the model too much flexibility w.r.t. time ordering doesn't make quite as much sense. This is reflected in the fact that experimental results are not drastically better than other baselines. The authors might be able to show the utility of the method on other time series classification datasets where this is not a limitation of the data itself. However in those settings, it may be a bit hard to beat transformers. Do the authors have a sense of where the benefits of this method really are?\\n\\n2. Mortality tasks are generally on the simpler side of clinical prediction problems as well. Nonetheless I think the contribution has some utility to the community. I do encourage the authors to try non--clinical datasets for a comparison\\n\\n3. Please have a discussion that includes limitations and to discuss where the benefits of your methods really lie. A clear and thoughtful discussion is currently missing in your conclusions. \\n\\nWith that said, I am updating my score to a 6.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper idea of this paper is straightforward and clear: treat the irregular time series as a bag of events, augment them with time information using positional encoding, and process the events in parallel. The idea is certainly faster than the sequential algorithms such as RNNs and their extensions. However, as shown in the experiments, because it does not encode the \\\"sequential-ness prior\\\" to the model, it is less accurate. Compared to RNNs, the proposed model has better access to the entire length of sequences and does not suffer from the limited memory issues of RNNs and variants.\\n\\nThe proposed idea in this paper can be considered a simplified version of the Transformers. Like transformers, the time and order are only provided to the model using the positional encoding and attention is central to aggregation over the sequence. Realizing the relationship with the Transformers not only decreases the novelty degree for this paper but also requires the authors to include the Transformers in the baselines.\\n\\nFinally, the results reported in the experiments are nice, especially for the baseline GRU-D! However, the MIMIC-III Mortality benchmark has a lot more than 21,000 stays to the best of my recollection. Can you please elaborate on how the number of data points has decreased?\"}"
]
} |
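The first review above summarizes the model under discussion as a set function: each (time, value, modality) tuple is embedded independently, with the observation time injected through a sinusoidal positional encoding, and the embeddings are pooled by attention before classification. The following NumPy sketch illustrates that computation pattern only; the dimensions, random weights, and exact attention form are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def time_encoding(t, dim=8, max_time=1000.0):
    """Sinusoidal positional encoding of a (possibly irregular) observation time t."""
    freqs = max_time ** (-np.arange(0, dim, 2) / dim)
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

def set_classifier(tuples, w_enc, q, w_out):
    """Embed a set of (time, value, modality) tuples and pool them with attention."""
    # Embed each tuple independently: the model sees a bag, not a sequence.
    feats = np.stack([
        np.tanh(w_enc @ np.concatenate([time_encoding(t), [v, m]]))
        for (t, v, m) in tuples
    ])
    # Attention weights: softmax over dot products with a query vector q.
    scores = feats @ q
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    # Permutation-invariant weighted aggregation, then a linear read-out.
    return w_out @ (attn @ feats)

rng = np.random.default_rng(0)
obs = [(1.0, 0.3, 0), (4.5, -1.2, 1), (7.2, 0.8, 0)]  # irregular, unaligned tuples
w_enc, q, w_out = rng.normal(size=(16, 10)), rng.normal(size=16), rng.normal(size=(2, 16))
print(set_classifier(obs, w_enc, q, w_out))  # two unnormalized class scores
```

Because the aggregation is a permutation-invariant weighted sum, no alignment or imputation of the multi-modal series is needed, which is the property the reviews credit for the method's run-time advantage over RNN-style baselines.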
SylpBgrKPH | MissDeepCausal: causal inference from incomplete data using deep latent variable models | [
"Julie Josse",
"Imke Mayer",
"Jean-Philippe Vert"
] | Inferring causal effects of a treatment, intervention or policy from observational data is central to many applications. However, state-of-the-art methods for causal inference seldom consider the possibility that covariates have missing values, which is ubiquitous in many real-world analyses. Missing data greatly complicate causal inference procedures as they require an adapted unconfoundedness hypothesis which can be difficult to justify in practice. We circumvent this issue by considering latent confounders whose distribution is learned through variational autoencoders adapted to missing values. They can be used as a pre-processing step prior to causal inference, but we also suggest embedding them in a multiple imputation strategy to take into account the variability due to missing values. Numerical experiments demonstrate the effectiveness of the proposed methodology, especially for non-linear models, compared to competitors. | [
"treatment effect estimation",
"missing values",
"variational autoencoders",
"importance sampling",
"double robustness"
] | Reject | https://openreview.net/pdf?id=SylpBgrKPH | https://openreview.net/forum?id=SylpBgrKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"sjPYc09Tek",
"rJegLivtjr",
"B1eiqH-QoS",
"SyepcEk7iS",
"rkxbxm1msr",
"r1l-nW1QoH",
"HkeSD_qeoB",
"BylnDUKyqB",
"SJxGMoVaYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745686,
1573645128323,
1573225874579,
1573217429223,
1573217001358,
1573216681046,
1573066844748,
1571948132058,
1571797769658
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2302/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2302/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2302/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2302/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2302/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2302/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2302/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2302/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper addresses the problem of causal inference from incomplete data. The main idea is to use a latent confounders through a VAE. A multiple imputation strategy is then used to account for missing values. Reviewers have mixed responses to this paper. Initially, the scores were 8,6,3. After discussion the reviewer who rated is 8 reduced their score to 6, but at the same time the score of 3 went up to 6. The reviewers agree that the problem tackled in the paper is difficult, and also acknowledge that the rebuttal of the paper was reasonable and honest. The authors added a simulation study which shows good results.\\n\\nThe main argument towards rejection is that the paper does not beat the state of the art. I do think that this is still ok if the paper brings useful insights for the community even though it does not beat the state fo the art. For now, with the current score, the paper does not make the cut. For this reason, I recommend to reject the paper, but I encourage the authors to resubmit this to another venue after improving the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Clarification of the arguments and new simulations on benchmark data\", \"comment\": \"We would like to thank the reviewer for his detailed and insightful comments.\\n\\n- Regarding the identifiability issue: we realize our arguments were not clear enough and have created some confusion, and are sorry for that.\\n\\nOn the one hand, we fully agree with the reviewer that without further assumptions, the causal effect is in general not identifiable from the knowledge of $P(Z|X^*)$ only (plus an infinite number of observations of (W,Y,X*)). For example, if $X$ is independent from $Z$ (corresponding to a case where the observation is a very bad proxy for the latent confounder), then $P(Z|X)=P(Z)$ is perfectly known but the causal effect can not be estimated from the observation of $(W,Y)$ only and some noise $X^*$.\\n\\nOn the other hand, what we wanted to show with the new equations in our paper is that if one knows the treatment effect conditioned on $Z$ (or an unbiased estimate, as we write), then we can derive the treatment effect conditioned on the observation $X^*$ by averaging the treatment effect given $Z$ according to $P(Z|X^*)$. This is useful to justify why in equation (9) we form an estimate of $\\\\tau$ for realizations of $Z$ according to $Z|X^*$, and then average these estimates over several realizations to estimate the average treatment effect. However, this does *not* tell us how to create an unbiased estimator of the treatment effect conditioned to $Z$; in particular, to compute the doubly robust estimator (9), this does not tell us how to correctly estimate the propensity score $\\\\hat{e}$ and the response function $\\\\hat{\\\\mu}$ that are needed . For that purpose, we propose a heuristic method inspired by the approach of Kallus et al (2018) in the linear case, which may be biased (as discussed above).\\n\\n- Regarding experiments: as you suggested, we applied our methods on a close to real data set, namely the IHDP data set used as well by Shalit et al. (2017) and Louizos et al. (2017). We added a new Section 4.4 in the updated version of the paper which shows the good behavior of MissDeepCausal.\\n\\nThank you again for your comments which help us improving the paper.\"}",
"{\"title\": \"Synthetic experiments with more covariates seems promising but arguments about unbiasedness not convincing\", \"comment\": \"I thank the authors for taking time to update the manuscript considering my comments previously. I have some concerns still.\\n\\n\\\"We could also write $(\\\\mu_{(Z)},\\\\Sigma_{(Z)}) = (V\\\\tanh(UZ+a)+b, \\\\operatorname{diag}\\\\{\\\\exp(\\\\eta^T\\\\tanh(UZ+a) + \\\\delta\\\\})$\\\" - I would prefer it to be written this way.\\n\\n\\\"Regarding the feature dimension, as explained above we added in the revised version new results with and where we show that MDC is unbiased (Figs 6-7)\\\"\\nI agree that the results do indeed show that in higher dimensions this works better than even the initial reported results.\\n\\n(Most important concern still) \\\"Of course, a complete answer to the question of asymptotic consistency would require us to answer point (i), i.e., study the asymptotic consistency of VAE and quantify how well we approximate the conditional distribution of the latent variables given the observed data when we sample with MIWAE, which is beyond the scope of this paper.\\\"\\nSo the authors claim that if the correct P (Z|X*) is captured, then their estimates would be asymptotically unbiased due to the use of doubly robust estimators. However, I have the following problem with this claim. May be I am not able to see it clearly.\\n\\nSuppose (to make my point clear and in a simple way) there are only two data points Y_1,X*_1,T_1 and Y_2,X*_2,T_2. Let us assume the *true* latent *realizations* behind these points were *Z_1* and *Z_2* - lets say we don't observe Z_1 and Z_2 directly. However, let us say we also know P (Z|X*) distribution exactly. Now my question is - Is the estimator you propose even unbiased (even asymptotically) ?\\n\\nNow when you sample B times from P(Z|X*) and create B tables - there is no reason that in each of the B tables, you will get points Z that are even close to Z_1 or Z_2 most of the times while the targets Y's correspond to these specific unknown Z_1 and Z_2. So if you form your regression estimates along with the doubly robust estimator based on the Z's one generates - there is no reason to believe it is unbiased. In fact, let us even take a stronger case where we know the treatment assignment conditional also P (T|Z) exactly. But the doubly robust estimates use regressed mean of Y (obtained only under Z_1) on Z's (sampled) given X* under T=T_1. Why is this unbiased even asymptotically ? The point I am making is we only know Y according to one Z_1 (sampled from P (Z|X*) and unobserved). Regressing this target on Z's that are sampled from the distribution (P(Z|X*)) - will it not make it biased - why would it be unbiased? \\n\\n Is there a reference where this issue has been dealt which the authors are relying on ?\\n\\n Do you think you can get close to estimates formed by the unobserved realizations (Z_1 and Z_2) with a regression estimate based on Z's samples from P (Z|X*)??\\n\\n- The only idenitifability result I know when Z is unknown in general comes from https://ftp.cs.ucla.edu/pub/stat_ser/r366-reprint.pdf (Kuroki and Pearl 2014). In fact they want to know P (X*|Z) and not P (Z|X*) [which might be counter intuitive] and they establish identifiability results. Again this is possibly through a different estimator since for them both Z and X* are discrete and there are no missingness issues. 
But to claim that P(Z|X*) alone is enough to identify through sampling with doubly robust estimator requires a proof.\", \"regarding_experiments\": \"I am happy that the authors did synthetic experiments with higher p and it shows better results. \\n\\nShalit et al 2017 used two real world datasets (one sort of semi synthetic) in https://arxiv.org/pdf/1606.03976.pdf for their ITE work. Is it possible to hide some variables and introduce missingness to create the situation the authors have and then perform an experiment using their data ? Anything close to real world data would add value to the experimental claims given that I am still not convinced by the identifiability claims that seem strong even knowing P(Z|X*) (in Section 3)\"}",
"{\"title\": \"Revised version better highlights the contribution\", \"comment\": [\"Thank you for your comments.\", \"Regarding the novelty and contribution of the paper: this is of course a subjective judgment, but we provide a detailed description of the state-of-the-art in the paper to clarify the original contribution of the paper, namely, the first model to estimate ATE with latent confounding variables and missing values in observed features, without assuming linear relationships among variables.\", \"In the revised version of the paper we added in particular a paragraph in Section 3 page 4 to clarify that the MDC.mi method we propose provides an asymptotically unbiased ATE estimation if the VAE correctly estimates the conditional distribution of the latent factors given the observations. To strengthen the paper and highlight our contribution with multiple imputation, we have also included in the Appendix of the revised version of the paper the results of other simulations where we vary the number of covariates that highlight the fact that we get unbiased estimates of the ATE when $p$ increases. Finally, we would like to stress out that there are not so many methods that explicitly consider missing (covariate) values in causal inference tasks. The assumptions on the missing values mechanism (MCAR for Kallus et al., 2018) or on the unconfoundedness (unconfoundedness with missingness, Mayer et al., 2019) are strong and it is difficult to assess their suitability for a given problem. Therefore we believe that there is room for our proposal which bears an innovative approach for exploiting the latent confounding assumption in the sense that using the multiple imputation approach, we exploit the posterior distribution of $(Z|X^*)$ instead of only the posterior expectation $\\\\mathbb{E}[Z|X^*]$.\", \"Regarding Figures 1 and 2: We agree that the directed edge from $X^*$ to $X_{mis}$ is confusing. It stems from an alternative representation where we used $X_{mis}$ and $X_{obs}$, which is classical in the missing data literature. Nevertheless, we agree that the assumptions are clearer without any $X_{mis}$ in the graphical representation but only $X^{\\\\star}$ and $M$. Consequently, we have modified the graphs in Figure 1 and Figure 2 of the revised version and added also $Y(0)$ and $Y(1)$ to highlight the unconfoundedness assumption.\", \"About the validity of using Z to predict ATE accurately: Like all methods for ATE estimation, MDC is only valid under some assumptions that we state in the paper, in particular, that there are no other confounding factors between treatment and effects beyond the (unobserved) Z factors (i.e., that the model shown on Figure 2 is correct). Ensuring and evaluating the validity of this hypothesis is unfortunately usually not possible without further assumptions, as for other methods for ATE estimation. If the model is correct, then we added a paragraph (Section 3 page 4) to justify by the MDC.mi estimator is a valid estimator for ATE.\", \"About the training of the conditional response surfaces $\\\\mu_0$ and $\\\\mu_1$: As explained in Section 4.3.2, we estimate $\\\\mu_0$ and $\\\\mu_1$ by performing a predictive model of $Y$ on $Z$ using either a linear regression or a random forest.\", \"We have corrected all the typos, thank you.\"]}",
"{\"title\": \"Comments taken into account\", \"comment\": [\"Thank you for your positive feedback.\", \"We forgot to indicate in the initial version that we used the default parameters for the VAE as proposed in the implementation of the authors of MIWAE available at \\\\url{https://github.com/pamattei/miwae}. We have now added this information in the updated version of our article.\", \"We included results of new simulations where we vary the number of observed covariates in the revised version of the paper (Figs 6-7). It allows us to highlight even more the good performances of our methods as it leads to unbiased estimates when $p$ increases.\", \"The code is not public yet just to keep the review process double-blind, but the code will be available on GitHub with all scripts needed to reproduce all experiments, which is crucial for reproducibility of our work. We remove the sentence mentioning the availability of the code \\\"upon request\\\" in the revised version of the paper.\"]}",
"{\"title\": \"Multiple imputation strategy asymptotically unbiased for ATE\", \"comment\": [\"We would like to thank the reviewer for their comments and careful reading of the paper.\", \"About the methodological novelty and the properties of our ATE estimator: while we have no theoretical claim about the MDC.process estimator, which we propose merely as a natural nonlinear generalization of the matrix factorization technique of Kallus et al (2018), we would like to clarify that the MDC.mi estimator is asymptotically unbiased for ATE under the latent confounding assumption, if we assume that (i) the VAE asymptotically estimates the correct conditional distribution of the latent variable given the observed features, and (ii) we use an asymptotically consistent estimator of ATE given the latent variables, such as the double robust estimator (Chernozhukov et al., 2018). We clarify this original methodological contribution in the revised version of the paper, where we added a paragraph in Section 3 page 4 to state and prove this property. Of course, a complete answer to the question of asymptotic consistency would require us to answer point (i), i.e., study the asymptotic consistency of VAE and quantify how well we approximate the conditional distribution of the latent variables given the observed data when we sample with MIWAE, which is beyond the scope of this paper.\", \"Concerning the experiments: First, we would like to clarify that the experiment with LRMF model (Fig 4) mentioned by the Reviewer is precisely meant to give an advantage to the MF approach, since the simulated data follow precisely the model behind the MF method; hence it is not a surprise that the MF approach is competitive in this case. To the contrary, as we write in the text, the interesting observation in this experiment is that MDC approaches are competitive with MF, and even slightly better. Second, we agree that more experiments are needed to evaluate empirically the asymptotic biases of the different methods. In the revised version, we add new results (figures 6 and 7 in appendix B) showing that MDC outperforms other methods in terms of bias as the number of features increases. For instance, choosing $p=100$ and $n=1000$, we obtain unbiased estimations of the ATE with both MDC.process and MDC.mi coupled with the double robust estimator in both the low-rank and deep-latent variables settings. Regarding real world data, as noticed by the reviewer ground truth is hard to come by.\", \"Regarding the influence of $B$, we only have preliminary results but these indicate that for a small number of covariates, choosing small ($B=50$) or large ($B=500$) values for $B$ does not have an impact on the final estimate, however for larger numbers of covariates, it seems better to pick a large $B$. We have added these results in Figure 8 in appendix B of the revised paper.\", \"We have corrected the typos and added the reference, thank you.\", \"Regarding the covariance of X given Z for the DLVM model: we use the model suggested by Kingma and Welling (2014) who used a diagonal covariance matrix, i.e., conditionally on $Z$, we have independence between the (normally distributed) covariates $X$. 
Hence we could also write $(\\\\mu_{(Z)},\\\\Sigma_{(Z)}) = (V\\\\tanh(UZ+a)+b, \\\\operatorname{diag}\\\\{\\\\exp(\\\\eta^T\\\\tanh(UZ+a) + \\\\delta\\\\})$\", \"Regarding the feature dimension, as explained above we added in the revised version new results with $p=100$ and $n=1000$ where we show that MDC is unbiased (Figs 6-7)\"], \"references\": [\"Kallus, N., Mao, X., \\\\& Udell, M. (2018). Causal inference with noisy and missing covariates via matrix factorization. In \\\\textit{Advances in neural information processing systems} (pp. 6921--6932).\", \"Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In \\\\textit{International Conference on Learning Representations}, 2014.\", \"Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters. \\\\textit{The Econometrics Journal}, 21(1):C1--C68, 2018.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n The paper considers average treatment effect estimation treatment T and an unobserved confounder Z causes the outcome Y with an added constraint that the observed X is a noisy measurement of the underlying Z and some of the entries of observed X are missing at random (MAR). Previous work (Kallus et al. 2018) on similar settings assumed a low rank model connecting Z and X along with some entries missing at random which we do not observe. Further, Y (outcome) is related to the treatment and Z with a linear model. They actually show that matrix factorization techniques with these assumptions form an unbiased estimator for the Average Treatment Effect. There is prior work on doubly robust estimators under ignorability assumptions.\\n\\nIn this paper, the authors want to consider general non-linear relationships between Z and X with the same MAR (missing at random assumption) for missing entries. So they fit a latent variable model in the form of VAE to find the P(Z| observed part of X) using a slightly modified version of the ELBO Lower Bound. For missing entries, they just replace those entries by a constant and do the usual VAE fit. After the VAE fit, multiple Z's are samples from the optimized P(Z| observed X) and then used in the doubly robust formula on each Z for estimating the average treatment effect and then finally the estimates averaged over the different Z's.\\n\\nThere is an alternative where the conditional mean of the latent variable is estimated from the VAE and used in the doubly robust computation.\\n\\nIn many synthetic examples, authors compare this with existing baselines and show that their method improves.\", \"pros\": [\"Baselines compares are comprehensive enough from my perspective.\", \"The paper is well written with clear pointer to existing work on doubly robust estimators with standard ignorability assumptions and the work for the linear, low rank model case by Kallus et al. 2018.\"], \"cons\": [\"Major Issues\", \"There is no reason to believe that even for the synthetic experiments, that the VAE posteriors would asymptotically yield unbiased ATE's which is provably the case in (Kallus et al. 2018) (of course for their restricted linear model/low rank assumptions). There is no reason to suppose Z's used in eq (9) from the VAE satisfy ignorability even in the asymptotic limit. In this light, the paper in essence just estimates Z's from some latent model that is fit and then use those latents to regress Y and then computes ATE. This seems a natural heuristic to try given such a problem. So I don't find a big methodological novelty. I would be willing to increase my scores if the authors could convince me on this point.\", \"For the LRMF model (Fig 4) the MF approach seems to do as well as the authors proposal and we have a guarantee for the MF case under those linear/low rank modeling assumptions. So the only demonstrated benefit is for the synthetic experiments for Fig 5 and Fig 3 (I agree that it is considerable particularly with large fraction of missing values in Fig 3) in whose settings we dont know about how unbiased it is in the limit. 
More synthetic experiments with different kind of generative models could be more convincing.\", \"Some real world data set would have been more convincing - although I agree ground truth is hard to come by.\"], \"minor_issues\": [\"You have set B=200 for all the experiments for MDC-MI. Do the results change when B is increased or decreased ?? Does variance go down or the bias itself changes with B - This would be a useful insight to have.\", \"There is typo in the ELBO lower bound equation in page 11. There are other minor typos. Please correct for it.\", \"Since the paper is about estimate treatment effect from measurements of an unobserved confounder - it is important to cite - https://ftp.cs.ucla.edu/pub/stat_ser/r366-reprint.pdf from the causal DAG literature.\", \"The covariance of X given Z for the DLVM model is not clear - It seems to say exp ( or some matrix vector products) * Identity. What does this mean ?\", \"The feature dimensions seems to be set at 10 - so would we expect the same results in much higher dimensions - like say 100s for few tens of thousands of samples??\", \"********UPDATE after reading the rebutall,changes and the new experiment**********\", \"I appreciate the authors actually accepting that identifiability issues cannot be easily resolved even if one knows P*(Z|X).\", \"I recommend the authors to elaborate on this point in the camera ready version. However, showing that the proposed methods work on a real benchmark (semi-synthetic one used in Shalit et. al 2017) is commendable. However, I find that the MF method is competitive (almost all the time) with their method when Doubly robust estimators are used.\", \"But having matched an existing baseline (the MF method) that deals with confounders on real data and showing superior synthetic results and authors clarifying and toning down their theoretical claims, I am inclined to increase the score to Weak accept.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This contribution considers deep latent-factor models for causal inference -inferring the effect of a treatment- in the presence of missing values. The core challenge is that of confounders not directly observed, and accessible only via noisy proxys, in particular in the missingness. The contributed method relies on using the latent-factor model for multiple imputation in doubly-robust causal treatment effect estimators. As a consequence, it requires the missing at random assumption to control for the impact of imputation. Given that the confounders are not directly assumed, a first approach estimates their effect via an estimate of P(Z|X*) (probability of confounder given observed data), which is then plugged in the doubly robust estimator in a multiple imputation strategy. A second approach uses heuristically the estimated latent confounders as regressors of non interest in a linear-regression model. The approaches are empirically compared to other imputation strategies used as plugins in the doubly-robust estimator. The contributed approach show marked benefits when the problem is highly non-linear.\\n\\nThe manuscript is clearly written. \\n\\nI do not have many comments.\\n\\nOne concern though is that the VAE comes with a significant amount of hyper-parameters that do not seem obvious to set. This is to be contrasted with other approaches compared to. How was the specific architecture and learning strategy of the VAE selected?\\n\\nThe simulation settings are somewhat artificial. More simulations inspired from real-life causal scenario would improve the work.\\n\\nI hope that in the final version, the code will be available publicly, and not on request.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces MissDeepCausal method to address the problem of treatment effect estimation with incomplete covariates matrix (missing values at random -- MAR). It makes use of Variational AutoEncoders (VAE) to learn the latent confounders from incomplete covariates. This also helps encoding complex non-linear relationships in the data, a capability that is missing in the work of Kallus et al. (2018) -- the work which this paper extends. They employ the Missing data Importance Weight AutoEncoder (MIWAE) approach (Mattei & Frellsen, 2019) to approximate the posterior of their latent factors Z given the observed incomplete covariates X*. The main contributions of this work are presented in sections 3.2 and 3.3, where they use the approximated posterior derived from MIWAE to sample Z to be used for estimating outcomes and finally calculating the Average Treatment Effect (ATE). This is done according to the doubly robust estimator developed for data with incomplete covariates (Mayer et al., 2019b).\\n\\nIn summary, I am not convinced that the contribution of this paper is enough, nor of its novelty. However, I will read the rebuttal carefully and am willing to increase the score if the authors address this concern.\\n\\nThere are several points that need further clarification; e.g., \\n\\t- Figure 1 as well as Figure 2 show a directed edge from X* to X_{miss}. Does this mean that X* has all the proxies needed to identify X_{miss}? \\n\\t- How does this method assure/evaluate that Z embeds enough information to predict accurate effects?\\n\\t- How are \\\\mu_0 and \\\\mu_1 functions trained on Z\", \"things_to_improve_the_paper_that_did_not_impact_the_score\": [\"Page 2, par. 2, last line: state-of-the-art method\\u201ds\\u201d\", \"Page 3, under Unconfoundedness par., line -7: [...] for each observation \\u201ccomma\\u201d treatment assignment [...]\", \"Page 3, Figure 1: According to ICLR\\u2019s formatting guidelines, the figure number and caption must always appear after the figure.\", \"Page 3, Missingness par., line 1: [...] is one \\u201cof\\u201d the most [...]\", \"Page 5, line after Eq. (8): 8 should be in parentheses.\", \"Page 7, Figure 3: box-plots are hardly legible.\", \"Page 7, Figure 3 caption, line 2: keep \\u201c(logistic-)linear\\u201d together with \\\\mbox{} or ~ in latex\"], \"references\": [\"Kallus, N., Mao, X., & Udell, M. (2018). Causal inference with noisy and missing covariates via matrix factorization. In Advances in neural information processing systems (pp. 6921-6932).\", \"Mattei, P. A., & Frellsen, J. (2019). MIWAE: Deep Generative Modelling and Imputation of Incomplete Data Sets. In International Conference on Machine Learning (pp. 4413-4423).\", \"Mayer, I., Wager, S., Gauss, T., Moyer, J. D., & Josse, J. (2019). Doubly robust treatment effect estimation with missing attributes. preprint.\", \"********UPDATE after reading the rebuttal********\", \"The authors have provided further clarifications in their rebuttal and therefore, I increased my score form \\u201cweak reject\\u201d to \\u201cweak accept\\u201d.\"]}"
]
} |
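Much of the exchange above concerns the MDC.mi recipe: draw latent confounders repeatedly from the (approximate) posterior P(Z|X*), apply a doubly robust (AIPW) ATE estimator to each draw, and average the estimates. The sketch below is a toy illustration of that recipe under stated assumptions: the `sample_z` callable is a stand-in for the MIWAE posterior sampler, and the nuisance models are plain scikit-learn regressions rather than the authors' exact choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(z, w, y):
    """Standard doubly robust (AIPW) ATE estimate given confounders z,
    binary treatment w and outcome y."""
    e = LogisticRegression().fit(z, w).predict_proba(z)[:, 1]   # propensity score
    mu1 = LinearRegression().fit(z[w == 1], y[w == 1]).predict(z)
    mu0 = LinearRegression().fit(z[w == 0], y[w == 0]).predict(z)
    return np.mean(mu1 - mu0 + w * (y - mu1) / e - (1 - w) * (y - mu0) / (1 - e))

def mdc_mi_ate(sample_z, w, y, B=200):
    """Multiple-imputation estimate: average AIPW over B draws Z ~ P(Z | X*)."""
    return np.mean([aipw_ate(sample_z(), w, y) for _ in range(B)])

# Toy data with true ATE = 2.0; the sampler is a noisy stand-in for the VAE posterior.
rng = np.random.default_rng(1)
n = 500
z_true = rng.normal(size=(n, 2))
w = (rng.random(n) < 1.0 / (1.0 + np.exp(-z_true[:, 0]))).astype(int)
y = z_true @ np.array([1.0, -0.5]) + 2.0 * w + rng.normal(scale=0.1, size=n)
print(mdc_mi_ate(lambda: z_true + rng.normal(scale=0.2, size=(n, 2)), w, y, B=50))
```

Averaging the full estimator over posterior draws, rather than plugging in the posterior mean E[Z|X*], is the distinction the discussion draws between MDC.mi and MDC.process.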
H1e3HlSFDr | Variational Constrained Reinforcement Learning with Application to Planning at Roundabout | [
"Yuan Tian",
"Minghao Han",
"Lixian Zhang",
"Wulong Liu",
"Jun Wang",
"Wei Pan"
] | Planning at roundabouts is crucial for autonomous driving in urban and rural environments. Reinforcement learning is promising not only for dealing with complicated environments but also for taking safety constraints into account as a constrained Markov Decision Process. However, the safety constraints should be explicitly mathematically formulated, and this is challenging for planning at roundabouts due to the unpredictable dynamic behavior of the obstacles. Therefore, discriminating the obstacles' states as either safe or unsafe is desired, which is known as situation awareness modeling. In this paper, we combine variational learning and constrained reinforcement learning to simultaneously learn a Conditional Representation Model (CRM) to encode the states into safe and unsafe distributions respectively, as well as the corresponding safe policy. Our approach is evaluated using the Simulation of Urban Mobility (SUMO) traffic simulator and it can generalize to various traffic flows. | [
"Safe reinforcement learning",
"Autonomous driving",
"obstacle avoidance"
] | Reject | https://openreview.net/pdf?id=H1e3HlSFDr | https://openreview.net/forum?id=H1e3HlSFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Q0Ah0bdfQg",
"SklUtlxBqB",
"HJl-v-KAYH",
"B1lF6gJTtH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745656,
1572302973880,
1571881305196,
1571774657336
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2301/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2301/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2301/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to add constraints to the RL problem within a variational method. The hope is to specify a safe vs non-safe states. The reviewers were not convinced that this paper makes the cut for ICLR. Moreover, there was no rebuttal from the authors, so it didn't give the reviewer a chance to reconsider their opinion. Based on the current ratings, I recommend to reject this paper.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper tackles the challenge of control with safety constraints using a learned soft constraint formulation. It models data as coming from a hierarchical generative model p(s) p(z|s) p(x|z,s) where s is a safety indicator variable, z is a latent variable, and x is an observation. Then it learns a variational approximation q(z|x) to the true posterior p(z|x). By then measuring a divergence between q(z|x) and p(z|s), the authors aim to evaluate whether a state is safe or unsafe. They then combine this learned safety constraint with a constrained RL method, SSAC, to learn safe policies.\\n\\nWhile this setting is interesting and worthy of further exploration, there are sufficient issues with this work that it should not be accepted at this time. \\n\\nFor one thing, the motivation behind the constrained MDP formulation of reinforcement learning, such as the original SSAC paper, is to provide safety throughout training. However, this work learns a soft constraint over the course of training by using examples where the policy leads to catastrophic failures. This means that, unlike SSAC, this work does not encourage safety at all early in training. Since it collects a large number of catastrophic experiences, this method is not categorically different from e.g. training a policy with large negative rewards for crashing.\\n\\nFurthermore, since there is no constraint preventing crashing, a poorly-performing learned constraint might lead to better performance as measured by reward alone. Since safety during training comes with a tradeoff against rewards, as shown by SSAC, a poorly-functioning learned constraint might lead to improved reward. This work lacks an evaluation of the number of crashes to complement the rewards depicted in Figure 6.\\n\\nThe particulars of the method used for computing whether the safety constraint is violated are somewhat surprising. The authors use a Wasserstein distance to compute a cost as a function of the q(z|x) and p(z|s=1). They motivate this choice by the fact that the KL divergence would go to infinity for non-overlapping distributions; however, in equation (12) that would not significantly affect the computed cost. Since this divergence is calculated in the low-dimensional latent space and one of the distributions is fixed, it is also unclear that this would ever arise. It also seems that a likelihood ratio test between p(x|s=1) and p(x|s=0) would provide a more meaningful signal than simply classifying whether a point is near the \\\"dangerous\\\" prior p(z|s=1).\\n\\nOverall I think there are some interesting ideas inside this work, but it needs some improvements:\\n1. Reframing to make the setting make more sense. If you want to compare against SSAC, you need to be minimizing the total number of catastrophic events during training. It might make sense to assume a pre-existing set of \\\"dangerous\\\" examples, e.g. labeled ahead of time by a human.\\n2. Textual editing and work to make the notation more consistent, e.g. the top of page 3 uses s for states as well as the \\\"safe\\\" indicator. \\n3. Significantly improved evaluation. The results here lack crucial information about the number of catastrophic events during training. 
I would also like to see ablations or probe experiments showing what is learned by the encoder. Furthermore, this one environment is extremely simple (one degree of freedom) and to have confidence that the method works more generally, I would like to see the method applied to richer tasks.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a reinforcement learning model to solve a task relevant to autonomous driving, and evaluates that model in a traffic simulator called Simulation of Urban Mobility (SUMO).\\n\\nThe paper describes related work but the connection to the exact problem they are solving wasn\\u2019t 100% clear to me.\\n\\nThe description of the model was somewhat confusing to me, but my understanding is that the model does the following:\\n- The dataset contains states, and each state has a label that says whether it is safe or unsafe\\n- However, we don\\u2019t know the implicit constraints that determine whether a state is safe or unsafe\\n- We learn a latent representation of the states, where we want the safe states and unsafe states to be separated in the latent space\\n- We use a Wasserstein distance metric in the latent space to construct a safety cost function\\n\\nThe paper gives pseudocode for the algorithm, and experiments in a traffic simulator of the task of driving through a two-lane roundabout with four exits.\\n\\nI don\\u2019t see a significant enough algorithmic contribution from this paper to yield an ICLR acceptance. I think this paper would be better suited for a more application-specific conference that would be more interested in the empirical progress on a task specific to autonomous vehicles.\\n\\nAs a last note, the paper contains a large number of grammatical errors and typos - I\\u2019m not considering these in the score, but the paper would benefit from a close proofreading by a native English speaker.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presented a CRM model which is VAE with separate priors for save and unsafe modes and utilize it in RL for roundabout planning task.\", \"pros\": \"1. The motivation of the work is clear and solves an important task\\n2. The approach is sensible\", \"cons\": \"1. Experimental evaluation is very weak. It is only performed on one task which is on getting out of the roundabout. \\nIt also does not compare with any real-baseline methods. It is only the proposed method compare with the weaker version of the method. \\n\\n2. Using a mixture of Gaussian for VAE has been used in many works before, such as Nazabal, Alfredo, Pablo M. Olmos, Zoubin Ghahramani, and Isabel Valera. \\\"Handling incomplete heterogeneous data using vaes.\\\" arXiv preprint arXiv:1807.03653 (2018).\\nAlso, make such prior should be equivalent to regular Gaussian prior with multi-head decoder as in figure1 (b) in VCL paper https://arxiv.org/pdf/1710.10628.pdf where s is T there. \\n\\nSo, this work did not discuss these approaches and the novelty of the work is also limited. \\n\\n3. How to set the mean of s may be critical for the performance. Analysis needed. \\n\\n4. The method should at least compare with conditional VAE used in the same way in RL. In this case, the label is s and I believe that the latent space will be meaningful regarding s.\\n\\n5.. The second term in the right-hand-side of equation (7) has missed some constant factor. \\n\\n5. How figure 5 shows CRM could efficiently model the latent constraint? It only shows that the model converged. \\n\\n6. Writing needs to be improved. There are grammatical mistakes here and there.\"}"
]
} |
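Review #4 above centers on the safety cost built from a Wasserstein distance between the encoder posterior q(z|x) and the "unsafe" prior p(z|s=1), chosen over the KL divergence because it stays finite for non-overlapping distributions. For Gaussians with diagonal covariances this distance has a simple closed form, sketched below; the final distance-to-cost mapping is an illustrative assumption and not the paper's equation (12).

```python
import numpy as np

def w2_diag_gaussians(mu_a, sigma_a, mu_b, sigma_b):
    """Closed-form 2-Wasserstein distance between Gaussians with diagonal
    covariances: W2^2 = ||mu_a - mu_b||^2 + sum_i (sigma_a_i - sigma_b_i)^2.
    Remains finite even when the two distributions barely overlap."""
    return np.sqrt(np.sum((mu_a - mu_b) ** 2) + np.sum((sigma_a - sigma_b) ** 2))

# Encoder posterior of a state vs. the "unsafe" prior p(z|s=1) in latent space.
mu_unsafe, sigma_unsafe = np.full(2, 2.0), np.ones(2)
mu_state, sigma_state = np.zeros(2), np.ones(2)

# Hypothetical cost: states whose encoding sits close to the unsafe prior get
# a cost near 1; far-away encodings get a cost near 0.
d = w2_diag_gaussians(mu_state, sigma_state, mu_unsafe, sigma_unsafe)
print(1.0 / (1.0 + d))
```

The review's alternative suggestion, a likelihood-ratio test between p(x|s=1) and p(x|s=0), would replace this geometric score with a probabilistic one.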
Byl3HxBFwH | Efficient Deep Representation Learning by Adaptive Latent Space Sampling | [
"Yuanhan Mo",
"Shuo Wang",
"Chengliang Dai",
"Rui Zhou",
"Zhongzhao Teng",
"Wenjia Bai",
"Yike Guo"
] | Supervised deep learning requires a large number of training samples with annotations (e.g. label classes for classification tasks, pixel- or voxel-wise label maps for segmentation tasks), which are expensive and time-consuming to obtain. During the training of a deep neural network, the annotated samples are fed into the network in a mini-batch way, where they are often regarded as equally important. However, some of the samples may become less informative during training, as the magnitude of the gradient starts to vanish for these samples. In the meantime, other samples of higher utility or hardness may be in higher demand for the training process to proceed and require more exploitation. To address the challenges of expensive annotations and loss of sample informativeness, here we propose a novel training framework which adaptively selects informative samples that are fed to the training process. The adaptive selection or sampling is performed based on a hardness-aware strategy in the latent space constructed by a generative model. To evaluate the proposed training framework, we perform experiments on three different datasets, including MNIST and CIFAR-10 for the image classification task and a medical image dataset, IVUS, for a biophysical simulation task. On all three datasets, the proposed framework outperforms a random sampling method, which demonstrates the effectiveness of our framework. | [
"Deep learning",
"Data efficiency"
] | Reject | https://openreview.net/pdf?id=Byl3HxBFwH | https://openreview.net/forum?id=Byl3HxBFwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"kWmxSk0YTw",
"SJeEOpVOiH",
"r1xpqsNuoS",
"r1x78YE_or",
"BygrWUEOoB",
"SJldn3zH9H",
"BygRX1NCtS",
"rJxqsKr6YH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745626,
1573567851803,
1573567380934,
1573566794891,
1573565948843,
1572314288052,
1571860262370,
1571801506205
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2300/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2300/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2300/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2300/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2300/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2300/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2300/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"VAE-based sample selection for training NNs. A well-written experimental paper that is demonstrated through a number of experiments, all of which are minimal and from which generalization is not per se expected. The absence of an underlying theory, and the absence of rigorous experimentation makes me request to extend either or, better, both.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the clarification\", \"comment\": \"Thanks for the detailed comments. It has resolved my concerns. I think the paper is very interesting and insightful. We should encourage such work that explores how a method works. Although it is not practical for large-scale experiments yet, it may do with some extensions in future work. Therefore, I have raised my rating to \\\"Accept\\\" for this paper.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"We thank the reviewer for the constructive comments. We address the comments point-by-point below.\\n\\n1. \\u201cWhy was the space chosen to be the VAE latent space? Would it be possible to demonstrate some benefits of doing so, theoretically or empirically?\\u201d\\n\\nThere are several reasons that motivate us to sample in the VAE latent space.\\n\\nFirstly, the latent space of a VAE model (or a deep generative model) plays a fundamental and important role in the proposed framework with the basic assumption that many high-dimensional data structures (e.g. images or patches) can be represented as a manifold in a low-dimensional space. Deep generative models enable us to identify such a manifold in the format of the latent code. Sampling in this latent space will generate plausible samples that follow the original data distribution. In contrast, sampling in the original data space or intermediate feature space might generate out-of-distribution samples. This is very important if we would like to use the \\u2018interpolation\\u2019 sampling method as discussed in Section 3.5. \\n\\nSecondly, the latent space also provides us with better representation and interpretation of data, for example, similar samples tend to be distributed nearby. As a result, the exploration trajectory becomes more explainable and meaningful, as discussed in Figure 4. \\n\\nLastly, when we synthesize new samples, an interpolated point in the latent space can still produce a plausible training sample, compared to interpolating directly in the original data space. Therefore, when we navigate through the latent space according to the gradient direction, we can synthesize not only difficult but also plausible samples. The hardness information of synthesised training sample may be more accurate than the training sample selected by the nearest neighbours.\\n\\n2. \\u201cThe datasets used are relatively small and in two out of the three datasets, the method does not improve.\\u201d\\n\\nAlthough two of the datasets are relatively small, we still observed improvement according to the reported results (Figure 3 and Table 1). As Reviewer 3 pointed out, \\u201cThe experiments are conducted on small-scale datasets like MNIST CIFAR-10, and IVUS MSE with satisfactory gain over the baselines.\\u201d More importantly, in this paper, we focus on proposing a novel framework and demonstrating its feasibility. Applying our framework onto large scale datasets will be a meaningful follow-on work for future research. \\n\\n3. \\u201cThe dimension of the latent space is also surprisingly small (2).\\u201d\\n\\nThe latent spaces used for the three experiments (MNIST, CIFAR10, IVUS) have more than 2 dimensions. In the Appendix, the details of generative models and the corresponding latent space are given where you can find the actual numbers of dimensions (3 for all three datasets). We also discuss the choice of dimensions in Section 5. In the revised version, we clarify the actual dimensions of latent spaces used for our experiments. \\n\\nIn Figure 4, an example of 2-dimensional latent space was shown mainly for visualisation purposes and for discussing the exploration trajectories.\\n\\n4. 
\\u201cWhile the main body of the paper describes the method with VAEs, the experiments for the CIFAR10 dataset (where the results were in favour) were done \\\\alpha-GAN.\\u201d\\n\\nIn this paper, we aim to demonstrate a general framework that adaptively selects more informative (harder) samples from the latent space to improve the training efficiency of a deep learning model. The deep generative model is a component of this framework. In the main body of the paper, we use the VAE as example to demonstrate the idea. But the framework can also use other kinds of generative models such as alpha-GAN etc as a component. We explicitly explained in Sec 4.3 Implementation that for the three tasks, two tasks (MNIST and IVUS) used VAE and other tasks (CIFAR-10) used alpha-GAN due to its better reconstruction performance on this dataset.\\n\\nMore specifically, for the scenario where a labelling tool is not available, an auto-encoder-like generative model is essential for our framework since we need the correspondence between the latent space and original image space to identify the label of the selected training sample. For the scenario where a labelling tool is available, deep generative models like GAN can be used to generate informative training samples at the interpolated point in the latent space and the generated samples can be annotated subsequently by the labelling tool or human analyst.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank Reviewer 3 for appreciating the novelty of our work and the benefits for future research along the direction of data efficient learning. We address the comments point-by-point below.\\n\\n1.Clarification of the difference from online hard negative mining (OHNM).\\n\\nIn the field of hardness-aware learning, online hard negative mining (OHNM) or hard negative mining (HNM) selects the informative samples mainly by utilising the rank ordered by the training sample loss. This requires the whole dataset to be annotated before training. On the contrary, our framework is heuristic where the new informative training samples are identified within the neighbourhood of existing ones along the gradients derived from the loss function. We do not need to annotate the whole dataset and it is possible to perform human-in-the-loop learning. I.e., find the informative sample for annotation. This could dramatically benefit the tasks that have high cost on labelling each sample. We added this discussion to the Related Work Section.\\n\\n2. The effectiveness of the method on large-scale and challenging datasets as the current work use 3 relatively small datasets.\\n\\nWe agree with the reviewer that training a deep generative model for large-scale datasets and even \\u201cthe preparation step itself is a very challenging task\\u201d. As the aim of this paper is to demonstrate the basic idea of the proposed training framework, we use the most popular datasets (MNIST and CIFAR10) in machine learning community as examples and an additional datasets (IVUS) to demonstrate the case when an online labelling tool is available. We also demonstrate that the proposed framework is flexible by using either VAE (for MNIST and CIFAR10) or alpha-GAN (for IVUS) as the generative model. For the large-scale cases where high-fidelity reconstruction is required, many recent SOTA works are based on either VAE or GAN [1-3]. We believe that the proposed framework is flexible to allow more advanced deep generative model to be integrated in future research.\\n\\n[1] Razavi, Ali, Aaron van den Oord, and Oriol Vinyals. \\\"Generating Diverse High-Fidelity Images with VQ-VAE-2.\\\" arXiv preprint arXiv:1906.00446 (2019).\\n[2] Gulrajani, Ishaan, et al. \\\"Pixelvae: A latent variable model for natural images.\\\" arXiv preprint arXiv:1611.05013 (2016). (Results reported on ImageNet) [UPDATE: ICLR 2017]\\n[3] Brock, Andrew, Jeff Donahue, and Karen Simonyan. \\\"Large scale gan training for high fidelity natural image synthesis.\\\" arXiv preprint arXiv:1809.11096 (2018). [UPDATE: ICLR 2019]\\n\\nIn addition, for cases where there is an available labelling tool, a GAN-based decoder is also acceptable to the framework, where harder samples can be synthesised to be annotated by the labelling tool in an online manner.\\n\\n3. \\u201cAs the input to the DNN is the reconstructed image from the pre-trained decoder, there will be some information loss during the reconstruction process. This is the major challenge, I think, for large-scale applications. Is that possible to use the original image as the input to DNN while still being able to find hard samples using the latent space and the image space correlation?\\\"\\n\\nWe understand your concern about the information loss during reconstruction. It is possible to replace the synthesised training samples with their corresponding original input. In fact, we noticed this issue when we were performing the experiments. 
On the other hand, because the datasets we used are relatively small-scale, currently we have not yet observed significant impact of the issue of information loss. Thus, to keep the description of methodology intuitive and simple, we did not include this alternative approach in the paper. But we believe that the alternative way would be a very promising extension for future work.\\n\\n4. \\u201cI really like the visualization of Figure 4 that shows the trajectory of the sampling process that follows the boundaries between classes. The authors also mentioned that some trajectories explore towards outside util there is no real samples, which should be avoided. Could the authors comment on how to avoid such cases?\\\"\\n\\nWe really appreciate that you like Figure 4. To address the question, during the experiments, we periodically re-select a set of random points in the latent space as the new initial points for exploring harder samples, which would empirically reduce the chance of being trapped in the outer area. In the revised version, we further clarify this in Section 5.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"We thank Reviewer 2 for finding our work interesting and easy to follow. We answer the two questions below.\\n\\n1.\\u201dThe information samples are fed to the training process, what about the rest \\u2018Non-informative\\u2019 ones?\\u201d\\n\\nThe proposed framework aims to improve the data-efficiency of a deep learning model. Although those relatively non-informative samples may be left out, the other more informative samples are already able to make the deep learning model achieve a desirable performance. \\n\\nMoreover, those relatively non-informative (left-out) samples would not be annotated, which significantly reduce the annotating cost.\\n\\nHowever, it should be noted that all samples including the non-informative ones are still necessary and useful to construct a compact latent space capturing the data distribution in the preparation stage.\\n\\n2.\\u201dWhat is the characteristics of the selected informative samples? I.e., for a class of images, which images should be informative?\\u201d\\n\\nIn our paper, Figure 4 illustrates 6 trajectories (4 of them are positive examples) of how informative training samples are selected. It shows that these trajectories are more likely to explore the boundary area between classes in the latent space where massive ambiguous training sample are located. For example, in Figure 4, trajectory \\u201cc\\u201d keeps sampling point between class \\u201c6\\u201d and \\u201c0\\u201d (also shown in the right panel as sampling snapshots) where they look quite similar to each other. From both our visualisation and experiments, the characteristic of more informative samples is that they are more likely to be distributed around the boundary area between classes.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a novel training framework which adaptively selects informative samples that are fed to\\nthe training process. The adaptive selection or sampling is performed based on a hardness-aware strategy in the latent space constructed by a generative model.\\nThe idea is intuitive and easy to follow. Experimental results demonstrate the efficacy of the proposed method.\", \"i_have_two_questions_about_this_work\": \"1. The informative samples are fed to the training process, what about the rest \\\"non-informative\\\" ones?\\n2. What is the characteristics of the selected informative samples? i.e., for a class of images, which images should be informative?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method to efficiently select hard samples during the training of a neural network. This is achieved via a variational auto-encoder (VAE) that encodes the samples into a latent space. The VAE is trained in a preparation stage using the images only and fixed at later stage. During training of a DNN framework, samples are selected in the latent space and then decoded via the Decoder in VAE to generate the input for DNN framework. The advantage of such a framework is that now it is able to calculate the gradient w.r.t. the input samples of DNN. This gradient is used to determine the sampling strategy in the next iteration to select harder samples. Two different sampling methods are explored, including nearest neighbor and interpolation (with annotation tool step). The experiments are conducted on small-scale datasets like MNIST CIFAR-10, and IVUS MSE with satisfactory gain over the baselines. Overall, the paper is very well-written and easy to follow. Although the experiment results are not super exciting mainly because of small-scale datasets and not enough gain in the numbers, some of the analysis in Figure 4 are quite insightful to validate the assumption and motivation of this work. So I propose to accept this work for its novelty. I think this work will benefit future research in this direction.\\n\\nHowever, I do have some concerns that I wish the authors could clarify if possible. First, the approach is very similar to online hard negative mining (OHNM) that is purely based on the loss to repeatedly select the samples that generate a larger loss. The major difference is that this work can model the sample distribution and thus select samples based on the gradient w.r.t. the samples in the latent space. This is very novel to me. However, I am wondering if the authors could compare with this sample baseline of OHNM. My concern is that the baselines in this work is too simple and it is not surprise that there is advantage over a simple baseline that is trained without any hard sample mining. \\n\\nSecond, the experiments are all conducted on small-scale and simple datasets like MNIST and CIFAR10. I am concerned how effective this approach could work for large-scale dataset. In the experiment, even for CIFAR10, a vanilla VAE will not work to reconstruct the input. So the authors have used alpha-GAN to help image reconstruction. If that is the case for CIFAR10 with only 10 classes, how could we extend this work to even larger dataset with more complicated background like ImageNet? I would think the preparation step itself is a very challenging task. This is my major concern that will question the effectiveness of the approach in real applications. \\n\\nThird, a related question to the above one. As the input to the DNN is the reconstructed image from the pre-trained decoder, there will be some information loss during the reconstruction process. This is the major challenge, I think, for large-scale applications. Is that possible to use the original image as the input to DNN while still being able to find hard samples using the latent space and the image space correlation? 
\\n\\nFourth, I really like the visualization of Figure 4 that shows the trajectory of the sampling process that follows the boundaries between classes. The authors also mentioned that some trajectories explore towards outside util there is no real samples, which should be avoided. Could the authors comment on how to avoid such cases? In my understand, as the input is randomly sampled at the beginning, it cannot avoid such cases unless some evaluation is done during training to stop the sampling for these trajectories.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThe paper proposes a method for sequential and adaptive selection of training examples to be\\npresented to the training algorithm. The selection happens in a latent space, based on choosing\\nsamples which are in the direction of gradient of the loss in the latent space. Two selection\", \"strategies_are_investigated\": \"nearest neighbor and interpolation followed by generation. Results on\\nshown on MNIST, CIFAR10 and IVUS (Intravascular Ultrasound) datasets.\", \"detailed_comments\": \"The proposed method works in two stages. First a VAE is trained using unannotated samples. In the\\nsecond stage, hard examples are found, in every iteration, in the latent space of the VAE and used\\nfor sequential training. The sampling is done using the gradient of the objective function in the\\nlatent space. The method makes sense, however the choice of space in which the sample selection is\\nbeing done is not well motivated or validated. The space could have been the original image space\\n(although given the high dimension, it would probably not work), or could have been any intermediate\\nfeature space. Why was the space chosen to be the VAE latent space? Would it be possible to \\ndemonstrate some benefits of doing so, theoretically and empirically? \\n \\nThe experiment section is relatively weak. The datasets used are relatively small and in two out of\\nthe three datasets, the method does not improve. The dimension of the latent space is also\\nsurprisingly small (2). While the main body of the paper describes the method with VAEs, the\\nexperiments for the CIFAR10 dataset (where the results were in favor) were done \\\\alpha-GAN.\"}"
]
} |
r1nSxrKPH | Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks | [
"Lukas Jendele",
"Sammy Christen",
"Emre Aksan",
"Otmar Hilliges"
] | Solving long-horizon sequential decision-making tasks in environments with sparse rewards is a longstanding problem in reinforcement learning (RL) research. Hierarchical Reinforcement Learning (HRL) has held the promise of enhancing the capabilities of RL agents via operation on different levels of temporal abstraction. Despite the success of recent works in dealing with inherent nonstationarity and sample complexity, it remains difficult to generalize to unseen environments and to transfer different layers of the policy to other agents. In this paper, we propose a novel HRL architecture, Hierarchical Decompositional Reinforcement Learning (HiDe), which allows decomposition of the hierarchical layers into independent subtasks, while still allowing joint training of all layers in an end-to-end manner. The main insight is to combine a control policy on a lower level with an image-based planning policy on a higher level. We evaluate our method on various complex continuous control tasks for navigation, demonstrating that generalization across environments and transfer of higher-level policies can be achieved. See videos at https://sites.google.com/view/hide-rl | [
"Hierarchical reinforcement learning",
"planning",
"navigation"
] | Reject | https://openreview.net/pdf?id=r1nSxrKPH | https://openreview.net/forum?id=r1nSxrKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"EBa9tvs6Aw",
"S1lqIISnjr",
"SJlFt6EnoB",
"S1lUPaEnsB",
"Bylf9sVnsS",
"SyxS5cVnjH",
"ByxorF4hsr",
"S1xc8U4hoS",
"Hyly6HEniB",
"Bye3eya2KB",
"HJg9A-usKH",
"BJxSfTH8KH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745598,
1573832274278,
1573830016853,
1573829981779,
1573829514187,
1573829260791,
1573828931465,
1573828177841,
1573828022832,
1571766003830,
1571680721528,
1571343629315
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2299/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2299/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2299/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2299/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2299/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2299/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2299/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2299/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2299/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2299/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2299/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission proposes a complex, hierarchical architecture for continuous control RL that combines Hindsight Experience Replay, vision-based planning with privileged information, and low-level control policy learning. The authors demonstrate that the approach can achieve transfer of the different control levels between different bodies in a single environment.\\n\\nThe reviewers were initially all negative, but 2 were persuaded towards weak acceptance by the improvements to the paper and the authors' rebuttal. The discussion focused on remaining limitations: the use of a single maze environment for evaluation, as well as whether the baselines were fair (HAC in particular). After reading the paper, I believe that these limitations are substantial. In particular, this is not a general approach and its relevance is severely limited unless the authors demonstrate that it will work as well in a more general control setting, which is in their future work already. \\n\\nThus I recommend rejection at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Improvements, Clarifications and Ablations Part 3\", \"comment\": \"In section 4.3 it says that the control layer is the only layer with access to the agent's proprioceptive state. Would it not be good to at least include the agent facing direction or current average velocity to higher layers to improve the attention mask estimation?\\n\\nThis is a good point. Such an addition to the planning layer might improve the attention mask. We consider adding this to our approach in the future.\\n\\n*** References ***\\n[1] Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Universal planning networks, abs/1804.00645, 2018.\\n[2] Andrew Levy, Robert Platt, and Kate Saenko. Learning Multi-Level Hierarchies with Hindsight. InInternational Conference on Learning Representations, 2019.\\n[3] Anonymous Authors, Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?, under double blind review for ICLR 2020, https://openreview.net/forum?id=rJgSk04tDH\\n[4] Nantas Nardelli, Gabriel Synnaeve, Zeming Lin, Pushmeet Kohli, Philip H. S. Torr, and NicolasUsunier. Value Propagation Networks. InInternational Conference on Learning Representations,2019\"}",
"{\"title\": \"General Comment Part 1\", \"comment\": \"We would like to thank the reviewers for their feedback and constructive comments. We have prepared an updated version of the paper addressing the main concerns. All of the presented results have been added to the revised paper. Moreover, we reply to each comment and question more specifically down below.\\n\\nWe would like to emphasize the difficulty of the problem that our method HiDe solves. In all our experiments, HiDe learns from a sparse reward, i.e., the environment gives zero reward for reaching the final goal and -1 in all other cases. As shown in [1], such a task is difficult to solve with standard RL algorithms. Furthermore, HiDe does not rely on any modification of the task configuration, such as random start or goal sampling. In such constraint configurations, the tested baseline methods fail. Hence, we train the baselines with randomly sampled goals in our experiments.\", \"we_want_to_clarify_our_main_contributions\": \"1. A hierarchical architecture that can be trained end-to-end with Reinforcement Learning from sparse rewards in a setting without any modification of the task configuration via the use of an effective planner and a novel architecture.\\n2. We show that our method is able to generalize to new environments, while current state-of-the-art methods need to be retrained for each specific environment they are trying to solve. \\n3. Our decomposed structure, achieved via information asymmetry, allows learning a planner with a very simple agent such as a ball and transferring this planner to a more complex locomotion agent such as an ant or a humanoid, which neither HAC [1] nor HIRO [2] can achieve because of the shared state space across all layers.\", \"the_following_are_the_most_important_changes_that_have_been_updated_in_the_revised_version_of_the_paper\": \"*** Comparison to Top-Down View Approach ***\\nTo address the concern of privileged information available to HiDe, we have conducted an additional experiment comparing our methods with [3], an extension of HIRO, which also has access to top-down view images of the environment. We verify that our method shows better generalization performance (contribution 2), even without being provided with random goal sampling and dense rewards during training (contribution 1) as opposed to [3]. The results for experiment 1, which were added to the revised paper, are as follows:\\n=============================================================================\\nExperiment 1 Forward Backward Flipped\\nHIRO follow-up 97 +- 5 0 +- 0 0 +- 0\\nHiDe-R 89+-3 61+-14 90 +- 3\\nHiDe 85+-6 55+-20 69+-40 \\n=============================================================================\\nAs indicated, even with the improved version of HIRO, which uses dense reward, randomly sampled goals and a top-down view of the environment, the method does not generalize beyond the training configuration and environment. This is in agreement with the original paper where the authors claim and show generalization on the flipped environment, but only for the learned goal space representation. Both policies had to be retrained from scratch, essentially requiring retraining for each environment. 
HiDe generalizes without retraining (contribution 2).\\n\\n*** Additional Humanoid Domain***\\nTo further highlight contribution 3, we have added an additional domain of a humanoid agent (17 DoF) as a proof of concept, showing that a planning layer from a simple ball agent that requires only short training time can be transferred to a very complex humanoid agent. As can be seen in the video ( https://drive.google.com/file/d/1nOPaIylOP_hLdZy5TirdHE8LPkqAKQZ4/view ), it can successfully solve the provided tasks. We follow standard procedure of training a humanoid [5], which includes using a shaped reward. We train the humanoid in an empty environment. We then use the planning and interface layer from a ball agent and transfer it to the trained humanoid without any modifications. While transfer of a trained HiDe-planner to a humanoid can be achieved, it is currently not possible to train the humanoid with HiDe end-to-end with only sparse rewards. However, the shown transfer demonstrates the benefits of our method.\"}",
"{\"title\": \"General Comment Part 2\", \"comment\": \"*** Ablation Study Interface Layer ***\\nWe provide an ablation study where we compare our method against a version of HiDe without an interface layer, verifying findings from other work [1,4] that an additional intermediate layer can improve performance. The results for an ant agent in experiment 2 are as follows:\\n=============================================================================\\nExperiment 2\\t\\tForward\\t\\tRandom\\t\\tBackward\\t\\tFlipped\\nHiDe no interface\\t10+-5\\t\\t46+-16\\t\\t3+-4\\t\\t\\t0+-0\\nHiDe\\t\\t\\t\\t81+-8\\t\\t89+-3\\t\\t56+-8\\t\\t\\t74+-11\\n=============================================================================\\n\\n*** Ablation Absolute vs. Relative Positions ***\\nWe conduct an experiment comparing HiDe with absolute versus relative positions. We train two variants of HiDe with absolute positions, one with randomly sampled goals during training (HiDe-AR) and one with a fixed goal position (HiDe-A). For the HiDe and HiDe-R with relative positions, we use the results reported in the initial submission. We evaluate on experiment 1 (simple navigation tasks) and experiment 2 (complex navigation tasks) from our paper. The results look as follows:\\n\\n=============================================================================\\nExperiment 1\\t\\tForward\\t\\tBackward\\tFlipped\\nHiDe-A\\t\\t\\t\\t0+-0\\t\\t0+-0\\t\\t0+-0\\nHiDe-AR\\t\\t\\t\\t95+-1\\t\\t52+-33\\t\\t34+-45\\nHiDe-R\\t\\t\\t\\t89+-3\\t\\t61+-14\\t\\t90+-3 \\nHiDe\\t\\t\\t\\t85+-6\\t\\t55+-20\\t\\t69+-40\\n=============================================================================\\n=============================================================================\\nExperiment 2\\t\\tForward\\t\\tRandom\\t\\tBackward\\tFlipped\\nHiDe-A\\t\\t\\t\\t0+-0\\t\\t0+-0\\t\\t0+-0\\t\\t0+-0 \\nHiDe-AR\\t\\t\\t\\t0+-0\\t\\t0+-0\\t\\t0+-0\\t\\t0+-0 \\nHiDe\\t\\t\\t\\t81+-8\\t\\t89+-3\\t\\t56+-8\\t\\t74+-11\\n=============================================================================\\n\\nAs seen in the results for experiment 1, HiDe-A never manages to solve the task. When allowed random goals during training as in HiDe-AR, it shows a slightly better performance on the forward task than both HiDe and HiDe-R , but does not manage to generalize as well to unseen environments.\\nIn experiment 2, we observe that neither HiDe-A nor HiDe-AR manage to solve any of the environments. This indicates that when scaling to larger environments, relative goal positions can be crucial in learning how to solve a task. We argue that relative positions have the advantage of being reused for similar paths, therefore generalizing within and beyond the training environment, while in the absolute case, the policy has to learn how to generalize to the whole environment map. \\n\\nTherefore, the results indicate that i) relative positions improve performance and are an important aspect of our method to achieve generalization to other environments (contribution 2) and ii) random goal position sampling can help agents, but may not be available depending on the environment. Our approach can handle both random and fixed goals.\\n\\n*** Ablation Study Fixed vs. Learned Window ***\\nWe provide an ablation study, comparing our learned attention window to fixed window sizes. The results indicate the fixed window size can only achieve comparable performance if the size hyperparameter is correctly tuned per agent and environment.\"}",
"{\"title\": \"Improvements, Clarifications and Ablations Part 2\", \"comment\": \"It would be very helpful to have a description of the algorithm in the paper. [...] it would be very helpful if anyone wanted to reimplement this work.\\n\\n We have added a description of the algorithm to the revised version of the paper. Additionally, all the code as well as pretrained models have been publicly released with the initial submission under an open licence.\"}",
"{\"title\": \"Improvements, Clarifications and Ablations\", \"comment\": \"It is noted that many other methods require prior knowledge of the environment I would say this method also requires certain kinds of prior knowledge about the task. For example, a top-down view of the environment is needed which is not often feasible.\\n\\nWe agree on this and therefore have removed the part-sentence about prior knowledge from the related work section. Moreover, we now provide results that compare our approach against [3], which is follow-up work of HIRO that also uses the top-down view image of the environment (see general comment for the results and discussion).\"}",
"{\"title\": \"Minor Changes and References\", \"comment\": \"*** Minor Changes ***\\nTable 2 and 3 in the original submission contained one wrong entry due to incorrect aggregation. We now updated these tables. The data stems from the _unedited_ Tables 7 and 8 (prev 5 and 6) in supplementary materials. \\n==================================================\\nExperiment 2\\t\\tAnt (before)\\t\\tAnt (now) \\nForward\\t\\t\\t\\t81+-8\\t\\t\\t81+-8 \\nRandom\\t\\t\\t\\t90+-2\\t\\t\\t89+-3 \\nBackward\\t\\t\\t58+-10\\t\\t\\t56+-8 \\nFlipped\\t\\t\\t\\t56+-10\\t\\t\\t74+-11 \\n==================================================\\n\\n============================================================\\nExperiment 3\\tAnt -> Ball (before)\\t\\tAnt -> Ball (now)\\nForward\\t\\t\\t100+-0\\t\\t\\t\\t\\t100+-0 \\nRandom\\t\\t\\t97+-1\\t\\t\\t\\t\\t97+-1 \\nBackward\\t\\t90+-22\\t\\t\\t\\t\\t98+-4 \\nFlipped\\t\\t\\t81+-44\\t\\t\\t\\t\\t100+-0 \\n============================================================\\n\\n*** References ***\\n[1] Andrew Levy, Robert Platt, and Kate Saenko. Learning Multi-Level Hierarchies with Hindsight. InInternational Conference on Learning Representations, 2019.\\n[2] Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. Data-efficient Hierarchical Reinforcement Learning. In Advances in Neural Information Processing Systems, pp. 3303\\u20133313,2018.\\n[3] Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. Near-Optimal Representation Learning for Hierarchical Reinforcement Learning. InInternational Conference on Learning Representations, 2019\\n[4] Anonymous Authors, Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?, under submission for ICLR 2020, https://openreview.net/forum?id=rJgSk04tDH\\n[5] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, arXiv preprint arXiv:1707.06347, 2017.\"}",
"{\"title\": \"Ablations and Clarifications\", \"comment\": \"Since HAC seems to perform worse when combined with the proposed low and mid level policies (RelHAC), it would make sense to compare to the proposed high-level policy using low and mid level from HAC instead.\\n\\nThe low and mid level from original HAC cannot be directly combined with our planning layer, since the goal space in HAC is equal to the state space, such that the subgoals given to layers below consist not only of position goals, but also goals for the proprioceptive state of the agent, such as the joint angles and joint velocities. Contrarily, our planner is decoupled from such a task and therefore only learns to give position goals to the mid level.\\n\\n*** Minor Comments ***\\n1. Comment: Missing description of the mid level policy - what encourages the proposal of closer short-term goals.\\n\\nThe interface layer is motivated by results from previous work [1]. Furthermore, it is also claimed by [2] that the use of more layers leads to better exploration. To empirically show that the mid-layer is a necessary part in the architecture, we ran an ablation study (see general comment for the results and discussion).\\n\\n2. Comment: Missing literature on information asymmetry in RL (e.g. see Tirumala et al 2019 \\u2018Exploiting Hierarchy for Learning and Transfer in KL-regularized RL\\u2019)\\n\\nWe have added the missing literature [3] to the revised version of the paper.\\n\\n3. Comment: Unclear description of how models that get attached to existing planning layers have been trained (Sec 5.3).\\n\\nWe have updated and clarified the description of how the transferred models get trained. More specifically, we evaluate the functional decomposition of HiDe by testing the compatibility of layers trained by different agents. We trained HiDe with either the Ball or the Ant agent. Then, we use the planning layer from such an agent and transfer it onto another agent that was trained using RelHAC, i.e., HiDe without our proposed planner on the top layer. The planning layer agent is trained in the more complex environment of experiment 2 and the second agent is trained in the easier environment from experiment 1. Despite being trained in different environments and with different agents, the planner is transferable. Moreover, our estimate indicates that training the planner with a Ball and then transferring it to a more complex agent is as much 3 to 4 times faster than training HiDe with the Ant or Humanoid from scratch.\\n\\n4. Comment: Additional unclear description in the description of the experiments and method sections (4.2, 4.3)\\n\\nCould the reviewer be a bit more specific about what we can clarify/improve in Sections 4.2 and 4.3? We will gladly apply such changes.\\n\\n*** References ***\\n[1] Andrew Levy, Robert Platt, and Kate Saenko. Learning Multi-Level Hierarchies with Hindsight. InInternational Conference on Learning Representations, 2019.\\n[2] Anonymous Authors, Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?, under double blind review for ICLR 2020, https://openreview.net/forum?id=rJgSk04tDH\\n[3] Dhruva Tirumala, Hyeonwoo Noh, Alexandre Galashov, Leonard Hasenclever, Arun Ahuja, Greg\\nWayne, Razvan Pascanu, Yee Whye Teh, and Nicolas Heess. Exploiting Hierarchy For Learning\\nand Transfer in KL-regularized RL. CoRR, abs/1903.07438, 2019\"}",
"{\"title\": \"Improvements of Experiments via Ablation Studies and a Humanoid\", \"comment\": \"HiDe uses a top down view of the maze and the x y position of the agent, which certainly is more privileged information. If this is correct, first, this should be discussed more thoroughly and directly in the paper. Second, the experimental setup should be elaborated on: is HIRO or HAC modified to include the same information for the top-level policy? Or can HiDe somehow be extended to not require this information?\\n\\nHiDe has indeed access to further information as it receives a top-down image, while HAC [1] and HIRO [2] do not get such information. We mention this more explicitly in the updated version of the paper. Moreover, we now provide results that compare our approach against [3], which is follow-up work of HIRO that uses a simplified top-down view image as well (see general comment for the results and discussion).\\n\\n*** Experiment Section Improvements ***\\n\\n1. Comment: Table 1 and Figure 5 seem disconnected. In particular, the numbers reported in Table 1 are clearly not achieved in Figure 5. Is the figure cut off early?\\n\\nTable 1 and Figure 5 may seem disconnected. For the evaluation shown in Table 1, we selected the checkpoint for each seed where the validation success rate during training was highest. Since these checkpoints are taken at different timestamps, the averaged score of Figure 5 may yield the impression of not achieving these scores. We have added a clearer description of this process in the revised version of the paper and apologize for any possible misunderstandings.\\n\\n2. Comment: Furthermore, an additional experiment on a more complicated domain would greatly strengthen the paper. A humanoid agent, for example, seems easy to test for the current method.\\n\\nWe have added a humanoid agent as a proof of concept. We thereby highlight that our method allows training the planning layer with a simplistic agent such as a ball and transferring it to a very complex agent such as the humanoid (contribution 3). \\n\\n3. Comment: In my opinion, as the paper currently rests heavily on the results, this section should be further improved. \\n\\nWe try to address this by adding further ablation studies to our experiment sections. More specifically, we compare relative and absolute positions, fixed and learned window sizes for the planning layer, and HiDe with and without interface layer. \\n\\n\\n*** References ***\\n[1] Andrew Levy, Robert Platt, and Kate Saenko. Learning Multi-Level Hierarchies with Hindsight. InInternational Conference on Learning Representations, 2019.\\n[2] Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. Data-efficient Hierarchical Reinforcement Learning. In Advances in Neural Information Processing Systems, pp. 3303\\u20133313,2018.\\n[3] Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. Near-Optimal Representation Learning for Hierarchical Reinforcement Learning. InInternational Conference on Learning Representations, 2019\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The submission proposes a novel method for explicit decomposition of hierarchical policies for long-horizon navigation tasks. The approach proposes to separate a policy into 3 modules, high-level planner, intermediate planner and low-level control. The evaluation shows that explicit decomposition is well suited for generalisation across a limited set of RL domains.\\n\\nThe proposed method integrates aspects from a variety of recent work including planning layers from value propagation networks, hindsight training paradigms from hierarchical actor critic and hindsight experience replay and related techniques. The variety of different techniques combined instead of a single main contribution renders it challenging to follow all aspects and in particular to trace relevant contributions to performance - which is rendered harder by a limited evaluation section.\\n\\nWhile the approach shows good performance against a couple of start of the art methods, it is necessary to provide sufficient ablations to enable long-term insights for the community. The submission high level goal (explicit decomposition and information asymmetry) is clear, the execution involves the combination of many existing techniques plus variations such that it is hard to make solid statements about the relevance of any part.\\n\\nIt is commendable that the authors have introduced adaptations and improvements to their baselines for a stronger and fairer comparison but the evaluation remains very limited. \\nI suggest to run different domains as given by other domains from OpenAI gym or DeepMind control suite. But more importantly I suggest to run further ablations without the intermediate planning layer & with absolute goal positions. Furthermore, since HAC seems to perform worse when combined with the proposed low and mid level policies (RelHAC), it would make sense to compare to the proposed high-level policy using low and mid level from HAC instead. \\n\\nThe submission provides an overall interesting perspective but makes it hard to narrow down on contribution and important insights by being unclear in formulation and providing only very limited ablations.\", \"minor_issues_include\": [\"Missing description of the mid level policy - what encourages the proposal of closer short-term goals.\", \"Missing literature on information asymmetry in RL (e.g. see Tirumala et al 2019 \\u2018Exploiting Hierarchy for Learning and Transfer in KL-regularized RL\\u2019)\", \"Unclear description of how models that get attached to existing planning layers have been trained (Sec 5.3).\", \"Additional unclear description in the description of the experiments and method sections (4.2, 4.3)\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper addresses hierarchical deep reinforcement learning (RL), an important problem in control learning and RL. Based on my understanding of this paper and recent prior work, the most important difference between the proposed approach (HiDe) and other recent approaches, such as HIRO and HAC, is that the top-level goal proposal policy uses a learned planner based on VIN and a learned attention mask to decide on a subgoal. There are also other differences, e.g., this policy outputs a goal position that is relative to the agent's position, rather than an absolute position. HiDe seems to demonstrate impressive transferability to both unseen mazes and new agent embodiments, which are important problems to address for hierarchical RL.\\n\\nIntroducing learned planning and attention into the top-level policy seems to also introduce additional assumptions into the method. For example, it is my understanding that HIRO and HAC use the same state representation, e.g., joint positions and velocities of the agent, as input to their top-level policies. In contrast, HiDe uses a top down view of the maze and the x y position of the agent, which certainly is more privileged information. If this is correct, first, this should be discussed more thoroughly and directly in the paper. Second, the experimental setup should be elaborated on: is HIRO or HAC modified to include the same information for the top-level policy? Or can HiDe somehow be extended to not require this information? I do not think that requiring this information is egregious, but currently the experimental comparison is not clear in this regard.\\n\\nThe experiments are arguably the strongest part of the paper, and the transfer results and videos are quite nice. But there is still room for improvement. Table 1 and Figure 5 seem disconnected. In particular, the numbers reported in Table 1 are clearly not achieved in Figure 5. Is the figure cut off early? Furthermore, an additional experiment on a more complicated domain would greatly strengthen the paper. A humanoid agent, for example, seems easy to test for the current method. Another option would be the movable blocks tested in HIRO, though it is unclear if this readily fits into the current method's assumptions. In my opinion, as the paper currently rests heavily on the results, this section should be further improved. Doing so would also improve my rating of the paper.\\n\\n------\", \"edit_after_author_response\": \"I appreciate the authors' efforts in providing extensive responses to all of the reviewers' concerns as well as a significant general response detailing what seems to be a large amount of additional experimental work. I think that all of this warrants a change to my score, and seeing that the authors have more or less addressed my concerns, I am bumping up to a weak accept.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a neat framework for creating HRL framework that will be able to generalize its application to slightly different environment layout. This is done via an image-based top-down from as input to the high level. An intermediate layer is used to help create more fine-grained goal specification for a final goal-based control layer. These layers are trained together using HAC. Overall, the method shows promise but there needs to be more analysis to understand which parts of this combination of ideas are the most important. The results are also only shown for a single environment. Last, the generalization analysis in the paper does not appear to be overly thorough. It would be good to perform this on more than one type of environment also the random environment is not very random.\", \"more_detailed_comments\": [\"The results seem very similar to some of the work in \\\"Universal Planning Networks\\\" that did not need a more complex HRL design to achieve subgoal specification via images. This should be discussed more in the paper.\", \"The authors point out that the use of relative goal positions \\\"ensures generalization to new environments\\\", this is a rather strong statement. The use of relative goal specification my help improve generalization but that can only be shown empirically.\", \"The demonstration to show that the method generalizes to other configuration after being trained on a fixed environment should be evaluated over many randomly generated environments so that we have a non-biased estimate of the true generalization performance. In Table 2, it is rather surprising that the HiDe trained model does better on the \\\"Random\\\" environment vs the \\\"FOrward\\\" environment it is trained on. Can more details be provided on how the \\\"Random\\\" environment is created? Are the locations of the walls randomized? Is the initial position of the agent and goal randomize?\", \"The video seems to contradict the ordering of operations for training the planning network. The video suggests that first it is learned with the Ant then transferred to the ball which is less complex to control.\", \"At the beginning of the prior work section, it is noted that many other methods require prior knowledge of the environment I would say this method also requires certain kinds of prior knowledge about the task. For example, a top-down view of the environment is needed which is not often feasible.\", \"HIRO and HAC use a more proprioceptive state space but I don't think the sharing of global states is intentional. I am not convinced that this choice, in particular, is what makes the approaches prone to overfitting.\", \"You show a comparison to the \\\"windows\\\" created from your method vs a fixed neighbourhood. Do you perform any empirical evidence that your introduced methods provide an improvement over this fixed window?\", \"It is mentioned at the end of section 4.1 that MVProp is differential so it can be trained with the Bellman error objective. Because many policies are being trained concurrently does the MVProp attention model need to be recomputed after every sub-policy update? 
Does the frequency of updates have a large effect on performance?\", \"Section 4.2 introduces an interface layer that is not a very common practice. It would be good to include an ablation study of the effects of this introduced layer.\", \"In section 4.3 it says that the control layer is the only layer with access to the agent's proprioceptive state. Would it not be good to at least include the agent facing direction or current average velocity to higher layers to improve the attention mask estimation?\", \"In figure 5 it says HIRO converges the fastest because it has dense rewards. Can you be more specific? Also, If different agents are using different reward signals I am not sure this evaluation is a fair comparison.\", \"Are tables 2 and 3 just for the HiDe algorithm? Is it possible to include data for the other algorithms?\", \"You perform an experiment to train HiDe with random initial and goal locations for comparison. I think running this comparison for HIRO and HAC would be a good additional point of comparison. This would help the reader know if the generalization is not biased to the particular initial environment configuration for Maze Forward.\", \"In the generalization analysis for the paper, how is the analysis performed? There are percentages for the success of the policy, where does the randomness come from is the agent state and goal are always fixed? Are these averaged because the agent has a stochastic policy during evaluation? If this is the case how many random trajectories are collected to compute these statistics?\", \"It would be very helpful to have a description of the algorithm in the paper. How the algorithm works is not very clear and some details about how the goal and states are passed to the different policies would be very helpful if anyone wanted to reimplement this work.\"], \"updated_comments\": [\"The addition of more results and added analysis helps show the improvement of this method over the most related baselines.\"]}"
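The review above repeatedly probes the MVProp planning layer. For orientation, a rough sketch of max-propagation value planning in the spirit of VProp/MVProp (hedged; this is not the paper's exact formulation): values flow outward from the goal cell through a learned propagation map `p` in [0, 1] (low values roughly corresponding to walls), and because every step is differentiable the planner can be trained with a Bellman-style objective, as the review notes.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of max-propagation planning on a 2D grid.
def propagate_values(reward_map, p, iters=50):
    v = reward_map.clone()                               # (1, 1, H, W), goal cell = 1
    for _ in range(iters):
        neigh = F.max_pool2d(v, 3, stride=1, padding=1)  # best neighboring value
        v = torch.max(v, p * neigh)                      # propagate, attenuated by p
    return v                                             # greedy ascent on v gives a path
```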
]
} |
rygjHxrYDB | Deep Audio Priors Emerge From Harmonic Convolutional Networks | [
"Zhoutong Zhang",
"Yunyun Wang",
"Chuang Gan",
"Jiajun Wu",
"Joshua B. Tenenbaum",
"Antonio Torralba",
"William T. Freeman"
] | Convolutional neural networks (CNNs) excel in image recognition and generation. Among many efforts to explain their effectiveness, experiments show that CNNs carry strong inductive biases that capture natural image priors. Do deep networks also have inductive biases for audio signals? In this paper, we empirically show that current network architectures for audio processing do not show strong evidence of capturing such priors. We propose Harmonic Convolution, an operation that helps deep networks distill priors in audio signals by explicitly utilizing the harmonic structure within. This is done by engineering the kernel to be supported by sets of harmonic series, instead of the local neighborhoods of conventional convolutional kernels. We show that networks using Harmonic Convolution can reliably model audio priors and achieve high performance in unsupervised audio restoration tasks. With Harmonic Convolution, they also achieve better generalization performance for sound source separation. | [
"Audio",
"Deep Prior"
] | Accept (Poster) | https://openreview.net/pdf?id=rygjHxrYDB | https://openreview.net/forum?id=rygjHxrYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"2dQkGQmGobz",
"va2iF9wmc7s",
"bdbXuz1rL6q",
"POx-dJ6jqV",
"NLQT-sVw1m",
"Csq5l8wPh",
"Hyg0_EvhsH",
"H1l2JDmroS",
"ByxcMLXSsS",
"r1xOpSmroS",
"HJxtYHmSjH",
"rJeNYCa6KS",
"BJl815FntH",
"Skgs9l3vtr",
"H1ea5LjwFr",
"HJeBEdKvtr"
],
"note_type": [
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_review"
],
"note_created": [
1603163635436,
1603156262581,
1599825990450,
1578497229218,
1578426697615,
1576798745570,
1573839989965,
1573365476150,
1573365265744,
1573365184325,
1573365121171,
1571835515826,
1571752414271,
1571434642654,
1571432084936,
1571424301403
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2298/Authors"
],
[
"~Kazuyoshi_Yoshii2"
],
[
"~Gabriel_Soares_Xavier1"
],
[
"ICLR.cc/2020/Conference/Paper2298/Authors"
],
[
"~Hirotoshi_Takeuchi1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2298/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2298/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2298/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2298/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2298/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2298/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2298/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2298/Authors"
],
[
"~Joe_Renner1"
],
[
"ICLR.cc/2020/Conference/Paper2298/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Re:HCQT\", \"comment\": \"Thanks for the comments and we'll add related references to our text.\", \"we_would_like_to_point_out_several_major_differences\": \"1. CQT-based representations usually has inversion issues, which somehow constraint its usage in generative models, such as ours.\\n2. Our Harmonic convolution is composed of different anchoring and learnable mixtures as well, in addition to sampling at harmonic locations. We showed in our experiments that all of them play an important role in bringing the implicit prior to the network.\\n3. Though we did not experiment with HCQT, but we did experimented with CQT+dilated convolutions, which I would assume is extremely similar to the paper mentioned. However it did not work for us, mostly due to the problems of inverting it back to raw waveform. Also, since the temporal resolution is usually varying for different frequency under CQT, one needs to deform the filter in the temporal axis as well, making sure the convolution window covers the same amount of time.\\n\\nHope this could be helpful!\\n\\nBest,\\nAuthors.\"}",
"{\"title\": \"HCQT\", \"comment\": \"Hello. This is an interesting paper in a sense that the harmonic convolution is shown to work as an inductive bias for forming the deep audio prior. However, the idea of the proposed \\\"Harmonic Convolution\\\" is essentially identical to \\\"Harmonic CQT\\\" (HCQT), which has widely been used in the field of music audio processing. Stacking the pitch-shifted versions of an original spectrogram into a tensor enables efficient implementation.\", \"https\": \"//github.com/rabitt/ismir2017-deepsalience\"}",
"{\"title\": \"Doubt about harmonious convolution.\", \"comment\": \"Hello, I am a graduate student and I need to better understand this idea of \\u200b\\u200bharmonious convolution, there is some material that can explain this more clearly, I would like to implement this in tf.keras.\"}",
"{\"title\": \"Re: Similar implementation question\", \"comment\": \"Hi,\\n\\nWe use zero paddings for all the operations.\\n\\nThanks,\\nAuthors\"}",
"{\"title\": \"Similar implementation question\", \"comment\": \"Hello.\\n\\nYour paper interested me!\\n\\nI have a similar question.\\nI want to know how to calculate X[k\\u03c9/n,t] when \\u03a9 < k\\u03c9/n.\\n\\nThanks.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper introduces a new convolution-like operation, called a Harmonic Convolution (weighted combination of dilated convolutions with different dilation factors/anchors), which operates on the STFT of an audio signal. Experiments are carried on audio denoising tasks and sound separation and seems convincing, but could have been more convincing: (i) with different types of noises for the denoising task (ii) comparison with more methods for sound separation. Apart those two concerns, the authors seem to have addressed most of reviewers' complaints.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Results for Additional Experiments\", \"comment\": \"1. Comparing denoising results with Deformable Convolution:\\nWe report the denoising performance of Deformable Convolution on the LJ-Speech dataset. We use the same network structure and hyper-parameter described in the paper. To keep the number of parameters consistent, we do not use extra convolution layers to predict kernel offsets, as described in the original deformable convolution paper. We treat the offsets as parameters and optimize them together with the network's weight.\", \"deformable_convolution\": \"\", \"csig\": \"1.00119 CBAK:1.67021 COVL:1.00383 PESQ:1.05173 SSNR:-1.50226\", \"the_channels_for_adding_layers_are\": \"[input\\u2192input, input\\u219235], [35\\u219235, 35\\u219270],[70\\u219270, 70\\u2192140], [140\\u2192140, 140\\u2192140], [280\\u2192280, 280\\u219270], [140\\u2192140, 140\\u219235], [70\\u219270, 70\\u219235]\", \"the_corresponding_metrics_are\": \"\", \"the_channels_for_removing_layers_are\": \"[input\\u2192input, input\\u219235], [35\\u219235, 35\\u219235],[70\\u219270, 70\\u219235]\", \"we_also_updated_the_corresponding_results_on_our_webpage\": \"\", \"https\": \"//anyms-sbms.github.io/speech_denoising.html\\n\\nThe shallow version of our network is still capable of generating restored content to some degree, yet the deep version seems to overfit to the noise.\"}",
"{\"title\": \"General Response to Reviewers' Comments\", \"comment\": \"We thank the reviewers for their efforts and helpful comments. We have revised the paper to address concerns on the presentation and are running additional experiments as suggested. We will update the paper again before Nov 15 to include the results of those experiments.\\n\\nText revisions include (highlighted in the revised manuscript):\\n\\n1. A better introduction of deep priors (R1):\\n\\nWe have modified the first paragraph in the introduction section, adding a highly summarized overview of Lempitsky et al. (2018), to establish the foundation of deep priors.\\n\\n2. More accurate arguments on deep models for image and audio processing (R1):\\n\\nWe modified Sec. 2 to convey the core message and motivation of this paper: investigating whether various designs for generative audio signal modeling could capture audio priors by their structure. We do recognize that CNNs prove effective for various discriminative tasks, yet our task is more related to generative modeling of audio signals. \\n\\n3. Clearer introduction of Fig. 1 (R1):\\n\\nWe\\u2019ve added more descriptions in the caption of Fig. 1 to clarify the setup. \\n\\n4. Better relate and motivate the design of Harmonic Convolution in Sec. 2.1 (R1):\\n\\nWe modified Sec. 2.1 by adding more explanation at the end to make it more clear and accessible. We also added the missing definition of x_0 in the text.\\n\\n5. Improve Sec. 3 to be more accurate on CNN models for image and audio processing (R1):\\n\\nWe modified this sentence to be more explicit, acknowledging their success in discriminative tasks. \\n\\n6. Add deformable convolution into context (R1):\\n\\nWe added a section in the related works to discuss the structured operators in various deep learning models. \\n\\n7. Better explanation to Fig. 2 (R2, R3):\\n\\nWe have modified the caption for Fig. 2 and text in Sec. 2.4 to be more clear about natural statistics analysis. \\n\\n8. More clarification on Harmonic Conv in Approach section (R3):\\n\\nWe have modified the text in the implementation details in Sec. 3 and the setup paragraphs in Sec. 4.2, 4.3, and 4.4 to make this point. \\n\\n9. Better Explanation of Fig. 4 (R2):\\n\\nWe rewrote the caption for Fig. 4 to make it more clear. \\n\\n10. Dilated Convolution in the paper\\u2019s notion (R2):\\n\\nWe have added a section in the appendix to include dilated convolution in the paper's formulation.\\n\\n11. Typo in Equation 1 (R3):\\n\\nWe fixed the typo indicated by R3.\\n\\nExtra experiments include\\n\\n1. Compare against deformable convolution (R1)\", \"we_report_the_speech_denoising_result_using_deformable_convolution\": \"\", \"deformable_convolution\": \"\", \"csig\": \"1.00119 CBAK:1.67021 COVL:1.00383 PESQ:1.05173 SSNR:-1.50226\", \"we_also_removed_the_middle_layers_and_the_corresponding_metrics_in_speech_denoising_are\": \"\", \"we_also_updated_the_corresponding_results_on_our_webpage\": \"\", \"https\": \"//anyms-sbms.github.io/speech_denoising.html\\n\\nWe highly recommend listening to the results for different approaches. \\n\\n3. 
Compare against source separation outside the framework (R2)\\nWe compared with Non-negative Matrix Factorization (NMF) for source separation, and we report results for both unsupervised and supervised NMF:\\n\\n----unsupervised----\", \"guitar\": \"SDR: 5.97 SIR: 7.56 SAR: 12.81\", \"congas\": \"SDR: 1.77 SIR: 2.76 SAR: 11.97\", \"xylophone\": \"SDR: 8.08 SIR: 12.33 SAR: 11.72\\n\\nPlease let us know for any questions. Thanks again for all the suggestions, which have made this submission stronger. \\n\\nBest,\\nAuthors.\"}",
"{\"title\": \"Response to Reviewer #2's comments\", \"comment\": \"Thank you for your constructive comments! We would like to address your concerns as follows:\\n\\n1. Clarification on Fig. 4.\\n\\nWe rewrote the caption for Fig. 4. Specifically, for Wave-U-Net, the green curve indicates the fitting result compared against the noisy target, and the red curve is the result evaluated against the clean signal. Therefore, Wave-U-Net fits the noisy target fast but does not produce the clean version of the signal during fitting. For Convolution and Dilated Convolution networks, they do fit faster but saturates with low-quality output. Harmonic Convolution produces much better results, which is ~3.5 dB higher. We highly recommend listening to examples at https://anyms-sbms.github.io to feel the difference. \\n\\n2. Dilated convolution in paper\\u2019s notation.\\n\\nWe have added a section in the appendix to include dilated convolution in the paper's formulation.\\n\\n3. Clarification on Fig. 2.\\n\\nSince the plots in Fig. 2 are log-scale, one would expect nearly linear fall-off of energy from low-frequency components to high-frequency components, which is the case of (a). But (c)(e) exhibit drastically different fall-offs of energies compared with (a). We have modified the caption of Fig. 2 to be more specific.\\n\\n\\nWe compared our model with unsupervised/supervised NMF for sound source separation, a common unsupervised baseline for this task. The evaluations are reported as follows:\\n----unsupervised----\", \"guitar\": \"SDR: 5.97 SIR: 7.56 SAR: 12.81\", \"congas\": \"SDR: 1.77 SIR: 2.76 SAR: 11.97\", \"xylophone\": \"SDR: 8.08 SIR: 12.33 SAR: 11.72\\n\\nPlease let us know for any questions. Thanks again for your suggestions, which have made this submission stronger. \\n\\nThanks,\\nAuthors\"}",
"{\"title\": \"Response to Reviewer #3's comments\", \"comment\": \"Thank you for your helpful suggestions and we would like to address your concerns as follows:\\n\\n1. Better explanatory texts for natural statistics comparison.\\n\\nWe have modified the caption for Fig. 2 and text in Sec 2.4 to be more clear about the natural statistics analysis. This analysis is intended to contrast the natural statistical differences among the representations, to indicate that different modeling approaches are needed for each of them. Models that capture image priors well might not transfer to spectrograms or raw waveforms.\\n\\n2. Equation 1 typo fixed.\\n\\n3. Complex Coefficient vs Spectrograms.\\n\\nThanks for the suggestion. We intentionally use the spectrogram notation as we do not use complex-valued kernels with complex-valued convolution. Yet in order to generate the audio signal, we simply generate the real and imaginary parts of the STFT coefficients such that we can convert them to waveform using inverse STFT.\\n\\nWe have modified the text in the implementation details in Sec. 3 and the setup paragraphs in Sec. 4.2, 4.3, and 4.4 to make this point. \\n\\n4. Details in the experiments to clear up the settings.\\n\\nWe have modified the text in Sec. 4.2, 4.3, and 4.4 to make the details more clear. \\n\\nFor experiments in Sec. 4.2 and 4.3, the network\\u2019s output is the complex STFT coefficient, the raw waveform is then recovered by inverse STFT using the overlap-and-add method. For experiments in Sec 4.4, the output of the network is the ratio mask, and the separated audio is generated by an Inverse STFT operated on the input STFT coefficients multiplied by the predicted ratio mask. The L1 loss is calculated between the predicted ratio mask and the ground truth ratio mask.\\n\\nPlease let us know for any questions. Thanks again for your suggestions, which have made this submission stronger. \\n\\nThanks,\\nAuthors\"}",
"{\"title\": \"Response to Reviewer#1's comments\", \"comment\": \"Thank you for your constructive comments about our manuscript. We have modified the paper in the following way and hopefully, this could address your concerns:\\n\\n1. Introducing Harmonic Convolution with the context of deformable convolution\\n\\nWe emphasize that our method is only loosely related to deformable convolution. In our paper, we show that the harmonic structure is important for the network to capture priors in audio signals. This structure is general and does not need to be learned. In contrast, deformable convolution emphasizes that learning custom offsets for convolutional kernels can boost object detection performance. The only connection we have now with deformable convolution is that we used their implementation of fractional bilinear sampling during convolution as a building block for our method. Such an implementation can and will be replaced for better efficiency. To clarify this explicitly, in revision, we have added a section in related work to discuss the various structured operators in deep learning. \\n\\nWe agree that it\\u2019d still be a good idea to compare with deformable convolution. We\\u2019ll include the results in a later revision by Nov 15.\\n\\n2. An early and formal notion of Deep Priors, without assuming moderate exposure to Lempitsky et al. (2018):\\n\\nWe have modified the first paragraph in the introduction section, adding a highly summarized overview of Lempitsky et al. (2018), to establish the foundation of deep priors.\\n\\n3. More accurate narratives towards deep learning models in image and audio processing:\", \"we_modified_it_to_convey_the_core_message_and_motivation_of_this_paper\": \"investigating whether various designs for generative audio signal modeling could capture audio priors by their structure. We do recognize that CNNs prove effective for various discriminative tasks, yet our task is more related to generative modeling of audio signals.\\n\\n4. A more precise description of Fig. 1:\\n\\nWe\\u2019ve added more descriptions in the caption of Fig. 1 to explain the setup. \\n\\n5. Relate Sec. 2.1 to audio priors, explain why mapping random vector z to corrupted target signal is meaningful and why this matters.\\n\\nAs demonstrated in Lempitsky et al. (2018), when the neural network is fitting a corrupted signal x_0 with randomly initialized weights and with random vector z as input, it would first learn a mapping from z to the clean version of $x_0$. The argument being that the network provides an implicit regularization, where the clean signal is much easier for it to fit. Thus, the inductive bias implied by the network itself can be seen as more suitable for modeling images. In terms of the audio signals, we aim to use the same setup to probe if the inductive biases implied by various models are suited for audio signal modeling. \\nWe modified Sec. 2.1 by adding more explanation at the end to make this point more clear and accessible. We also added the missing definition of $x_0$ in the text.\\n\\n6. The opening sentence in Sec. 3, more on the motivation of harmonic convolution:\\n\\nThe opening of Sec. 3 is not about whether CNNs are the best blocks for learning from audio signals (in fact, we acknowledged their success in supervised learning tasks), but to state that they do not, by nature, capture audio priors as shown in Figure 1 and our experiments in general. 
We clarify that whether a model can capture audio priors is not necessarily related to their performance in supervised learning tasks such as speech recognition. We have edited this sentence to be more explicit. \\nIn light of the facts above and motivated by experiments in psychoacoustics, we aim to exploit the harmonic structure in audio signals explicitly, which leads to the design of harmonic convolutions.\\nThe motivation above was added to the paper, as suggested.\\nAs for the illustrations, we are assuming this refers to Fig. 1. We added text referring to the exact architecture of the models, which is described in the appendix.\\n\\n7. Confusion about Fig. 3\\n\\nWe have revised Fig. 3 to be more straightforward. We added the annotation for input and output, as well as equations to make the anchoring parameter explicit. We also changed the aggregation annotation to be more clear. \\n\\n8. Different types of noise\\n\\nThanks for the suggestion. We focus on additive Gaussian noise in our experiments because that is the most common approximation of the channel noise during transmission. We also showed restoration results under very aggressive quantization, which is rather signal-dependent instead of additive. We are happy to experiment with other types of channel noise if there is any specific suggestion.\", \"we_are_working_on_the_following_experiments_to_address_the_points_you_raised\": \"1. Comparing denoising results with Deformable Convolution. \\n\\n2. Varying the depth of each model and report performance change, in the context of denoising.\\n\\nPlease let us know for any questions. Thanks again for your suggestions, which have made this submission stronger. \\n\\nThanks,\\nAuthors.\"}",
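A minimal sketch of the deep-prior probe described in point 5 above, assuming `f` is any randomly initialized network from the comparison and `x0` the corrupted target; shapes, step counts, and the learning rate are assumptions. The quantity of interest is how the intermediate reconstructions compare to the clean signal during fitting.

```python
import torch

# Hedged sketch of the Lempitsky et al. (2018)-style fitting loop.
def fit_deep_prior(f, x0, steps=3000, lr=1e-4):
    z = torch.randn_like(x0)                  # fixed random input
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(f(z), x0)
        loss.backward()
        opt.step()
    # Stop early in practice: a good prior reaches the clean signal first,
    # while late iterates refit the corruption.
    return f(z).detach()
```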
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper considers the effectiveness of standard convolutional blocks for modelling learning tasks with audio signals. The effectiveness of a neural network architecture is assessed by evaluating its ability to map a random vector to a signal corrupted with an additive noise. Figure 1 illustrates this process with a network taking a single standard normal vector as input and having a single target output consisting of some signal corrupted with additive noise.\\n\\nThe paper is not well written and it is rather difficult to follow. It is also not well structured with a number of relevant concepts properly described only sections after they appear for the first time.\\n\\nThe first issue I had with the paper was the notion of audio prior. It was only after reading the whole paper that I have realized what this means. Having said this, it is unclear why the employed notion would work in general. I see why it could work when the distribution of the input vector and additive noise are correlated. This has not been clarified nor discussed and I believe it merits a couple of sentences.\\n\\nIn the introduction, the paper states \\\"... unlike CNNs for image modelling, the design of deep neural networks for auditory signals has not yer converged\\\". First, it is not clear what it means for the architecture to converge. If we assume that it refers to standard convolutions with a couple of widely accepted filter size and max pooling, the I would say that in speech recognition the structures that work are quite similar for mel-frequency coefficients or fbank features as inputs (which are again convolutional feature extraction layers).\\nShortly after this, there is a question on justification of network designs. I disagree with a potential implication that this is well understood for image processing. For some insights relevant to speech, the work by Mallat (\\\"Group invariant scattering\\\", 2012) might be useful.\\n\\nFigure 1 and the paragraph just below its caption are not clear. It is not explained what is the input/output of the network and this is of great importance for the understanding of the illustration in Figure 1.\\n\\nThe introduction does not explicitly define the notion of audio prior and the whole paper is about this. In my opinion, it is wrong to assume that a reader has seen the paper by Lempitsky et al (2018).\\n\\nSection 2.1, the optimization objective as formulated implies that z and x_0 are completely independent. I do not see how any meaningful conclusion can be derived by fitting a map between independent input and output vectors. Some assumption is required for the proper notion of \\\"audio prior\\\" (if not, then a discussion arguing for the opposite).\\n\\nSection 3, opening paragraph concludes that standard CNNs are not the best blocks to model learning tasks with audio signals. For this implication, one needs the exact structure of CNN network and more details with regard to the experiment itself. In particular, there are deep CNNs (with mel-frequency coefficients as inputs) that work very well in speech recognition (e.g., on noisy datasets such as aurora4). This illustration does not say anything about the influence of the depth and number of convolutional blocks on a learning task. 
The language should be more moderate here and, in general, some additional work is required on the motivation of harmonic convolutions.\\n\\nIn my understanding, harmonic convolutions are a special case of deformable convolutions (Dai et al., 2017). In essence, standard convolution is applied over time and deformable over the frequency axis of a spectrogram. The main contribution seems to be in that the work provides a structure to the offsets in Dai et al. (Section 2.1, 2017). If I am correct, then this should be discussed in detail and the harmonic convolution needs to be placed in the context of prior work. It might help to start with a review of that work and then introduce the imposed structure on the offset vectors. I am having problems understanding the illustration in Figure 3.\\n\\nIn the experiments, the work is evaluated on signal de-noising (audio restoration) and sound separation. \\n\\nThe first task is carried out under the assumption that the signal has been corrupted with Gaussian noise, and the results show advantages of the approach over baselines which include standard convolutional networks. It would be interesting here to see how the depth of a convolutional network affects the performance. Also, as the approach is (in my understanding) a special case of deformable convolutions, it would be insightful to show an experiment with that baseline. While additive noise is difficult on its own, many signals are corrupted by channel noise. It would be interesting to add an experiment with different types of channel noise and which network design is more likely to de-convolve the noise from the signal.\\n\\nThe second experiment deals with separation of sounds of different musical instruments and the results again show advantages of harmonic convolutions over the baselines.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the problem of how to design generative networks for auditory signals in order to capture natural signal priors. Compared to state-of-art methods in images [Lempitsky et al., 2018], this problem is not so easy on audio signals. Existing work [Michelashvili &Wolf] trains generative networks to model signal-to-noise ratio rather than the signal itself. This paper proposes a new convolutional operator called Harmonic Convolution to improve these generative networks to model both signals or signal-to-noise ratio. Applications on audio restoration and source separation are given.\\n\\nThe paper starts to show that an existing generative network Wave-U-Net does not capture audio signal priors. The explanation in Fig 2 on why this is the case seem to me not so clear. Are you trying to show that the Wave-U-Net does not work since there is no 1/f^2 law for clean audio signals? \\n\\nThe Harmonic Convolution is similar to deformable convolutions, but specifically designed to capture audio harmonics. It is further combined with the idea of anchors and mixing to capture fractional frequencies. The explanation of this section is slightly unclear. There is a little typo in Formula 1 for the STFT spectrogram, I would use the modulus |.| rather than || . ||. Is Harmonic Convolution applicable to complex STFT coefficients as well? It seems to be yes based on Section 4.2. If so it would be better to define the operator in a more general notation.\\n\\nNumerical experiments show that the Harmonic Convolution improves over existing regular and dilated convolutions in various settings. Section 4.2 aims to fit the complex STFT coefficients of corrupted signals. However, the setting is less clear to me for both the unsupervised speech/music restoration and supervised source separation problems. In Section 4.3 and 4.4, is the x_0 (defined in Section 2.1) complex-valued STFT coefficients or something else? It seems to me x_0 = ratio mask in Section 4.4. What is the L1 loss defined in Section 4.4? To obtain the final separated audio waveform, an inverse STFT is applied on what? These details can be written in supplementary material if more space is needed. After all, the numerical results seem to me encouraging.\"}",
"{\"comment\": \"Hi, Joe\\n\\nThank you for your comment!\\n\\nYes your understanding is correct. We use the bilinear interpolation for sampling the fractional frequencies.\", \"title\": \"Re: Implementation Question\"}",
"{\"comment\": \"Hello,\\n\\nGreat paper! One quick question:\\n\\nIn the case where the anchor value n is > 1, how do you aggregate lower order harmonics if the target frequency location is not divisible by 2 ^ (n - 1)? \\n\\nFor example, the target frequency bin is 5, and the anchor is 2, so the first lower order harmonic frequency bin in the kernel would be computed as 5/2.\\n\\nYou mention you used deformable convolution to implement equation 5, which would lead me to believe you used bilinear interpolation to compute the value if the offset is fractional, but I just wanted to make sure I was understanding this correctly.\\n\\nThanks!\", \"title\": \"Implementation Question\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors introduce a new convolution-like operation, called a Harmonic Convolution, which operates on the STFT of an audio signal. This Harmonic convolution are like a weighted combination of dilated convolutions with different dilation factors/anchors. The authors show that for noisy audio signals, randomly initialized/untrained U-Nets with harmonic convolutions can yield cleaner recovered audio signals than U-Nets with plain convolutions or dilated convolutions. The authors beat a variety of audio denoising tasks on a variety of metrics for speech and music signals. The authors also show that harmonic convolutions in U-Nets are better than plain and dilated convolutions in U-Nets for a particular sound separation task.\\n\\nI recommend a weak accept for this paper because a new architecture for audio priors was presented, with reasonable empirical data supporting that this architectural choice an improvement over other more immediate alternatives. It is important to extend the work on deep nets for imaging to other domains, such as audio. My recommendation is not stronger because of the following concerns. \\n\\nI think the paper could be strengthened by\\n(a) a comparison to other methods (outside the current framework) for sound separation\\n(b) a significant clarification of Figure 4. The authors claim that this data shows that Harmonic Convolutions produce a \\\"cleaner signal faster\\\" than other methods. When I look at Figure 4abcd, it appears that the Convolution and Dilated Convolutions fit a clean signal faster (it is just not as clean. Additionally, the Wave-U-Net appears to reach the same accuracy as the Harmonic Convolution with many fewer iterations (while also continuing to get much higher PSNRs). Perhaps I am misreading this plot, but it is not obvious to me that this plot supports the claims the authors are making.\\n(c) The authors should present what they mean by a dilated convolution using the notation of the paper. \\n(d) In Figure 2, it is unclear to me how the 1/f^2 law is observed in (a) but not in (c) or (e).\"}"
]
} |
BJxsrgStvr | Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks | [
"Haoran You",
"Chaojian Li",
"Pengfei Xu",
"Yonggan Fu",
"Yue Wang",
"Xiaohan Chen",
"Richard G. Baraniuk",
"Zhangyang Wang",
"Yingyan Lin"
] | (Frankle & Carbin, 2019) shows that there exist winning tickets (small but critical subnetworks) for dense, randomly initialized networks, that can be trained alone to achieve comparable accuracies to the latter in a similar number of iterations. However, the identification of these winning tickets still requires the costly train-prune-retrain process, limiting their practical benefits. In this paper, we discover for the first time that the winning tickets can be identified at the very early training stage, which we term as Early-Bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates. Our finding of EB tickets is consistent with recently reported observations that the key connectivity patterns of neural networks emerge early. Furthermore, we propose a mask distance metric that can be used to identify EB tickets with low computational overhead, without needing to know the true winning tickets that emerge after the full training. Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy. Experiments based on various deep networks and datasets validate: 1) the existence of EB tickets and the effectiveness of mask distance in efficiently identifying them; and 2) that the proposed efficient training via EB tickets can achieve up to 5.8x ~ 10.7x energy savings while maintaining comparable or even better accuracy as compared to the most competitive state-of-the-art training methods, demonstrating a promising and easily adopted method for tackling cost-prohibitive deep network training. | [
"tickets",
"eb tickets",
"efficient training",
"deep networks",
"existence",
"mask distance",
"efficient",
"frankle",
"carbin",
"small"
] | Accept (Spotlight) | https://openreview.net/pdf?id=BJxsrgStvr | https://openreview.net/forum?id=BJxsrgStvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7OVzVpDj71",
"HkxzLrTjoB",
"HkghlnSjjB",
"SyeWhISisS",
"B1xzRFWqjH",
"r1lESd-qsr",
"HJedJdZ9jr",
"BJxIRropKH",
"Bkx7VRIhFH",
"rk5HxSNKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745542,
1573799242275,
1573768179937,
1573766825071,
1573685706324,
1573685308235,
1573685215554,
1571825101881,
1571741226746,
1571209281516
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2297/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2297/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2297/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2297/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2297/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2297/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2297/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2297/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2297/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This work studies small but critical subnetworks, called winning tickets, that have very similar performance to an entire network, even with much less training. They show how to identify these early in the training of the entire network, saving computation and time in identifying them and then overall for the prediction task as a whole.\\n\\nThe reviewers agree this paper is well-presented and of general interest to the community. Therefore, we recommend that the paper be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer#2\", \"comment\": \"Thanks for your careful review and feedback.\", \"q1\": \"Sorry for the confusion. The FLOPs of all the pruned models (including EB Train) in the table consist of both winning ticket search and retraining costs, hence leading to higher FLOPs of the pruned NS and SFP models over the unpruned model.\", \"q2\": \"We appreciate your suggestion and share your curiosity. Due to the limited time frame in rebuttal, we actually did not have enough time and resources to finish the experiments under 50% and 70% pruning. \\n\\nWe will continue our experiments and make sure to obtain those results to be updated in the final version, to verify your \\\"imagination\\\". We would also try the low-precision schemes (e.g., EB Train LL) and see if better accuracy-training FLOPs trade-offs could be achieved.\"}",
"{\"title\": \"Response to your comments\", \"comment\": \"Thank you for the clarifications.\\n\\n1. How come the FLOP numbers of the pruned NS and SFP models are so much higher than the unpruned baseline?\\n\\n2. To the best of my understanding, the table shows that indeed, using a stronger baseline and a harder datasets yields a slightly different picture: compared to Table 2, which shows that pruning up to 70% yields similar or better performance compared to the baseline, here even 30% pruning reduces performance, and I can only imagine that 50% and 70% would yield even larger reductions. I would encourage the authors to add these results to the paper and discuss them.\"}",
"{\"title\": \"Response to the Authors\", \"comment\": \"Thank you for addressing my comments. The experiments on ImageNet are a nice addition too.\"}",
"{\"title\": \"Response to Reviewer#2\", \"comment\": \"We thank the reviewer for the insightful comments. We have revised the manuscript and added the following response to address your concerns and questions:\", \"c1\": \"ResNet and VGG on CIFAR-10/100 are popular benchmarks widely used in both latest \\u201clottery ticket\\u201d and efficient CNN training papers. Furthermore as requested, we add a group of experiments on ImageNet and ResNet18 show that our method translates to a harder dataset. As shown in the following table, we compare the retrain accuracy and training FLOPs of EB Train with those of the unpruned ResNet18, network slimming, and SFP on ResNet18 + ImageNet.\\n\\n---------------------------------------------------------------------------------------------------------------------------------------------\\nMethods Pruning ratio Top-1 Acc. Acc. Improv. (%) Top-5 Acc. Acc. Improv. (%) Training FLOPs (P)\\n---------------------------------------------------------------------------------------------------------------------------------------------\\nUnpruned \\t -\\t 69.57 \\t-\\t 89.24\\t -\\t 1259.13\\nNS\\t 10%\\t 69.65 +0.08\\t 89.20\\t -0.04\\t 2424.86\\nNS\\t 30%\\t 67.85\\t -1.72\\t 88.07\\t -1.17 \\t 2168.89\\nSFP\\t 30%\\t 67.10\\t -2.47\\t 87.78\\t -1.46\\t\\t 1991.94\\nEB-Train\\t 10%\\t 69.84\\t +0.27\\t 89.39 +0.15\\t\\t 1177.15\\nEB-Train\\t 30%\\t 68.28\\t -1.29\\t 88.28\\t -0.96\\t\\t 952.46\\n---------------------------------------------------------------------------------------------------------------------------------------------\\n\\nThe above table shows that 1) when the pruning ratio is 10%, EB Train achieves a better accuracy (+0.27 vs. +0.08 over the unpruned one) while requiring 51% less total training FLOPs, as compared to the network slimming (NS) pruning method; 2) when the pruning ratio is 30%, EB Train results in a higher accuracy (-1.29 vs. -1.72(NS)/-2.47(SFP) over the unpruned one) while reducing the training FLOPs by 56% and 52%, as compared to the NS and SFP baselines. Note that \\u201cAcc. Improv.\\u201d in the above table is referred to that of the unpruned models. In the above table, the unpruned results are based on the official implementation (see https://github.com/facebookarchive/fb.resnet.torch; The SFP results are obtained from their original paper (see https://arxiv.org/pdf/1808.06866.pdf), which did not provide results at 10% pruning ratio; and the NS results are obtained by conducting the experiments ourselves.\", \"c2\": \"Thank you for pointing out this and we agree with your comment. Also, we have conducted experiments to show the retrain accuracy of sub-networks drawn from different learning rate schedules. Please kindly see our response to Q2 of Reviewer 1.\", \"q1\": \"It is pruned after training for 1 epoch because Figure 1 plots the retrain accuracy of the sub-networks obtained by pruning after training for M epochs (M starts from 1). Sorry for not having made it clear enough.\\nWhile sometimes the extracted networks after training for 1- 2 epochs might sometimes achieve good accuracy, they can be unstable. As an illustration, we have plotted error bars for all data points in Figures 1 and 2 based on three independent runs.\", \"q2\": \"Figure 5 shows the total training FLOPs of EB Train vs. different early-stop epochs as an \\u201cablation\\u201d example to show the effectiveness of EB Train. We apologize for the confusion. 
Meanwhile, we use Table 2 to illustrate the proposed mask distance-based method, which automatically identifies the early stopping point, benchmarked against the state-of-the-art designs.\", \"q3\": \"Please kindly find our response in the reply to Q1 of Reviewer 1.\\n\\nThank you for providing your kind suggestions on our writing! We have: 1) toned down our language in the revised manuscript; and 2) corrected the typos. We will carefully proofread our paper.\"}",
"{\"title\": \"Response to Reviewer#3\", \"comment\": \"Thanks for your careful review and positive feedback! We have addressed your comments in our revised manuscript as summarized below:\\n\\nwe have followed your kind suggestions to 1) remove the while condition (Max(Q) > eps) in algorithm 1 for making it more concise; and 2) plot d and invert the color bar in Figure 3 to better visualize the effectiveness of the proposed mask distance. \\n\\nWe have toned down the strong statement in the abstract. Furthermore, we have added experiments on the ImageNet dataset in Table 3 to more thoroughly validate our claim.\"}",
"{\"title\": \"Response to Reviewer#1\", \"comment\": \"Thanks for your careful review and comments and for appreciating our contributions! We have conducted more experiments and revised our paper to 1) address your comments and 2) improve the paper. Please find our itemized responses below.\", \"q1\": \"After the submission, we carefully revisited all experiments and found that the plunges were caused by the resulting over-pruning of the networks\\u2019 certain layers when the target pruning ratio is high (e.g., 70%), due to the adopted global pruning scheme, i.e., enforcing to reach the target compression ratio in a network-wise instead of layer-wise manner. For example, pruning VGG-16 on CIFAR100 by 70% will lead to several layers of the resulting network to have less than 10 channels.\\n\\nFurthermore, we found that this \\u201cplunge\\u201d issue can be fixed by applying a simple \\u201cprotective pruning\\u201d heuristic on top of our current algorithm. Specifically, we stop pruning a layer, when its remaining channels after pruning become less than 10% of the original, for avoiding overly slimming this layer; meanwhile we (uniformly) prune other layers more to meet the overall pruning ratio. Experiments show that such a simple protective strategy can effectively eliminate the plunges as shown in the updated Figures 1-2, without incurring overhead. Note that this \\u201cprotective pruning\\u201d is only activated when there are over-pruned layers and thus affects only a few data points in the figures.\", \"q2\": \"we followed the common learning rate setting for drawing winning tickets (see Section 6 of https://arxiv.org/pdf/1810.05270.pdf). As you kindly suggested, here we show the retrain accuracy of sub-networks drawn from different learning rate schedules.\\n\\nInterestingly, a large initial lr of 0.5 and degraded schedule [80_{LR\\\\rightarrow0.05}, 120_{LR\\\\rightarrow0.005}] indeed works better for drawing EB tickets in VGG16 performed on CIFAR-10/100, leading to an earlier emergence of EB tickets and a higher retrain accuracy. 
This seems promising and is consistent with our hypothesis \\u201clarge learning rates favor the emergence of EB Tickets.\\u201d \\n\\n(VGG16@CIFAR10)\\n---------------------------------------------\\nInitial LR Epoch EB drawn from\\n\\t\\t 10 20 40\\t final\\n---------------------------------------------\\n 0.1 93.26 93.34 93.20 92.96\\n 0.5 93.49 93.45 93.44 93.29\\n---------------------------------------------\\n(VGG16@CIFAR100)\\n---------------------------------------------\\nInitial LR Epoch EB drawn from\\n\\t\\t 10 20 40\\t final\\n---------------------------------------------\\n 0.1 71.11 71.07 69.14 69.74\\n 0.5 70.69 71.65 71.94 71.58\\n---------------------------------------------\\n\\nA similar observation is found when the initial lr is 0.2, in experiments on ResNet + CIFAR-10/100, although the schedule with initial lr 0.5 does not seem to further help EB tickets:\\n\\n(PreResNet101@CIFAR10)\\n---------------------------------------------\\nInitial LR Epoch EB drawn from\\n\\t\\t 10 20 40\\t final\\n---------------------------------------------\\n 0.1 93.60 93.46 93.56 92.42\\n 0.2 93.40 93.46 93.87 93.69\\n---------------------------------------------\\n\\n(PreResNet101@CIFAR100)\\n---------------------------------------------\\nInitial LR Epoch EB drawn from\\n\\t\\t 10 20 40\\t final\\n---------------------------------------------\\n 0.1 71.58 72.67 72.67 71.52\\n 0.2 72.58 72.90 72.86 72.71\\n---------------------------------------------\\n\\nGiven the aforementioned observations in experiments with various models and datasets (thanks to the Q2 comment from Reviewer #2), we revise the claim in the paper to the more accurate \\u201cappropriate large learning rates can enable an earlier emergence of EB tickets\\u201d. We will conduct more experiments with even larger lrs to find more insights on \\u201clr vs. emergence time of EB tickets\\u201d and update them in the camera-ready version.\", \"q3\": \"Indeed, applying low-precision EB tickets on top of low/full precision retraining is more interesting, as it can achieve more energy savings while maintaining a comparable retrain accuracy. As you kindly suggested, we add two sets of corresponding experiment results to Table 2: 1) EB Train with low precision search and full precision retrain (EB-Train LF) and 2) EB Train with both low precision search and retrain. Specifically, the resulting energy savings and training FLOPs reductions are 5.8-24.6x and 1.1-5.0x over the baseline competitors.\", \"q4\": \"The 4.7x energy savings can be found in Table 2\\u2019s experiment on PreResNet-101@CIFAR100: when the pruning ratio is 70%, the energy savings of EB-Train compared to the lottery ticket (one-shot) baseline is 6095/1294 (\\\\approx 4.7x). Note that additional experiments responding to your Q3 comment show that EB Train (with both low precision search and retrain) can lead to up to 24.6x energy savings.\"}",
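As a side note for reproducing the schedules quoted in this thread: the notation [80_{LR\\rightarrow0.05}, 120_{LR\\rightarrow0.005}] with an initial lr of 0.5 corresponds to a standard 10x step decay at epochs 80 and 120. A minimal PyTorch sketch (the toy model is a placeholder of ours):

```python
import torch

model = torch.nn.Linear(10, 10)                    # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.5)  # large initial lr, as in the tables
# Decay by 10x at epochs 80 and 120: lr = 0.5 -> 0.05 -> 0.005
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[80, 120], gamma=0.1)

for epoch in range(160):
    # ... one training epoch, with masks drawn for EB-ticket detection ...
    sched.step()
```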
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper empirically analyzed the wide existence of \\\"early-bird tickets\\\", e.g., the \\\"lottery tickets\\\" emerging and stabilizing in very early training stage. The potential connection to (Achille et al., 2019; Li et al.,2019) reads interesting.\", \"the_authors_made_several_contributions_in_addition_to_the_observation\": \"(1) the EB tickets stay robust under large learning rates (while early stopping) and low-precision training; (2) the EB tickets can be detected using epoch-wise consecutive comparison (mask distance), rather than comparing with some oracle ticket; (3) the application of EB ticket towards energy efficient training, which is interesting as this is perhaps the first practical application demonstrated of lottery ticket.\\n\\nWhile I like how the paper connects theory hypothesis to real applications, the experiments need to be solidified in a few aspects:\\n\\n1) Figures 1 and 2, why a few plunges of curves (say Fig 2.a, p = 70%)? Does it imply the training might not be stable?\\n\\n2) Table 1, the authors test two lr schedules to show \\\"large learning rates favor the emergence of EB Tickets\\\". Yet the choice of lr matters a lot and can be tricky. Why the authors pick the two specific learning rate schedules? Why are they \\\"comparable\\\"? What if being more aggressive in choosing larger lr, say starting from lr= 0.5?\\n\\n3) The low precision EB ticket is not actually applied or evaluated in Section 4. It would have been interesting to see.\\n\\n4) I fail to find the 4.7 times energy saving as claimed in abstract from Table 2?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors further study the lottery ticket hypothesis formulated by Frankle and Carbin. They demonstrate that the sparsity pattern corresponding to a lottery ticket for a given initialization can be uncovered via low-cost training. By doing so, they propose a method to: 1) first identify the lottery ticket efficiently and 2) exploit the sparsity of the resulting network to train it at a lower cost.\\n\\nThe contribution is however a bit incremental in my opinion. The original LT paper was not focused on efficiency, and it is not a far stretch to try to find the tickets sooner during the training. On the other hand, the experiments are well conducted (especially 4.3) and even if the original idea is simple, it is of interest to see it tested as clearly. All in all, I found this paper convincing and worth reading, and I think it should be accepted.\", \"positive_points\": [\"The literature review is sufficient and present with great clarity the latest results.\", \"The problem tackled is of great interest and has potentially impactful applications.\", \"The authors focus on hardware friendly types of pruning.\", \"The paper is well written and enjoyable.\", \"The algorithm used to compute the EB tickets seems a bit ad hoc, but in my opinion sufficient as a first approach.\"], \"nitpicking\": [\"Not sure why Max(Q) > eps is in the while condition (return if Max(Q) < eps should be sufficient)\", \"The treatment of the mask distance in figure 3 is confusing. It is not obvious why the authors are plotting 1-distance, and the legend of the figure suggest that a mask has a distance of 1 with itself. Recommend plotting d instead of 1-d and invert the color bar instead if they feel so inclined (yellow=0).\", \"Abstract: \\u201cconsistently exist across models and datasets\\u201d a bit of a strong claim as only cifar is used.\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a method to speed up training of deep neural networks. The main contribution is a method to quickly identify winning lottery tickets (denoted early-bird, or EB by the authors), without running the model to convergence. The authors present interesting preliminary experiments that motivate their method, and show that it works on two image recognition datasets using two models.\\n\\nThis paper addresses an under-explored, but very important problem in AI: the increasing cost of training models. The authors present interesting evidence about the potential to detect EBs early on. The experiments presented in Figures 1 and 3 are convincing and will be of interest to the community. The proposed method seems to work, at least on the setups explored by the authors. I am leaning towards acceptance, but am concerned with the following: \\n\\n1. The authors experiment with a limited set of datasets (CIFAR-10 is a relatively easy task), and with a set of non-competitive baselines (SOTA for CIFAR-10/100 is 99%/91.3%, see https://benchmarks.ai/cifar-10{,0}). I would have liked to see whether the proposed method translates to harder datasets and stronger models.\\n\\n2. I might be missing something here, but to the best of my understanding the large learning rate part (page 4) does not demonstrate the benefits of increasing the learning rate, but the problems with *decreasing* it. The two might seem like the same thing, but in fact they're not: the authors claim the [80,120] policy is standard, and use it when training the subnetwork, so showing that [0,100] is inferior does not present a way to improve over the current approach, but evidence that the other approaches are inferior.\", \"other_questions\": \"1. In Figure 1, it seems that the extracted subnetworks are doing very well even after 0 epochs. Does this mean that a trained version of a random subnetwork could reach within 1-2 points of the unpruned model? or is it pruned after training for 1 epoch?\\n\\n2. If I understand correctly, Figure 5 should be illustrating the proposed method, which automatically identifies the early stopping point. In that case, I am not sure why the plot is a function of the epoch.\\n\\n3. Do the authors have any intuition as to the sharp decrease in the 70% graph in Figures 1 and 2 around epoch 50?\", \"writing\": \"1. The language used by the authors is sometimes exaggerated. Expressions such as \\\"bold guess\\\" (section 3.2), \\\"innovative ... scheme\\\" (section 4) and comparisons to Winston Churchill would be better left out of the manuscript. \\n\\n2. Typos and such: \\n-- several across the paper. For instance: \\n- Intro: After *bring* identified (should be \\\"being\\\")\\n- Related work: when training *it* isolation (in)\\n\\n-- Missing venue for Frankle and Corbin (2019)\"}"
]
} |
SygcSlHFvS | On Understanding Knowledge Graph Representation | [
"Carl Allen*",
"Ivana Balazevic*",
"Timothy M Hospedales"
] | Many methods have been developed to represent knowledge graph data, which implicitly exploit low-rank latent structure in the data to encode known information and enable unknown facts to be inferred. To predict whether a relationship holds between entities, their embeddings are typically compared in the latent space following a relation-specific mapping. Whilst link prediction has steadily improved, the latent structure, and hence why such models capture semantic information, remains unexplained. We build on recent theoretical interpretation of word embeddings as a basis to consider an explicit structure for representations of relations between entities. For identifiable relation types, we are able to predict properties and justify the relative performance of leading knowledge graph representation methods, including their often overlooked ability to make independent predictions. | [
"knowledge graphs",
"word embedding",
"representation learning"
] | Reject | https://openreview.net/pdf?id=SygcSlHFvS | https://openreview.net/forum?id=SygcSlHFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JC_X0mTeM-",
"3vFfxhvrS",
"rJxbP-CqoH",
"rJlY-jGmiH",
"ryxf-Kf7sH",
"Bygk9dGQoB",
"HJx2gzzmjS",
"S1e-n1f7sS",
"rkgCydwkcr",
"B1guFCaotH",
"HJeKIG6LYr",
"B1lsoiXVKH",
"Hklleo57tr",
"Ske42A1AdS"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1579519853415,
1576798745513,
1573736793185,
1573231361335,
1573230841798,
1573230726764,
1573229043580,
1573228457355,
1571940325767,
1571704448145,
1571373649379,
1571204003191,
1571166952119,
1570795179644
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2296/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2296/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2296/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2296/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2296/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2296/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2296/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2296/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2296/AnonReviewer2"
],
[
"~Apoorv_Umang_Saxena1"
],
[
"ICLR.cc/2020/Conference/Paper2296/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2296/Authors"
],
[
"~Apoorv_Umang_Saxena1"
]
],
"structured_content_str": [
"{\"title\": \"TL;DR: Rejected for lack of reviewer response\", \"comment\": \"We are very disappointed that our work was rejected by the AC despite the reviewers' acceptance, essentially due to reviewers not engaging further. We accept there were points to clarify, a key purpose of the rebuttal phase, but rejecting our paper (deemed to provide \\\"much needed analysis\\\") due to lack of further reviewer engagement seems unreasonable and based on process rather than the paper's merit.\\n\\n[Even if this were a valid justification, it is incorrectly applied since Reviewer 1 did increase their score following the rebuttal, which may have gone unnoticed since they edited their initial response (see date-stamp).]\\n\\nWe appreciate the difficult job of reviewing panels, but this is very disappointing.\"}",
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a set of conditions that enable a mapping from word embeddings to relation embeddings in knowledge graphs. Then, using recent results about pointwise mutual information word embeddings, the paper provides insights to the latent space of relations, enabling a categorization of relations of entities in a knowledge graph. Empirical experiments on recent knowledge graph models (TransE, DistMult, TuckER and MuRE) are interpreted in light of the predictions coming from the proposed set of conditions.\\n\\nThe authors responded to reviewer comments well, providing significant updates during the discussion period. Unfortunately, the reviewers did not engage further after their original reviews, and so it is hard to tell whether they agreed that the changes resolved all their questions.\\n\\nOverall, the paper provides much needed analysis for understanding of the latent space of relations on knowledge graphs. Unfortunately, the original submission did not clearly present the ideas, and it is unclear whether the updated version addresses all the concerns. The paper in its current state is therefore not yet suitable for publication at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Final comments\", \"comment\": \"Dear reviewers, we thank you once again for taking the time to review our paper. We hope that we have addressed all concerns raised, in particular highlighting what our work achieves in respect of better understanding the representations learned by knowledge graph models. We would be very happy to answer any further questions you may have.\\n\\nBest regards,\\nAuthors\"}",
"{\"title\": \"Author response to Reviewer #3\", \"comment\": \"Thank you for your review and the time taken for it, below we address each of your points in turn.\\n\\n1) The difference in performance on the FB15k-237 (FB) dataset is due to multi-task learning (as discussed in the TuckER paper, Balaevic et al. (2019b)). Amongst the models considered, TuckER is the only one to share parameters between relations, which is encouraged within its core tensor due to its low-rank. As such, on datasets with many relations and relatively few instances per relation, e.g. FB and NELL, the multi-task learning ability of TuckER offers material advantage. We mention this (briefly) in respect of the the NELL dataset in section 4.1, which we have italicised to make more clear. For avoidance of doubt, this is not our reason for omitting results for the FB dataset, which is because the vast majority of FB relations are found to be of type C, whereas we analyse performance differences between relation types and hence use WN and NELL, which have a broader variety of relations by type.\\n\\n2) we agree that rationale for the predictions could be more clear and have improved clarity (e.g. the start of Sec 3.2 and key contributions in Sec 1), to present the predictions is a clearer light. To expand on their rationale:\", \"p1\": \"Type R is a special case of type S, which is a special case of type C (Section 3.1). Hence type S, for example, subsumes type R and (all else being equal) a type S relation requires more parameters to be learned than one of type R and, in that sense, is ``harder to learn\\u2019\\u2019. Further to this, the \\u201cshape\\u201d of the relation-mapping changes between relation types: type R require only a multiplicative (matrix) component, type S an extra additive (vector) component and type C a further additive component. Whether the architecture of a model supports a relation\\u2019s \\u201cshape\\u201d (or, in the language of P1, meet the \\u201crelation conditions) is expected to affect performance.\", \"p2\": \"Since type R relations are (by definition) symmetric, their relation matrices must also be.\", \"p3\": \"Since no additive components (\\u201coffset vectors\\u201d) are required for relatedness relations (type R), any vector norms for those relations are predicted to be small.\", \"p4\": \"The strength (s) of the relatedness of a relation r is defined as |S| where S is the set of context words that both entities must co-occur with similarly for r to hold. In the full rank space of PMI vectors each word corresponds to a dimension, thus s is also the dimensionality of a common PMI vector component (i.e. the component in the dimensions of S). That common component can be tested for with a projection matrix of rank s. In the lower dimension of embeddings, the dimensionality of S is obscured, but is anticipated to be reflected in the eigenvalues of the relation matrix due to its relationship with the projection matrix (Section 3.2).\\n\\n3) As an example of how hits@k metrics can be flawed: consider computing hits@3 for a model with a dataset whose entities include {UK, London, Edinburgh, Brighton, Manchester, York, Birmingham}, training set contains (UK, contains_city, Edinburgh), (UK, contains_city, Manchester) and test set contains (UK, contains_city, London). \\n\\nEach test triple is evaluated by removing an entity and ranking the score assigned to that \\u201ctrue\\u201d entity amongst all entities in the dataset, excluding other known true triples (i.e. 
\\u201cfiltering\\u201d). When the test triple (UK, contains_city, London) is evaluated by removing London, let the top-scoring entities be, in descending order:\\n Edinburgh, Brighton, Manchester, York, London, Birmingham, ...\\nAfter removing Edinburgh and Manchester (as known \\u201ctrue\\u201ds), the order is:\\n Brighton, York, London, Birmingham, ...\\nAnd the sought answer (London) appears in the top k (i.e. 3) and contributes to the hits@k metric.\\nHowever, if (UK, contains_city, Edinburgh) happened not to be in the dataset, the order would be:\\n Edinburgh, Brighton, York, London, Birmingham, ...\\nAnd the sought answer would not appear in the top k, even though all top 5 answers are correct.\\n\\nThis demonstrates that ranking metrics for 1-to-many, many-to-many or many-to-1 relations can be affected (arbitrarily) by unknown true answers, which cannot be tested for or evaluated (without further annotation), since they are by definition unknown. Further, such instances are always assumed to exist, since the central aim of link prediction is to predict them, i.e. true facts that are not previously known.\"}",
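The worked example above can be written out directly. Below is a minimal sketch of filtered hits@k reproducing it; the entity names and scores are illustrative, not from any dataset:

```python
def filtered_hits_at_k(scores, true_entity, known_true, k=3):
    """Rank the held-out entity among all candidates, filtering other known-true answers."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    ranked = [e for e in ranked if e == true_entity or e not in known_true]
    return ranked.index(true_entity) < k

scores = {"Edinburgh": .9, "Brighton": .8, "Manchester": .7,
          "York": .6, "London": .5, "Birmingham": .4}
# With (UK, contains_city, Edinburgh/Manchester) known true, London ranks 3rd -> hit@3;
# if Edinburgh were not in the dataset, London would rank 4th -> miss, as argued above.
assert filtered_hits_at_k(scores, "London", {"Edinburgh", "Manchester"})
assert not filtered_hits_at_k(scores, "London", {"Manchester"})
```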
"{\"title\": \"Author response to Reviewer #2: concerns\", \"comment\": [\"Thank you for your review and the time taken for it, below we address each of your points in turn.\", \"Re \\u201ctechnical novelty of \\u2026 proposed method\\u201d: we are delighted that our research direction is considered essential and well regarded. To be clear, we do not propose a new method here, rather we derive, using word embeddings, theoretical conditions that a relation representation must satisfy for the relations of knowledge graphs. Amongst other things, we then demonstrate that the performance of recent knowledge graph models corresponds with their ability to satisfy those conditions, demonstrating a commonality between the interpretable latent space of PMI-based word embeddings and the previously \\u201cblackbox\\u201d latent space of knowledge graphs. We agree that our key findings were insufficiently clear and have improved clarity (as below).\", \"Re \\u201comitting Max/Avg path\\u201d: by definition (see also the MuRE paper, Balazevic et al. (2019a)), where a Kh score is zero the relation is not tree-structured and has no extended paths. We have changed this to \\u201c1\\u201d for greater clarity.\", \"Re \\u201calso_see\\u201d path lengths: we agree this is a notable outlier, but correctly reflects the data (see also the MuRE paper) indicating that the relation subgraph contains long chains.\", \"Re \\u201cchoosing WN18RR and NELL-995 KGs\\u201d: these datasets contain a relatively broad spread of relations by type and so provide the best testbed to demonstrate differences in model performance across relation types. By comparison, the FB15k-237 dataset (FB) contains almost all type C relations. We agree that is not sufficiently clear and have moved the explanation from Appx B to Sec 3.1.\", \"Re \\u201csignificance of conclusions\\u201d: we agree with the importance and confirm that, due to low stochasticity of the algorithms, variance between model runs is broadly negligible and error bars are typically omitted in the literature. One recent work \\u201cHypernetwork Knowledge Graph Embeddings\\u201d (Balazevic et al. (2019)) that does, reports standard deviations across 5 model runs of at most 0.003 for the metrics we consider (their Tables 7 & 8), which is consistent with our experience.\", \"Results aggregated by model and relation type that more succinctly validate our conclusions are included in our response to Reviewer #1.\", \"Re \\u201ctriple classification task\\u201d: Standard metrics in the link prediction literature are MRR and Hits@k based on the ranking of the score attributed to the sought answer amongst the scores of all possible answers. A typical classification task would assign a stand-alone prediction (in [0,1]), which, to the best of our knowledge, rarely appears in the link prediction literature. Indeed, since datasets (e.g. WN, FB) typically contain only positive samples (true facts), it is not straightforward to evaluate model classification performance since perfect test set accuracy is achieved trivially by predicting all relations to be true (i.e. a high false positive rate). We address this by assessing the truth of a random sample of the triples each model predicts as true but for which the model has no ground truth (\\u201cother true\\u201d) (Sec 4.2).\", \"Re \\u201cother true\\u201d: for a given (subject_entity, relation) test pair, a model predicts the truth for each object_entity of the knowledge base ( an evaluation triple). 
Any evaluation triple that appears in the training or test set, and so is known to be true, contributes to \\u201cAccuracy (train)\\u201d or \\u201cAccuracy (test)\\u201d, resp. The remaining majority of triples are unknown to be true/false (although typically the vast majority are false) - any that a model predicts to be true we term \\u201cother true\\u201d. As above, we review a random sample of these for each model to estimate model precision. Note that predicting such \\u201ctruly unknown\\u201d instances is the ultimate aim of link prediction and so should not be overlooked.\"]}",
"{\"title\": \"Author response to Reviewer #2: conclusion\", \"comment\": \"Re \\u201cdecisive conclusions\\u201d: we agree that the conclusions of the paper are insufficiently clear and have improved the paper to address. Decisive conclusions that we make:\\n 1) previous understanding of how semantic relations are encoded between PMI-based word embeddings for a few relations (e.g. similarity, analogies, etc - Allen & Hospedale (2019), Allen et al. (2019)) is extended to derive the difference between word embeddings for the general relations of knowledge graphs, which translate into linear algebraic mappings. From their mappings, relations can be categorised into 3 types and components of the mappings (e.g. projection matrix, translation vector) relate to meaningful/interpretable semantic aspects of the relation (e.g. relatedness between entities, entity-specific features).\\n 2) that PMI-based word embeddings and knowledge graph entity embeddings show commonality to their latent structure \\u2014 despite the significant differences between their training data and methodology. We demonstrate this by: (i) deriving properties of the relation mappings (based on word embeddings), e.g. vector norm, matrix symmetry/effective rank, and identifying those in actual knowledge graph representations; and (ii) showing that the relative performance of knowledge graph models for each relation type accords with how well a model\\u2019s architecture satisfies the corresponding relation conditions (based on word embeddings).\\n 3) that stand-alone classification performance should be evaluated for future models since the task itself may be of more practical use than ranking metrics, and it provides novel insight into model performance.\\n Overall, we provide an important step towards a theoretical understanding of the latent structure of knowledge graph representations. In terms of practical use, our results: provide understanding as to which model is most appropriate for a new dataset (e.g. if relations were known, a priori, to be symmetric); suggest that different aspects of relations (e.g. type, strength of relatedness) could be quantitatively evaluated; and indicate where future research effort might be directed (e.g. type C relations). Furthermore, whilst whether multiplicative (e.g. DistMult) or additive (e.g. TransE) link prediction models are superior has been an open question, we now provide theoretical justification that the answer is both (e.g. as in MuRE).\"}",
"{\"title\": \"Author response to Reviewer #1: suggestions\", \"comment\": \"* Re \\u201caggregated results\\u201d: below we summarise results by relation type and model, supporting our conclusions more succinctly:\\nWN18RR \\n Tr_E M_I Dist Tuck MuRE\\nR 0.91 0.95 0.95 0.95 0.96\\nS 0.04 0.23 0.22 0.23 0.30\\nC 0.11 0.37 0.33 0.37 0.42\\n\\nNELL\\n Tr_E M_I Dist Tuck MuRE\\nR 0.68 0.77 0.84 0.82 0.81\\nS 0.37 0.51 0.58 0.61 0.64\\nC 0.39 0.48 0.49 0.50 0.53\\n\\n* Re \\u201cinsights for embedding improvement\\u201d: we provide the first theoretical insight into how relations of knowledge graphs are represented by: deriving relation-specific mappings for PMI-based word embeddings; showing that their properties are reflected in actual knowledge graph representations; and that the better a model\\u2019s architecture accommodates them, the better its performance. We also identify performance at answering standalone knowledge base queries, ie as a classifier as opposed to a ranking mechanism, giving novel insight.\\nWe believe that our results will enable future development of improved knowledge base representation based on a more principled understanding and by specifically identifying where to focus effort, e.g. type C relations. Combined with other works, current practitioners effectively have a \\u201cdecision tree\\u201d for deciding which model to use for a new dataset depending on its properties, e.g. many relations => TuckER, highly symmetric => DistMult, hierarchical => MuRP, and in the general case MuRE (a future research direction would be to combine these). Further, our results suggest that relation properties can be identified from representation components and/or the relative performance on different models.\"}",
"{\"title\": \"Author response to Reviewer #1: clarifications\", \"comment\": \"Thank you for your review and the time taken for it, below we address each of your points in turn.\\n\\n* Re \\u201ctake-aways\\u201d: we agree that we have not made this sufficiently clear and have updated the paper accordingly (in particular introduction, part of results and conclusion). To summarise, the key take-aways from our work are: \\n 1) that the previous understanding of how semantic relations are encoded between PMI-based word embeddings for a few relations (e.g. similarity, analogies, etc - Allen & Hospedales (2019), Allen et al. (2019)) is extended to derive the difference between word embeddings for the general relations of knowledge graphs, which translate into linear algebraic mappings. From their mappings, relations can be categorised into 3 types and components of the mappings (e.g. projection matrix, translation vector) related to meaningful/interpretable semantic aspects of the relation (e.g. relatedness between entities, entity-specific features).\\n 2) that PMI-based word embeddings and knowledge graph entity embeddings show commonality to their latent structure - despite the significant differences between their training data and methodology. We demonstrate this by: (i) deriving properties of the relation mappings (based on word embeddings), e.g. vector norm, matrix symmetry/effective rank, and identifying those in actual knowledge graph representations; and (ii) showing that the relative performance of knowledge graph models for each relation type accords with how well a model\\u2019s architecture satisfies the corresponding relation conditions (based on word embeddings).\\n 3) that stand-alone classification performance should be evaluated for future models since the task itself may be of more practical use than ranking metrics, and it provides novel insight into model performance.\\n\\nOverall, we provide an important step towards a theoretical understanding of the latent structure of knowledge graph representations. In terms of practical use, our results: provide understanding as to which model is most appropriate for a new dataset (e.g. if relations were known, a priori, to be symmetric); suggest that different aspects of relations (e.g. type, strength of relatedness) could be quantitatively evaluated; and indicate where future research effort might be directed (e.g. type C relations).\\n\\n* Re \\\"FB15k-237 dataset\\\" (FB): The prevalence of type C relations in FB does not contradict or weaken our results, it means only that FB is less useful for demonstrating the differences between model performance across relation types relative to datasets that contain a broader spread of relations by type (e.g. WN, NELL). We agree that this is insufficiently clear and move the explanation from Appx B to Sec 3.1.\\n\\n* Re \\u201cTable 3 & 4 findings\\u201d: we agree these are insufficiently clear and have updated the paper (Sec 3.2, to more clearly motivate the experiments, and Sec 4). 
Key findings of Tables 3 and 4 are: \\n(i) that MuRE\\u2019s advantage over other models largely corresponds to type S/C relations, fitting prediction P1, since those relations require both additive and multiplicative components of the loss function (no other model has both); \\n(ii) that, between MuRE_I and DistMult, multiplicative-only DistMult (typically) performs better for type R relations, which require a multiplicative component only; whereas additive-only MuRE_I performs best for type C/S relations (see Table 3), which require an additive component (but may also require a multiplicative component, explaining the inconsistency in Table 4); and\\n(iii) that the performance of TuckER is comparable to the MuRE_I/DistMult models for datasets with few relations (e.g. WN), but is more comparable to MuRE for datasets with many relations (e.g. NELL, FB), when multi-task learning of relations provides material benefit (as discussed in the TuckER paper, Balazevic et al. (2019b)).\\nAll findings accord with prediction P1, made based on the latent structure of PMI-based word embeddings (relation conditions). As a corollary, whilst multiplicative and additive link prediction models have historically jostled for superiority and which is better remained an open question, we justify why the answer is both (e.g. as in MuRE).\\n\\n* Re \\u201cDistMult on type R relations\\u201d: DistMult outperforms other models for type R relations, except for the \\u201calso_see\\u201d (AS) and \\u201cderivationally_related_form\\u201d (DRF) relations (Table 3). \\n - AS has a high Krackhardt score and path length, indicating a tree structure and obscuring results, since the models considered are not suited to hierarchical relations (as discussed in the MuRE paper, Balazevic et al. (2019a)). \\n - DistMult can be fully expressed by MuRE and thus only outperforms it when MuRE\\u2019s additional parameters may allow overfitting. DRF has an abundance of data (34% of all training examples), which reduces any overfitting, explaining why DistMult does not outperform MuRE.\"}",
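To ground the additive-vs-multiplicative distinction running through these responses: a schematic of the three scoring functions as commonly written. This is our paraphrase of the cited papers, not their code; MuRE's entity biases are omitted and the relation matrix is kept diagonal for simplicity.

```python
import torch

d = 50
h, t = torch.randn(d), torch.randn(d)  # subject / object entity embeddings
r = torch.randn(d)                     # relation vector (additive component)
R = torch.randn(d)                     # diagonal relation matrix, stored as a vector

score_transe = -torch.norm(h + r - t)            # additive only
score_distmult = (h * R * t).sum()               # multiplicative only
score_mure = -torch.norm(R * h - (t + r)) ** 2   # both: stretch the subject, then offset
```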
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"There has been a large family of knowledge base models developed in the recent years, aiming to encode both entities and relations in a latent space, so the entities can be \\u201clinked\\u201d via a relation-specific mapping.\\n\\nThe paper focuses on understanding these entity embeddings (and geometric embedding relationships), built on top of the connections with PMI-based word embeddings.\\n\\nThe paper categorizes all the relations into 3 types (1) related, (2) specialisation, and (3) context shift, and examines some relations in WordNet and NELL, and then empirically evaluates the performance of different types of models and draws the correlation of the results and intuitive understanding of different types of relations.\\n\\nTo me, this paper is more like providing some intuitive explanations of existing KG embeddings methods and their performance (not really theoretical justifications). It was an interesting read and I appreciate the authors trying to understanding the latent structure that has been encoded in these models. However, I am just not that sure how many take-aways we can get from this study. \\n\\nI am wondering how loose this categorization is , esp. for the important relations in practice. I\\u2019d be also interested in seeing more results on Freebase (and possibly Wikidata) as those KG embeddings are usually more useful. As indicated in the Appendix, the paper mentiosns most of FB15k-237 datasets are in type C, so I am just not sure how many R/S relations are actually there.\\n\\nAlso, according to Table 3 and Table 4, I am not sure if there are any surprising findings from there. It seems that there is some randomness/noise, but MuRE generally works better than the others. It is true that DistMult works well on the R-type relations but it is not consistent between WN and NELL.\\n\\nIt\\u2019d be useful to show results on more relations (and aggregated results in each category). \\n\\nIt'd be really great if the paper actually provides some insights on we can further improve these entity embeddings according to this categorization.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to provide a detailed study on the explainability of link prediction (LP) models by utilizing a recent interpretation of word embeddings. More specifically, the authors categorize the relations in KG into three categories (R, S, C) using the correlation between the semantic relation between two words and the geometric relationship between their embeddings. The authors utilize this categorization to provide a better understanding of LP models\\u2019 performance through several experiments.\\n\\nThis paper reads well and the results appear sound. I personally believe that works on better understanding KGC models are a very essential direction which is mostly ignored in this field of study. Moreover, the provided experiments support the authors\\u2019 intuition and arguments.\\n\\nAs for the drawbacks, I find the technical novelty of the paper is somewhat limited, as the proposed method consists of a mostly straightforward combination of existing methods. Further, I believe this work needs more experimental results and decisive conclusions identifying future directions to achieve better performance on link prediction. My concerns are as follows:\\n\\n\\u2022 I am wondering about the reason for omitting Max/Avg path for two of the relations in WN18RR? Further, the average of 15.2 for the shortest path between entities with \\u201calso_see\\u201d relation appears to be a mistake?\\n\\u2022 Was there any specific reason in choosing WN18RR and NELL-995 KGs for the experiments?\\n\\u2022 It would be interesting to see the length of paths between entities for train and test data separately. \\n\\u2022 I suggest providing a statistical significance evaluation for each experiment to better validate the conclusions.\\n\\u2022 I find the provided study in section 4.2 very similar to the triple classification task in KGs. Can you elaborate on the differences and potential advantages of your setting?\\n\\u2022 I am wondering how you identified the \\u201cOther True\\u201d triples for WN18RR KG in section 4.2 experiments?\\n\\nOn overall, although I find the proposed study very interesting and enlightening, I believe that the paper needs more experimental results and decisive conclusions.\"}",
"{\"comment\": \"Hi, thanks for your answer\\n\\nIf your hypothesis is that R-type relations have similar subject and object embeddings - which is your analogy to word embeddings - then shouldn't it be easier to show that by just measuring the norm of the relation vector/matrix?\", \"edit\": \"Never mind, I didn't read the full paper. The answer to this question is already there :). Sorry for bothering\", \"title\": \"Type R relation = Close embedding, evaluation\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThe paper attempts to understand the latent structure underlying knowledge graph embedding methods. The work can be seen as an extension of understanding of PMI-based word embedding methods. They categorize knowledge graph relations into three categories based on their relation conditions: Relatedness (R), Specialisation (S), and Context-shift (C). For each category, they evaluate a representative of different types of knowledge graph embedding methods. Through results, they demonstrate that a model\\u2019s ability to represent a specific relation type depends on the limitations imposed by the model architecture with respect to satisfying the necessary relation conditions.\", \"questions\": \"1. The results in Tables 3 and 4 demonstrate that MuRE is the most effective method for handling different types of relations but then how come its performance on FB15k-237 (.336 MRR) is significantly lower than other methods like TuckER (.358 MRR). Can you provide an explanation?\\n\\n2. In Section 3.2, the authors list 4 predictions (P1-4). It would be great if authors could provide some more reasoning behind coming with these predictions. \\n\\n3. In Section 4.2, it is stated that \\u201cranking based metrics like MRR and hits@k are flawed if entities are related to more than k others\\u201d. It would be great if the authors could give an example to make it more clear.\"}",
"{\"comment\": \"Hi, thanks for your interest.\\n\\nWords associated by type R relations are highly \\\"related\\\" (in the extreme they are \\\"similar\\\"), i.e. they co-occur with many words in common (Sec 3). As such, their PMI vectors (and also their word embeddings) have a significant common component and so tend to have a relatively small difference (i.e. they are close). Additive-only models with a small relation vector can identify object and subject embeddings that are close, but cannot identify a relation-specific common subspace component, if required (as possible with multiplicative models). As such, additive-only models might be expected to be insufficiently discriminating. This is in fact confirmed in Table 6: additive-only model M_I predicts a high number of triples to be true for type R relations, but many of those (~69%) are in fact false (Sec 4.2).\", \"title\": \"Type R (highly related) relations = close embeddings\"}",
"{\"comment\": \"In section 4.1, you say\\n\\n[Additive models] achieve their best results on type R relations, where the relation vector can be zero/small. \\n\\nHowever, if relation vector is 0, then model should not be able to give the correct prediction? Since head = tail if r is 0 vector. Shouldn't this be an anomaly rather than an explanation?\", \"title\": \"Performance of TransE on R type relations\"}"
]
} |
Hkg9HgBYwH | Encoding Musical Style with Transformer Autoencoders | [
"Kristy Choi",
"Curtis Hawthorne",
"Ian Simon",
"Monica Dinculescu",
"Jesse Engel"
] | We consider the problem of learning high-level controls over the global structure of sequence generation, particularly in the context of symbolic music generation with complex language models. In this work, we present the Transformer autoencoder, which aggregates encodings of the input data across time to obtain a global representation of style from a given performance. We show it is possible to combine this global embedding with other temporally distributed embeddings, enabling improved control over the separate aspects of performance style and melody. Empirically, we demonstrate the effectiveness of our method on a variety of music generation tasks on the MAESTRO dataset and an internal, 10,000+ hour dataset of piano performances, where we achieve improvements in terms of log-likelihood and mean listening scores as compared to relevant baselines. | [
"music generation",
"sequence-to-sequence model",
"controllable generation"
] | Reject | https://openreview.net/pdf?id=Hkg9HgBYwH | https://openreview.net/forum?id=Hkg9HgBYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"r5Iy5tlmjt",
"Hkx-_FQnjS",
"BygAqcDQsS",
"Byecdcvmsr",
"HyeZH9PXjr",
"rJlAcEsTtS",
"Hygtfbj6FH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798745484,
1573824873450,
1573251734316,
1573251698128,
1573251641106,
1571824790290,
1571823888773
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2295/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2295/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2295/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2295/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2295/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2295/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Main content:\\n\\nBlind review #3 summarizes it well:\\n\\nThis paper presents a technique for encoding the high level \\u201cstyle\\u201d of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global \\u201cstyle embedding\\u201d. Additionally, the Music Transformer model is also conditioned on a combination of both \\u201cstyle\\u201d and \\u201cmelody\\u201d embeddings to try and generate music \\u201csimilar\\u201d to the conditioning melody but in the style of the performance embedding. \\n\\n--\", \"discussion\": \"The reviewers questioned the novelty. Blind review #2 wrote: \\\"Overall, I think the paper presents an interesting application and parts of it are well written, however I have concerns with the technical presentation in parts of the paper and some of the methodology. Firstly, I think the algorithmic novelty in the paper is fairly limited. The performance conditioning vector is generated by an additional encoding transformer, compared to the Music Transformer paper (Huang et. al. 2019b). However, the limited algorithmic novelty is not the main concern. The authors also mention an internal dataset of music audio and transcriptions, which can be a major contribution to the music information retrieval (MIR) community. However it is not clear if this dataset will be publicly released or is only for internal experiments.\\\"\\n\\nHowever, after revision, the same reviewer has upgraded the review to a weak accept, as the authors wrote \\\"We emphasize that our goal is to provide users with more fine-grained control over the outputs generated by a seq2seq language model. Despite its simplicity, our method is able to learn a global representation of style for a Transformer, which to the best of our knowledge is a novel contribution for music generation. Additionally, we can synthesize an arbitrary melody into the style of another performance, and we demonstrate the effectiveness of our results both quantitatively (metrics) and qualitatively (interpolations, samples, and user listening studies).\\\"\\n\\n--\", \"recommendation_and_justification\": \"This paper is borderline for the reasons above, and due to the large number of strong papers, is not accepted at this time. As one comment, this work might actually be more suitable for a more specialized conference like ISMIR, as its novel contribution is more to music applications than to fundamental machine learning approaches.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the changes.\", \"comment\": \"Dear Authors,\\n\\nThank you for all the changes to the draft. I think the paper is much improved due to all the changes. I need some time to go through all the changes in detail and reconsider my rating for the paper.\"}",
"{\"title\": \"Specific Comments\", \"comment\": \"In addition to the common concerns as written above, we address Reviewer #3's specific concerns below:\\n\\n> 1. What does \\u201ctypically incorporate global conditioning\\u201d mean in the Introduction?\\n\\nThe generative models which \\u201ctypically incorporate global conditioning...\\u201d are simply conditional variants of models such as the conditional VAE (Sohn et al. 2015) and conditional GAN (Mizra et. al 2014) which perform generation by conditioning on a global signal, such as a one-hot encoding of the class label.\\n\\n> 2. Need more clarification about \\u201cinternal dataset\\u201d and \\u201cpreprocessing procedure\\u201d for melody extraction.\\nAs the reviewer noted, we did our best to anonymize the submission with respect to the dataset and preprocessing techniques used in the paper. Our internal dataset is comprised of approximately 400K piano performances which comprise the 10,000+ hours of audio. Due to licensing restrictions we are unable to release the internal piano performance dataset -- however, we will provide pre-trained models based on this dataset for public use. \\n\\nFor the melody representation (vocabulary), we followed (Waite et. al 2016) to encode the melody as a sequence of tokens and quantized it to a 100ms grid. For the melody extraction procedure, we used an algorithm as in the open-sourced code (Anonymous for review), where we use a heuristic to extract the note with the highest in a given performance. Specifically, we construct a transition matrix of melody pitches and use the Viterbi algorithm to infer the most likely sequence of melody events within a given frame. We will add additional details regarding the melody extraction and encoding in the Supplement. \\n\\n>3. There needs to be additional clarification of how the model is trained.\\n\\nThis is a good point. As noted by the reviewer, for performance-only conditioning, the decoder is tasked with predicting the same performance that was fed as input to the encoder. In this way, we encourage the model to learn global representations (the mean-aggregated performance embedding from the encoder) that will faithfully be able to reconstruct the input performance. For melody & performance conditioning, the Transformer autoencoder is trained to predict a new performance using the combined melody+performance embedding, where the loss is computed with respect to the conditioned input performance that is provided to the encoder.\\n\\nTo make this point more clear, we will update the submission with a new version of Figure 1 with the reviewer\\u2019s suggestions. We have also added these additional details on the model training procedure in the supplemental materials in the revision.\", \"references\": \"Sohn et. al 2015: Learning Structured Output Representation using Deep Conditional Generative Models\\nMizra et. al 2014: Conditional Generative Adversarial Nets\\nWaite et. al 2016: https://magenta.tensorflow.org/2016/07/15/lookback-rnn-attention-rnn\"}",
"{\"title\": \"Specific comments\", \"comment\": \"In addition to the common concerns as written above, we address Reviewer #1's specific concerns below:\\n\\n1. Is there a mathematical definition of style-specific generation, with more relevant baselines (e.g. cycle-consistency)? \\nWe appreciate the reviewer\\u2019s suggestion and additional references. Because our method is learning a conditional generative model, we do not incorporate any additional style-specific terms in our learning objective as we perform maximum-likelihood training. However, developing a more fine-grained notion of style for music generation is certainly interesting.\\n\\nWe originally compared against the Music Transformer to ensure that conditioning would improve the model, and added various versions of our model (e.g. melody-only conditioning) as relevant baselines for comparison. To the best of our knowledge, our work is the first to incorporate conditioning information in sequential language models for music generation.\\n\\nAdding a consistency loss term is a good idea for a baseline, but for our problem it was ill-posed because we did not have a straightforward way of partitioning our data into different categories. In image translation literature (e.g. Zhu et. al 2018), there exists a clear source and target domain even if the images themselves are unlabeled. For both MAESTRO and the internal dataset, such separations between the source and target domains are unclear (e.g. musical tempo, rhythm, pitch, etc.). Nevertheless, we agree that this would be interesting to explore as future work.\", \"references\": \"Zhu et. al 2018: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks\"}",
"{\"title\": \"Addressing Common Concerns\", \"comment\": [\"We thank the reviewers for their insightful comments! Before we address common threads of concern below, then respond to each reviewer individually, we highlight significant changes we have made to the paper:\", \"Following helpful reviewer suggestions, we have completely reworked the evaluation section (Section 4) of the paper, and uploaded a *revised* version of the paper.\", \"For more meaningful and fine-grained evaluation, we have adopted a collection 8 commonly used musical similarity metrics with which to compare samples.\", \"As an aggregate similarity metric, we now use the average of the 8 similarity metrics, rather than the IMQ kernel, which is more intuitive and better motivated.\", \"We have added tables 3 and 4 to report these fine-grained new metrics, and updated figures 2 and 3 to use the new aggregate similarity metric (Section 5).\", \"All of the key findings of the paper remain the same with these new metrics, and many effects are actually more pronounced than with the original kernel similarity metric.\", \"> R1/R3: Why not use existing techniques for measuring similarity between musical performances? Why not compare the conditioning melody with the generated performance similar to query-by-humming (QBH)?\", \"We appreciate the reviewers\\u2019 feedback regarding our kernel evaluation metric. Upon further reflection, we agree that there are simpler and more intuitive ways to evaluate musical similarity, and as described above we have reworked large parts of the paper to reflect that. In the revised version of the paper, we follow existing techniques (Yang & Lerch 2018) using the Overlapping Area (OA) of common similarity metrics including:\", \"Note Density\", \"Pitch Range\", \"Mean/Var Pitch\", \"Mean/Var Velocity\", \"Mean/Var Duration\", \"We note that certain features in the paper are not applicable to our setting (e.g. note length transition matrix) because they were developed for monophonic melodies, while we are evaluating polyphonic piano performances.\", \"Although we no longer use the IMQ kernel as the similarity metric, we emphasize that the key results remain the same. For clarity, we quickly examine here why that is the case and what originally motivated the kernel approach. The kernel approach assumes that the conditioning performances (x~p(x)) and generated performances (y, y\\u2019~q(y)) are drawn from two different distributions, and MMD computes the degree to which these two distributions are similar in a kernel feature space. We experimented with a variety of kernels commonly used in the literature (e.g. RBF kernel) and found that the IMQ worked best empirically. Our results remained unchanged because the MMD distributional similarity correlates well with the average difference in extracted similarity features.\", \"We did not compare an input melody to the generated performance (as in QBH) because we wanted to compare the similarities of polyphonic sequences. As the melody is represented using a different encoding and vocabulary than the performance, comparison is not straightforward and would not provide relevant information. We do note that our user listening studies also implicitly serve as a proxy to measure melodic similarity. Thus we performed our similarity evaluations against the original performance from which the melody was extracted. 
This is reflected in the melody & performance conditioning case: we average two OA terms, OA(source performance of extracted melody, generated sample) and OA(conditioning performance, generated sample), as our final metric. In this way, we account for the contributions of both the conditioning melody and performance sequence.\", \"> R1/R3: Algorithmic novelty is somewhat limited.\", \"We emphasize that our goal is to provide users with more fine-grained control over the outputs generated by a seq2seq language model. Despite its simplicity, our method is able to learn a global representation of style for a Transformer, which to the best of our knowledge is a novel contribution for music generation. Additionally, we can synthesize an arbitrary melody into the style of another performance, and we demonstrate the effectiveness of our results both quantitatively (metrics) and qualitatively (interpolations, samples, and user listening studies).\"], \"references\": \"Hung et al. 2018: Improving Automatic Jazz Melody Generation by Transfer Learning Techniques\\nYang & Lerch 2018: On the evaluation of generative models in music\"}",
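The Overlapping Area (OA) measure adopted in the revision compares the empirical distribution of one feature (note density, pitch range, etc.) across two sets of performances. A minimal histogram-based sketch follows; the cited Yang & Lerch paper fits the distributions more carefully, so treat this as an approximation with invented function names:

```python
import numpy as np

def overlapping_area(xs, ys, bins=50):
    """Overlap, in [0, 1], of the empirical distributions of one feature
    (e.g. note density) extracted from two sets of performances."""
    lo, hi = min(xs.min(), ys.min()), max(xs.max(), ys.max())
    hi = max(hi, lo + 1e-8)  # guard against a degenerate histogram range
    p, edges = np.histogram(xs, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(ys, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    return float(np.sum(np.minimum(p, q)) * width)
```

Averaging such per-feature overlaps across the eight listed features — and, in the melody & performance conditioning case, averaging the two OA terms described above — yields an aggregate similarity of the kind the revised Section 4 reports.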
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"## summary\\nIn this paper, the author extends the standard music Transformer into a conditional version: two encoders are evolved, one for encoding the performance and the other is used for encoding the melody. The output representation has to be similar to the input. The authors conduct experiments on the MAESTRO dataset and an internal, 10,000+ hour dataset of piano performances to verify the proposed algorithm.\\n\\n## Novelty \\nThe application is interesting, but the novelty of the architecture itself is limited. Multiple encoder structure has been widely investigated in machine translation.\\n\\n## Questions\\n1.\\tIn section 4.2, how do you use the $\\\\mathcal{Y}$? Since it is defined but never used. What does the $p()$ and $q()$ mean ? You mentioned that \\u201cWe omit the usual first term in the MMD loss \\u2026\\u201d but if so, why do you introduce this term to evaluation metric?\\n2.\\tBy checking the music Transformer, in Table 3, it is not surprising to see that the proposed method outperforms the corresponding baselines, because no conditional information is used. \\n3. It is better to give some mathematical definition of music generation with specific style. I am not working on music generation but I list two CV related papers about conditional image translation, which mathematically describes \\\"an image with specific style\\\".\\n4.\\tConsidering that this is an unsupervised setting that two styles are transformed, can cycle-consistency be implemented as a baseline? The following two papers are about conditional unsupervised image-to-image translation, which build a cycle-consistency loss during the feedback and might help improve the performances.\\n\\n\\n## Reference\\n[ref1] Multimodal Unsupervised Image-to-Image Translation, ECCV\\u201918\\n[ref2] Conditional image-to-image translation, CVPR\\u201918\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a technique for encoding the high level \\u201cstyle\\u201d of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global \\u201cstyle embedding\\u201d. Additionally, the Music Transformer model is also conditioned on a combination of both \\u201cstyle\\u201d and \\u201cmelody\\u201d embeddings to try and generate music \\u201csimilar\\u201d to the conditioning melody but in the style of the performance embedding.\\n\\nOverall, I think the paper presents an interesting application and parts of it are well written, however I have concerns with the technical presentation in parts of the paper and some of the methodology. Firstly, I think the algorithmic novelty in the paper is fairly limited. The performance conditioning vector is generated by an additional encoding transformer, compared to the Music Transformer paper (Huang et. al. 2019b). However, the limited algorithmic novelty is not the main concern. The authors also mention an internal dataset of music audio and transcriptions, which can be a major contribution to the music information retrieval (MIR) community. However it is not clear if this dataset will be publicly released or is only for internal experiments. \\n\\nIn terms of technical presentation, I think the authors should clarify how the model is trained. It took me a couple of passes and reading the Music Transformer paper to realise that in the melody and performance conditioning case, the aim is to generate the full score (melody and accompaniment) while conditioning on the performance style and melody (which is represented using a different vocabulary). This point can be easily clarified in Figure 1, by adding the input to the encoder as input to the decoder for computing the loss. Although I understand the need for anonymity and constraints while referring to unreleased datasets, it would still be useful for the reader/reviewer to have some details of how the melody was extracted and represented. \\u201cAn internal procedure\\u201d is quite mysterious. \\n\\nMeasuring music similarity is a difficult problem and the topic has been the subject of at least 2 decades of research. I find the description of the \\u201cperformance feature\\u201d to be lacking in necessary background and detail. Firstly, I am not sure what the final dimensionality of the feature vector is. Is it real valued? The authors mention (Yang and Lerch, 2018) but use a totally different set of attributes compared to that paper. I also don\\u2019t see the connection between this proposed feature vector and using the IMQ kernel for measuring similarity. This connection is not motivated adequately and after reading (Jitkrittum et. al. 2019) its not obvious to me why this is the most appropriate metric. Finally, it would be useful if the authors comment on existing methods for measuring music similarity in symbolic music and how their proposed feature fits into existing work. A lot of work has been published on this topic, most recently in the context of Query-by-Humming [1]. \\n\\nMinor Comments\\n\\n1. 
\\u201c...which typically incorporate global conditioning as part of the training procedure\\u201d Could you elaborate on this point? Is the global conditioning the samples from the noise distribution? \\n2. Figure 1 should be clarified or another figure should be added to show how the melody conditioning works. Maybe a comment on the melody vocabulary or a reference would also be useful. \\n3. The MAESTRO dataset is described in terms of the number of performances while the internal dataset is described in terms of the number of hours of audio. It's not possible for the reader to get a sense of the relative sizes of the 2 datasets and how the results should be interpreted. \\n4. There should be more background and description in Section 4. Where does the performance feature come from? Why use this feature compared to existing techniques for measuring similarity between symbolic music pieces? Is it computational efficiency? Why not compare the conditioning melody with the generated performance similar to query-by-humming? Where does the IMQ kernel come from? What is the size of the feature vector? \\n5. In section 5.2, a conditioning sample, a generated sequence and an unconditional sample are used to compute the similarity measure. Which terms do these correspond to in the MMD-like term (x,y,y\\u2019)? \\n6. I like the experiments performed in Section 5.3 with the linear combination of 2 performance embeddings. \\n\\n[1] A Survey of Query-By-Humming Similarity Methods: http://vlm1.uta.edu/~athitsos/publications/kotsifakos_petra2012.pdf\"}"
]
} |
BkeYSlrYwH | Collaborative Inter-agent Knowledge Distillation for Reinforcement Learning | [
"Zhang-Wei Hong",
"Prabhat Nagarajan",
"Guilherme Maeda"
] | Reinforcement Learning (RL) has demonstrated promising results across several sequential decision-making tasks. However, reinforcement learning struggles to learn efficiently, thus limiting its pervasive application to several challenging problems. A typical RL agent learns solely from its own trial-and-error experiences, requiring many experiences to learn a successful policy. To alleviate this problem, we propose collaborative inter-agent knowledge distillation (CIKD). CIKD is a learning framework that uses an ensemble of RL agents to execute different policies in the environment while sharing knowledge amongst agents in the ensemble. Our experiments demonstrate that CIKD improves upon state-of-the-art RL methods in sample efficiency and performance on several challenging MuJoCo benchmark tasks. Additionally, we present an in-depth investigation on how CIKD leads to performance improvements.
| [
"Reinforcement learning",
"distillation"
] | Reject | https://openreview.net/pdf?id=BkeYSlrYwH | https://openreview.net/forum?id=BkeYSlrYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ewQScuPBCY",
"rJl2BS_niH",
"HJgbNT4nor",
"ryg6qbs5iS",
"r1g2F6OqiS",
"rJlz4hu4sr",
"rygrTiuEiB",
"Bkg-TbEmor",
"BkgE-WYp5H",
"HygygT3M9H",
"Hkx1u0nj_H",
"rylFv_5hvB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798745456,
1573844291787,
1573829928562,
1573724564767,
1573715332389,
1573321769563,
1573321660910,
1573237177133,
1572864251684,
1572158695073,
1570651751406,
1569658976627
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2294/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2294/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2294/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2294/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2294/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2294/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2294/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2294/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2294/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2294/AnonReviewer3"
],
[
"~Kai_Li2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper introduces an ensemble of RL agents that share knowledge amongst themselves. Because there are no theoretical results, the experiments have to carry the paper. The reviewers had rather different views on the significance of these experiments and whether they are sufficient to convincingly validate the learning framework introduced. Overall, because of the high bar for ICLR acceptance, this paper falls just below the threshold.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"[Update] Response to Reviewer #4\", \"comment\": \"[Update] Question/Comment #8 Response:\\nWe have attached the new experimental results in the appendix.\"}",
"{\"title\": \"Status update for paper\", \"comment\": \"We would like to thank all the reviewers for their helpful comments. We have provided responses to all of the reviewers, and have updated our paper in response to the reviews. In particular, we have improve the presentation of the material, and have run additional experiments. These additional experiments have been added to the Appendix, Section A1. Unfortunately, we were unable to complete all of the reviewers\\u2019 experiments, and were only able to run SAC-CIKD for 1.5M timesteps on the additional domains, as opposed to 3M, despite the fact that our GPUs have been continuously running. These partial results are in Appendix A1. We will expand our set of experiments in the future.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We would like to thank the reviewer for reading our paper and providing feedback on our work. We will address the reviewer\\u2019s various points here.\\n\\n- \\u201cThough extensive ablation studies have shown the effectiveness of each component of CIKD. It is still not clear why this approach can be effective. Intuitively, it is possible that the exploration from a set of agents would outperform a single agent. The measure of exploration efficiency could help in explaining the results.\\u201d\\n\\t\\nThe purpose of the Ensemble-SAC baseline was to investigate how CIKD itself improves upon Ensemble-SAC, since Ensemble-SAC may naturally benefit from improved exploration upon a single SAC agent. In this way, we can decouple (to a certain degree), the benefits of an ensemble vs. the benefits of applying CIKD to an ensemble. In future work, it would be interesting to perform more experiments and analyses on the effect of improved exploration and the benefit of distillation (e.g., through plotting the state-visitation frequencies or distilling the knowledge from a separate hand-crafted dataset as opposed to agent data).\\n\\n- \\u201cFurthermore, better exploration not necessarily leads to better performance and sample-efficiency. Does knowledge distillation serve as a better alternative to exploit existing data?\\u201d\\n\\nIt is unclear how to compare various approaches to exploiting existing data, since there is no general framework for data exploitation. However, we would like to highlight two of our experiments that investigated this question. Off-policy RL offers an obvious way to exploit experiences. In Section 5.3 (Fig. 3a), we performed an experiment where we tuned an Ensemble-SAC agent to perform additional off-policy RL updates. We found that using CIKD with Ensemble-SAC outperforms Ensemble-SAC with additional RL updates. Our second experiment, the \\u201chard-copy\\u201d experiment (Fig. 3b, Section 5.3), copies the best teacher into the students rather than performing distillation. We found that distillation performs better than strictly hard-copying the best agent. Interesting directions for future work include performing additional analyses on various data exploitation methods.\\n\\n- \\u201cModel/algorithm agnostic: The proposed method is more convenient to be applied with off-policy approach when the policy is in the form of softmax. Is it also applicable to other approaches? \\u201c\\n\\nOur method is certainly applicable to other approaches. In particular, our KL Loss can be applied to other policy gradient approaches as long as the policy outputs a distribution and is differentiable, as is the case with most modern policy representations. In principle, CIKD can be applied to value-based approaches as well by changing the distillation loss from a KL-Loss to another loss, such as mean-squared-error (MSE). In fact, in our paper, we distill our critics using an MSE loss.\\n\\n- \\u201cHow do you determine when to stop the KD process? As mentioned in section 5.5, if we conduce KD fully, all students would be just imitating the teacher's behavior. It seems the key is to tune a good termination threshold for each task? Are there any guidelines to set up this threshold? 
Do you have some automatic way to terminate the KD procedure?\\u201d\\n\\nWe didn\\u2019t focus on optimizing the terminating threshold for distillation and found that CIKD worked quite well by randomly dividing the entire (bounded) replay buffer into several minibatches and performing distillation on all of these minibatches. If this process were to be repeated infinitely, this would amount to imitation learning. To verify that CIKD is not tantamount to pure imitation learning, we ran two key experiments. In one experiment (Section 5.5, Figure 5d), we tested an alteration of CIKD where we re-initialized the student networks prior to distillation. This amounts to pure imitation learning in that we have a randomly initialized student learning to directly imitate the teacher. We found that pure imitation learning fails to perform as well as CIKD. In Section 5.5 (Fig. 5c), we show that the student often outperforms the teacher after distillation. Note that outperforming the teacher is atypical in imitation learning, which further supports that CIKD does not reduce to imitation learning. Returning to the reviewer\\u2019s question, an interesting direction for future work is to investigate the tradeoff between pure imitation learning and a moderate amount of distillation. But in this work, we found that CIKD achieved good performance with straightforward distillation termination conditions and is in fact superior to distilling via pure imitation learning.\\n\\n- \\u201cMinor: L1, P5, \\u2018how to CIKD improves the sample efficiency\\u2019\\u201d \\n\\nWe have corrected this mistake in the paper.\"}",
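For concreteness, the distillation step these responses describe — a KL loss between teacher and student policies plus an MSE loss on the critics — can be sketched as below for diagonal-Gaussian policies. This is our own hypothetical PyTorch rendering (the network interfaces, the KL direction, and the omission of SAC's tanh squashing are all simplifying assumptions), not the paper's implementation:

```python
import torch
from torch.distributions import Normal, kl_divergence

def distill_step(student, teacher, critic_s, critic_t, batch, optimizer):
    """One CIKD-style distillation update on a replay minibatch."""
    s, a = batch["states"], batch["actions"]
    mu_t, log_std_t = teacher(s)                 # teacher policy head (frozen)
    mu_s, log_std_s = student(s)                 # student policy head (trained)
    pi_teacher = Normal(mu_t.detach(), log_std_t.detach().exp())
    pi_student = Normal(mu_s, log_std_s.exp())
    # Actor loss: pull the student's action distribution toward the teacher's.
    actor_loss = kl_divergence(pi_teacher, pi_student).sum(dim=-1).mean()
    # Critic loss: regress the student's Q-values onto the teacher's.
    critic_loss = (critic_s(s, a) - critic_t(s, a).detach()).pow(2).mean()
    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()
```

Running this for a bounded number of minibatches, as the response describes, is what distinguishes the procedure from the full imitation learning that infinite repetition would amount to.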
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"First, we would like to thank the reviewer for the time and effort given to the review, and for his/her valuable comments. We will address various points that the reviewer mentioned here.\\n\\n-\\u201cInterestingly they only use the most recently collected trajectories to update all policies, and despite storing the rollouts in a replay buffer, they seem to only use the stored transitions for the imitation part described below.\\u201d\\n-\\u201cSome of their experimental results uses extra gradient steps, although it\\u2019s not clear if those gradient steps are also only on the last rollout collected, or on transitions sampled from the replay buffer as it is typical in off-policy RL methods\\u201d.\\n\\nIn fact, we indeed do what the reviewer notes that we should do, i.e., we use \\u201ctransitions sampled from the replay buffer as it is typical in off-policy RL methods\\u201d, for all of our experiments, as the original SAC algorithm does. Perhaps this misunderstanding stems from our pseudocode (Algorithm 1), which was written to be general. Though we allude to the use of experience replay for policy training in Section 4.2, we realize that the pseudocode is misleading to make one think that our policy is only trained by the most recent rollouts. We have rewritten our pseudocode to accurately reflect our experiments and the SAC algorithm.\\n\\n\\n-\\u201cI suspect that most of the benefit of their method comes from randomly perturbing the parameters of the policies in the ensemble. More thorough and careful experimentation needs to be carried out to investigate this direction.\\u201d\\n\\nFirst, we would like to clarify that our results demonstrate that selecting the best agent to be the teacher has some benefit over choosing a random teacher. Perhaps we misunderstood, but we interpreted the reviewer\\u2019s mention of the \\u201crandom perturbation\\u201d to mean the change in parameters after performing distillation with a random teacher. We agree that this question is worth investigating, and we will address these in subsequent experiments in the future.\\n\\nOur hypothesis (which we have updated in the draft), outlined in Section 5.4, is that the reason that distillation from a random teacher performs quite well is because reducing the KL divergence between policies in the ensemble makes each policy better at learning from off-policy data generated from other members of the ensemble, by reducing the extrapolation error that comes from off-policy data distributions [11]. Thus, the distillation improves the quality of the off-policy RL updates, leading to better-than-expected performance for the agents.\\n\\nWe intend to investigate this improvement by measuring the extrapolation error [11] (stemming from learning from off-policy data) before and after distillation to measure this effect. We can do so with the method used by Fujimoto et al. [11], where they measured the extrapolation error for DDPG, an off-policy actor-critic method applied to Mujoco tasks. We will also run additional ablations where we withhold some agents in the ensemble from distillation or distill from all members of the ensemble to a single agent. \\n\\nWe would like to re-emphasize that these additional investigations are not fundamental to our core claims and results in the paper. These experiments are interesting supplementary experiments that better explain the reasons behind our performance improvements upon Ensemble-SAC. 
However, they will not change our core result, which is that Ensemble-SAC augmented with CIKD improves performance across several Mujoco tasks.\\n\\n-\\u201cespecially realizing that the \\u201cHalfCheetah\\u201d experiments seem to not have all seeds run to convergence, please report the full results\\u201d\\n-\\u201cFurthermore, the authors only run the environments for 1M steps, whereas in previous works some environments are shown to get higher return after more training steps.\\u201d \\n\\nWe intend to run experiments for longer training times and on all standard Mujoco tasks. Due to limited resources, we have prioritized (for the rebuttal) running experiments for longer training times. These experiments are currently underway, and we will post the results to the rebuttal as soon as possible. We will also run experiments on more Mujoco tasks, though it is not likely that we will be able to complete them within the rebuttal period.\"}",
"{\"title\": \"Response to Reviewer #4 (2/2)\", \"comment\": \"Question/Comment #5 Response:\\n\\nWhether or not it is surprising that the KL update is necessary is quite subjective. However, given that this question (i.e. Point 5) is listed under the reviewer\\u2019s concerns about the paper, we would urge the reviewer to consider our contribution. When we have an empirical hypothesis, and we test it, and demonstrate that it is useful for improving performance, that is valuable to the community. We do not think that a hypothesis needs to be surprising for it to be a valuable contribution to the field (if that was a concern). \\n\\nRegarding the reviewer\\u2019s point about why selecting a random teacher provides some improvement, we agree with the reviewer, and appreciate the comment. We viewed our random teacher experiment as an auxiliary experiment to our core result of using the best teacher, and thus did not devote as much text to discussing this experiment. However, the reviewer\\u2019s comments are interesting and important, and we have updated the paper to reflect those comments.\\n\\nQuestion/Comment #6 Response: \\n\\nThe core idea of Osband\\u2019s method is to combine several value functions, which each induce a policy, and have these individual policies act in the environment and generate trajectories, which are then used to train all the value functions off-policy. Our Ensemble-SAC similarly consists of several agents/policies, which act in the environment and generate trajectories, which are then used to train all the agents off-policy. \\n\\nWe did not presume to say that Osband\\u2019s method is the same as CIKD but without the KL update. Our wording was that it was \\u201ceffectively equivalent to CIKD-RL without inter-agent knowledge distillation\\u201d, which we realize is strong wording. We have rephrased this in the paper to indicate that Ensemble-SAC is the natural analog to Osband\\u2019s method in this setting.\\n\\nQuestion/Comment #7 Response: \\n\\nActually, we did hyperparameter searches (on learning rate) for Ensemble-SAC (extra) and Vanilla-SAC (extra) and found out that the default hyperparameters are the best. Presumably, a smaller learning rate should be used with extra policy updates. However, it turned out that neither a smaller nor larger learning rate are better than the default hyperparameters reported in the original SAC paper.\\n\\nQuestion/Comment #8 Response: \\n\\nThe dominant agent experiment is meant to test whether a single agent is consistently better than other agents in the ensemble. For us it was not obvious that the KL update will necessarily cause the best agent to change so frequently within the ensemble. While the KL update brings the policies closer, it certainly doesn\\u2019t suggest that one agent should surpass another after a KL update. In particular, the KL update is unidirectional, so if an agent A is better than agent B, then we perform updates on agent B. While we expect agent B\\u2019s policy to grow more similar to A\\u2019s policy, we wouldn\\u2019t necessarily expect it to surpass A, especially considering that B is worse than A before the KL update. It would be useful to run an additional experiment in the absence of a KL update to see whether a single agent is consistently dominant. 
We will try to get these results before the end of the rebuttal period.\\n\\nQuestion/Comment #9 Response: \\n\\nIn an abstract sense, we are related to genetic algorithms in that we share knowledge between individuals: in our case an ensemble of RL agents, and in the case of genetic algorithms, a population of genotypes (candidate solutions to a given problem). However, there are several differences in the details between CIKD and genetic algorithms. First, we use knowledge distillation for sharing knowledge, while typical genetic algorithms use crossover, which usually randomly exchanges the elements between two sequences. \\n\\nSecondly, we use RL and distillation to optimize each individual while typical genetic algorithms solely use mutation and crossover (i.e., randomly exchanging the elements between sequences). While distillation is somewhat related to crossover, typically crossover is a destructive process that explicitly replaces the structure or parameters of the individual, unlike distillation, whose parameter changes come through gradient updates. Note that, as we showed in Figure 5d, the destruction of parameters before distillation harms the performance, which suggests the advantage of being able to preserve the learned knowledge.\\n\\n\\nAgain, we would like to thank the reviewer for providing valuable feedback and comments, which we used to improve the paper.\"}",
"{\"title\": \"Response to Reviewer #4 (1/2)\", \"comment\": \"Question/Comment #1 Response:\\n\\nFirst, we would like to thank the reviewer for his/her detailed/thorough review of our paper. Regarding the biggest concern being that the paper is not motivated from a theory standpoint, it is unclear whether the reviewer is suggesting that we should have theoretical results or whether we should have theory motivating our method. We certainly think that rigorous, empirical contributions are extremely valuable contributions to the field, and there have been several empirical papers published at ICLR.\\n\\nRegarding 1a), it is true that our empirical results are on soft actor-critic, an actor-critic method, whereas Osband\\u2019s results are on value-function based methods. Is there a concern to be raised in 1a that we may address? \\n\\nRegarding 1b), we respectfully disagree that the KL distillation update is the only significant contribution of the paper. Our contribution is an empirical demonstration that combining the training of an ensemble of RL agents with periodic distillation between the members of the ensemble can significantly improve performance, and it is backed by several experiments. However, it is true that the distillation itself is primarily carried out through a combined KL update for actors and mean-squared error for the critics. However, we disagree with the characterization that the loss functions alone are the main/significant contribution, as it diminishes the importance of executing this gradient update in the context of training an ensemble with periodic distillation, which to our knowledge nobody has attempted. The point 1c seems to implicitly suggest (please correct us if we are wrong) that our paper\\u2019s aim is to justify diversity through randomization and/or imitation learning. Our paper builds off of ensemble RL, and our paper\\u2019s aim is to provide a method that improves upon a vanilla/standard form of ensemble RL. So our experimental results are not meant to further highlight the benefit of diversity beyond existing literature that justifies it. If we overemphasized the importance of diversity to the point where it mischaracterizes our contribution, we apologize. Our goal in motivating diversity was because we are considering settings where we are training an ensemble of RL agents. That is, our paper first motivates diversity since diversity is a precursor to the application of our actual method/contribution.\\n\\nQuestion/Comment #2 Response: \\n\\nWe apologize if we are unclear, and are happy to address any individual instances of unclear wording that the reviewer presents. In RL, exploration typically refers to trying actions randomly or randomly perturbing policy parameters to collect trajectories in the environment. To address the reviewer\\u2019s question, gathering more data through exploration can help improve the policy by adding some noise to the gradient, which is similar to the reviewer\\u2019s note. Our point was that for the agent to improve, it needs to be rewarded for \\u201cgood\\u201d behavior. An RL agent typically explores policies that are close to its greedy policy (e.g., sampling from a gaussian policy or adding the noise to the greedy actions). If that greedy policy is poor, it can require quite a bit of exploration in order to acquire those good experiences. We understand the reviewer\\u2019s concern here, and have reworded those sentences in the paper.\\n\\nQuestion/Comment #3 Response: \\n\\nThank you for pointing this out. 
This is correct. It was our intention to suggest that we can take traditional on-policy updates and have them use off-policy data for learning by applying importance sampling. However, these are then off-policy algorithms, not on-policy algorithms. We have removed this from the paper.\\n\\nQuestion/Comment #4 Response: \\n\\nIn Section 4.3, we are speaking theoretically, whereas in Section 5.3, we are speaking empirically. That is, theoretically, off-policy methods can update their policies by experience generated by any policy (e.g., human experts, past experience, and the other agents\\u2019 policies). However, in Section 5.3, we are saying that SAC, in practice, cannot fully benefit from the past experience (similar to the reviewer\\u2019s comments in point 5). Recent work [11], which we cite in our paper, supports this claim, demonstrating that DDPG, an off-policy actor-critic method, failed to learn well from data that deviates too much from the agent\\u2019s current policy. We have updated the draft based on these comments.\"}",
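Since point 3 turns on where importance sampling (IS) belongs, it may help to spell out the standard per-trajectory IS correction for off-policy evaluation that this exchange refers to. A schematic sketch under our own naming assumptions — the function and array names are invented, not from the paper:

```python
import numpy as np

def is_weighted_return(logp_target, logp_behavior, returns):
    """Off-policy estimate of the target policy's expected return.

    logp_target, logp_behavior: (N, T) per-step action log-probs under the
    target and behavior policies; returns: (N,) trajectory returns collected
    under the behavior policy. Each trajectory weight is the product of the
    per-step ratios pi(a_t|s_t) / beta(a_t|s_t), computed in log space.
    """
    weights = np.exp((logp_target - logp_behavior).sum(axis=1))
    return float(np.mean(weights * returns))
```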
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper introduces a method for using an ensemble of deep reinforcement learning policies, where members of the ensemble are periodically updated to imitate the most promising member of the ensemble. Thus learning proceeds by performing off policy reinforcement learning updates for each individual policy, as well as some supervised learning for inter-policy imitation learning.\", \"i_start_by_what_i_view_as_the_positive_aspects_about_the_paper\": \"1- The algorithm is quite simple (to understand and to implement).\\n2- Experimental results are performed on a variety of domains, and more importantly, each experiment is motivated by a question.\\n\\nThat said, I have some concerns about this paper which I list below:\\n\\n1- Perhaps my biggest concern is that the approach is not motivated from a theory stand point. There has been interesting results in Osband's work [Osband, 2016] (and references therein) for randomized value functions which can serve as a foundation for this work. That said, a) Osband's results, at least immediately, are related to value-function based methods, as opposed to policy gradient b) the KL update which one could argue is the main and only significant contribution of the paper, is not justified by Osband or any other prior work c) there is not anything that this paper adds to the literature to better justify diversity through randomization and/or imitation learning based on the best member of the ensemble.\\n\\n2- I have found various claims in the paper which are unclear, scientifically not true, or sometimes even contradicting. In Introduction, for example, the authors mention that the agent sometimes gets into a sub-optimal policy and may require a large number of interactions before escaping the sub optimal policy. How does gathering more data help to improve the policy? Either we are in a local maximum, which if we are doing gradient ascent, there is really not much we could do, or that we are in a saddle point, which we can escape by adding some noise to the gradient. [Jin,2017]\\n\\n3- In section 4.3 the authors talk about on-policy methods requiring importance sampling (IS) ratios. To the best of my knowledge, IS is only used for off-policy learning. Can the authors provide a link to an on-policy method that does IS?\\n\\n4- Again in section 4.3 authors claim and I quote \\\"Using off-policy methods, all the policies in the ensemble can easily be updated, since off-policy update methods can perform updates from any \\\\tau\\\". But later on in Section 5.3 authors claim that \\\"off-policy actor-critic methods (e.g. SAC) cannot fully utilize the other agent's or past experience.\\\" So which statement is true?\\n\\n5- Again, the KL update is interesting, but is it even surprising that the KL update is necessary for an ensemble of policies updates using policy gradients? In the absence of this KL update, which the authors characterize as the method that Osband proposed, the policies could generally be arbitrarily far from one another. This means that each policy needs to perform policy evaluation using trajectories that are coming from other policies who in principle can be radically different than the policy we want to update. 
This means that updates will be quite \\\"off-policy\\\", which we know can really degrade the quality of the estimated gradient. This is perhaps why even choosing a random policy to update towards is providing \\\"some\\\" improvement. I think this is the real insight, but it is not really discussed at all in the paper.\\n\\n6- On the same note, I do not think that one can say Osband's method is the same as CIKD but only without the KL update. Most notably, Osband's work was presented for value-function-based methods like DQN. These methods work fundamentally differently from policy gradient methods, which rely on (near) on-policy updates to perform good policy improvements. In that sense, the presented results make sense, but I disagree with the framing of the results and how they are presented here.\\n\\n7- In section 5.3, when the authors utilize more policy updates to have a fair comparison, are they retuning hyperparameters? Surely they need to do that, at least for hyperparameters that are known to be super important such as the step size.\\n\\n8- Overall, I liked section 5.5, which tries to dissect causes for improvement. However, it seems like the \\\"dominant agent\\\" hypothesis has been rejected hastily, unless I misunderstood the experiment. The authors show that the notion of best is spread across different agents. But of course this will be the case in light of the KL update, since the policies are getting closer to one another. Can you redo the experiment in the absence of the KL update?\\n\\n9- Have the authors thought about any connection between this and genetic algorithms? In genetic algorithms, the idea is that the next set of candidates is chosen based on the most promising candidates in the current iteration. CIKD seems like a soft implementation of this idea.\\n\\nIn light of the comments above, I am voting for weak rejection, though as I said before, I do see some interesting things in this paper. I encourage the authors to think about CIKD from a theoretical lens in the future.\"}",
"{\"title\": \"Compare\", \"comment\": \"Thank you for your interest in our work. Population-Based Training of Neural Networks (PBT) is similar to our work at a high level in that it similarly employs multiple agents for training.\\n\\nWe appreciate you for mentioning a related work. Population-Based Training of Neural Networks (PBT) is similar to our work in an abstract sense. PBT similarly employs multiple agents for training. However, our work differs from PBT in multiple ways.\\n\\nFirst, the goal of PBT is to optimize the hyperparameters online. However, our work aims to optimize an ensemble of policies given the same hyperparameters. Thus, PBT can be incorporated into CIKD, optimizing the hyperparameters of CIKD.\\n\\nSecondly, the core idea is different despite the similarity at an abstract level. Resembling evolutionary algorithms, PBT searches for the optimal set of hyperparameters via mutation, selection, and reproduction. Each set of hyperparameters is considered as an individual in the population. PBT iteratively mutates (i.e. randomly perturbs) the existing hyperparameters, then selects a group of top-ranked agents, and finally reproduces the population using the selected agents. Differing from PBT, our work does not require mutation and reproduction. We instead focus on improving the existing agents via distilling the knowledge of the selected best-performing agent. For a more detailed comparison, the reader can refer to Section 2 in our paper.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes an RL training procedure that maintains an ensemble of k policies and periodically pushes all the policies to be closer to the best performing one. The formulation, experiments and analysis are very clear and show a mild improvement over using the same underlying RL algorithm without the imitation part. The idea is close to many other proposed in the literature, but to my knowledge it is the first time this exact procedure is studied in detail.\\n\\nThe first piece of their approach is an off-policy RL algorithm. In their case, they use SAC. The second piece is adding an ensemble of policies (3 in their case), and randomly selecting one of them every time a rollout is collected, and using the collected rollout to update all the policies. This effectively implies 3 times more overall gradient updates compared to SAC. They call this ablation SAC-ensemble. Interestingly they only use the most recently collected trajectories to update all policies, and despite storing the rollouts in a replay buffer, they seem to only use the stored transitions for the imitation part described below. Some of their experimental results uses extra gradient steps, although it\\u2019s not clear if those gradient steps are also only on the last rollout collected, or on transitions sampled from the replay buffer as it is typical in off-policy RL methods. In general, I think the work could improve with more details about how much the policy training could improve by increasing the number of gradient steps on the full replay buffer.\\n\\nThe final piece of their method is selecting the best performing policy (or \\u201cteacher\\u201d) of the ensemble based on the recent experience, and update all other policies by executing some gradient steps on the KL divergence between them and the current \\u201cteacher\\u201d. They also try an experiment where the \\u201cteacher\\u201d is selected randomly, and it does surprisingly well in my opinion (specially realizing that the \\u201cHalfCheetah\\u201d experiments seem to not have all seeds run to convergence, please report the full results). I suspect that most of the benefit of their method comes from randomly perturbing the parameters of the policies in the ensemble. More thorough and careful experimentation needs to be carried out to investigate this direction. This is in fact not very surprising given the results of Evolutionary Strategy methods, or Population-based training (even if usually used for hyper-parameters adaptation).\\n\\nFurthermore, the authors only run the environments for 1M steps, whereas in previous works some environments are shown to get higher return after more training steps. I would also encourage the authors to report the results in all the standard MuJoCo benchmarks for the ablations (even if it\\u2019s in the appendix) to better asses their claims.\\n\\nOverall, this is a very well presented work, although it lacks some novelty and a few more thorough experiments to fully understand the improvements they show. I think this idea is worth sharing with the community, and I recommend a weak accept.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper proposed an ensemble method (CIKD) that train multiple agents and\\nuse knowledge distillation to transfer knowledge from the current best agent to\\nsub-optimal agents periodically. According to the reported results, CIKD is a\\nsimple yet effective approach to improve sample-efficiency and final performance. \\nThe experimental results are sufficient, and the ablation studies are conducted thoroughly. It is shown that both selecting the best agent and using KD to\\ntransfer knowledge are effective comparing to other naive alternatives. \\n\\n\\nI recommend the acceptance of this paper. \\n\\nThe paper proposed a novel approach (CIKD) to improve the sample-efficiency of the state-of-the-art. The proposed ensemble approach is aligned with our intuition, and it is effective. The authors proposed to train several agents at the same time and randomly select one of\\nthe agents as a behavior policy during each rollout. Then the collected trajectory is used to update the policy of all agents. Meanwhile,\\nthey keep tracking the performance of each agent and use the current best agent to conduct knowledge distillation to other agents periodically. \\n\\nThis paper first conducts experiments to show when consolidating\\nthe SAC with CIKD, both of the final performance and sample-efficiency can be improved. Then a set of ablation studies verified the best agent selection strategy, and the knowledge distillation\\nstrategy is necessary for the ensemble method.\", \"investigation_on_the_reasons_for_improvement\": \"Though extensive ablation studies have shown the effectiveness\\nof each component of CIKD. It is still not clear why this approach\\ncan be effective. \\nIntuitively, it is possible that the exploration from a set of agents would outperform\\na single agent. The measure of exploration efficiency could help in explaining the results. Furthermore, better exploration not necessarily\\nleads to better performance and sample-efficiency. Does knowledge distillation serve as a better alternative to exploit existing data? \\n\\nModel/algorithm agnostic\\nThe proposed method is more convenient to be applied with off-policy approach when the policy is in the form of softmax. Is it also applicable\\nto other approaches?\", \"experiments\": \"How do you determine when to stop the KD process? As mentioned in section 5.5, if we conduce KD fully, all students would be just imitating\\nthe teacher's behavior. It seems the key is to tune a good termination\\nthreshold for each task? Are there any guidelines to set up this threshold?\\nDo you have some automatic way to terminate the KD procedure?\", \"minor\": \"L1, P5, \\\"how to CIKD improves the sample efficiency\\\"\"}",
"{\"comment\": \"Very interesting idea, how is this method compared to <Population Based Training of Neural Networks>\", \"title\": \"How\"}"
]
} |
HJeYSxHFDS | Gauge Equivariant Spherical CNNs | [
"Berkay Kicanaoglu",
"Pim de Haan",
"Taco Cohen"
] | Spherical CNNs are convolutional neural networks that can process signals on the sphere, such as global climate and weather patterns or omnidirectional images. Over the last few years, a number of spherical convolution methods have been proposed, based on generalized spherical FFTs, graph convolutions, and other ideas. However, none of these methods is simultaneously equivariant to 3D rotations, able to detect anisotropic patterns, computationally efficient, agnostic to the type of sample grid used, and able to deal with signals defined on only a part of the sphere. To address these limitations, we introduce the Gauge Equivariant Spherical CNN. Our method is based on the recently proposed theory of Gauge Equivariant CNNs, which is in principle applicable to signals on any manifold, and which can be computed on any set of local charts covering all of the manifold or only part of it. In this paper we show how this method can be implemented efficiently for the sphere, and show that the resulting method is fast, numerically accurate, and achieves good results on the widely used benchmark problems of climate pattern segmentation and omnidirectional semantic segmentation. | [
"deep learning",
"convolutional networks",
"equivariance",
"gauge equivariance",
"symmetry",
"geometric deep learning",
"manifold convolution"
] | Reject | https://openreview.net/pdf?id=HJeYSxHFDS | https://openreview.net/forum?id=HJeYSxHFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Sx0-BssqxN",
"BJeVkETtiH",
"SygRPmTYsH",
"S1ltX7TYsS",
"BJl9iMTKsS",
"SJeURqxxcB",
"HylO6pcycB",
"SJgLAnS0YB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745427,
1573667803642,
1573667685833,
1573667617326,
1573667489972,
1571977933675,
1571954112292,
1571867854419
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2293/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2293/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2293/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2293/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2293/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2293/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2293/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper extends Gauge invariant CNNs to Gauge invariant spherical CNNs. The authors significantly improved both theory and experiments during the rebuttal and the paper is well presented. However, the topic is somewhat niche, and the bar for ICLR this year was very high, so unfortunately this paper did not make it. We encourage the authors to resubmit the work including the new results obtained during the rebuttal period.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"Thank you for your comments. It is true that the present paper does not introduce a new framework like the paper by Cohen et al. However, although that paper contained a detailed continuous mathematical theory, as well as a discretized implementation of the idea for the icosahedron, it lacked an explanation of how exactly the method is to be implemented for manifolds that are not locally flat like the icosahedron. This is certainly not a trivial matter, as there are many ways one might derive a discrete algorithm from the continuous theory, all of which will have different characteristics in terms of runtime efficiency, numerical accuracy, and task performance. After careful consideration and preliminary experiments, we have settled on the convolution algorithm described in the paper, which is based on a non-trivial interpolation scheme.\\n\\nNevertheless, the reviewer is correct that this is not a theoretical paper. In order to strengthen the theoretical side of the paper, we have added appendix E, which contains a theoretical analysis of the equivariance of the regular non-linearity under SO(2) gauge transformations. It shows that pointwise nonlinearities, which tend to perform best, can be used for continuous groups and the number of samples can be selected to trade off computational cost with the equivariance error.\\n\\nWe have also worked to further strengthen the experimental section of the paper. We added comparisons to more prior work and an isotropic baseline to the Spherical MNIST; we demonstrate state of the art on molecular energy prediction; and demonstrate scalability on a high resolutions cosmology dataset - a task Fourier based spherical convolutions would fail at. We hope that the additional experiments added to the revised paper, such as the baseline using only scalar features and isotropic filters, help showing the strengths and weaknesses of various convolutional methods on the sphere.\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Thank you for your comments.\\n\\n1. We have fixed this typo, thank you for pointing it out.\\n2. When an equivariant method uses only scalar features, it necessarily must use isotropic kernels. This can be seen from Case III in Proposition 1 of [1]. This limits expressivity and complicates the detection of orientable features, such as lines. We show this empirically with the Isotropic baseline in the MNIST experiment. The main motivation for using non-scalar features is that it allows for equivariant anisotropic kernels. Additionally, some datasets consist of non-scalar signals, like optical flows, SIFT-like features that measure local properties in different directions, wave polarization, or wind direction. In these cases one has no choice but to use non-scalar features in the input space.\\n3. The higher computational complexity of S2CNN [2] prevents scaling up to high resolution grids, as noted by [3]. Our method does not suffer from this limitation. The fact that in the MNIST experiments, which use a relatively coarse grid, the runtimes are quite similar, is possibly due to the custom CUDA kernels used in [2]. We expect custom kernels can yield big improvements in the runtime of our method and are investigating possible implementations.\\n\\n[1] Kondor & Trivedi 2018\\n[2] Cohen et al. 2018\\n[2] Perraudin et al. 2019\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": [\"Thank you for your comments and kind words on the readability of our paper.\", \"We agree that the regular non-linearities and log map could be clarified more and attempted to do so in the revised version. We have added a proof that shows in the limit of infinite samples in the regular non-linearity, we achieve full equivariance and provide bounds on the equivariance error for finite number of samples.\", \"We have added a comparison to Kondor et al\\u201918 to our MNIST experiment and further comparisons in the additional experiments.\", \"We agree that the difference between the IcoCNN is marginal, except crucially for a task that directly tests equivariance, which is the NR/R experiment for rotated MNIST. This experiment shows that the icosahedral approximation leads to poor equivariance under SO(3) rotations. The other experiments don\\u2019t test for generalisation to arbitrary rotations and thus show less difference between the IcoCNN and our method.\"]}",
"{\"title\": \"Overview of revisions and responses\", \"comment\": \"We thank the reviewers for their valuable comments, which we have taken into account in our revised version.\\n\\nFirst, we would like to stress the difference between our method and other equivariant spherical convolution methods:\\n- Isotropy vs Anisotropy: Methods using only scalar features are restricted to using isotropic filters in order to be SO(3) equivariant, which includes graph methods such as [1]. Alternatively, one can pool over orientations immediately after convolution (as in [2]). In either case, however, detecting the direction of orientable patterns such as lines is complicated. We show this empirically in the isotropic baseline we added to our revised version (see below).\\n- Computational Efficiency: Fourier based methods, such as [3] and [5], are automatically SO(3) equivariant, but scale poorly to high resolution grid, due to the nonlinear complexity of the Fourier transform and various difficulties implementing it efficiently in current deep learning frameworks. Additionally, they can\\u2019t be easily applied to grids on part of the sphere.\\n- Icosahedral CNN: The Icosahedral CNN [4] is fast to compute thanks to the use of conv2d routines and exactly gauge equivariant and as a result automatically equivariant up to 60 discrete symmetries of the icosahedron. However, it is not equivariant to SO(3), while our model is fully SO(3)-equivariant. Additionally, Icosahedral CNNs assume a particular sampling grid whereas our Spherical CNN can admit arbitrary grids. This feature is particularly important when considering the fact that in many applications spheres are discretized differently than the one Icosahedral CNNs and others presume. Finally, it is not straightforward to adjust Icosahedral CNNs to operate over partially observed spherical inputs in an efficient manner as our method.\\n\\nAll in all, to the best of our knowledge, our method is the first spherical convolution which is SO(3) equivariant, supports anisotropic filters, and scales to arbitrary high resolution grids.\\n\\nSecondly, the reviewers have pointed out that the experimental validation could be improved. To address this, we add two new experiments to the revised version. With these experiments, we would like to emphasize both scalability and flexibility aspects of our method using a different sampling strategy and larger dimensional spherical signals as well as cases where our anisotropic filters could potentially lead to better performance in comparison to isotropic counterparts. We list our experimental revisions below.\\n\\n- Additional Experiment #1 (Atomization Energy Prediction): In this dataset, we achieve state of the art compared to other sphere-based methods, in order to expand the experimental validation of our method. \\n- Additional Experiment #2 (Cosmological Model Classification): The resolution of the signals in this problem are a few orders of magnitude larger in comparison to the existing spherical datasets. Also, it requires a different spherical sampling scheme namely, Healpix. Thus we use it to demonstrate our approach's scalability and grid-agnostic aspects. \\n\\nIn addition to those, the following aspects have changed in the revised version:\\n- We added a comparison to [5] in our Spherical MNIST experiment, as requested by reviewer #1.\\n- We added a baseline to our Spherical MNIST experiment in which we use only scalar features and isotropic filters. 
The results show that anisotropic filters are important in this task, as mentioned by reviewer #2.\\n- We prove a bound on the error to the gauge equivariance of the Regular NonLinearity in Appendix E, as requested by reviewer #1.\\n- We clarified the log map, as requested by reviewer #1. We moved the sections regarding the spherical geometry to the appendix to make space for the additional experiments.\\n- We added a figure of the icosphere with exponential/log map to visually support equations as requested by reviewer #1.\\n\\n[1] Perraudin et al. 2019\\n[2] Masci et al. 2015\\n[3] Cohen et al. 2018\\n[4] Cohen et al. 2019\\n[5] Kondor et al. 2018\"}",
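To give unfamiliar readers a feel for the ingredients listed above (log map, gauge, kernel sampling), here is a deliberately oversimplified sketch of one convolution layer on a mesh, restricted to scalar input features so that parallel transport acts trivially. Everything here, including all names, is our own illustration; the paper's actual algorithm handles non-scalar feature types, transporter rotations, and a careful interpolation scheme.

```python
import numpy as np

def gauge_conv_scalar(features, neighbors, log_coords, kernel):
    # features:   (V,) scalar signal on the mesh vertices
    # neighbors:  neighbors[v] = indices of vertices near v
    # log_coords: log_coords[v] = (deg, 2) tangent-plane coordinates of
    #             those neighbors at v (via the log map), expressed in
    #             the gauge (reference frame) chosen at v
    # kernel:     callable mapping (x, y) -> filter weight on R^2
    out = np.zeros_like(features)
    for v in range(len(features)):
        xy = log_coords[v]
        w = kernel(xy[:, 0], xy[:, 1])   # sample anisotropic filter
        out[v] = w @ features[neighbors[v]]
    # Note: with an anisotropic kernel this scalar output depends on the
    # chosen gauges, which is exactly why the full method uses non-scalar
    # (e.g. regular) feature types to retain gauge equivariance.
    return out
```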
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Cohen et al. recently proposed the \\\"Gauge equivariant CNN\\\" framework for generalizing convolutions to arbitrary differentiable manifolds. The present paper instantiates this framework for the case of the sphere.\\n\\nThe sphere is the simplest natural non-trivial manifold to try out gauge invariant networks on, and spherical CNNs have several applications. However, other than the details of the interpolation etc., there is really very little in this paper that is new relative to the original paper by Cohen et al., it reads a bit more like an extended \\\"experiments\\\" section. \\n\\nUnfortunately the experimental results are not all that remarkable either, probably because the tasks are relatively easy, so other SO(3) equivariant architectures do quite well too. Given that there is essentially no new theory in the paper, I would have welcomed a much more thorough experimental section, comparing different architectures, different discretization strategies of the sphere and different interpolations/basis functions.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"1. (p1) ``in almost in all cases\\\" to \\\"in almost all cases\\\"\\n2. (1.1) The authors could explain more about why we would want to consider tensor features. \\n3. They conducted experiments on different datasets, including the MNIST dataset. They achieved good results comparing to baseline spherical CNNs. However, the advantage of this method over S2CNN can be further elaborated, as S2CNN already achieved high accuracy; it seems like the one improvement is the complexity (improved from S2CNN's $O(N \\\\log N)$ to their model's $O(N)$), but the reduction of complexity is not significantly reflected in the training time per epoch (from 380s to 284 s). \\n4. Overall, the paper provides clear theoretical backgrounds on gauge CNNs that justifies their definition of convolution operator only uses the intrinsic structure of the manifold (does not reply on higher dimensional embedding).\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes SO(3) equivariant layer, derived from the recently introduced Gauge equivariant CNN framework. The novel contributions are in taking the Gauge equivariance CNN and finding efficient ways to perform logarithmic mapping, parallel transport, and convolution by the equivariant kernel when applied to the sphere. An interpolation scheme for an improved approximation to global SO(3) symmetry is also discussed. Experimental results on spherical MNIST, climate pattern segmentation, and omnidirectional semantic segmentation demonstrate the usefulness of the proposed method for prediction on a sphere.\\n\\nFor the most part, the paper is very clearly written despite the challenging and technical nature of the topic. In particular, the first four pages provide a nice overview of related work and a clear explanation of the Gauge equivariance framework of Cohen et al\\u201919. Two sections that can benefit from further clarification are the proposed \\\"regular non-linearities\\\" (where it would be nice to show in equation why we have equivariance), and equation (4), the logarithmic map (where I had a hard time mapping the discussion in words to the equation). \\n\\nHowever, the experiments, while satisfactory, are not impressive: one issue is that the results are mostly compared to the results of two relevant papers by Cohen and colleagues. In recent years, there have been other proposals for deep learning on the sphere, and I wonder why experiments do not try to compare with these works? (see Kondor et al\\u201918, Coors et al\\u201918, and others cited in the paper.) Moreover, although in theory, the proposed framework improves the Icosahedral CNN of Cohen et al\\u201919 (by directly operating on the sphere rather than an Icosahedral approximation), the practical improvements over the Icosahedral CNN seem to be often marginal (with one exception in spherical MNIST). Do you have any explanation for this? Is there any setup where you expect the proposed approach would give a substantial improvement?\"}"
]
} |
ryxOBgBFPH | Preventing Imitation Learning with Adversarial Policy Ensembles | [
"Albert Zhan",
"Pieter Abbeel",
"Stas Tiomkin"
] | Imitation learning can reproduce policies by observing experts, which poses a problem regarding policy propriety. Policies, such as human policies or policies on deployed robots, can all be cloned without consent from their owners. How can we protect our proprietary policies from cloning by an external observer? To answer this question we introduce a new reinforcement learning framework, where we train an ensemble of optimal policies whose demonstrations are guaranteed to be useless for an external observer. We formulate this idea as a constrained optimization problem, where the objective is to improve proprietary policies, and at the same time deteriorate the virtual policy of an eventual external observer. We design a tractable algorithm to solve this new optimization problem by modifying the standard policy gradient algorithm. It appears that such a problem formulation admits plausible interpretations of confidentiality and adversarial behaviour, which enables a broader perspective on this work. We demonstrate explicitly the existence of such 'non-clonable' ensembles, providing a solution to the above optimization problem, which is calculated by our modified policy gradient algorithm. To our knowledge, this is the first work regarding the protection and privacy of policies in Reinforcement Learning. | [
"Imitation Learning",
"Reinforcement Learning",
"Representation Learning"
] | Reject | https://openreview.net/pdf?id=ryxOBgBFPH | https://openreview.net/forum?id=ryxOBgBFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"gbRYb-R4IC",
"Hyer3HfnsB",
"rkl_PHfhjB",
"SkgsaNM3oS",
"S1xnK7fniH",
"BkxgCNX0tr",
"HJeehMRpYS",
"SkeyZ3HTYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745397,
1573819820843,
1573819744492,
1573819587031,
1573819267748,
1571857607682,
1571836583738,
1571802102878
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2291/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2291/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2291/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2291/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2291/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2291/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2291/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Although the reviewers appreciated the novelty of this work, they unanimously recommended rejection. The current version of the paper exhibits weak presentation quality and lacks sufficient technical depth. The experimental evaluation was not found to be sufficiently convincing by any of the reviewers. The submitted comments should help the authors improve their paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of Changes\", \"comment\": \"1. To better situate this in other \\u201cadversarial\\u201d methods in RL, we changed the Introduction to create the distinction between current methods, which serve as attacks to the RL algorithms, and ours, which is purely a protection method to prevent BC.\\nWe included a new section in Related Work to reflect this change, as well as moved Related Work section to section 3 (right after preliminaries).\\n2. Edited the methodology to make it more clear \\u2014 these were all tabular representations. \\n3. Notation fixes \\u2014 preliminaries were edited to be consistent with remaining notation. We make it clear we have similar notation to that of other policy gradient papers.\\n4. Figure 1 changed to incorporate both figure 1 and 2 from previous iteration to improve method clarity.\\n5. Figure 2 caption moved to right after the Equation 4,5\\n6. Equation 5 broken to two pieces, also to add clarity\\n7. Table 1: added more seeds for PG-APE, as well as more explanation for how the numbers were collected, and what they represent.\"}",
"{\"title\": \"Response\", \"comment\": \"We would like to thank AnonReviewer3 for their comments, and their excitement in possible future directions that may stem from our work.\\n\\n1. We have fixed many typos, and made other changes to make the paper easier to follow / read.\\n2. While more experiments may enhance our work, we stand by that the current experiment is enough to show validation in our concept and idea. Our environment is the most straightforward environment to demonstrate our \\u201cunclonableness\\u201d concept, which we elaborate in the experiment section. \\n3. Second paragraph in 2.3 Policy Ensembles has been removed. As well, we have shifted away from the privacy of human demonstrations. We would like to thank AnonReviewer3 for the suggestion on this motivation. We have made changes to the introduction, to focus on inputs generated adversarially, as well as situate it in the current RL space of mainly adversarial attacks on learning policies (whereas ours can be considered a defense against imitation learning).\\n4. We thank AnonReviewer3 for the paper suggestions, and have incorporated them into the new Related Work paragraph, where we situate our work within the field of Adversarial and Private Reinforcement Learning.\\n5. Min-max approaches generally come from min-maxing the same quantity. In our case, we are minimizing and maximizing different objectives. However, scaling using GANs may be potential areas of future work.\\n6. Equations 4,5 had descriptions / explanations that were in the caption of a figure, have now been moved to be right after introducing the equations.\\n7. Bottom of page 5: Currently, popular continuous policies are gaussians or mixture of gaussians. However, we suspect that it may take more expressive policies (that are not gaussian / mixture of gaussian) to fully take advantage of our method in continuous action space. Otherwise, exploitations of how the cloner clones could occur, and would not lead to interesting examples. Consider the classic CartPole Swing-up. If the BC, as usual, is performed by minimizing L-2 loss, then simply having 2 experts, one which moves right, and the other which moves left would cause the cloned policy to remain still at the starting point. This would be uninteresting, as it is exploiting our lack of ability to express policies in continuous state space. Similarly, we would like to shy away / put less emphasis from work that exploits NN. \\n8. The Stay action is not necessary, however we found the plots to be more informative (as the agent would run into the wall, which would be equivalent to Stay).\\n9. We noticed that with more contexts, there was more stable learning (less variance across seeds), although that may also be due to hyperparameter changes needed to train more experts. The result / reward difference would also increase with more experts, although not by any significant amount -- n=3 gets reward difference of 30.83, 0.66 std across 3 seeds, up from 27.81 with n=2.\\n10. We are not considering the scenario of finite data and poor generalization, as we instead are considering when the collector has as much data as desired, to create the perfect clone. We show that mathematically and empirically, we can make this perfect clone bad.\"}",
"{\"title\": \"Response\", \"comment\": \"We would like to thank AnonReviewer2 for their insightful comments.\\n\\nSpecifically, we targeted to address your two concerns. (1) To make the paper easier to follow, we situate the paper in the current RL literature and explain our approach in the introduction, as well as improving the overall language. (2) To clarify methodology, we have rephrased, reordered, the Method and Experiments section. However, we would like to note that our Appendix should answer any questions about practical implementation.\\n\\n1. We thank AnonReviewer2 for their paper suggestions. We have included them accordingly. As well, we included a new section to properly place our work with respect to other adversarial RL work. While the idea of noising demonstrations may seem relevant, we are actually completely separate, as we assume that the observer can clone based off of perfect observations. Our novel approach focuses on protecting the policies, rather than an explicit attack any learners.\\n2. As this section is a discussion and future work, we have written what we speculate to be exciting avenues. Specifically, if our method cannot train bad ensembles in a particular environment, that would imply that behaviour cloning should excel in cloning expert demonstrations of that environment. The notion of using cloning sequential policies is not new, in fact is very successful [1], which is why we feel that it is important to include how to tackle such types of cloning. Similarly, the paper AnonReviewer2 referenced above wrote the future work and outlined a potential algorithm.\\n3. We have added a bit more detail in the Algorithm box, and changed the figures to be more illustrative of the algorithm. As well, we have made changes to accommodate the requests.\\nWe have \\u201ccorrected\\u201d the notation for $A(s_t, a_t)$ in the preliminaries -- we clarify that $R_t$ and $A_t$, wich appear frequently in PG papers such as GAE and PPO and papers that vary PG such as Strouse et al [2], are the sample estimates. We additionally would like to differentiate between the returns from trajectories collected under different policies, namely $\\\\pi_{c^{(i)}}$, and $\\\\pi_o$, which is why we superscript the policy as is also done in other papers. Based on our revisions, our notation follows the current conventions, and is consistent throughout the paper.\\n4. Related work has been moved, to better situate the paper within the field of adversarial RL and confidential ML. As well, we would like to thank AnonReviewer2 for their suggestions on additional papers to include.\\n5. \\u201c...multiple trajectories to learn from\\u2026 In fact,\\u201d We are unsure if the reviewer has forgotten to write something, or if there was a typo. However, we did notice, with our other preliminary experiments, that cloning 2 tabular policies, with an RNN to predict which policy to use, was effective in combatting our proposed strategy. This experiment is why we mention in our Future Work section that strong representations should and can clone no matter what.\\n6. \\u201cOtherwise you could also just cut out the variants bit since it's not necessary. \\u201c\\nThe original intention was to cite the different possible ways to estimate the gradient used in PG, which is nicely summarized in GAE. We have changed the section to clarify this.\\n7. 
While we take into the account of the suggestion to remove the gridworld visualizations, however we feel that it is quite instructive to see with the colours how the two experts learn to sacrifice \\u201cscrew over\\u201d the observer. \\n8. \\u201cDoes Table 1 represent returns for rolled out policies after learning or across all episode returns during learning\\u201d The description of Table 1 has been updated to address this. It was mentioned prior that a reasoning for discrete state space was for the closed form solution of the expected returns from each state, which is how the returns are calculated. As well, the caption is more descriptive of what it contains. Most noteworthy is that we added 2 more seeds, which did not affect the statistics very much...\\n9. The only possible \\u201coverpowering\\u201d would be how the cloner can only have 1 table, while we have $n$ tables. However, this would be analogous to real-life cloning humans, as there are many humans, and only one parameterization of the cloned policy.\\n\\n[1] Rahmatizadeh, Rouhollah. \\\"Learning robotic manipulation from user demonstrations.\\\" (2017).\\n[2] Strouse, D. J., et al. \\\"Learning to share and hide intentions using information regularization.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"title\": \"Response\", \"comment\": \"We would like to thank the official blind reviewer for their comments.\", \"in_response_to\": \"\\u201cHowever, there are no robust empirical experiments that the proposed method could achieve comparable performance/accumulated return as the policy ensemble (PE)\\u201d \\nWe assume that the reviewer meant an optimal policy when they refer to PE \\u2014 in this case, we can choose how optimal the PE trained by PG-APE via \\u00df, how well the APE should perform. Our derivations would lend us to believe that performance of the PE trained through PG-APE should not be of any concern.\\n\\nWe have added literature in the intro and related work section regarding privacy in ML and RL.\", \"addressing_the_minor_mistakes\": \"$J$ has been labeled in Equations 4,5. $\\\\mathcal{M}$ is the MDP as defined in the Preliminaries\\n$\\\\alpha$ was a parameter used in the derivation, and not a hyperparameter for implementations (although it is chosen via $\\\\beta$).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: The paper introduces a method for generating trajectories which prevent behavioral cloning in a policy gradient setting by learning varying experts which try to minimize the ability of a cloned policy. It runs experiments on a grid world to validate empirically that cloning is unsuccessful.\", \"recommendation\": \"While this is a novel concept and interesting, I cannot recommend acceptance in its current state. The paper was a bit hard to follow and I found the experiments not robust enough to fully characterize the method at this point. It is unclear whether this method really would prevent cloning given an apples-to-apples comparison. My understanding from the paper -- which was a bit hard to follow -- is that cloned policies were tabular while the APE policies were NNs. I would be more confident in results if more environment variations were tested, the cloned policies used more current and apples-to-apples comparisons, and overall if there were more clear details about the methodology.\", \"comments\": [\"It might be worth perusing the differential privacy and adversarial attack literature to think about whether demonstrations can simply be noised to retain information while crashing performance. This work seems relevant for example (it was put online in June which is sufficiently before the September deadline to mention it I believe): Behzadan, Vahid, and William Hsu. \\\"Adversarial Exploitation of Policy Imitation.\\\" arXiv preprint arXiv:1906.01121 (2019).\", \"In the discussion:\", \"\\\"We found in our preliminary results that using an RNN classifier which outputs p(c|\\u03c41:t) simply ended up in with either optimal policies or crippled policies. In both cases, there was a relatively minor difference in performance between the policy ensemble and the cloned policy.\\\" --> There are no quantitative results for this so either results should be included and discussed or this should be future work.\", \"The algorithm box doesn't really add a whole lot of information other than saying that trajectories are collected and then gradients are updated. It would be really nice to have a very clear picture of what's happening at each point in the algorithm. In its current state the paper is hard to follow and decipher this sequence. See for example Algorithm one in: https://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning.pdf .\", \"Notation-wise, R(t) is a bit unusual notation for the RL literature, the advantage is usually r + \\\\gamma V(s') - V(s), where r+\\\\gamma V(s') is the action value Q(s,a). Given that the advantage is denoted as A(s,a), it would be clearer I think to use the Q(s,a) notation. Also the notation changes from section 2.2 to section 3.2 from A(s,a) to A(t). Keeping consistent notation would make this paper a lot easier to read.\", \"The related work section is in the middle of the paper. it'd be nice to have it earlier to set the context of the work.\", \"In the multiple policies section, a recent work has shown how to learn multiple policies from multiple experts using a mixture of experts framework -- though they frame it as options: Henderson, Peter, Wei-Di Chang, Pierre-Luc Bacon, David Meger, Joelle Pineau, and Doina Precup. 
\\\"Optiongan: Learning joint reward-policy options using generative adversarial inverse reinforcement learning.\\\" In Thirty-Second AAAI Conference on Artificial Intelligence. 2018.\", \"Part of the way this defeats behaviour cloning is through the assumption that there are multiple trajectories to be learned from. It would be interesting to see if methods like the one above or any of the others mentioned can recover optimal performance from noisy trajectories by similarly learning multiple policies. In fact,\", \"\\\"Policy Gradient (PG) (Sutton et al., 2000) and its variants (Schulman et al., 2015) aim to directly learn\", \"the optimal policy \\u03c0, parameterized by \\u03b8.\\\" --> I think some other citations of variants should be added for the final version instead of only referencing Schulman 2015. There are a lot now, so maybe adding PPO, DDPG, and a few others might be nice. Otherwise you could also just cut out the variants bit since it's not necessary.\", \"All first quotation marks are backwards in the document\", \"I think the experiments ran were a bit lacking in robustness and details. Since this is an adversarial method, I would expect more variance across seeds and 3 seeds may not be enough to characterize this. Table 1 has +/- but does not state what this represents. Standard Deviation or Standard error? Does Table 1 represent returns for rolled out policies after learning or across all episode returns during learning? For the behavioural cloning method, it says a \\\"tabular policy\\\" was trained. Does this mean that the experts were trained using policy gradients and neural networks while the behavioural cloning method used a tabular policy? If so, I think this would be at a detriment to the method being tricked. I think it is a necessary condition to validate this method across several gridworld environment variations, seeds, and with more robust cloning methods (if in fact the behavioural policy was underpowered (tabular vs. nn). Overall, it would be great to have more details. While the visualizations of the gridworld itself were nice, I think they took up a lot of space which could be replaced with more detailed explanations and robust quantitative results.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a method to learn an ensemble of policies that is hard to imitate from their rollout trajectories. I like the idea of introducing the problem of privacy in reinforcement learning, and it is quite essential. However, some concerns are raised after checking the draft, and I believe the paper could be improved if some of the questions are addressed:\\n\\n* The current experiment could be an interesting demonstrative part to show how the algorithm works. However, there are no robust empirical experiments that the proposed method could achieve comparable performance/accumulated return as the policy ensemble (PE). I think some experiments on popular benchmarks like Mujoco simulation environment, robotics learning tasks, and Atari games are needed to make the point. The paper will become more convincing if the argument is proved on those benchmarks. Also, more experts should be explored (n > 2) in the experiments. \\n\\n* It is better to have some mathematical/theoretical analysis of the learning behavior of APE. For instance, is there a theoretical guarantee that APE could achieve comparable performance as PE?\\n\\n* The paper should discuss more details/analysis of the algorithm, like the choice of $\\\\alpha$ and $\\\\beta$, etc., which I think will affect the algorithm a lot. \\n\\n* Some related literature on privacy in machine learning could be discussed in the related work section. \\n\\n\\n====Minor that leads to confusion:\\n-No mention about J and M before Alg 1; It is assumed to be the objective function and environment \\n-No mention of the hyper-parameter $\\\\alpha$ after equation 2.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper addresses the problem of poisoning behavioral cloning using an optimized ensemble of demonstrators. The goals is allow the ensemble to still achieve an expected return above a certain threshold while minimizing the return of a policy trained via behavioral cloning.\\n\\nThis is a very exciting and novel paper, but it is not yet ready for publication. There are many typos and the paper is difficult to read at times. Also, the experiments are still very basic. While interesting, further experiments in more complicated discrete or continuous domains would greatly enhance the work. \\n\\nI would recommend not focusing on the privacy of human policies. I think a better motivation is to focus on theoretical ideas of adversarial inputs to behavioral cloning to study robustness as well as potential counter-intelligence strategies for autonomous agents. \\n\\nThis work has similarities to machine teaching and poisoning attacks. It would be interesting to see if recent methods for machine teaching for IRL [1] or poisoning for RL [2] can be used to solve the proposed problem. It would be good to situate this work within these related works. It seems like the proposed problem can be seen as a kind of anti-machine teaching for IRL where the goal is to find a set of good demonstrations that are maximally uninformative.\\n\\nSecond paragraph in 2.3: It's unclear what is the point of this paragraph. I would recommend not focusing so much on human demos.\\n\\nThe min-max approach seems related to GANs and Generative Adversarial Imitation Learning. Can something similar be used to scale this approach to high-dimensional tasks?\\n\\nEquations (4) and (5) are difficult to unpack. It would be nice add a little more explanation and intuition.\", \"bottom_of_page_5\": \"What do you mean that continous policies can't be parameterized? Aren't most policy gradient algorithms continous with parameterized policies?\\n\\nIs the no-op action required to make BC fail?\\n\\nWhy only ensembles of 2? If you have 3 what happens in the grid env?\\n\\nThe authors mention that given an expressive enough policy, it should be possible to imitate any policy and thus the worst-case experts cannot prevent cloning. I would argue that a stronger representation such as a deep network would make the problem easier since deep networks are very susceptible to adversarial attacks and will likely over fit and poorly generalize given finite amounts of demonstrations.\\n\\n[1] Brown et al. \\\"Machine teaching for inverse reinforcement learning: Algorithms and applications.\\\"\\n[2] Yuzhe et al. \\\"Policy poisoning in batch reinforcement learning and control.\\\"\"}"
]
} |
S1evHerYPr | Improving Generalization in Meta Reinforcement Learning using Learned Objectives | [
"Louis Kirsch",
"Sjoerd van Steenkiste",
"Juergen Schmidhuber"
] | Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that decides how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency. | [
"meta reinforcement learning",
"meta learning",
"reinforcement learning"
] | Accept (Spotlight) | https://openreview.net/pdf?id=S1evHerYPr | https://openreview.net/forum?id=S1evHerYPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9WG3Xv9-de",
"SlKuUi-3Q",
"SygfXwUnoB",
"SylY3hWssS",
"HylZtS-9sS",
"B1eo8QIEsS",
"rkeLWm84iH",
"BklpFf8VsS",
"B1ln3l84sS",
"HkxPiyXRKr",
"HJgkRUO6Kr",
"ryxWqYyctr"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1581696374042,
1576798745366,
1573836569795,
1573751985464,
1573684601514,
1573311315480,
1573311230294,
1573311109008,
1573310644221,
1571856286701,
1571813062955,
1571580297266
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2289/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2289/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2289/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2289/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2289/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2289/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2289/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2289/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2289/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2289/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2289/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Summary of changes for the camera ready version\", \"comment\": [\"Based on the area chair\\u2019s and reviewers\\u2019 feedback we have made the following additional changes to the camera ready version:\", \"We have re-ran all experiments to include 6 meta-train x 2 meta-test seeds (totaling 12 meta-test seeds) for MetaGenRL. In order to make this computationally feasible (as meta-training involves a population of agents) we have reduced the number of meta-training environment iterations from 1M to 600K, which gave similar results.\", \"For RL^2 these experiments now include additional environment interactions (from 50M to 100M) and also 6 meta-train x 2 meta-test seeds.\", \"For EPG these experiments now include up to 1 billion environment interactions per run (as opposed to 400M before), which forced us to consider 3 meta-train x 2 meta-test seeds (i.e. a total of 6 during meta-testing).\", \"We have added a new experiment in which we ablate the number of agents used in the population during meta-training MetaGenRL and confirm that a larger population is indeed beneficial.\", \"We have also added an experiment that investigates the benefit of meta-training on additional environments (with up to 40 agents), but were unable to report major improvements, possibly related to the current form of the objective function.\", \"While we previously reported that increasing the number of inner gradient updates from 1 to 3 was beneficial, we now find (after using additional seeds) that its performance is similar (with slightly larger variance) to using only a single gradient step. As before, we find that using 5 steps limits performance due to a large variance.\", \"We have incorporated several additional references to related work.\", \"Finally, we emphasize that at this point all code to reproduce our results is available online.\"]}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes a meta-RL algorithm that learns an objective function whose gradients can be used to efficiently train a learner on entirely new tasks from those seen during meta-training. Building off-policy gradient-based meta-RL methods is challenging, and had not been previously demonstrated. Further, the demonstrated generalization capabilities are a substantial improvement in capabilities over prior meta-learning methods. There are a couple related works that are quite relevant (and somewhat similar in methodology) and overlooked -- see [1,2]. Further, we strongly encourage the authors to run the method on multiple meta-training environments and to report results with more seeds, as promised. The contributions are significant and should be seen by the ICLR community. Hence, I recommend an oral presentation.\\n\\n[1] Yu et al. One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning\\n[2] Sung et al. Meta-critic networks\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes after rebuttal\", \"comment\": [\"We would like to thank the reviewers for their thoughtful reviews and useful feedback.\", \"In the following we summarize the modifications that we have made to the paper since the start of the rebuttal.\", \"Fixed minor issues regarding the notation (R1)\", \"Improved the discussion of the LSTM as a general function approximator (R1)\", \"Updated Table 1 to include a comparison to the non-meta baseline results that was previously in the appendix (R2)\", \"Clarified why MetaGenRL can not implement DDPG, which motivates our comparison to PPO and the policy gradient baselines (R2)\", \"Clarified how we perform model selection (R2)\", \"Clarified how the baselines are tuned and the hyer-parameters are obtained (R2)\", \"Ensured that the captions describe what the error bars are for all plots (R2)\", \"Incorporated related work on transfer / generalization in RL (R2)\", \"Moved the algorithm box for MetaGenRL to the main text, and added a separate box for meta test time, to clarify the differences to meta training (R2)\", \"Added an overview figure to better explain the different interactions in MetaGenRL (R2)\", \"Removed the part on MetaGenRL meta-learning exploration strategies, which was only meant for illustration (R3)\", \"Investigated randomness during meta-training on the Cheetah and Lunar environments. While results from additional meta-training seeds did not alter our conclusions in any way, we do believe that it is valuable to also include additional meta-training seeds for all experiments in the future (see below) (R3)\", \"In the future we will also\", \"Include an experiment, where MetaGenRL has been meta-trained on **many** different environments, to assess its capabilities (suggested by R1 & R3)\", \"Include additional seeds for meta-training for **all** experiments, to provide an indication of variance during meta-training (and meta-testing) in all settings.\", \"Some of the reviewers' suggestions involved moving content from the Appendix to the main text, and we also added an additional figure. While this has slightly increased the length of the main content, we believe that the improved exposition makes this worthwhile.\"]}",
"{\"title\": \"Second response to reviewer #2 with comments on improvements\", \"comment\": \"Thank you for taking the time to discuss this further.\\n\\nDuring meta-test time we reinitialize the critic and the policy with random weights and only keep the weights of the objective function. Then we train in parallel: (1) A critic using the TD-error to estimate V. (2) The policy by following \\\\nabla_\\\\phi L_\\\\alpha(\\\\cdot).\\nThe objective function is kept fixed during meta-test time.\\n\\n> \\u201chow is your algorithm not able to implement something like ddpg?\\u201d\\n\\nSimilar to policy gradient methods \\\\nabla_\\\\phi V = 0, i.e. V is a constant w.r.t. to \\\\phi. We do this by stopping the gradient. DDPG requires to backpropagate through the value function, thus L representing the identity function would not suffice. We will update the paper to clarify this distinction.\\n\\n> \\u201cduring meta-training L is only used to update Q\\u201d\\n\\nThis is a misunderstanding. L_\\\\alpha is only used to update the policy \\\\pi_\\\\phi (\\u2018learning\\u2019). Q_\\\\theta is only updated by the TD-Error (equation 5). We then make use of Q_\\\\theta to update L_\\\\alpha (equation 6, which requires differentiating twice) **only during meta-train time** (\\u2018meta-learning\\u2019). We have added an overview figure 1 to the updated submission that visualizes these different interactions. \\n\\n> \\u201cwhy did you decide to have a different meta-training and meta-testing procedure?\\u201d\\n\\nThe meta-test procedure is designed to evaluate whether the objective function is able to train a randomly initialized agent from scratch. Hence, we only consider \\u2018learning\\u2019 by using the objective function, and prevent potential confounders that may arise when also simultaneously meta-learning. \\n\\nIn contrast, during meta-training we also have \\u2018meta-learning\\u2019 due to updating the objective function. The analog to this is a researcher designing a new objective function (\\u2018meta-learning\\u2019) and then using it to train RL agents (\\u2018learning\\u2019).\\n\\n> \\u201cRelated work\\u201d\\n\\nThank you for pointing out this related work, we were aware of some of these already, and our goal is to incorporate all of these as part of a broader discussion (in the related work section) on generalization and transfer to other environments.\\n\\n> Paper update\\n\\nOn Friday we will upload another updated version of our paper to incorporate your additional suggestions (such as the separate meta test-time box), while also incorporating the changes suggested by the other reviewers.\"}",
"{\"title\": \"Question\", \"comment\": \"Thank you for your in depth response! This has satisfied me and I plan on bumping my score up to a 6 -- weak accept.\\n\\nJudging from your comments I think I am misunderstanding your meta-test time procedure. I originally thought something like equation 6 was used for both meta-training AND meta-testing. I still have questions however. In particular, how is your algorithm not able to implement something like ddpg?\\n\\nYou state -- \\\" During evaluation (meta-test time), the meta-learned objective function can then be used to train a randomly initialized RL agent in a new environment.\\\"\\n\\nThe learned objective function, L, is a function of trajectory, the current policy, and some value estimate.\\n\\nI believe your value function which is implemented by a Q function. I assume this Q function is learned from scratch when meta-testing as well though I couldn't find a reference to this. If L implements something akin to an identity function (and ignores all temporal aspects) I believe the resulting algorithm will look very similar to DDPG no?\\n\\nWould it be possible to include a meta-test time algorithm box as well?\\n\\nPutting this aside, why did you decide to have a different meta-training and meta-testing procedure? Both versions require collecting data, a replay buffer, training a Q function, and so on so it seems like they could do the same thing (equation 6 as opposed to directly minimizing L). Its not obvious to me that minimizing L will even do something reasonable as during meta-training L is only used to update Q (I believe?)\\n\\n=====\\nCompletely unrelated nit about your comment -- in particular (2). While I agree that the work showing transfer of RL algorithms across environments is interesting and under explored, there has been some instances in past work. This is also quite subjective -- what is a \\\"different environment\\\" anyway? First, https://arxiv.org/pdf/1812.01054.pdf shows transfer across atari games. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/retro-contest/gotta_learn_fast_report.pdf shows transfer across different levels using RL2 type approach. https://arxiv.org/pdf/1606.04671.pdf shows some transfer across atari games too. EPG (https://papers.nips.cc/paper/7785-evolved-policy-gradients.pdf) also tests very different tasks. I do believe your claim is technically true but its ignoring the fact that moving from (1) - (2) is continuous.\"}",
"{\"title\": \"Response to reviewer #2 with comments on improvements [2/2]\", \"comment\": \"> \\u201cHyperparameters of your baseline do not appear to be tuned (taken from appendix) where as for your method has a number of choices. How are you tuning these choices?\\u201d\", \"the_ddpg_baseline_was_derived_from_https\": \"//spinningup.openai.com/ (a tuned version on mujoco environments) and shares the same parameters with MetaGenRL where possible. REINFORCE and PPO also use tuned parameters from the same source. We have not done an extensive hyperparameter search for MetaGenRL. We have validated the RL^2 parameters to work on a bandit-experiment from the original paper and derived parameters for the mujoco environments from the tuned configurations of https://ray.readthedocs.io/en/latest/rllib.html. For EPG we have used official code already tuned for mujoco benchmarks.\\n\\n> \\u201cPlease include what error bars are for all plots.\\u201d\\nWe believe only figure 4 (now figure 5) was missing this and corrected it.\"}",
"{\"title\": \"Response to reviewer #2 with comments on improvements [1/2]\", \"comment\": \"Thank you for your review and valuable feedback!\\n\\nBefore we proceed into detail we would first like to clarify some aspects that motivated this work, including our choice of experiments, and baselines.\\n\\nThe premise of meta-RL is (1) that meta-learning learning rules for RL allows us to outperform existing human-engineered approaches on single environments, and (2) that learned learning rules are able to incorporate knowledge about learning in one task (or environment), to improve learning others. While prior work in meta-RL has show-cased (1), (2) was only shown for very similar tasks and it was unclear to what extent (2) is possible in a more realistic setting consisting of vastly different environments. MetaGenRL presents a novel approach to meta-RL that for the first time showcases both of these aspects: it outperforms hand-design algorithms such as PPO, REINFORCE and sometimes even DDPG; and it is able to generalize to vastly different environments.\\n\\nRegarding (1), while it is important to compare to DDPG it is equally important to consider the other baselines. In particular, while the meta-learned objective functions support learning rules similar to policy gradient estimators, they do **not** support DDPG in their current form. Indeed, during meta-testing there is no component that resembles DDPG in any way (a value function is only used as a constant input to the objective function). Based on this, one can not expect MetaGenRL to outperform DDPG, while on the other hand it is reasonable to expect it to be competitive with REINFORCE and PPO. As we show in the paper, MetaGenRL in fact greatly outperforms these algorithms, while DDPG is still better overall and remains a good target for future work that considers more expressive meta-learned objective functions.\\n\\nRegarding (2), it is also important to compare MetaGenRL to prior meta-RL approaches such as RL2 and EPG that were unable to showcase (2) to this extent. We find that RL2 overfits (which is a non-trivial observation) and that EPG is extremely sample inefficient.\\n\\nAll in all we argue that MetaGenRL is an important step in realizing the potential of meta-RL.\\n\\n> \\u201cThis suggests that this added complexity is not aiding in capacity and or hurting training.\\u201d \\n\\nNote that the learned loss does not modify a DDPG algorithm and there is only added complexity for meta-learning, not at meta-test time.\\n\\n> \\u201cThis makes me fear there is something indirect and not interesting occurring \\u2026\\u201d\\n\\nAs mentioned, MetaGenRL can not implement/modify DDPG, therefore something interesting must occur.\\n \\n> \\u201cGiven how weak EPG is (as you stated for number of frames) and how RL^2 will never generalize across these different tasks it's hard to get a sense of the numbers.\\u201d\\n\\nAs mentioned, it is important, fair, and meaningful to compare to EPG, and RL2. We updated the table to also include the comparison to PPO, DDPG, and on/off-policy REINFORCE from the appendix.\\n\\n> \\u201cI don't understand why meta-training on lunar and transferring to hopper does better than meta-training on hopper (table 1, middle column).\\u201d\\n\\nThe table shows that meta-training on Lunar & Cheetah performed better in general on multiple test-time environments compared to meta-training on Cheetah & Hopper. 
It is difficult to precisely pinpoint how different combinations of environments affect meta-training.\\n\\n> \\u201c... I would appreciate if it put the meta-test performance on the same graph as meta-train performance \\u2026\\u201d\\n\\nMeta-training performance and (final) meta-test performance are not directly comparable. A population of sub-optimal agents (performance-wise) may already yield an objective function that can train an optimal agent from scratch. Note that while there are some fluctuations in meta-test performance, we observed an increasing trend on average as seen in figure 3 (now figure 4).\\n\\n> \\u201cHow do you select when to test these algorithms?\\u201d\\n\\nWe have tested the neural objective functions after 1 million timesteps of training per agent and this was not tuned in any way.\\n\\n> \\u201cKey details such as meta-training are also not discussed in depth nor ablated\\u201d\\n\\nCould you please elaborate on what aspects you find missing? It is our understanding that all meta-training details are available. Regarding ablations it is computationally infeasible to consider all possible variations, and we used our available resources on ablating the neural objective function, which we believe to be most important. \\n\\n[1/2, Continued in next reply]\"}",
"{\"title\": \"Response to reviewer #3 with comments on improvements\", \"comment\": \"Thank you for your review and valuable feedback!\\n\\nBefore we proceed into detail we would first like to clarify some aspects that motivated this work, including our choice of experiments, and baselines.\\n\\nThe premise of meta-RL is (1) that meta-learning learning rules for RL allows us to outperform existing human-engineered approaches on single environments, and (2) that learned learning rules are able to incorporate knowledge about learning in one task (or environment), to improve learning others. While prior work in meta-RL has show-cased (1), (2) was only shown for very similar tasks and it was unclear to what extent (2) is possible in a more realistic setting consisting of vastly different environments. MetaGenRL presents a novel approach to meta-RL that for the first time showcases both of these aspects: it outperforms hand-design algorithms such as PPO, REINFORCE and sometimes even DDPG; and it is able to generalize to vastly different environments.\\n\\nRegarding (1), while it is important to compare to DDPG it is equally important to consider the other baselines. In particular, while the meta-learned objective functions support learning rules similar to policy gradient estimators, they do **not** support DDPG in their current form. Indeed, during meta-testing there is no component that resembles DDPG in any way (a value function is only used as a constant input to the objective function). Based on this, one can not expect MetaGenRL to outperform DDPG, while on the other hand it is reasonable to expect it to be competitive with REINFORCE and PPO. As we show in the paper, MetaGenRL in fact greatly outperforms these algorithms, while DDPG is still better overall and remains a good target for future work that considers more expressive meta-learned objective functions.\\n\\nRegarding (2), it is also important to compare MetaGenRL to prior meta-RL approaches such as RL2 and EPG that were unable to showcase (2) to this extent. We find that RL2 overfits (which is a non-trivial observation) and that EPG is extremely sample inefficient.\\n\\nAll in all we argue that MetaGenRL is an important step in realizing the potential of meta-RL.\\n\\n> \\u201cI think the most important baseline that is compared against is not RL2, but DDPG\\u201d\\n\\nAs we discussed, it is very important to also compare to RL2 in the context of (2), and to consider the performance of MetaGenRL in relation to other baselines in the context of (1).\\n\\n> \\u201cBecause the proposed algorithm does not depend on the observed states, it generalizes much better, but is also much slower than RL2\\u201d\\n\\nIt is not clear that the poor generalization performance of RL2 is only due to conditioning on the state. In our experiments we observed that RL2 does not perform any meaningful learning during meta-testing (it has simply overfitted to the environment). Hence, while it is indeed \\u201cfaster\\u201d in this regard this is mostly due to a limitation on RL2\\u2019s part. Nonetheless, MetaGenRL is able to outperform most of the time, even in those cases.\\n\\n> \\u201cCheetah, Hopper and Lunar Lander are very simple environments. Evaluation on (slightly) larger scale environments would show that the algorithm can scale.\\u201d\\n\\nWe agree that it would be interesting to consider more complex environments to test the limits of MetaGenRL. 
Nonetheless, the Mujoco simulator based environments are well established benchmarks in the context of (continuous control) (meta-)RL. It allows us to reuse hyperparameters, and compare more easily against other algorithms.\\n\\n> \\u201cThe authors claim that the algorithm allows sharing of exploration strategies...\\u201d\\n\\nWe agree that it is unlikely that the current architecture with the described inputs is able to learn an exploration scheme and we have not evaluated this empirically. The example in Section 3.3 was only meant for illustrative purposes, and we have removed this.\\n\\n> \\u201cFig 5a vs. Fig 2b: Shouldn't the performance of MetaGenRL be the same in both?\\u201d\\n\\nThose results were obtained using a different seed for meta-training. In preliminary experiments we found that our results were consistent across different meta-training seeds (we already average over many agents), and so we have focused on different meta-testing seeds for computational reasons. That said, it would be better to also average over meta-training runs (yielding 6 * 6 = 36 configs), which we will soon add to the paper for Cheetah, Lunar -> Hopper to provide an indication of the variance also due to meta-training.\"}",
"{\"title\": \"Response to reviewer #1 with comments on improvements\", \"comment\": \"Thank you for your review and valuable feedback!\\n\\nBelow we will address each of your comments in detail.\\n\\n(1) We agree that it would be desirable to consider additional environments, including meta-training on more than 2 environments. However, using a large population of agents and many environments requires a lot of compute that was not available at the time of submission. Nonetheless, it is our goal to incorporate a single large experiment (meta-training on many environments using many agents) to demonstrate the capabilities of MetaGenRL in this regime and to further support our claims.\\n\\nRegarding additional meta-test environments, note that the current experiments already consider multiple meta-test environments. For example, table 1 shows two configurations of meta-training environments and three meta-test environments each.\\n\\n(2) We agree that it is unlikely that the RNN in practice will learn known variance and bias reduction techniques (although it would be hard to evaluate that). Indeed, based on the drop in performance when not providing V one could argue that the RNN was unable to learn an effective variance-reduction technique, although there could be other factors at play. For example, note that when V was included, our experiments showed that the objective function worked better in many tested cases than commonly used policy gradient algorithms such as REINFORCE with the Generalized Advantage (GAE) estimator, PPO with the GAE, or off-policy REINFORCE with GAE.\\n\\nFinally, we would like to thank you for pointing out various notational issues and suggestions for improving clarity. We have incorporated several of these to improve the paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to meta learn the objective function of a policy gradient algorithm using second order gradients of the objective function w.r.t the state-action value Q.\\n\\nThis is an interesting approach, however, I think the experimental evidence is not sufficiently convincing. \\n\\n- In particular, I think the most important baseline that is compared against is not RL2, but DDPG: RL2 is not designed to generalize but to learn quickly on new tasks from the training-task distribution. Because the proposed algorithm does not depend on the observed states, it generalizes much better, but is also much slower than RL2. On the other hand, it shares a lot of design choices with DDPG: Using TD3 and Double Q-learning, as well as using as objective function the Q-values. \\nLooking at Figures 2, it is not clear that the proposed algorithm is substantially better than DDPG. \\n- Cheetah, Hopper and Lunar Lander are very simple environments. Evaluation on (slightly) larger scale environments would show that the algorithm can scale. \\n- The authors claim that the algorithm allows sharing of exploration strategies, which I don't believe can be the case based on it's current design.\\n- Lastly, I have a question about Fig 5a vs. Fig 2b: Shouldn't the performance of MetaGenRL be the same in both? It appears to perfom much better in Figure 2b.\\n\\nMinor remark/question (didn't influence score):\\nThe authors claim in the very first paragraph (and in the 4th paragraph) that inductive biases in humans are learned by natural evoluation through \\\"distilling the collective learning experiences of many learners\\\" by \\\"learning from learning experiences\\\". I'm not familiar with the relevant literature, but this seems like a strong statement which I believe should be supported by a citation. \\n\\nEdit because I can't make my response visible to authors anymore:\\nThank you for your response to my review and apologies for my delayed answer.\\n\\nAfter reading your responses I agree that PPO is a fairer comparison than DDPG and that you are outperforming PPO is promising.\\nI further agree that it is relevant to show that RL2 overfits (although I personally don't find that very surprising - see below). \\n\\nHowever, I still don't think that RL2 is a relevant baseline for this approach. There's a fundamental trade-off between the speed of adaptation and the amount of overfitting. \\nIf I want to adapt very quickly (like RL2 does), I need to leverage as much task-information as possible, thereby overfitting to the task-distribution. \\nOn the other hand, MetaGenRL is slow, it has training speed comparable with gradient-based approaches (by generalizes better by construction because it e.g. doesn't receive states as inputs). \\nConsequently, because MetaGenRL doesn't offer any speed improvements over gradient-based approaches, it should be compared to them, and not RL2.\\n\\nTaken both the positive results vs. PPO and the negative results vs. 
DDPG, together with the fact that there's no learning speed advantage for MetaGenRL, I would see it as an interesting, and promising, research direction, but so far without proof that it can advance state of the art, as it looses to RL2 in terms of speed and DDPG in terms of final performance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"I am borderline leaning towards reject on this paper. I enjoyed reading this work and found the ideas interesting but the empirical comparisons are confusing and not convincing. I hope the authors continue to work to improve this!\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper presents a novel meta reinforcement learning algorithm capable of meta-generalizing to unseen tasks. They make use of a learned objective function used in combination with DDPG style update. Results are presented on different combinations of meta-training and meta-testing on lunar, half cheetah, and hopper environments with a focus on meta-generalization to vastly different environments.\", \"motivation\": \"The work is well motivated and is tackling an important problem. There are a number of design decisions presented and only some are validated experimentally. Given the complexity of many existing meta-rl methods this seems fine but could obviously be improved upon either with more empirical work or with some guiding theory.\", \"experiments\": \"Overall the experiments are not convincing to me. Given that this is the majority of your paper is empirically based this is my main criticism. More detailed comments follow.\\nFigure 2a concerns me that just meta-training on lunar performs worse than ddpg (what your algorithm is based on). This suggests that this added complexity is not aiding in capacity and or hurting training. Can you comment on this? This result also casts doubt onto figure 2b, which, in isolation seems like an extremely promising example of meta-generalization. This makes me fear there is something indirect and not interesting occurring (e.g. the learned loss modifies with the DDPG algorithm which happens to increase noise in generated samples which improve performance only on some environments and hurts in others for example.)\\nTable 1 should include hand designed algorithms imo. Given how weak EPG is (as you stated for number of frames) and how RL^2 will never generalize across these different tasks it's hard to get a sense of the numbers. Your appendix does include a figure like this which shows ddpg performs quite well. Additionally, I don't understand why meta-training on lunar and transferring to hopper does better than meta-training on hopper (table 1, middle column). Can you comment on this?\\nWhile figure 3 is cool, I would appreciate if it put the meta-test performance on the same graph as meta-train performance. From eyeballing the curves it looks like it decreases at 100k iterations then finally increases again at 200k. This is strange. This also seems fraught from an empirical comparison point of view. How do you select when to test these algorithms? Ideally you would have a meta-validation set of tasks then only meta-test on the selected task but I see no mention of this.\\nKey details such as meta-training are also not discussed in depth nor ablated. From the details and curriculum scheme presented in the appendix this seems like quite a feat. Further study of these factors could be useful.\\nHyperparameters of your baseline do not appear to be tuned (taken from appendix) where as for your method has a number of choices. How are you tuning these choices? 
Once again a meta-validation set would be the principled thing to tune against.\\nFinally, the experimental setup presented here is quite complicated. There are a ton of factors at play -- exploration, meta-generalization, meta-training, inner-training, instability of ddpg, so on. These all complicate the resulting picture. Having some simplified / more controlled setup to demonstrate these pieces would be greatly appreciated.\", \"other_suggestions\": \"\", \"section_3_generalization\": \"I think you mean meta-generalization.\\nPlease include what error bars are for all plots.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a meta reinforcement learning algorithm called MetaGenRL, which meta-learns learning rules to generalize to different environments. The paper poses an important observation where learning rules in reinforcement learning to train the agents are results of human engineering and design, instead, the paper demonstrates how to use second-order gradients to learn learning rules to train agents. Learning learning rules in general has been proposed and this paper is another attempt to further generalize what could be learned in the learning rules. The idea is verified on three Mujoco domains, where the neural objective function is learned from one / two domains, then deployed to a new unseen domain. The experiments show that the learned neural objective can generalize to new environments which are different from the meta-training environments.\\n\\nOverall, the paper is a novel paper and with clear motivation, I like the paper a lot! Hope that the authors could address the following concerns and make the paper even better:\\n\\n1. The current experiment setup is a great proof-of-concept, however it seems a bit limited to support the claims in the paper. The meta-training has only at most two environments and the generalization of the neural objective function is only performed at one environment. It would be great if the authors could show more results with more meta-training environments (say, 10 meta-training environments) and more meta-testing environments (the current setup is only with one);\\n\\n2. The paper states a hypothesis that LSTM as a general function approximator, it is in principle able to learn variance and bias reduction techniques. However, in practice, due to learning dynamics and many other factors, it's not necessary true, i.e., how many samples are required for an LSTM to learn such technique is unclear. At the same time, at Page 8, Section \\\"Dependence on V\\\" actually acts as an example of LSTM couldn't figure out an effective variance-reduction method during the short meta-training time. The authors may want to put more words around the learnability of variance-bias trade-off techniques.\", \"notation_issues_which_could_be_further_improved\": \"1. Page 2, \\\"Notation\\\" section and all of the following time indexing. Note that in Equation (1), r(s_1, a_t) has discount gamma^1, which is not true, I'd recommend the authors to follow the time indexing starting from 0, so that the Equation (1) is correct. (Alternatively, the authors could change from gamma^t into gamma^{t-1});\\n2. Section \\\"Human Engineered Gradient Estimators\\\" is missing the formal introduction of the notation \\\\tau;\\n3. Overall, the authors seem to use \\\\Phi and \\\\theta interchangeably, it's better to use a unified notation across the paper;\\n4. In the paper, the authors choose \\\\alpha to represent the neural net for learning the objective function, to make it clearer for the readers, the authors could consider to change \\\\alpha into \\\\eta, because \\\\alpha is often considered as learning rate notation;\\n5. I'd suggest the authors to rewrite the paragraph in Page 3 \\\"MetaGrenRL builds on this idea of ...., using L_\\\\alpha on the estimated return\\\". 
This describes a key step in the algorithm while at the moment it's not very clear to the readers what's going on there;\\n6. Section 3.1 is missing a step to go from Q into V;\\n7. The authors could consider to describe the details of the algorithms in a more general actor-critic form, instead of starting from DDPG formulation. It would make the methods more general applicable (for example, extension to discrete action space).\"}"
]
} |
rkevSgrtPr | A closer look at the approximation capabilities of neural networks | [
"Kai Fong Ernest Chong"
] | The universal approximation theorem, in one of its most general versions, says that if we consider only continuous activation functions σ, then a standard feedforward neural network with one hidden layer is able to approximate any continuous multivariate function f to any given approximation threshold ε, if and only if σ is non-polynomial. In this paper, we give a direct algebraic proof of the theorem. Furthermore we shall explicitly quantify the number of hidden units required for approximation. Specifically, if X in R^n is compact, then a neural network with n input units, m output units, and a single hidden layer with {n+d choose d} hidden units (independent of m and ε), can uniformly approximate any polynomial function f:X -> R^m whose total degree is at most d for each of its m coordinate functions. In the general case that f is any continuous function, we show there exists some N in O(ε^{-n}) (independent of m), such that N hidden units would suffice to approximate f. We also show that this uniform approximation property (UAP) still holds even under seemingly strong conditions imposed on the weights. We highlight several consequences: (i) For any δ > 0, the UAP still holds if we restrict all non-bias weights w in the last layer to satisfy |w| < δ. (ii) There exists some λ>0 (depending only on f and σ), such that the UAP still holds if we restrict all non-bias weights w in the first layer to satisfy |w|>λ. (iii) If the non-bias weights in the first layer are *fixed* and randomly chosen from a suitable range, then the UAP holds with probability 1. | [
"deep learning",
"approximation",
"universal approximation theorem"
] | Accept (Poster) | https://openreview.net/pdf?id=rkevSgrtPr | https://openreview.net/forum?id=rkevSgrtPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PVBwD637v",
"yJ_4TI698N",
"HJxGV3eP2r",
"S1gn0Z1isS",
"BJgppgyiir",
"BylcVlJssH",
"B1gx-ia29r",
"S1eH2aMfqB",
"H1xtdl4CYr"
],
"note_type": [
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1581714200520,
1576798745336,
1574534185717,
1573741011901,
1573740741080,
1573740594387,
1572817655795,
1572117933332,
1571860593229
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2288/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2288/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2288/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2288/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2288/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2288/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2288/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2288/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Lemmas extracted from proof of Theorem 3.1, and technical oversights corrected\", \"comment\": \"Thank you very much for appreciating our work! Based on your suggestion, we have extracted several lemmas from the proof of Theorem 3.1. These lemmas have been formulated to be more general than what is required in the proof of Theorem 3.1, so that they could (hopefully) be useful to other authors. We have also carefully checked through the paper; all typos have been fixed, and there were a couple of technical oversights that were spotted and duly corrected. Note that the statement of Theorem 3.1 is correct, whether or not the second instance of lambda in (i) is replaced by 1/lambda. Overall, the strategy for proving Theorem 3.1 remains the same, but we have reorganized the proof to improve clarity.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This is a nice paper on the classical problem of universal approximation, but giving a direct proof with good approximation rates, and providing many refinements and ties to the literature.\\n\\nIf possible, I urge the authors to revise the paper further for camera ready; there are various technical oversights (e.g., 1/lambda should appear in the approximation rates in theorem 3.1), and the proof of theorem 3.1 is an uninterrupted 2.5 page block (splitting it into lemmas would make it cleaner, and also those lemmas could be useful to other authors).\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"UPDATE TO MY EARLIER REVIEW\\n============================\\n\\nSince this paper presets new findings that will be of significant interest to much of ICLR's audience, and the paper is is well-written, I am changing my rating to \\\"Accept\\\". Since Reviewer #1 did not submit a review and Reviewer #2 indicated that (s)he does not feel well-qualified to review this paper (it is very much on the theoretical side after all), it would be great to get one further review from an area chair or otherwise qualified person. \\n\\nMY EARLIER REVIEW\\n=================\\n\\nThis this exciting submission presents a new proof of Leshno's version of the universal approximation property (UAP) for neural networks -- one of the foundational pillars of our understanding of neural networks. The new proof provides new insights into the universal approximation property. I consider these the main contribution of the paper. Specifically, the authors\\n- provide an upper bound on the required width for the neural network\\n- show that the approximation property still holds even if strong further requirements are imposed on the weights of the first or last layer. \\n\\nI rate this submission a weak accept. It\\u2019s a very good paper. The work makes useful contributions that should and will be of interest to many in the field. The paper is generally well-written.\", \"some_remarks\": [\"Being somewhat long, the \\u201cProof of Theorem 3.1\\u201d would be a much better read if the authors prefixed it with an outline of the strategy that the proof takes.\", \"The authors point out that the lack of dependence of Theorem 3.1 on epsilon is surprising, and cite Lin\\u2019s work from 2017 who previously found such an independence. Lin\\u2019s derivation of the epsilon-independent UAP is much more intuitive than that of this submission, in which the epsilon independence really pops out somewhat magically and for me only made sense when I read the paper again. I would encourage the authors to add to Lin\\u2019s paper\\u2019s citation sentence that this paper motivates the epsilon independence well. Alternatively, the authors could add a few sentences to their paper to provide intuition on how the epsilon-independence comes about in their line of argument.\"]}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you very much for appreciating our work!\\n\\nWe have made changes to our paper to improve clarity. Please see our responses to Reviews #3 and #4 for the details of these changes.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thank you very much for appreciating our work!\\n\\n(Optimality of Thm. 3.2) This is a great question! We believe the upper bound for $N$ in Theorem 3.2 is optimal, and we have included a new Appendix B in the revised paper to discuss this (conjectured) optimality. To put your question into context, it was conjectured by Mhaskar (1996) that there exists some smooth non-polynomial activation function such that at least $\\\\Omega(\\\\varepsilon^{-n})$ hidden units is required to uniformly approximate every function in the class of $C^1$ functions with bounded Sobolev norm. Mhaskar provided a heuristic argument for why this conjecture should be true. If Mhaskar's conjecture is indeed true, then our upper bound in Theorem 3.2 is optimal.\\nFor specific activation functions sigmoid and ReLU, it is already known that $(N \\\\log N) \\\\in \\\\Omega(\\\\varepsilon^{-n})$ for the class of $C^1$ functions with bounded Sobolev norm, so there is still a gap between the lower and upper bounds for $N$ in these specific cases. It would be interesting to find optimal bounds for these cases.\\n\\n(Reference to random features) Thank you for pointing this out! We have included a new paragraph in the Discussion section (Sec. 4).\\n\\n(clarity of proof of Thm 3.1) Following a suggestion by AnonReviewer3, we have prefixed the \\\"Proof of Theorem 3.1\\\" with an \\\"Outline of strategy for proving Theorem 3.1\\\". We hope that the new outline helps improve clarity.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you very much for appreciating our work!\\n\\nFollowing your suggestion, we have prefixed the \\\"Proof of Theorem 3.1\\\" with an \\\"Outline of strategy for proving Theorem 3.1\\\". We hope that the new outline helps improve clarity, and hopefully captures the underlying intuition of our proof. In particular, we have highlighted (at least an important part of) the underlying intuition for why our upper bound is independent of epsilon.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies the representation power of single layer neural networks with continuous non-polynomial activation, and specifically, provided a refinement for the universal approximation theorem:\\n1. Established an exact upper bound on the width needed to uniformly approximate polynomials of a finite degree (to any accuracy of which the upper bound is independent), and\\n2. using this error-free bound to deduce a rate (of width) for approximating continuous functions.\\n\\nThe writing of the paper is concrete and solid. The techniques used in establishing the results are interesting, in that:\\n1. The proof for polynomial approximation (Thm 3.1) is direct, via a close examination of the Wronskian of the target polynomial function, and\\n2. the analysis provided that the abilty to universally approximate is also preserved after placing certain restriction on the magnitude of the weights in the approximating neural network. Consequently, this property is inherited by continuous function approximation to which the result is extended (Thm 3.2).\\n3. This analysis and some of the results derived in the proof may be used for other analyses, e.g. representation power of multilayer networks.\\n\\nSome further discussion of the results may be of interest to the readers. \\n- (Optimality of Thm 3.2). When the result in Thm 3.1 is extended to general continous functions via Jackson's theorem, to what extend does the rate deteriorate? What does the rate look like when using certain common activations (such as ReLU, sigmoid).\\n- (Reference to random features). Thm 3.3 appears to be related to random feature representation, whose approximating ability has been studied in prior works. Some comment on those results may be beneficial (e.g. https://arxiv.org/abs/1810.04374).\\n- Although already a straightforward proof, it seems natural, and as a result may promote the presentation and clarity, to organize the proof to Thm 3.1 using smaller parts, which currently spans over 2 pages.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This this exciting submission presents a new proof of Leshno's version of the universal approximation property (UAP) for neural networks -- one of the foundational pillars of our understanding of neural networks. The new proof provides new insights into the universal approximation property. I consider these the main contribution of the paper. Specifically, the authors\\n- provide an upper bound on the required width for the neural network\\n- show that the approximation property still holds even if strong further requirements are imposed on the weights of the first or last layer. \\n\\nI rate this submission a weak accept. It\\u2019s a very good paper. The work makes useful contributions that should and will be of interest to many in the field. The paper is generally well-written.\", \"some_remarks\": [\"Being somewhat long, the \\u201cProof of Theorem 3.1\\u201d would be a much better read if the authors prefixed it with an outline of the strategy that the proof takes.\", \"The authors point out that the lack of dependence of Theorem 3.1 on epsilon is surprising, and cite Lin\\u2019s work from 2017 who previously found such an independence. Lin\\u2019s derivation of the epsilon-independent UAP is much more intuitive than that of this submission, in which the epsilon independence really pops out somewhat magically and for me only made sense when I read the paper again. I would encourage the authors to add to Lin\\u2019s paper\\u2019s citation sentence that this paper motivates the epsilon independence well. Alternatively, the authors could add a few sentences to their paper to provide intuition on how the epsilon-independence comes about in their line of argument.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors derive the universal approximation property proofs algebraically. They note that this holds even with very strong constraints on the non-bias weights.\\n\\nThey assert that their results are general to other kinds of neural networks and similar learners. They leave the paper with a question regarding limitations on bias weights. \\n\\nI do not feel qualified to review this paper. I have opted for a weak accept since it seems thorough and the conclusions offer promise for other applications. However, I will defer to other, more qualified reviewers who have more carefully reviewed the paper than I have.\"}"
]
} |
HJl8SgHtwr | VIMPNN: A physics informed neural network for estimating potential energies of out-of-equilibrium systems | [
"Jay Morgan",
"Adeline Paiement",
"Christian Klinke"
] | Simulation of molecular and crystal systems enables insight into interesting chemical properties that benefit processes ranging from drug discovery to material synthesis. However, these simulations can be computationally expensive and time-consuming despite the approximations of Density Functional Theory (DFT). We propose the Valence Interaction Message Passing Neural Network (VIMPNN) to approximate DFT's ground-state energy calculations. VIMPNN integrates physics prior knowledge such as the existence of different interatomic bonds to estimate more accurate energies. Furthermore, while many previous machine learning methods consider only stable systems, our proposed method is demonstrated on unstable systems at different atomic distances. VIMPNN predictions can be used to determine the stable configurations of systems, i.e. stable distances for atoms -- a necessary step for the future simulation of crystal growth, for example. Our method is extensively evaluated on an augmented version of the QM9 dataset that includes unstable molecules, as well as a new dataset of infinite- and finite-size crystals, and is compared with the Message Passing Neural Network (MPNN). VIMPNN has comparable accuracy with DFT, while allowing for 5 orders of magnitude in computational speed-up compared to DFT simulations, and produces more accurate and informative potential energy curves than MPNN for estimating stable configurations. | [
"neural network",
"chemical energy estimation",
"density functional theory"
] | Reject | https://openreview.net/pdf?id=HJl8SgHtwr | https://openreview.net/forum?id=HJl8SgHtwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SUTjQkAw4y",
"BJxU7QjKiH",
"rJgfufsYiB",
"rkevAhctoB",
"rkeUP-b8oB",
"S1lomgQcYS",
"Hket-APeFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745300,
1573659422064,
1573659241861,
1573657807343,
1573421405808,
1571594274680,
1570958848726
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2287/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2287/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2287/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2287/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2287/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2287/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper considers the problem of estimating the electronic structure's ground state energy of a given atomic system by means of supervised machine learning, as a fast alternative to conventional explicit methods (DFT). For this purpose, it modifies the neural message-passing architecture to account for further physical properties, and it extends the empirical validation to also include unstable molecules.\\n\\nReviewers acknowledged the valuable experimental setup of this work and the significance of the results in the application domain, but were generally skeptical about the novelty of the machine learning model under study. Ultimately, and given that the main focus of this conference is on Machine Learning methodology, this AC believes this work could be more suitable in a more specialized venue in computational/quantum chemistry.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for their comments. We have responded to the feedback:\\n\\n\\\"In 4.2 it is explained how different ways of incorporating [...] I would suggest systematically evaluating the different options and including the results in an appendix. \\\"\\n\\nWe agree that the paper would benefit from adding the exploration of different bond type integration strategies into an appendix. We have added these values in Table 3.\\n\\n\\\"In table 1 results are shown for the MPNN baseline [...] It would be good to include experimental results to motivate this.\\\"\\n\\nThe purpose of Table 1 was to evaluate each proposed knowledge integration strategy in turn, against the baseline of no integration at all, and against MPNN. We have now made this clearer in the text of section 5.1. It is true that VIMPNN, in section 5.2 and following, includes all these improvements. We agree that some intermediate evaluations with combinations of some improvements would provide a fuller analysis of their effect in combination. We have now added additional results in Table 4.\\n\\nIn the sake of time, given that the bond-type integration is the most effective improvement, we\\u2019ve elected to experiment with various combinations of bond-type plus-auxiliary instead of all possible combinations.\\n\\n\\\"As acknowledged in the paper, the idea of using bond-type information was already in Gilmer et al. [...] suggesting that there is more prior knowledge being exploited than just bonds.\\\"\\n\\nAs we demonstrate in Table 1, our new strategy for integrating bond type information is more effective than the one in Gilmer et al. . This strategy is to make the information transfer within the NN closer to the physics of atomic interaction through using bond type specialised communication channels, while Gilmer et al. were only using bound type information as a feature. Our principle is directly inspired by the physics of atomic interactions, as we now explain in more detail in sections 4.2 and 5.1.\\nHowever, it may be implemented in various ways, and we experimented with several as the reviewer rightly highlighted, which indeed are driven by experiments rather than physics.\\n\\nIn addition, more physics knowledge is exploited thanks to our second knowledge integration strategy. Indeed, we present the estimation of auxiliary properties as a way to integrate prior knowledge into the model. Indeed, the choice of auxiliary properties allows directing the attention of the model to properties of interest within the problem \\u2014 in the present case the type of atoms, as highlighted by the newly added experiment in Section 5.2, some of their physical properties that are relevant to the problem, or higher level properties such as the localisation of stable atom positions. The reviewer\\u2019s comment indicates that this needs to be clarified in the text, which we now do in Section 5.2. More experiments will be performed in future work to further explore and evaluate this knowledge integration strategy.\\n\\nAs per the reviewer\\u2019s suggestion, we modify the abstract to better explicit the way physics knowledge is exploited into our model: VIMPNN integrates prior physics knowledge, namely the information exchange between atoms being driven and modulated by different interatomic bonds, and the relevance of specific physical properties to the problem of estimating more accurate energies.\\n\\n\\u201cIt produces comparable accuracy to that of DFT [...] 
It would be good to be explicit about that, and also discuss the speed relative to the MPNN baseline\\\"\\n\\nIndeed, that is correct. We have added the clarification for both the speed w.r.t DFT and MPNN in section 1: It produces comparable accuracy to that of DFT while also improving the computation time by 5 orders of magnitude. We demonstrate that our method also produces more accurate energy estimations than that of MPNN, while being 30% slower at 0.007 seconds per energy estimate, which is marginal compared to the gain from DFT and may be improved with some obvious code optimisation.\\n\\n\\\"\\u201cThe change of atomic distances are performed isomorphically\\u201d - I would say \\u201cisometrically\\u201d. \\\"\\n\\nThank you for pointing this out, we\\u2019ve changed it to \\u201cisometrically\\u201d throughout the article.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We would like to thank the reviewer for carefully evaluating our paper and for his/her constructive suggestions. Our responses are as follows:\\n\\n1. There may be a confusion coming from what is defined or meant by \\\"ground state\\\" and \\\"equilibrium\\\", i.e. whether it concerns the structural ground state (positions of atoms) or the electronic ground state. We thank the reviewer for pointing this out, and we clarify this in the revised version of our article. We always compute the electronic ground state, therefore we have made this clear in section 1.\\n\\n2. Yes. The training data consists of all data augmentations for 80% of the configurations. This means, that the training data for the molecule dataset is 8000 different molecules, each with 10 variations of interatomic distance. VIMPNN learns from examples of (un)stable configurations. Then, it performs inference on unseen systems with no prior knowledge of the stability of the configuration. We make this more clear in section 5.\\n\\n\\\"Why is training performed separately [...] from QM9-style to crystal style, etc ?\\\"\\n\\nTraining was done on the different datasets to demonstrate the versatility of VIMPNN on different case studies, as we now highlight in section 5. Though, we do agree that transfer learning would be a very interesting experiment as it may improve VIMPNN\\u2019s generalisation. We thank the reviewer for this suggestion and will include it in future works, as there may not be enough time to include it in the current article.\\n\\n\\\"Isn't it interesting to see how much training on an augmented data set [...]\\\"\\n\\nWe like the idea of comparing the performance with the augmented dataset and on the original dataset, respectively. It is indeed possible that the augmentation improves on the estimation accuracy. However, we fear that the comparison may not be fair, as one model would be trained on 10 times more data than the other.\\n\\n\\\"This kind of comparison [...] it may be detrimental to the test accuracy?).\\\"\\n\\nAs with point 2b), we have not combined the datasets so far, but in future experiments it may indeed be interesting to do so, to investigate the generalisation ability of the model.\\n\\n3. While strictly speaking crystals are periodic in nature, what we are simulating in this dataset is crystal growth. When creating our dataset we use a crystal structure as a start point, then add atoms one by one on the surface of the growing crystal following the lattice pattern. Even if some atoms are missing in the pattern and the systems are not fully periodic, they are representative of a growing crystal. We make this clearer in sections 3.3. Would the reviewer consider \\u2018crystal growth dataset\\u2019 to be a more suitable name for the dataset and \\u2018crystalline systems\\u2019 for its elements?\\n\\n4. We have edited section 4.1 to provide more details on the different components of MPNN.\\n\\n5. Yes, the bond type coefficient has the same value for all bonds of the same type. The notation was indeed badly chosen and we have modified it in the revised version of the paper as lambda_BT. We also provided the equations for the other approaches where we felt it would disambiguate the explanations, with the exception of approach b ii) which would have required detailing the GRU equations, which would take too much space and may be out of the scope of this paper.\\n\\n6. 
With MPNN, the best performance for the 13 physical properties were obtained through training a separate MPNN on each property. Although we can also train one VIMPNN per property, we decided to focus on the ground-state energy as a proof of concept. This property is of particular interest for us because it allows finding stable interatomic distances, which is relevant to our augmented datasets and to simulating crystal growth. We may experiment with additional physical properties in future works, either with specialised VIMPNN models or in combination within a unique VIMPNN model.\\n\\n7. In Table 1, the last three lines indeed correspond to \\u201cno BT information + a single auxiliary estimate\\u201d. The aim of this table was to evaluate the impact of each augmentation of the model, so in isolation. The results of using BT information AND the 3 auxiliary estimations are provided in Table 2. This is what we denote as VIMPNN architecture. As shown in Table 1, adding the auxiliary estimations has a small positive impact on the performance. Table 2 shows that this small impact is also present when BT information is used. We now clarify and comment on these in Sections 5.1, 5.2.\\n\\n8. We have added the splits sizes for the training, testing, and validation sets into the second paragraph of section 5.\\n\\n9. We have made section 5.2 more concise as suggested, but we also added some additional experiments following the suggestions of other reviewers.\\n\\n10. Thank you for the suggestion. We have updated section 5.4 to be more concise and accurate in its description, and we improved it with a concluding statement on the current interpretation and future use of this visualisation.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"We appreciate the detailed feedback given by the reviewer. We have prepared a response:\\n\\n\\\"many different modifications (although no substantially different from each other) are proposed and tested (although no results about the different modifications are reported - it could be nice to have them in an appendix).\\\"\\n\\nWe have reported the additional results in Table 3 of the Appendix.\\n\\n\\\"The authors also considered the idea of adding additional learning modules (and a related loss) to help the model learn more 'physics interpretable' hidden states. While it does not seem to give notable gains here, it is an interesting idea and I believe deserves further experimentations in the future.\\\"\\n\\nWe thank the reviewer for his encouragements. We have added an experiment in Section 5.2 to examine the benefits of one of these additional learning modules in learning to handle two types of crystals (Al and Cu) simultaneously. We also added some evaluations of performance when combining these modules to the bound type driven architecture in the Appendix. Further experimentations will be done in future works.\\n\\n\\\"1. No details are given about the training of the models. I think a small paragraph (or larger and reported in the appendix) should be added.\\\"\\n\\nWe have improved the description of the training procedure in Section 5, and added a section on hardware and training hyperparameters to the appendix.\\n\\n\\\"2. Even if it builds upon previous work, the (VI)MPNN model may be further explained. For example, what type of functions are M_t and R? The explanation on the considered modifications of MPNN may be clearer (maybe with the introduction of a more mathematical notation). \\\"\\n\\nMPNN has been explained in more detail in Section 4.1, especially the purpose and functioning of M_t and R. We also improved the description of VIMPNN in Section 4.2.\\n\\n\\\"3. How long is the message diffusion (T)? What is the effect of larger / smaller T\\u2019s?\\\"\\n\\nFor all our experiments we use 3 diffusion steps which is the default value of MPNN. We clarify this in the new \\u2018Implementation details\\u2019 section of the Appendix. We followed the recommendations of Gilmer et al. and tested with 1 to 8 timesteps, and found empirically that 3 iterations works best for VIMPNN as well.\\n\\nThe question of the effect of smaller/larger T is definitely interesting, and we have started investigating it, as can be seen in the new Table 5 in the appendix. However, we feel that more experiments are needed to answer it satisfactorily, therefore this will be addressed in a future paper.\\n\\n\\\"4. What\\u2019s the point of equations (5) (6) (7)? They are exactly the same and they do not add any information[...]\\\"\\n\\nWe agree that these equations are not very informative, and we have removed them. We added some pointers as to how M_t and R are implemented instead, i.e. using neural networks.\\n\\n\\\"5. Table 1, Auxiliary estimates: Are these the results obtained by the model a.ii trained to jointly learn the energy and the properties i) ii) iii) ? In what sense they improve on the baseline?.\\\"\\n\\nThe results in table 1 show the performance of each of the physics integration methods taken in isolation. They are only combined in Section 5.2 and later. We have now clarified this in Section 5.1. We also added results of each auxiliary estimate combined with model a.ii in Table 4 of the appendix.\\n\\n\\\"6. [...] 
Isn\\u2019t in fact the VIMPNN architecture the same as MPNN with the modification a.ii? In what sense do you integrate a.ii in it?\\\"\\n\\nMPNN (with no bound type information, so the baseline in Section 5.1) is indeed the base model which we improve by adding our proposed physics integration strategies, namely BT specialised communication channels and auxiliary estimations. We rephrase this sentence to better reflect this: we update the baseline model of Section 5.1 with our proposed physics integration strategies, namely bond type-specialised communication channels and node updates (case a.ii) of Section 4.2 and all 3 of the auxiliary estimations to create VIMPNN.\\n\\n\\\"7. If I understood correctly, the main final objective is to be able to characterize the minima of the energy. In this case, could you assess the performances of the two methods (MPNN) and (VIMPNN) by measuring some kind of distance from the approximated minima from the actual one?\\\"\\n\\nThank you for the suggestion, we have included this to table 2 to show the mean absolute distance from the true minima and the estimated one.\\n\\n\\\"Typos:\\nSection 5.3 first line: \\u2018We investigate the VIMPNN\\u2019s the ability \\u2026\\u2019 -> \\u2018We investigate the VIMPNN\\u2019s ability\\u2019\\nSection 5.2 Fourth sentence: \\u2018The first seeks to demonstrates a the model\\u2019s \\u2026\\u2019 -> \\u2018The first seeks to demonstrates the model\\u2019s\\u2026\\u2019\\\"\\n\\nThank you for pointing out these errors. They have been amended in the revised copy.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies approximation of the potential energy of molecules by a message passing architecture. The work builds upon [Gilmer et al., 2017] and the contributions are two-fold:\\n1) The creation of new datasets to learn and test such architectures on and the augmentation of an existing dataset in order to account for energies at non-equilibrium states.\\n2) A proposed modification to the MPNN architecture proposed in [Gilmer et al., 2017], in order to account for physical properties in the message-passing procedure.\\nThe performances of the architectures are studied with numerous numerical experiments.\\nThe paper is overall well-written and clear.\\n\\nThe new dataset utility is sound and well-motivated. Unfortunately I can not further motivate upon this, as I am not familiar with this area.\\n\\nFrom the point of view of the proposed architecture, the work is quite incremental. The bond type information, previously included as feature, is now transferred to an architectural modification. On the other hand, many different modifications (although no substantially different from each other) are proposed and tested (although no results about the different modifications are reported - it could be nice to have them in an appendix). This motivates the \\u2018weakly accepted\\u2019.\\n\\nThe authors also considered the idea of adding additional learning modules (and a related loss) to help the model learn more 'physics interpretable' hidden states. While it does not seem to give notable gains here, it is an interesting idea and I believe deserves further experimentations in the future.\\n\\nThe experiments are numerous and various, and they offer a very good overview on the goodness of the model (and its limitations). They first compare with the baseline on the augmented dataset, and they show notable gains on the MPNN baseline. The ability of the network to reproduce the energy curve at different interatomic distances is then studied on the different dataset and in different settings, showing gains over the baseline. The authors also report some negative results and experimental interpretation of the model hidden states, which are also an important contribution in my opinion.\", \"further_comments\": \"1. No details are given about the training of the models. I think a small paragraph (or larger and reported in the appendix) should be added.\\n\\n2. Even if it builds upon previous work, the (VI)MPNN model may be further explained. For example, what type of functions are M_t and R? The explanation on the considered modifications of MPNN may be clearer (maybe with the introduction of a more mathematical notation). \\n\\n3. How long is the message diffusion (T)? What is the effect of larger / smaller T\\u2019s?\\n\\n4. What\\u2019s the point of equations (5) (6) (7)? They are exactly the same and they do not add any information. It would be more useful to explain what type of function R is in my opinion.\\n\\n5. Table 1, Auxiliary estimates: Are these the results obtained by the model a.ii trained to jointly learn the energy and the properties i) ii) iii) ? In what sense they improve on the baseline? This part was not clear to me.\\n\\n6. 
Section 5.2: \\u2018[\\u2026] we combine our proposed physics integration strategies, namely bond type\\nspecialised\\nnode updates (case a.ii) of Section 4.2) and auxiliary estimations of physical properties,\\ninto the VIMPNN model [\\u2026]\\u2019. I do not understand this sentence. Isn\\u2019t in fact the VIMPNN architecture the same as MPNN with the modification a.ii? In what sense do you integrate a.ii in it?\\n\\n7. If I understood correctly, the main final objective is to be able to characterize the minima of the energy. In this case, could you assess the performances of the two methods (MPNN) and (VIMPNN) by measuring some kind of distance from the approximated minima from the actual one?\", \"typos\": \"Section 5.3 first line: \\u2018We investigate the VIMPNN\\u2019s the ability \\u2026\\u2019 -> \\u2018We investigate the VIMPNN\\u2019s ability\\u2019\\n\\nSection 5.2 Fourth sentence: \\u2018The first seeks to demonstrates a the model\\u2019s \\u2026\\u2019 -> \\u2018The first seeks to demonstrates the model\\u2019s\\u2026\\u2019\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper tackles the problem of estimating the electronic structure's ground state energy of a given atomic system by means of supervised machine learning, as a fast alternative to conventional explicit methods (DFT).\\nThis is done by improving on a previous method, MPNN, Message Passing Neural Networks, and in particular by including information on bond type as input, so that the NN can learn the appropriate weight for messages going through bonds of that type. In addition, training on several target labels (multi-regression) is attempted, with the idea that more physical outputs may help building better hidden representations (on this, there are mixed results). Separately, training sets are enriched with non-equilibrium structures, so as to confront the NN with more diverse data.\\n\\nThe method is tested on 3 kind of training sets. The first is a simple but time-consuming augmentation of QM9, with inter-atomic distances varied, so as to increase the training set's size (and in particular, including non equilibrium configurations). The second consists in a periodic and thus infinite simple crystal structure (with, again, variations in inter-atomic distances, enriching the dataset). The third is a pseudo-cristalline structure with atoms randomly placed on a regular grid, forming a somewhat random structure, also named crystal (this is not a very good name).\\n\\nThe paper is overall rather well written, sometimes being a bit cumbersome (long sentences), but mostly it is stating clearly what is done or discovered. The work is situated within the existing literature (that I am not familiar with at all). The idea of using physics to guide architecture choices is gaining a lot of attention recently and seems to be well-suited to this particular problem, and well applied. Several ways to use the bond type information have been attempted in this work, and several of them are reported and compared (a couple of them are discarded). The results convincingly show that using bond type information indeed increases performance, both for small systems and for regular crystals. The impact of performing multi-regression is less important, but still positive. For large ''random crystals'', the method does not perform very well, and this represents a challenge for future work. Such a confession on the method's limitations is welcomed.\\n\\nGiven the idea (using physical information as bond type) is clearly and honestly presented, produces significant improvement compared with previous works, and has perspective for multiple future developments, I recommend acceptation of this paper.\\n\\n\\nThere are however a number of points that could be improved.\\n\\n1. There is a physical mistake that is not crucial but should be corrected, when talking about the ground state, and in particular before this sentence: ''accurate ground-state energy estimation of out-of-equilibrium molecule''. Ground state means minimal, T=0K energy level, so by definition it is at equilibrium. Thus, the sentence seems quite contradictory to a physicist.\\nWhat DFT and VIMPNN actually compute is the electronic structure's ground state's energy (at fixed positions of the atom kernels). 
I think this distinction should be mentioned just once, and then you could proceed with saying ground state energy. \\nBecause of this, I would recommend editing the title so as to suppress ''out of equilibrium'' from it. Otherwise readers may think the method deals with non-equilibrium electronic structures (non ground states), which clearly it does not at all, or they may think that it is especially good at estimating energies for out of equilibrium systems, which is not its primary goal.\\n\\n2. I do not understand the training procedure very well. Also there are some tests that seem to be interesting and that are not performed (as far as I understood).\\nDoes each training set contain the 90%-150% data augmentations for each non-augmented training configuration?\\nWhy is training performed separately for each kind of data set? Wouldn't the ultimate goal be to transfer learning from one type to another, e.g. from QM9-style to crystal style, etc? (as far as I understood, this was not done)\\nIsn't it interesting to see how much training on an augmented data set (let's say QM9) improves performance on the non-augmented data (the ''true data'' in a sense)? Although the augmented data is obtained by DFT, and comparing models trained on different data sets is unfair, I think it may be interesting to see if VIMPNN benefits more than MPNN from this strategy (so compare the performance gains of both algorithms obtained by augmenting a data set). This kind of comparison may also be done for the ''augmentation'' of a training set by the concatenation of it with another one (although in that case it may be detrimental to the test accuracy?).\\nIf you actually did some of this, then I misunderstood and I am sorry, but then this also means you should clarify.\\n\\n3. Section 3.3:\\nI would not call this a crystal, but more something like ''random finite structure''. This should be done everywhere in the paper.\\n\\n4. Section 4.1 is a bit too short for the inexperienced reader. I suggest being a bit more explicit on what is learned.\\n\\n5. Section 4.2: It is nice to say you tried other ways, keep that.\\nHowever, try to be more explicit on what is shared and what isn't, in the architecture you finally pick. Is lambda(v,w) a common value for all bonds of the same type, like C-C? Maybe you could provide an example or some more detailed notation to make your choice fully explicit (after all this is the core of the paper).\\n\\n6. Section 4.3: could you quickly comment on why you don't use more of the 13 physical observables available in QM9?\\n\\n7. Table 1: do the last three lines correspond to ''no BT information + a single auxiliary estimate''? It seems to be the case, but then you say you will continue with the auxiliaries, in addition to the BT information. Why don't you display the result of using BT information AND the 3 auxiliary estimates? If you did, then I misunderstood, but it would also mean you did not explain well enough a counter-intuitive result (which would be that adding auxiliary information actually hurts the performance of the VIMPNN).\\n\\n8. Please include a couple of explicit numbers of your training/test/validation set sizes.\\n\\n9. section 5.2 could be made more concise. In particular, there is no need to repeat what can be seen directly in the figures (stating numbers). It is useful to comment on the meaning of the results however (as you currently do).\\n\\n10. 
section 5.4 is a good idea, promising, however it does not really conclude with a very strong statement, and takes essentially 1 full page, which could be used to better clarify the architecture and/or the training procedure (or to reduce towards the ideal page length).\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a number of new / extended datasets for the evaluation of ML-based prediction of energies of unstable systems, as well as a network (VIMPNN) that includes a new and better way of including bond-type information. It is also proposed to use auxiliary losses (predicting other chemical properties).\\n\\nAlthough I am not an expert in chemistry, the new datasets seem fairly well thought out and their utility is well motivated. The proposed change to the MPNN network architecture is rather simple and hardly physics inspired, but the empirical improvement seems substantial, so this too is a nice contribution. So I have decided to give the \\u201cweak accept\\u201d rating.\\n\\nIn 4.2 it is explained how different ways of incorporating bond information were evaluated, and it is stated that \\u201cbest results were obtained in the case a.ii). However, no results are presented to support this claim, leaving the reader to wonder how rigorous this exploration was. I would suggest systematically evaluating the different options and including the results in an appendix. \\n\\nIn table 1 results are shown for the MPNN baseline, baseline with specialised node updates a.ii, and with auxiliary estimates. However, combinations of these are not evaluated. Nevertheless, if I understood correctly, the VIMPNN method tested later includes all of the separate improvements. It would be good to include experimental results to motivate this.\\n\\nAs acknowledged in the paper, the idea of using bond-type information was already in Gilmer et al. Also, I think the different ways of including bond-type explored in this paper are not really informed by physics. The choice for method a.ii is made based on empirical results. This is not a problem in itself, but I would suggest that the authors change the wording to not over-promise on the physics-inspiredness. E.g. the abstract says \\u201cVIMPNN integrates prior knowledge such as the existence of different interatomic bonds\\u201d, suggesting that there is more prior knowledge being exploited than just bonds.\", \"comments\": \"\\u201cIt produces comparable accuracy to that of DFT while also improving computation time by 5 orders of magnitude\\u201d. \\nI assume this speedup is relative to DFT. It would be good to be explicit about that, and also discuss the speed relative to the MPNN baseline (I suppose MPNN and VIMPNN are similar).\\n\\n\\u201cThe change of atomic distances are performed isomorphically\\u201d - I would say \\u201cisometrically\\u201d.\"}"
]
} |
S1gLBgBtDH | SLM Lab: A Comprehensive Benchmark and Modular Software Framework for Reproducible Deep Reinforcement Learning | [
"Wah Loon Keng",
"Laura Graesser",
"Milan Cvitkovic"
] | We introduce SLM Lab, a software framework for reproducible reinforcement learning (RL) research. SLM Lab implements a number of popular RL algorithms, provides synchronous and asynchronous parallel experiment execution, hyperparameter search, and result analysis. RL algorithms in SLM Lab are implemented in a modular way such that differences in algorithm performance can be confidently ascribed to differences between algorithms, not between implementations. In this work we present the design choices behind SLM Lab and use it to produce a comprehensive single-codebase RL algorithm benchmark. In addition, as a consequence of SLM Lab's modular design, we introduce and evaluate a discrete-action variant of the Soft Actor-Critic algorithm (Haarnoja et al., 2018) and a hybrid synchronous/asynchronous training method for RL agents. | [
"reinforcement learning",
"machine learning",
"benchmark",
"reproducibility",
"software",
"framework",
"implementation issues",
"parallelization",
"software platforms"
] | Reject | https://openreview.net/pdf?id=S1gLBgBtDH | https://openreview.net/forum?id=S1gLBgBtDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"csuTbCCKZU",
"Sygd4Ol9sB",
"Hklbldgcor",
"r1lT9u6FjB",
"SyxZDeUO5r",
"H1xO_s5CKr",
"SyeSd7ZAKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745270,
1573681200358,
1573681128965,
1573669013494,
1572524120844,
1571887984514,
1571849069380
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2286/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2286/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2286/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2286/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2286/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2286/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"A new software framework fo Deep RL is introduced. This is a useful work for the community, but it is not a research work. I agree with Reviewer4 that somehow it is not a right venue: other papers need to have technical contributions, SOTA, and here - it is difficult but it is another type of work - accurate technical implementation and commenting. I do not feel right to have as it a paper on ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author response to Official Blind Review #2\", \"comment\": \"Thank you for taking the time to read the paper and for your comments.\"}",
"{\"title\": \"Author response to Official Blind Review #1\", \"comment\": \"Thank you for such a comprehensive and thorough review, we really appreciate your comments. Please see our replies below.\\n\\n---\\n\\u201c...I am not convinced that it brings enough novelty to the RL software landscape\\u2026\\u201d\\n\\nSLM Lab presents an empirical contribution and software aimed to address reproducibility problems in RL. We think this paper makes the following novel contributions to RL research:\\n\\n1.1) Fair comparison of policy gradient and value-based methods from a single codebase with minimal implementation differences. This is the most extensive comparison we are aware of among the RL libraries we listed.\\n\\n1.2) Hybrid parallelization: As far as we are aware, none of the RL libraries we listed provides this capability. An RL algorithm can be bottlenecked by stepping the environment or updating the networks, and this method is useful for finding the right mix of parallelization schemes to speedup training.\\n\\n1.3) Method to address reproducibility in RL: We propose that RL libraries would benefit from having a spec file design which exposes all of the hyperparameters. Although not a new idea, the extent to which SLM Lab implements it is novel, e.g. by including the environment details, the hyperparameter search, and automatically savign the git SHA.\\n\\n---\\n\\u201c...many of these features can typically be added to other libraries by plugging in other open source software\\u2026\\u201d\\nWe agree, and in fact SLM Lab also uses Ray Tune for hyperparameter optimization and Tensorboard for visualization. However, integrating other software libraries into a framework to function correctly takes significant time and effort, and our goal is to take that burden away from the users and let them focus on research. \\n\\n---\\n\\u201cThe parallelization capabilities of SLM Lab also seem limited\\u2026parallelization can only occur on a single machine...\\u201d\\nIndeed, SLM Lab can only parallelize on a single machine, but its parallelization capabilities are unique. It allows for hybrid parallelization: on the environment and on network training. We document the benefits of this novel contribution in the paper.\\n\\n---\\n\\u201cIt is not clear to me to which extent multi-agent is supported\\u2026.\\u201d\\nMulti-agent is on our future roadmap, and the current spec file design is designed for future format compatibility with multi-agent.\\n\\n---\\n\\u201c...the paper presents a discrete version of SAC, \\u2026 but results from Table 1 do not look very good\\u2026\\u201d\\nDiscrete SAC on Pong was the most sample efficient of all the algorithms, roughly by 2x compared to the next most sample-efficient algorithm. We included new discrete SAC results in the learning curve in the Appendix and Table 1.\\n\\nThe other discrete SAC results are indeed not strong, but we felt it important to include also negative results, especially since the Pong result shows it is possible for discrete SAC to perform very well on vision-based environments. However, we have been unable to obtain good results uniformly across the Atari games. We feel this is useful for the research community to know.\\n\\nThe \\u201cSoft Actor-Critic for Discrete Action Settings\\u201d you mentioned was released after our submission, and it is difficult to compare with their results because they focused on very few samples (100k frames). 
Their reported results are lower than ours in Table 1, and some of their results are worse than random.\\n\\n---\\n\\u201c...I am not aware of previous work using the exact same evaluation setting, so it is hard to tell how they compare to other implementations\\u2026.\\u201d\\n\\nThe difficulty you cite of comparing results across algorithms was one of the main motivations for SLM Lab. In addition to the variability in performance across different implementations, evaluation techniques in deep RL have changed over time.\\n\\nFor example, some papers select the best policy during training (e.g. Prioritized Experience Replay, Schaul et. al, 2016) whereas others report results on the final policy (e.g. The Arcade Learning Environment: An Evaluation Platform for General Agents, Bellemare et. al., 2013).\\n\\nOur approach to evaluation for all environments is very similar to the proposal in \\u201cRevisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents\\u201d, Machado et. al, 2017 which recommends:\\n\\n\\u201cAt the end of training (and ideally at other points as well) report the average performance of the last k episodes. This protocol does not use the explicit evaluation phase, thus requiring an agent to perform well while it is learning. This better aligns the performance metric with the goal of continual learning while also simplifying experimental methodology.\\u201d\\n\\n---\\n\\u201cHaving some synthetic result on Atari \\u2026 median human-normalized score\\u2026\\u201d\\nLike most recent RL papers, we do not include human baseline scores since RL algorithms have exceeded human performance at Atari games, and they are also difficult to obtain. However, random baseline scores are easily generated in SLM Lab, so we have added a new \\u201cRandom\\u201d column in the tables for better comparison.\"}",
"{\"title\": \"Author response to Official Blind Review #4\", \"comment\": \"Thank you for taking the time to review this paper and for your thoughtful comments. Please see our responses below.\\n\\n---\\n\\u201cComparison with the library [1] is missing\\u2026\\u201d\\nThanks for bringing this library to our attention. We added a comparison in Table 3, and the similarities and differences are summarized below:\\n\\nBoth libraries addresses reproducibility using config/spec files, although SLM Lab uses the git SHA to reference code as opposed to saving source as Catalyst does. They both report benchmark results, however SLM Lab is more comprehensive. Parallelization in Catalyst can scale to multiple machines, where as parallelization in SLM Lab is focused on the single machine use case. SLM Lab also logs to Tensorboard, and provides a more extensive automatic experiment analysis. Finally, Catalyst does not appear to provide hyper-parameter search. This is a key feature of SLM Lab and is configured in the spec file.\\n\\n---\\n\\u201cI am not sure that ICLR is the right venue for such paper...\\u201d\\nOur paper is as much an empirical contribution as a software contribution. It provides what is, to the best of our knowledge, the most comprehensive set of benchmark RL results published to date (including entirely new results for the SAC algorithm), and moreover one which is a fairer comparison between RL algorithms than previous benchmarks due to the SLM Lab design that minimizes implementation differences. It also includes a new hybrid parallelization capability applicable to all RL algorithms.\\n\\nEven considering just the software contribution of the SLM Lab library, however, we feel that ICLR is an appropriate venue for this paper. We note that the call for papers specifically lists \\u201cimplementation issues, parallelization, software platforms, hardware\\u201d as a relevant topic. We also note that some of the libraries cited in Table 3 have been published at similar venues, such as ELF at NeurIPS 2017, and RLLib at ICML 2018.\\n\\n---\\n\\u201cHow difficult it is to implement distributional algorithms in your framework?\\u201d\\nThis is on the SLM Lab roadmap. It is not difficult, and can be implemented as an extension of the DQN class with a custom network output sampling mechanism and loss computation.\\n\\n---\\n\\u201cWhat about different exploration strategies?...\\u201d\\nBoltzmann and epsilon-greedy exploration strategies are implemented and can be specified in the spec file. We have updated the paper to make it clear that this is available.\\n\\nParameter noise is not currently implemented, but adding it is relatively straightforward, and it is on our future roadmap.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\n\\nThe paper provides a description of a new framework for reproducible and efficient RL experiments, as well as benchmarks of many algorithms on popular environments, such as `Atari and Roboschool.\", \"pros\": [\"I agree that reproducibility is an extremely important question for the RL research, and thus such a code library is very beneficial for the community.\", \"The library is well designed, and allows for creating extensions rather easily in the future.\", \"Benchmarks are quite extensive and instructive.\"], \"cons\": [\"Comparison with the library [1] is missing (see also [2] for description and benchmarks). As both libraries are focused on reproducibility and flexible implementations of algorithms, such a comparison would support authors claims.\", \"I am not sure that ICLR is the right venue for such paper. Perhaps a more specialized conference of a workshop would be better.\", \"Anonymity violation\"], \"questions\": \"- How difficult it is to implement distributional algorithms in your framework?\\n- What about different exploration strategies? (Boltzmann, epsilon-greedy, parameter noise etc.). I guess it should be quite easy to make it configurable as well\\n\\n\\n[1] https://github.com/catalyst-team/catalyst\\n[2] https://arxiv.org/pdf/1903.00027.pdf\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a new RL library called \\u00ab\\u00a0SLM Lab\\u00a0\\u00bb. Its most relevant features for RL research are: (1) modularity to help re-use existing components (thus reducing the risk of subtle implementation differences when comparing algorithms), (2) implementations of most popular algorithms like DQN & variants, A3C, PPO, SAC, (3) ability to parallelize both actors (through vectorized environments) and the learner (through distributed gradient descent), and (4) utilities for hyper-parameter optimization, reproducible experiments and reporting. The paper also reports performance over Atari games, Roboschool environments as well as some Unity ML-Agents tasks. Finally, it provides a high-level overview of SLM Lab\\u2019s capabilities compared to 23 other RL open source libraries, showing that it is the only one that combines: reporting of the performance of the implemented algorithms, ability to specify hyper-parameters in the config file, parallelization, hyper-parameter optimization, and visualization of the results.\\n\\nOverall this looks like a solid RL library, but I am not convinced that it brings enough novelty to the RL software landscape for a published ICLR paper \\u2014 it would better fit in a workhop dedicated to ML libraries for instance, thus the weak reject.\\n\\nTable 3 shows that SLM Lab is the only RL library with such a broad offering of features, and this is definitely impressive, but I would argue that many of these features can typically be added to other libraries by plugging in other open source software. For instance there are several tools for experiment management and hyper-parameter optimization (and for RLLib in particular, hyper-parameter optimization is not checked but is straightforward with Ray Tune). TensorBoard can also often be easily used for visualization.\", \"the_parallelization_capabilities_of_slm_lab_also_seem_limited\": \"if I understand correctly, actor parallelization can only occur on a single machine, and thus an algorithm like Ape-X or R2D2 could not be implemented. If this is correct then it is a major limitation of the framework, since such parallelization across multiple computers can be extremely useful when environments are slow and costly to run (in CPU / RAM).\\n\\nIt is not clear to me to which extent multi-agent is supported. It seems like it is possible to have multiple agents in one environment, but is that enough for general multi-agent RL? (ex: how to specify individual / team rewards? share information between agents? deal with agents not acting all at the same timestep? centralize part of the training / execution?\\u2026)\\n\\nI appreciate that the paper presents a discrete version of SAC, mentioning how easy it was to implement thanks to the modular design of SLM Lab, but results from Table 1 do not look very good (especially since it did not work on some of the environments). Relying on the Gumbel-softmax might not be the most robust & stable way to train a discrete SAC \\u2014 see e.g. 
the recent \\u00ab\\u00a0Soft Actor-Critic for Discrete Action Settings\\u00a0\\u00bb for a different approach.\\n\\nFinally, it is also great to have some benchmarks of the algorithms being implemented, but at least for Atari, I am not aware of previous work using the exact same evaluation setting, so it is hard to tell how they compare to other implementations.\\n\\nIn spite of the above, I do not mean to criticize SLM Lab too heavily, as from what I can tell it seems to be a solid library with many useful features, and I am sure many researchers will find it useful in their day-to-day work.\", \"minor_points\": [\"Anonymity was clearly violated with the two github links\", \"Having some synthetic result on Atari (like the typically reported median human-normalized score) would be good\", \"I am personally not a fan of large config JSON files due to the lack of comments in JSON\", \"A.5 (\\u00ab\\u00a0Key Implementation Lessons\\u00a0\\u00bb) is great!\"], \"review_update_after_author_feedback\": \"I am on the fence for this paper, but still leaning towards rejection due to the fact that I am still not convinced that this library brings that much novelty compared to other existing libraries (although it seems like a nice RL library, I am not sure ICLR is the right venue for talking about it). The authors argue that their benchmark results are a key contribution of the paper, but I do not find these results particularly insightful, especially the Atari ones that are not comparable to previous results (due to using a different evaluation method) and the lack of state-of-the-art algorithms like Rainbow or IQN.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"SLM Lab is a software framework for reinforcement learning, which includes many different algorithms, networks, and memory types. The framework is well structured and modular. Thus, it is easily extendable for anyone and can be a pinnacle for future RL research.\\n\\nThe really like the paper. It is well written, easy to read, and provide a valuable platform / framework to the community, both the scientific community as well as practitioners. Although the scientific contribution may be low in the paper, I think the significance and potential impact of the paper outweigh that. \\n\\nThe paper also include many results from running the framework in various configurations, showing the flexibility and usefulness of it.\\n\\nThe code for SLM Lab is released open source, which is very valuable and enables future research in RL.\"}"
]
} |
rJerHlrYwH | Data-Efficient Image Recognition with Contrastive Predictive Coding | [
"Olivier J Henaff",
"Aravind Srinivas",
"Jeffrey De Fauw",
"Ali Razavi",
"Carl Doersch",
"S. M. Ali Eslami",
"Aaron van den Oord"
] | Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge. We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence. We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data. When provided with only 1% of ImageNet labels (i.e. 13 per class), this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% (a 65% relative improvement) and state-of-the-art semi-supervised methods by 14%. We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset. | [
"Deep learning",
"representation learning",
"contrastive methods",
"unsupervised learning",
"self-supervised learning",
"vision",
"data-efficiency"
] | Reject | https://openreview.net/pdf?id=rJerHlrYwH | https://openreview.net/forum?id=rJerHlrYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5E4qNbTPl2",
"SylzQECKor",
"SygZlNAYoS",
"BJeIT7RKiS",
"rJlQ5mRtjB",
"Byl00MAtoS",
"ryxqDXApcr",
"BJg06Ig69B",
"HJxj6VX0tr",
"Sye7g-moFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745237,
1573671962341,
1573671913256,
1573671870033,
1573671819121,
1573671637606,
1572885345866,
1572828870230,
1571857603131,
1571660010836
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2284/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2284/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2284/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2284/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2284/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2284/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2284/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2284/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2284/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper tackles the key question of achieving high prediction performances with few labels. The proposed approach builds upon Contrastive Predictive Coding (van den Oord et al. 2018). The contribution lies in i) refining CPC along several axes including model capacity, directional predictions, patch-based augmentation; ii) showing that the refined representation learned by the called CPC.v2 supports an efficient classification in a few-label regime, and can be transferred to another dataset; iii) showing that the auxiliary losses involved in the CPC are not necessarily predictive of the eventual performance of the network.\\n\\nThis paper generated a hot discussion. Reviewers were not convinced that the paper contributions are sufficiently innovative to deserve being published at ICLR. Authors argued that novelty does not have to lie in equations, and that the new ideas and evidence presented are worth. \\n\\nThe area chair thinks that the paper raises profound questions (e.g., what auxiliary losses are most conducive to learning a good representation; how to divide the computational efforts among the preliminary phase of representation learning and the later phase of classifier learning), but given the number of options and details involved, these results may support several interpretations besides the authors'. \\n\\nThe authors might also want to leave the claim about the generality of the CPC++ principles (e.g., regarding audio) for further work - or to bring additional evidence backing up this claim. \\n\\nIn conclusion, this paper contains brilliant ideas and I hope to see them published with a strengthened analysis of its components.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for their comments. We respectfully disagree with the assessment that the \\u201cnovelty and technical contributions are limited\\u201d. Although the learning objective we use is the same as in (van den Oord, 2018), we make a number of changes to the training methodology without which the final performance would be uncomparable to the one we arrive at (70.6% Top-1 linear classification accuracy vs 48.7%). We ablate all of these changes and show how important they are for achieving state of the art results. This, combined with the fact that these modifications are very general (and could be straightforwardly applied to audio, video, and text; see footnote), make these technical contributions readily re-usable by the research community. We will open-source our implementation and pre-trained models to make these experimental insights widely accessible.\\n\\nWe agree that it would be interesting to re-evaluate the CPC model with architectures used in other works. These tend to differ widely across papers: (Tian, 2019) use ResNet-101, (Donahue & Simonyan, 2019) use RevNet-50 with 4x width, (Xie, 2019) use ResNet-50 whereas (Zhai, 2019) use ResNet-50 with 4x width. It is therefore difficult to chose a single architecture that will enable comparison to all prior works. Nonetheless, we will systematically list the architectures used by each method and include results from ResNet-50.\\n\\nThe inputs to the masked convolutional network are the feature vectors z_{i,j}. We will make this clear in the text, and provide a reference to the appendix in which this is made explicit.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for their comments. We agree that the modifications we bring to the CPC method are general enough to be applied to a variety of other modalities. For one, the observation that increasing the network depth and ease of optimization can strongly impact performance directly translates to other types of data. Data-augmentation has also become a standard technique in supervised learning, with a considerable amount of domain knowledge being accumulated regarding which techniques are useful for which modalities. Our observation that patch-level augmentation dramatically improves the performance of CPC applied to images could therefore be straightforwardly extended (using analogous augmentation techniques) to audio segments, video cubes, and natural language atoms. Similarly, increasing the number of predictions can easily be applied to other data. As such, since our modifications to the CPC methodology are general enough to be applied to all the modalities for which CPC was originally designed, we think calling it \\u201cCPC v2\\u201d is valid and warranted, but are curious to hear your suggestions in this matter.\\n\\nWe will make sure to make the figures printer-friendly in the final version.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for their comments. Regarding the first point \\u201cMore discussions about why this four axes are investigated in image recognition,\\u201d we agree that a better explanation of the relationship between the new training protocol and our original hypothesis is warranted. Our modifications to the original CPC model can be grouped into 3 categories. Increasing the network scale and ease of optimization both contribute to the representational capacity of the network and its ability to make the complex transformations across space more predictable. The next crucial modification, patch-based augmentation, allows us to control which features of the data will be made more predictable. By making low-level features (such as brightness, color, and contrast) less predictable, we ensure the network capacity is spent on making other features (including the more semantic ones of interest) more predictable. Finally, increasing the number of spatial directions used in the training task amplifies this learning signal. We will update the discussion of these points in section 4.1 to share these intuitions.\\n\\nOur original hypothesis stated that spatially predictable representations should better enable low-data classification. Through our ablation, we are able to titrate the amount of \\u201cpredictability\\u201d in the representation by changing the number of spatial directions included in the prediction task. For example, one model only attempts to predict patches from top to bottom. The next makes predictions in both vertical directions. The third in all four (horizontal and vertical) spatial directions. These models therefore learn to be \\u201cpredictable\\u201d along more and more axes of the data. In line with our hypothesis, representations which are more \\u201cpredictable\\u201d also enable better low-data classification. However, since these models also improve linear classification accuracy, we asked whether these two metrics were necessarily related to each other. This was not the case (they are uncorrelated across other model specifications, a novel finding in itself), and we therefore take this as evidence that more spatially predictable representations enable efficient classification.\\n\\nFinally, we agree that it would be interesting to re-evaluate the CPC model with architectures used in other works. Most of the methods we compare to in Table 3 use a ResNet-101, which is why we opted for that architecture. Nevertheless we will include results for ResNet-50 as you suggest in the final version, and report the architecture used by each method.\\n\\nTo conclude, we respectfully disagree with the assessment that this work is \\u201csimply adjusting network architectures and training strategies, which makes it less interesting\\u201d. Firstly, it is unexpected that the same objective, given a new training protocol, can result in dramatically better performance (from 48.7% to 70.6% linear classification accuracy). Without these results, one might tend to dismiss contrastive learning as impractical or ill-suited to downstream tasks. Furthermore, these modifications are sufficiently general to be applied to a variety of different methods and modalities, and our detailed ablations provide actionable recommendations to the community. We will open-source our implementation and pre-trained models to make these experimental insights widely accessible.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"We would like to thank the reviewer for their comments on the manuscript. However, we find the decision to dismiss a \\u201ctechnically outstanding\\u201d paper simply because it does not introduce a new mathematical formalism to be rather mystifying. Rather than making ever more complex objectives, there is value in reminding the community of the sobering reality that implementation details are hugely important. Dissecting and highlighting the contributions of these details (as we do) will also facilitate the comparison of different self-supervised objectives in future work. To that end, our work makes a number of contributions, both methodological and experimental, which we think will be very impactful to the community.\\n\\nOn the methodological side, we identify a number of axes which enable the performance of CPC: network scale, local data augmentation, amount of self-supervision, etc. These insights are sufficiently general for them to inform other contrastive methods, and other modalities (e.g. audio and video have analogous forms of data-augmentation). Furthermore, it is an important experimental point to notice just how much these \\u201cimplementation details\\u201d matter. Without them, one might dismiss contrastive learning altogether. With them, they appear to be one of the most promising methods for representation learning. We will open-source our implementation and pre-trained models to make these techniques widely accessible. \\n\\nMoreover, we believe our empirical results to represent a landmark in representation learning: we have shown it to enable substantial gains in data-efficiency for all amounts of available data (as opposed to in only the low-data regime). For the first time, it appears beneficial to train supervised networks on top of learned representations rather than pixel lattices. Going further, our results in transfer learning (which approach that of supervised transfer) raise the possibility of removing the need for large-scale labeled datasets altogether.\"}",
"{\"title\": \"New ideas and findings in this work\", \"comment\": [\"We would like to thank the reviewers for their perspective on the manuscript. The main criticism lies with the novelty of our contributions. We disagree with this assessment, for although we do not present any new objective or equation, we present a series of new ideas and findings in this work:\", \"Representation learning (and CPC in particular) enables unseen gains in the data-efficiency of image classifiers (same performance as purely supervised, with 2-5x less labels) for all amounts of available data (as opposed to only in the low-data regime).\", \"Representations learned without supervision (with CPC) can rival the performance of supervised representations for transfer learning (to PASCAL).\", \"The performance of CPC greatly depends on a variety of implementation details, whose contributions we dissect and highlight, providing important insights to the representation learning community. We will open-source our implementation and pre-trained models to make these techniques widely accessible.\", \"We show that linear classification and low-labeled data classification are not necessarily predictive of each other, motivating the two as independent benchmarks for representation learning.\"], \"we_identified_a_number_of_axes_which_enable_the_performance_of_cpc\": \"network scale, local data augmentation, amount of self-supervision, etc. Following these axes, we have improved our model since the submission, attaining 70.6% Top-1 linear classification accuracy on ImageNet (the original CPC attained 48.7%), setting a new state-of-the-art.\\n\\nTaken together, our experimental and methodological contributions introduce and defend the idea that representation learning, and CPC in particular, are ready for real-world application.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes to use Contrastive Predictive Coding (CPC), an unsupervised learning approach, to learn representations for further image classification. The authors show that using CPC for representation learning allows to achieve better results than other self-supervised methods. Moreover, CPC is shown to be useful for semi-supervised learning (on par with SOTA method), and transfer learning. All results are very impressive and is in-line with current trends of using a linear classifier on top of a deep feature extractor (e.g., Nalisnick et al., \\\"Hybrid Models with Deep and Invertible Features\\\"). The paper is rather well written and the results are convincing. However, The whole idea of the paper is based on the original paper:\\n* Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. \\\"Representation learning with contrastive predictive coding.\\\" arXiv preprint arXiv:1807.03748 (2018).\\nTechnically speaking, the paper is outstanding, but it lacks novelty in terms of new ideas. I highly appreciate new results and new architectures, but it is not enough for a full conference paper.\\n\\nRemarks\\n- In Section 2.1, the problem statement for Contrastive Predictive Coding (CPC) is unclear. For instance, the authors explain CPC by mentioning about masked convolutional layers that is unnecessary at this point. I understand that from engineering perspective it is crucial information, but it does not help to understand CPC. As a result, without knowing the original paper on CPC, Section 2.1 is hard to follow.\\n\\n- The paper can be treated as an uptaded version of the original CPC paper. I really appreciate all new results and implementation of the idea. The paper is well written and it is technically correct. However, I do not find much novelty compared to the original paper. This would be a perfect workshop contribution, but I am afraid that it is not enough for a full paper.\\n\\n==== AFTER REBUTTAL ====\\nI would like to thank the reviewers for their rebuttal. It was not my intention to dismiss your effort in providing new technical results. Please forgive me if you read it in this way. My point is that the paper presents exactly the same idea as the original paper of CPC, but with new, very interesting results. However, I doubt if this is enough for a full conference paper. This point is debatable and I would be happy to further discuss it with other reviewers and the AC. At this point, I keep my original score, but of I am open for a discussion.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Title: DATA-EFFICIENT IMAGE RECOGNITION\\n[Summary]\\n-This paper introduces Contrastive Predictive Coding (CPC) image recognition in the data-efficient regime. Concretely, the authors improve CPC in terms of its architecture and training strategy. The extensive experiments show that CPC enables data-efficient image classification and surpassed other unsupervised approaches. \\n\\n[Pros]\\n- Although the CPC was proposed and evaluated in vision task in [1], a new implementation of CPC with dramatically-improved ability is presented in this paper.\\n- The CPC is utilized to enhance spatially predictable representations which benefits a lot data-efficient image recognition.\\n\\n[Cons]\\n- In Sec. 4.1, four axes are identified to upgrade CPC v1 to CPC v2. But they are not well motivated. More discussions about why this four axes are investigated in image recognition.\\n\\n-The core idea is motivated by a critical hypothesis that good representations should make spatio-temporal variability in natural signals more predictable. However, this hypothesis is not well verified. The concept of amount of \\u2018predictability\\u2019 in page 7 is not clear. It would be great if you provide more evidence that the improvement in low-data classification results from the increased \\u2018predictability\\u2019.\\n\\n- The comparison in Sec. 4.3 seems unfair. The pretrain model trained with different methods should be the same. For example, the Faster RCNN trained on CPC v2 uses ResNet-101 as backbone but Local Aggregation method uses ResNet-50.\\n\\n[Summary]\\n- This work extends CPP to data-efficient image recognition by simply adjusting network architectures and training strategies, which makes it less interesting. Besides, the major hypothesis is not well validated. \\n- The experimental results are convincing and encouraging. Some minor flaws such as unfair comparison should be fixed.\\n- I want to see how the four axes in Sec. 4.1 are related to core motivation (more predictable) since they are major adjustments from CPP v1 to CPP v2. If the author provides a profound explanation of the problem, I would consider changing the rating.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors augment contrastive predictive coding (CPC), a recent representation learning technique organized around making local representations maximally useful for predicting other nearby representations, and evaluates their augmented architecture in several image classification problems. Although the modifications to CPC aren't particularly original, the authors show first that these yield a significant improvement in linear classification accuracy. They then use this improved model to obtain impressive performance in classification within semi-supervised and transfer learning settings, giving strong support for the use of such methods within image processing applications.\", \"pros\": \"Owing to its generality (CPC assumes only a weak spatial prior in the input data), and cheap computational cost relative to earlier generative approaches, CPC is already a promising unsupervised representation learning technique. The paper gives more evidence of this usefulness for image data, yielding leading performance on several different image classification benchmarks.\\n\\nThe authors also make the observation that linear separability, the standard benchmark for evaluating unsupervised representations, correlates poorly with efficient prediction in the presence of limited labeled data. This observation should be of interest in the broader community, and points to the need for more diverse metrics for unsupervised representations.\", \"cons\": \"The improvements given in the paper are quite useful within their stated domain (image data), but aren't directly applicable to other types of input data. Although the authors make a point of emphasizing the relevance of CPC for other problem domains, they don't currently provide any suggestions for how this current work could be generalized to handle these other cases. In this sense, I think it is a bit deceptive to refer to their model as \\\"CPC v2\\\", as the majority of their changes have no bearing on the intrinsic CPC algorithm itself.\\n\\nI am sure that some of the methods used here could lead to improvements in the use of CPC for other types of data, but the authors currently don't provide any insight on this issue. In line with that, I think their work would be improved by some commentary on this, in particular by any concrete suggestions they have about how similar augmentations to CPC could be carried out in text, audio, and/or video data.\", \"verdict\": \"Owing to the reasons given above, I recommend acceptance.\", \"minor_suggestions\": \"Please use a different color scheme for your figures that is still meaningful if the paper is printed in greyscale.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper improves Contrastive Predictive Coding method and reaches a good performance in several downstream tasks. However, the novelty and technical contributions are limited.\", \"strengths\": [\"The experimental results seem good. The reimplemented CPC v2 performs much better than the original version. And the performance of down-stream tasks is comparable or better than the state-of-the-art methods.\", \"The paper is well written. The paper structure is clear and figures are well illustrated.\", \"Figure 3 shows clearly the performance improvements of a series of incremental modifications to the original CPC methods.\"], \"weaknesses\": [\"The novelty and technical contributions are limited. This paper only proposes some minor improvements based on the original CPC method and use a deeper network to get better performance. The proposed method lacks of important insights for the research community.\", \"The capacity of network architecture is crucial for self-supervised learning. But in Table 1,2,3, the network architecture of the proposed method is deeper than that in the comparison methods, which is unfair for the comparison methods. Meanwhile, the network architectures of many compared methods are not listed in the tables, which may be misleading. For example, Unsupervised Data Augmentation (Xie et al., 2019) in table 2 and Instance Discrimination (Wu et al., 2018) in table 3 use ResNet50, which is much more shallow than ResNet-161 in this paper.\", \"In section 2.1, the paper doesn't describe clearly what's the input of masked convolutional network $g_{\\\\phi}$ and how to calculate $c_{i, j}$.\"]}"
]
} |
BkgrBgSYDS | Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps | [
"Tri Dao",
"Nimit Sohoni",
"Albert Gu",
"Matthew Eichhorn",
"Amit Blonder",
"Megan Leszczynski",
"Atri Rudra",
"Christopher Ré"
] | Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps. However, choosing which of the myriad structured transformations to use (and its associated parameterization) is a laborious task that requires trading off speed, space, and accuracy. We consider a different approach: we introduce a family of matrices called kaleidoscope matrices (K-matrices) that provably capture any structured matrix with near-optimal space (parameter) and time (arithmetic operation) complexity. We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality. For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%. K-matrices can also simplify hand-engineered pipelines---we replace filter bank feature computation in speech data preprocessing with a learnable kaleidoscope layer, resulting in only 0.4% loss in accuracy on the TIMIT speech recognition task. In addition, K-matrices can capture latent structure in models: for a challenging permuted image classification task, adding a K-matrix to a standard convolutional architecture can enable learning the latent permutation and improve accuracy by over 8 points. We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task. | [
"structured matrices",
"efficient ML",
"algorithms",
"butterfly matrices",
"arithmetic circuits"
] | Accept (Spotlight) | https://openreview.net/pdf?id=BkgrBgSYDS | https://openreview.net/forum?id=BkgrBgSYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5wQiyEMdj7O",
"MEjGc2QVdwA",
"Ry6U7DndLFc",
"Qn5FLwxngp",
"HJxIw8m_iB",
"rJgHQL7uir",
"r1eYbUQ_oB",
"rJl51UQ_iH",
"BJlGTr7OsH",
"BkltbF7QsS",
"S1g74Sfq9H",
"HyeOD450tr",
"BJx4TPtRFB"
],
"note_type": [
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1618248785385,
1618225101394,
1618223978418,
1576798745208,
1573561949959,
1573561884952,
1573561857396,
1573561825769,
1573561785897,
1573234944659,
1572640042922,
1571886176504,
1571882940394
],
"note_signatures": [
[
"~Abhyuday_Jagannatha4"
],
[
"ICLR.cc/2020/Conference/Paper2283/Authors"
],
[
"~Abhyuday_Jagannatha4"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2283/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2283/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2283/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2283/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2283/Authors"
],
[
"~Yaroslav_Bulatov1"
],
[
"ICLR.cc/2020/Conference/Paper2283/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2283/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2283/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"repose\", \"comment\": \"Dear Tri\\nYes, I am using the same codebase, there are enoumerous files without any comments, please take time to organize the codebase. thank you\"}",
"{\"title\": \"Repo and instructions\", \"comment\": \"Hi Abhyuday,\\n\\nJust to make sure you're using the right repo, the code is here: https://github.com/HazyResearch/butterfly\\nHave you tried the instructions in the repo? Happy to answer questions if you run into problems. You can contact us by email or by creating new Github issues.\\n\\nTri\"}",
"{\"title\": \"Retract the paper from ICLR\", \"comment\": \"Dear ICLR,\\nthis paper did not deserve to be nominated as spotlight, as this can goes only to the papers which are usable for general public. Please have a look at the codebase, this is terribly coded and this is not usable for general public. thank you\"}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper generalizes several existing results for structured linear transformations in the form of K-matrices. This is an excellent paper and all reviewers confirmed that.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Kaleidoscope and Kronecker-factored matrices\", \"comment\": \"Since Kronecker-factored matrices have an efficient representation, they are automatically captured by a K-matrix with the correct number of parameters up to logarithmic factors.\\nThere is actually a tighter bound that can be made in the case of the Kronecker products specifically, relating the K-matrix width of the product A \\u2297 B to the K-matrix widths of the constituents A and B. We have included this argument as Lemma H.7 in the updated draft.\"}",
"{\"title\": \"Response to \\u201cOfficial Blind Review #3\\u201d\", \"comment\": \"We thank the reviewer for their helpful feedback on our work.\\n\\nRegarding the IWSLT translation result, the key claim we aim to validate is that the theoretical efficiency of K-matrices translates to practical speedups on real models as well. We agree that there are other approaches that may offer different model quality vs. inference speed tradeoffs; we simply highlight that K-matrices are one promising method, especially given their important theoretical properties. We have added a performance comparison of K-matrices with other structured replacements such as circulant, Fastfood, ACDC, and Toeplitz-like in Appendix B.4.3, showing that K-matrices yield faster inference with similar BLEU score. We also point out that our DynamicConv model with K-matrices in the decoder attains a comparable BLEU score with the state-of-the-art from two years ago \\u2013 the Transformer model, which continues to enjoy widespread use today \\u2013 while having over 60% higher sentence throughput and 30% fewer parameters than this model.\\n\\nAs mentioned in the shared response, we believe that the speed-quality tradeoff of K-matrices could be further improved with more extensively tuned and optimized implementations. Exploring how to continue to improve these structured compression approaches, while retaining the efficiency and theoretical benefits of K-matrices, is an exciting question for future investigation.\"}",
"{\"title\": \"Response to \\u201cOfficial Blind Review #2\\u201d\", \"comment\": \"We thank the reviewer for their encouraging feedback and thoughtful comments on our work.\\n\\nRegarding the permutation learning experiment, in response to the feedback, we have revised the main text to clarify the setup. The core of the experiment is the ability to denoise permuted images using some representation of the permutation set. In order to do this successfully, it is necessary for such a representation to have certain properties such as inducing a distribution over permutations. We have implemented and added a comparison to the Gumbel-Sinkhorn method (Mena et al., 2018), which is a customized representation for permutations with these properties, and requires similar techniques (unsupervised objective, permutation sampling, etc.) in order to learn the latent structure. The ResNet classifier on top can be viewed primarily as a way to evaluate the quality of the learned permutation; both of these representations are capable of learning the right latent structure, with test accuracies of 93.6 (Kaleidoscope) and 92.9 (Gumbel-Sinkhorn) respectively. The highlight of this experiment is that the K-matrix representation also comes with the requisite properties for this learning pipeline, despite not being explicitly designed for permutation learning.\\n\\nRegarding comparison to a dense matrix for the speech experiment, in Table 5 (Appendix B.1.2), we compare the use of K-matrices in the raw-features speech model with several other classes of matrices, including dense matrices. For instance, we find that, while using a trainable dense matrix slightly outperforms just using the fixed FFT (0.3% drop in test phoneme error rate), using a K-matrix instead of a dense matrix yields a further improvement of 0.8% in the phoneme error rate.\\n\\nRegarding ease of training and hyperparameter tuning, we would like to re-emphasize that for all experiments, all hyperparameters for training were kept the same as those for training the default model architecture, other than those we explicitly mentioned as being tuned. In particular, we did not modify any hyperparameters (such as number of epochs, optimizer, or learning rate) for the ShuffleNet and DynamicConv experiments. For the TIMIT speech experiment, we tune only the \\u201cpreprocessing layer\\u201d learning rate. This is because the default speech pipeline already uses different learning rates for different portions of the network, so there is no clear choice a priori for the learning rate of the \\u201cpreprocessing layer\\u201d (note that most methods, including K-matrices, do not seem to be overly sensitive to the choice of this learning rate). Thus, in these experiments, K-matrices can be used as a drop-in replacement for linear layers without significant tuning effort.\", \"regarding_structure_and_sparsity\": \"We use \\u201cstructure\\u201d in the context of structured matrices to mean matrices with a fast (subquadratic) multiplication algorithm. Structured matrices have a sparse factorization with total NNZ on the order of the number of operations required in the multiplication. This connection was known in the algebraic complexity community, and formalized by De Sa et al. (2018).\", \"regarding_the_inductive_bias_encoded_by_k_matrices\": \"the building block of K-matrices is a butterfly matrix, which encodes the recursive divide-and-conquer structure of many fast algorithms such as the FFT. 
Analyzing the precise effects of the inductive bias imposed by K-matrices is an exciting question for future work.\"}",
"{\"title\": \"Response to \\u201cOfficial Blind Review #4\\u201d\", \"comment\": \"We appreciate the reviewer\\u2019s positive comments about our work.\\n\\nRegarding the convergence and speed of training, we would like to stress that all hyperparameters for training were kept the same as those for training the default model architecture, other than those we explicitly mentioned as being tuned (e.g. learning rate for the speech experiment). In particular, for all experiments, the number of epochs is the same for both the baseline approach and the K-matrix approach. Additionally, for the speech preprocessing and ShuffleNet experiments, we compare the total wall-clock training time of our K-matrix approach to that of the baseline approach, in both cases finding that the training time required by our approach is at most 20% longer than that of the baseline approach. In our updated revision, we also include the training time comparison for the DynamicConv model in Appendix B.4.2 (in this case, the modified model with K-matrices actually trains slightly faster than the baseline). We agree with the reviewer that a training plot can help provide a better understanding of how our proposed approach performs, and therefore have included an example plot (for the ShuffleNet experiment) in our updated revision (in Appendix B.2.3).\\n\\nRegarding empirical comparisons to dense matrices, in Table 5 (Appendix B.1.2), we compare the use of K-matrices in the raw-features speech model with several other classes of matrices, including dense matrices. We find that, while using a trainable dense matrix slightly outperforms just using the fixed FFT (0.3% drop in test phoneme error rate), using a K-matrix instead of a dense matrix yields a further improvement of 0.8% in the phoneme error rate. Another empirical comparison of K-matrices and dense matrices is in Section 3.3, in which we replace the linear layers in the decoder of a DynamicConv model with K-matrices; these linear layers are by default dense (fully-connected) matrices. Theoretically, in Lemma E.3 we show that arbitrary dense matrices are contained in the BB* hierarchy \\u2013 in particular, that any n x n matrix is in (BB*)^{2n-2}, which implies that its K-matrix representation requires at most (4n log n)*(2n-2) = O(n^2 log n) parameters and thus is tight up to a logarithmic factor in n.\"}",
"{\"title\": \"Shared response to reviewers\", \"comment\": \"We thank all the reviewers for their thoughtful feedback. We address general comments and questions from the reviewers here, and then answer specific questions in individual responses. We have also uploaded a revised draft improving clarity in response to the reviewers\\u2019 suggestions and feedback.\\n\\n[Ease of training K-matrices]: As K-matrices are fully differentiable (thanks to the fixed sparsity pattern, Section 2.2), they can be trained jointly with the rest of the model using standard learning algorithms (such as SGD, as used in the paper). For all of the experiments, we use the same number of epochs (and other applicable hyperparameters) for K-matrices as for the baselines. Even though each K-matrix is a product of multiple (sparse) matrices, K-matrices take about the same number of training steps to converge. One reason is that they can be easily initialized or constrained to be orthogonal (Section 2.4), thus avoiding vanishing or exploding gradients.\\n\\n[Role of speed experiment (IWSLT translation task, Section 3.3)]: Even though our implementation is not yet highly optimized, this experiment serves as a proof of concept showing that K-matrices can lead to speedup in practical applications. By contrast, lack of fast implementations has limited the applicability of many other large classes of structured matrices that are efficient in theory, such as Toeplitz-like (Sindhwani et al., 2015) or low-displacement rank (Thomas et al., 2018). \\n\\n[Additional comparisons with other baselines]: We thank the reviewers for suggesting other baselines to compare against to gain further insights into the applicability of our method. For the permutation learning experiment, we have added comparison to the Gumbel-Sinkhorn method (Mena et al., 2018), a specialized method to learn permutations, and this yields similar performance to K-matrices. We have also compared K-matrices to circulant, Fastfood, ACDC, and Toeplitz-like in the DynamicConv translation experiment (Appendix B.4.3); we find that K-matrices outperform these matrix classes. We do not expect a \\u201cfree lunch\\u201d however: for any particular task, there may be a specialized matrix class that will achieve the best performance on that task when subjected to some resource constraints (i.e. speed and memory). However, K-matrices are more general as they can efficiently capture any structured matrices (up to some additional logarithmic factors in space and runtime), thus avoiding the need for hand-picking a specialized matrix class for every task.\\n\\nK-matrices are thus expressive (Section 2.3), and efficient both in theory (Section 2.3) and practice (Section 3.3), and their learnability allows them to replace hand-crafted transformations (Section 3.1) and capture challenging latent structures (Section 3.2). We are excited about future work on further hardware-optimized implementations to fully realize the memory and speed benefits of structured matrices.\"}",
"{\"title\": \"Kronecker-factored maps?\", \"comment\": \"Can this represent Kronecker-factored matrices efficiently?\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary\\nThe authors introduce kaleidoscope matrices (K-matrices) and propose to use them as a substitute for structured matrices arising in ML applications (e.g. circulant matrix used for the convolution operation). The authors prove that K-matrices are expressive enough to capture any structured matrix with near-optimal space and matvec time complexity. The authors demonstrate that learnable K-matrices achieve similar metrics compared to hand-crafted features on speech processing and computer vision tasks, can learn from permuted images, achieve performance close to a CNN trained on unpermuted images and demonstrate the improvement of inference speed of a transformer-based architecture for a machine translation task.\\n\\n\\t\\t\\nReview\\nThe overall quality of the paper is high. The main contribution of the paper is the introduction of a family of matrices called kaleidoscope matrices (or K-matrices) which can be represented as a product of block-diagonal matrices of a special structure. Because of the special structure, the family allows near-optimal time matvec operations with near-optimal space complexity for structured matrices which are commonly used in deep architectures. \\n\\nThe proposed approach is novel. It gives a new characterization of sparse matrices with optimal space complexity up to a logarithmic term. Moreover, the proposed characterization is able to learn any structured matrix and matvec time complexity of the K-matrix representation is near-optimal matvec time complexity of the structured matrix. Even though in the worst-case complexity is not optimal, the authors argue that for matrices that are commonly used in machine learning architectures (e.g. circulant matrix in a convolution layer) the characterization is optimal. This results in a new differentiable layer based on a K-matrix that can be trained with the rest of an architecture using standard stochastic gradient methods. However, it is worth noting that the reviewer is not an expert in the field, and it is hard for him to compare the proposed approach with previous work.\\n\\t\\t\\t\\t\\t\\t\\nThe paper is generally easy to follow. Even though the introduction of K-matrices requires a lot of definitions, they are presented clearly and Figure 1 helps to understand the concept of K-matrices. The experimental pipeline is also clear.\\n\\nGiven the special structure of the family, the reviewer might guess that having K-matrices can slow down the training, i.e. it might require more epochs to achieve the reported results compared to baselines. Providing training plots might increase the quality of the paper.\\n\\nThe experimental results are convincing. First, the authors show that K-matrices can be used instead of a handcrafted MFSC featurization in an LSTM-based architecture on the TIMIT speech recognition benchmark with only a 0.4% loss of phoneme error rate. Then, the authors evaluate K-matrices on ImageNet dataset. In order to do so, they compare a lightweight ShuffleNet architecture which uses a handcrafted permutation layer to the same architecture but with a learnable K-matrix instead of the permutation layer. 
The authors demonstrate the 5% improvement of accuracy over the ShuffleNet with 0.46M parameters with only 0.05M additional parameters of the K-matrix and the 1.2% improvement of accuracy over the ShuffleNet with 2.5M parameters with only 0.2M additional parameters of the K-matrix. Next, the authors show that K-matrices can be used to train permutations in image classification domains. In order to demonstrate so, they take the Permuted CIFAR-10 dataset and ResNet-18 architecture, insert a trainable K-matrix at the beginning of the architecture and compare against ResNet-18 with an inserted FC-layer (attempting to learn the permutation as well) and ResNet-18 trained on the original, unpermuted CIFAR-10 dataset. With K-matrix, the authors achieve a 7.9% accuracy improvement over FC+ResNet-18 and only a 2.4% accuracy drop compared to ResNet-18 trained on the original CIFAR-10. Finally, the authors demonstrate that K-matrices can be used instead of the decoder\\u2019s linear layers in a Transformer-based architecture on the IWSLT-14 German-English translation benchmark which allows obtaining 30% speedup of the inference using a model with 25% fewer parameters with 1.0 drop of BLEU score.\\n\\nOverall, the analysis and the empirical evaluations suggest that K-matrices can be a practical tool in modern deep architectures with a variety of potential benefits and tradeoffs between a number of parameters, inference speed and accuracy, and ability to learn complex structures (e.g. permutations).\\n\\n\\nImprovements\\n1. Even though K-matrices are aimed at structured matrices, it would be curious either to empirically compare K-matrices to linear transformations in fully-connected networks (i.e. dense matrices) or to provide some theoretical analysis.\\n2. Section 3.3 argues that K-matrices allow to obtain an improvement of inference speed, however, providing the results of convergence speed (e.g. training plots with a number of epochs) will allow a better understanding of the proposed approach and will improve the quality of the paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces a structured drop-in replacement for linear layers in a neural network, referred to as Kaleidoscope matrices. The class of such matrices are proven to be highly expressive and includes a very general class of sparse matrices, including convolution, Fastfood, and permutation matrices. Experiments are carried in a variety of settings: (i) can nearly replace a series of hand-designed feature extractor, (ii) can perform better than fixed permutation matrices (though parameter count also increased by 10%), (iii) can learn permutations, and (iv) can help reduce parameter count and increase inference speed with a small performance degradation of 1.0 BLEU on machine translation.\\n\\nThis appears to be a solid contribution in terms of both theory and practical use. As I have not thought much about expressiveness in terms of arithmetic circuits (though I was unable to fully follow or appreciate the derivations, the explanations all seem reasonable), my main comments are regarding experiments. Though there are experiments in different domains, each could benefit from some additional ablations, especially to existing parameterizations of structured matrices such as Fastfood, ACDC, and any of the multiple works on permutation matrices and/or orthogonal matrices. Though Kaleidoscope include these as special cases, it is not clear whether when given the same resources (either memory or computational cost), Kaleidoscope would outperform them. There is also a matter of ease of training compared to existing approximations or relaxations, e.g. Gumbel-Sinkhorn.\", \"pros\": [\"The writing is easy to follow and concise, with contributions and place in the literature clearly stated.\", \"The Kaleidoscope matrix seem generally applicable, both proven theoretically and shown empirically (experiments are spread across a wide range of domains).\", \"The code includes specific C++ and CUDA kernels for computing K matrices, which will be very useful for adaptation.\", \"The reasoning using arithmetic circuits seems interesting, and the Appendix includes a primer.\"], \"cons\": [\"For the squeezenet and latent permutation experiments, would be nice if there is a comparison to other parameterizations of permutation matrices, e.g. gumbel-sinkhorn.\", \"For the speed processing experiment, did you test what the performance would be if K matrix is replaced by a fully connected layer? This comparison appears in other experiments, but seems to be missing here for some reason. It would lead to better understanding than only comparing to SincNet.\", \"The setup for the learning to permute experiment is not as general as it would imply in the main text. The matrices are constrained so that an actual permutation matrix is always sampled, and the permutation is (had to be?) pretrained to reduce total variation for 100 epochs before jointly trained with the classifier. Though this is stated very clearly in the Appendix, I hope the authors can also communicate this clearly in the main text as it appears to be a crucial component of the experimental setup.\"], \"comments\": [\"How easy is it to train with K matrices? 
Did you have to change optimizer hyperparameter compared to existing baselines?\", \"There seems to be some blurring between the meaning of structure (used to motivate K matrices in the introduction) and sparsity (used to analyze K matrices). Structure might also include parameter sharing, orthogonality, and maybe other concepts. For instance, while Kaleidoscope matrices might include the subclass of circulant matrices, can they also capture the same properties or \\\"inductive bias\\\" (for lack of better word) as convolutional layers when trained?\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose learnable \\\"kaleidoscope matrices\\\" (K-matrices) in place of manually engineered structured and sparse matrices. By capturing \\\"all\\\" structured matrices in a way that can be learned, and without imposing a specific structure or sparsity pattern, these K-matrices can improve on existing systems by\\n* capturing more structure (that was not handled by the existing manually engineered architecture),\\n* running faster than dense implementations.\\n\\nThe claim that \\\"all\\\" structured matrices can be represented efficiently is a strong one, and in section 2.3 the authors make it clear what they mean by this. Although the proof is long and beyond the expertise of this reviewer, the basic explanation given in section 2.3 makes their point clear for the non-expert reader.\\n\\nThe balance of the paper empirically tests the claims of learnable structure and efficiency.\\n\\nOn the basis that these experiments essentially bear out the claims of the paper, I selected to accept the paper.\", \"weaknesses\": \"1. Regarding the ISWLT translation task result:\\nWith this dataset, it's a bit of a stretch to say there was \\\"only a 1 point drop in BLEU score\\\". That's a significant drop, and in fact the DynamicConv paper goes to significant lengths to make a smaller 0.8 point improvement. There are probably many other ways to trade BLEU score for efficiency, and without showing those other methods (and the point drops they have), it's not clear that K-matrices are a good way to speed up decoding a bit.\"}"
]
} |
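> Editorial note: several of the reviews and responses above appeal to the block-diagonal butterfly factors that underlie K-matrices. As a purely illustrative aid (the function names and random initialization are mine, not from the authors' repository), a NumPy sketch of assembling a butterfly matrix from $\log_2 n$ sparse factors, each with only $2n$ nonzeros, might look like:

```python
import numpy as np

def butterfly_factor(n, k, rng):
    """One factor B_k: block-diagonal with n/k blocks; each k x k block is
    [[D1, D2], [D3, D4]] with D1..D4 diagonal of size k/2, so the whole
    factor has only 2n nonzero entries."""
    B = np.zeros((n, n))
    half = k // 2
    for s in range(0, n, k):
        for (r, c) in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            d = np.diag(rng.standard_normal(half))
            B[s + r * half : s + (r + 1) * half,
              s + c * half : s + (c + 1) * half] = d
    return B

def butterfly_matrix(n, rng):
    """Product B_n B_{n/2} ... B_2 (n a power of 2): a generically dense
    map that can still be applied in O(n log n) time via its factors."""
    M = np.eye(n)
    k = n
    while k >= 2:
        M = M @ butterfly_factor(n, k, rng)
        k //= 2
    return M

rng = np.random.default_rng(0)
B8 = butterfly_factor(8, 8, rng)
print(np.count_nonzero(B8))  # 16 == 2 * n nonzeros in a single factor
K = butterfly_matrix(8, rng)
print(np.count_nonzero(K))   # typically 64: the full product is dense
```

A K-matrix in the paper's BB* hierarchy composes such butterfly matrices and their (conjugate) transposes; this sketch stops at a single butterfly block.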