forum_id (string, 9–20 chars) | forum_title (string, 3–179 chars) | forum_authors (sequence, 0–82 items) | forum_abstract (string, 1–3.52k chars) | forum_keywords (sequence, 1–29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39–50 chars) | forum_url (string, 41–52 chars) | venue (string, 46 classes) | year (date, 2013–2025) | reviews (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
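Each `reviews` cell below is a dict of parallel arrays (`note_id`, `note_type`, `note_created`, `note_signatures`, `structured_content_str`), where every entry of `structured_content_str` is itself a JSON-encoded note body. As a minimal sketch of how such a cell could be flattened (assuming it has already been loaded as a Python dict; the `iter_notes` helper is hypothetical, not part of any published loader):

```python
import json

def iter_notes(reviews: dict):
    """Flatten one `reviews` cell into per-note records by zipping its parallel arrays."""
    for note_id, note_type, created, signatures, content_str in zip(
        reviews["note_id"],
        reviews["note_type"],
        reviews["note_created"],
        reviews["note_signatures"],
        reviews["structured_content_str"],
    ):
        content = json.loads(content_str)  # each entry is itself a JSON string
        yield {
            "note_id": note_id,
            "note_type": note_type,           # e.g. "decision", "official_review"
            "created_ms": created,            # Unix epoch in milliseconds
            "signature": signatures[0],       # e.g. ".../Paper2051/AnonReviewer1"
            "title": content.get("title"),
            "rating": content.get("rating"),  # present on official reviews only
            "text": content.get("comment") or content.get("review"),
        }
```

For example, filtering the flattened records on `note_type == "official_review"` recovers each paper's reviews together with their ratings; note that some cells store `comment` as a list rather than a string, so downstream code should handle both.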
rJxbJeHFPS | What Can Neural Networks Reason About? | [
"Keyulu Xu",
"Jingling Li",
"Mozhi Zhang",
"Simon S. Du",
"Ken-ichi Kawarabayashi",
"Stefanie Jegelka"
] | Neural networks have succeeded in many reasoning tasks. Empirically, these tasks require specialized network structures, e.g., Graph Neural Networks (GNNs) perform well on many such tasks, but less structured networks fail. Theoretically, there is limited understanding of why and when a network structure generalizes better than others, although they have equal expressive power. In this paper, we develop a framework to characterize which reasoning tasks a network can learn well, by studying how well its computation structure aligns with the algorithmic structure of the relevant reasoning process. We formally define this algorithmic alignment and derive a sample complexity bound that decreases with better alignment. This framework offers an explanation for the empirical success of popular reasoning models, and suggests their limitations. As an example, we unify seemingly different reasoning tasks, such as intuitive physics, visual question answering, and shortest paths, via the lens of a powerful algorithmic paradigm, dynamic programming (DP). We show that GNNs align with DP and thus are expected to solve these tasks. On several reasoning tasks, our theory is supported by empirical results. | [
"reasoning",
"deep learning theory",
"algorithmic alignment",
"graph neural networks"
] | Accept (Spotlight) | https://openreview.net/pdf?id=rJxbJeHFPS | https://openreview.net/forum?id=rJxbJeHFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DybSK5aelg",
"B1l03Tksjr",
"SygwyYiqoS",
"BJgzlu59iB",
"rkgdaugtjS",
"rJgE4iCNjH",
"rklnp2qziS",
"HJeoqFHZsS",
"rJePttBWor",
"SygoUFBWor",
"ByehQFSWsH",
"ryg30_SWiB",
"HJe-F_CljB",
"HJe7pdvCYr",
"rJeOFDKjFr",
"Hylr6tgjKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798739214,
1573744054453,
1573726430757,
1573722089834,
1573615808229,
1573346092011,
1573199044051,
1573112210787,
1573112191011,
1573112147176,
1573112099526,
1573112020072,
1573083257082,
1571875002582,
1571686271644,
1571649980735
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2051/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2051/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2051/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2051/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2051/Authors"
],
[
"~Hao_Tang5"
],
[
"ICLR.cc/2020/Conference/Paper2051/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2051/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2051/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2051/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2051/Authors"
],
[
"~Hao_Tang5"
],
[
"ICLR.cc/2020/Conference/Paper2051/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2051/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2051/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes a framework which qualifies how well given neural architectures can perform on reasoning tasks. From this, they show a number of interesting empirical results, including the ability of graph neural network architectures for learn dynamic programming.\\n\\nThis substantial theoretical and empirical study impressed the reviewers, who strongly lean towards acceptance. My view is that this is exactly the sort of work we should be show-casing at the conference, both in terms of focus, and of quality. I am happy to recommend this for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you for your updates. I am satisfied with the quality of this work and I recommend its acceptance.\"}",
"{\"title\": \"My update\", \"comment\": \"I\\u2019ve checked the changes in the paper and I\\u2019ve read the correspondence between reviewers and the authors in detail. In particular, I am very grateful to Hao Tang for his involvement in the process and his questions and authors\\u2019 replies clarified a few things for me. The addition of sampled training data experiments is a welcome addition to the paper and a good spot by reviewer #3. The whole discussion made my understanding of the paper clearer and I\\u2019m happy to increase my score as I think the community will profit from further development of theory explaining generalizations of different NN architectures, especially when well experimentally supported. The presented paper does solid work on this for reasoning tasks, and in my opinion it warrants a publication.\"}",
"{\"title\": \"Update\", \"comment\": \"Thank you for adding the experiments.\\n\\nI've decided to retain my original rating.\"}",
"{\"title\": \"Update\", \"comment\": \"Dear Reviewers and AC,\\n\\nWe have updated our draft to incorporate the nice suggestions of the reviewers. In particular, we have made the following changes:\\n\\n- We have added additional experiments to show test accuracy v.s. training set size on sub-sampled training sets to further support our theory. The results are shown on Figure 4 at page 7, which is also discussed in Sec 4.3. We thank Reviewer 3 for the good suggestion of probing the effect of the number of samples empirically. \\n- Thanks to Reviewer 1 for a helpful comment that points out an imprecise statement. We changed it to make it more accurate, and added a discussion at the end of Sec 3 (page 4) regarding reasoning algorithms whose structure is obtuse, and regarding approximation algorithms. This should clarify the the range of problems we address in this paper, and how our results relate to various situations.\\n- We have added the related work as suggested by Reviewer#2.\\n- We will improve other minor points in the final version. \\n\\nIn addition, we have clarified all the concerns and confusion of the public comment regarding our theoretical parts. \\n\\nPlease let us know if you have additional questions. \\n\\nThank you,\\nAuthors\"}",
"{\"title\": \"Confusion comes from reader\\u2019s incorrect assumption\", \"comment\": \"Thanks for your interest again. We clarify your confusion below.\\n\\n1. The reader is confused because the reader\\u2019s assumption -- \\u201cthe sample complexity to approximate sum/mean pooling by MLP is not high compared to that to approximate a single step in DP.\\u201d is indeed not correct. For-loops, including sum/max over functions of all objects, have high sample complexity for an MLP to learn by Thm 3.5, compared to a single step in DP, which is usually a function on a pair of objects (e.g., Bellman-Ford relaxation in Fig. 2). We also discuss this in Sec 3.1, 3.2, and show an example with sum-pooling in Corollary 3.7, where sample complexity increases polynomially with the number of objects to loop over. GNNs could avoid learning such for-loops in DP algorithms so they generalize well. Although Thm 3.5 also has simplifying assumptions, e.g. using gradient descent with infinitesimally small steps, it aligns well with our experimental results (Fig.3). \\n\\n2. This is a good question. Indeed, our bound suggests reasonably deep GNN should generalize well, even if its number of iterations is higher than the DP iteration. As the reader suggests, we have run additional experiments with GNN10 (each sub-module is a 4-layer MLP) on summary statistics task, where GNN1 already performs well. Our experiment shows GNN10 performs equally well as GNN1. Thus, the experiment aligns with our theory here. We also found this result interesting and will expand on it a bit more in the final version. For example, it contrasts with what has been observed in GNN node classification tasks on social networks etc [1], where without JK, 2-layer GNN often perform the best and deeper GNN perform worse. There are several differences between our settings and theirs, one being adaptivity (different algorithm steps and number of steps we shall act on each node) is often needed for different nodes depending on subgraph structures (expanders vs. trees) in node classification tasks [1], which is not the case in many reasoning tasks. Moreover, note that our GNN formula for reasoning (Eqn 2.2) is different from GCN and GIN, e.g. GCN uses one-layer perceptron but our reasoning GNN (Eqn 2.2) uses MLP. Our reasoning GNN also explicitly models pairwise functions but GCN and GIN do not. This makes a difference too, so for failures of deep GCN on node classification tasks do not necessarily hold here.\\n\\n[1] Representation learning on graphs with jumping knowledge networks. ICML 2018.\\n\\nHopefully this clarifies your confusion. Please let us know if you have other questions. Again, we appreciate your interests.\"}",
"{\"title\": \"Still confused\", \"comment\": \"Dear authors,\\n\\nThanks for the detailed response. And please feel free to correct me. It is one of the advantages of OpenReview that our readers can directly consult the authors.\\n\\nI have read your reply carefully. But I still have a few concerns about the theoretical part. \\n1. In my understanding, a fair comparison should give both MLP and GNN the same power of oracles. And then, I tried to derive the sample complexity bound of MLP given your oracle. In my understanding, the sample complexity of MLPs is not high compared to GNNs when approximating the sum pooling, e.g. \\u201cHow many objects are either small cylinders or red things?\\u201d in Sec. 4.1. (Your for-loop argument for sum pooling is still confusing for me). However, it would be difficult for MLPs to approximate max pooling especially when generalizability is considered. So, does the \\\"for-loop\\\" increase the sample complexity? Or it's the max pooling that the MLPs are difficult to approximate in a sample efficient way?\\nFrom another perspective, in my understandings, many GNN variants are shared-weight and highly-regularized MLPs, such as GCN, GIN, and your GNN in Sec.2 (without softmax). For a fixed-dimensional input, given a GCN/GIN/your-GNN, it's easy to construct a MLP that will perform exactly the same on any input. And the key to those GNN variants' sample efficiency is those constraints from GNNs' architecture that restrict or apply a strong prior distribution of the solution space. However, the oracles can apply similar constraints to a general MLP by approximating their intermedia output. Therefore, the sample complexity of GNNs+oracle and MLP+oracle should be similar. This is my original point. I wasn't aware of the difficulty of approximating max pooling by MLPs, although. \\n(An assumption utilized in these arguments is that the sample complexity to approximate sum/mean pooling by MLP is not high compared to that to approximate a single step in DP. It aligns with my intuition, but please feel free to correct me. I will also do some experiments later. The topic is interesting for me anyway.)\\n2. Since the paper is talking about the generalizability and sample efficiency, I assume overfitting is a related topic. According to your sample complexity bound formula, 2k-layer GNNs just need twice as many data samples as k-layer GNNs to achieve similar generalizability for the same DP problem for any sufficiently large k (a simple proof can be found later). This is where I found the formula counter-intuitive. In my understanding, the generalizability or sample complexity of GNNs should generally scale at least quadratically or even exponentially with the layer number without assumptions about residual connections or normalization layers. If your sample complexity bound is reasonable, it would be a very strong and useful conclusion at least for me. Regardless of the theoretical part, if the authors could show the conclusion experientially by comparing the test-error-distribution with respect to the data sample numbers for GNNs of different layer numbers, it would be still a very interesting contribution. \\nThe failure cases of GNNs are also referred to those Deep GNNs. It's reasonable to see GNN9-30 overfitting and therefore less generalizable. \\n\\nAs stated in my first comment, I am not underrating the paper. 
Instead, I think the paper does a good job in making the abstract concept, relational inductive bias, concrete from different perspectives as stated by the authors. The experimental results also align with the analysis. The insight could be helpful in the future. But I hope that the theoretical part could be more accurate to avoid misunderstandings.\\n\\n================= some proofs ==================\", \"claim\": \"According to your sample complexity bound formula, 2k-layer GNNs just need twice as many data samples as k-layer GNNs to achieve similar generalizability for the same DP problem for any sufficiently large k.\", \"proof\": \"For any DP problem, there will be a layer number, $k$, that is large enough so that each MLP module can just learn a simple function and therefore the maximum sample complexity of MLP submodules is bounded (I think this is also assumed in the paper). Then, for the $2k$-layer GNNs, the $k+1$ to $2k$th GNN layers just need to learn the identity functions, which could be easy for many GNN variants (e.g. the GAT variants). Therefore, the $\\\\max_i C_{A_i}(f_i, \\\\epsilon, \\\\delta)$ part won't change much and the sample complexity bound scales linearly.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for appreciating our work and giving the nice suggestion. It would indeed be very interesting to see sample sizes for different architectures and tasks in practice. However, the number of samples needed for models like MLP to learn the more complex tasks, e.g. DP, would be very high, so the experiments will be prohibitively expensive. We are considering experimenting models on smaller training set and plot accuracy v.s. sample size to showcase the trend. We have included the experimental results in the revised version (Fig 4 and Sec 4.3).\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your constructive feedback. Reviewer points out an imprecise statement/conclusion in our paper. We have adopted the reviewer's suggested version in the revised revision.\\n\\nReviewer asks whether neural networks can learn tasks where the algorithm is not known. Our answer is the algorithm we hope to learn does not need to be known, but knowing the structure of the algorithmic solution can help with designing architectures and theoretical guarantees. For example, our experiments (Sec 4.1, 4.3) show that different architectures that align to different algorithms can both learn the task well. \\n\\nReviewer asks to more carefully consider the situation where the algorithmic solution exists but is obtuse. In this paper, we focus on reasoning tasks whose underlying algorithm is exact and has clear structure, and leave the study of approximation algorithms (do not solve the task exactly) and unknown structures for future work. We discussed this at the end of Sec 3 at page 4. This should clarify the the range of problems we address in this paper, and how our results relate to various situations.\\n\\nIn the case where we face a problem where we do not have knowledge about the underlying algorithmic structure, in order to still generalize well, we think neural architecture search over the algorithmic structure space could be a promising future direction. We will discuss these in the final version.\"}",
"{\"title\": \"Response\", \"comment\": [\"Thank you for your helpful feedback. We answer your questions below.\", \"\\u201cdifference to Kolmogorov complexity is that any algorithmic alignment that yields decent sample complexity is good enough - how do you define decent?\\u201d. Here, \\u201cdecent\\u201d is a loose term we use to refer to a tight enough algorithmic alignment for good generalization performance. We will explain more in the revised version.\", \"\\u201cYou state: \\u2018in Section 4, we will show that we can usually derive a near-optimal alignment by avoiding as many \\u2018for loops\\u2019 in algorithm steps as possible.\\u2019 yet I did not see that there\\u201d. One example is Section 4.2: DeepSets does not algorithmically align well with the relational argmax task. It has to learn the for-loops (Claim 4.1), which requires many samples. On the other hand, GNN algorithmically aligns well with relational argmax --- the for-loops are hard-coded in the computation graph. Therefore, GNN achieves better sample efficiency by avoiding learning the for-loop. We will make the connection clearer in the revised version.\", \"We will add the suggested reference and discuss the relation in the revised version.\"]}",
"{\"title\": \"General response\", \"comment\": \"We sincerely appreciate all the reviews, they give positive and high-quality comments on our paper with a lot of constructive feedback. We answer each reviewer\\u2019s questions individually. We will update the draft soon.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your interest in our work. We address your concerns below.\\n\\nWhile we agree that assumptions of our theorems are strong, we do not over-claim: We have clearly stated our assumptions in the paper and discussed the relation to practice (Sec 3.2). We also write in the introduction that we provide \\u201cinitial theoretical support\\u201d to show that algorithmic alignment is desirable for generalization under \\u201csimplifying assumptions\\u201d. Several theoretical works on deep learning at times make simplifying assumptions. Still, these works have led to interesting insights and triggered many follow-up works. The main goal of our paper is to introduce the perspective of algorithmic alignment and take the first formal initiative towards understanding the interplay of reasoning tasks and NN architecture. Moreover, as we have discussed in Sec 3.2, in our experiments, all models are trained end-to-end. The experimental results agree with our theoretical results despite our sequential assumption, so we believe that future work can extend our theoretical results to end-to-end learning. \\n\\n\\nHowever, we strongly disagree with the reader\\u2019s other concerns. \\n\\n- \\u201cIf such oracles are available for MLPs, the sample complexity bound of MLPs would be the same as or even lower than that of GNNs. The comparison between GNN and MLP's sample complexity is therefore unfair.\\u201d This is not correct. Our comparison is fair. Although we assume an oracle for each sub-module in Thm 3.6, we do *not* assume oracles for individual layers in the MLP modules of GNN. If we add oracles to every layer of both the MLP and each MLP module in GNN as the reader suggests, we can still show that GNN has a better sample complexity. Intuitively, this is because the giant MLP still needs to learn the entire for-loop. On the other hand, GNNs do not need to learn the for-loop because it is encoded in the architecture (Fig. 2). In our theorem, we try our best to keep the number of oracles small so that it is close to practice, where models are trained end-to-end. Therefore, we do not assume oracles in MLP layers. Also, our theorem agrees with experimental results, so future work may further relax the assumption.\\n\\n- \\u201c[Our theorem] induces that increasing the depth of GNNs will not have a huge or dramatic influence on the generalizability or sample efficiency, which is counter-intuitive. \\u201d This is not correct. Increasing the depth of GNNs is crucial to achieving better algorithmic alignment for some tasks (Sec 4.3) and therefore improving sample efficiency. If the depth of the GNN is not sufficient, at least one of the sub-modules needs to learn for-loops. But GNNs with more iterations can align better and avoid such for-loops. One example is the shortest paths problem [Figure 3c]: The number of GNN iterations is the key to good performance. For other tasks, e.g. Fig 3ab, increasing GNN depth is not so necessary. Based on our theory and experiments, both GNN1 and GNN3 can perform well on simple relational argmax tasks. We hope this clarifies your concerns. \\n\\n- \\u201cI believe in the intuition about relational inductive bias and that GNNs are truly more sample efficient than MLP on many relation-related and reasoning-related tasks.\\u201d We would like to clarify that our intuition is more specific than what the reader describes. 
We not only formalize the relational inductive bias of some popular reasoning architectures, but we also characterize *which tasks* GNN does well, and provide examples where GNN fails.\", \"reply_to_minor_concerns\": [\"\\u201cIt may be better to show some failure cases of GNNs.\\u201d We have shown a failure case in the paper -- GNNs fail on the subset-sum task in Fig 3(d), while NES, an architecture that aligns better with the task, generalizes well.\", \"\\u201cIt's reasonable to see GNN7 behaves poorly on (a) and (b) tasks, i.e. summary statistics and relational argmax, in Figure 3.\\u201d This is not correct. GNN7 performed well in our experiments (we do not show performances of all GNN depths in paper due to space limit).\"]}",
"{\"title\": \"Concerns about the theoretical part (sample complexity bound, def. of algorithm alignment etc.)\", \"comment\": \"Dear authors,\\n\\nThanks for sharing the work. In my understanding, the paper is aiming at formalizing the relational inductive bias intuition in [1] into a more concrete concept (the algorithm alignment), theoretically proving the advantages of GNNs, and experientially evaluating the claim. \\nI believe in the intuition about relational inductive bias and that GNNs are truly more sample efficient than MLP on many relation-related and reasoning-related tasks. And I won't disagree with that the algorithm alignment is a promising direction of formalizing GNNs' relational inductive bias. The experimental part of this paper does show some promising results aligned with those intuitions and provides some analysis of GNNs' power on learning different algorithms according to the experimental results. The paper is overall an interesting paper even without the theoretical part. \\n\\nHowever, I do have some concerns about the theoretical part of this paper. Most importantly, based on a very strong assumption (Sequential learning in Theorem 3.6.), the sample complexity bound (Theorem 3.6.) and then the algorithm alignment definition (Definition 3.4.) proposed in the paper are somehow restrictive and counter-intuitive. The strong assumption not only makes the comparison between MLP and GNNs' sample complexity unfair but also may mislead GNNs' architecture design in the future.\\n1. The sequential learning assumption is a very strong assumption even in the field of PAC learning etc.. It assumes oracles that can supervise each MLPs' behaviours in the neural networks. Actually, MLPs are all composed of several MLPs. If such oracles are available for MLPs, the sample complexity bound of MLPs would be the same as or even lower than that of GNNs. The comparison between GNN and MLP's sample complexity is therefore unfair. Also, it is kind-of inaccurate to compare the sample complexity bound to support some claims in the paper, although the intuitions are reasonable. \\n2. The algorithm alignment definition, which is induced by the bound analysis in Theorem 3.6, is somehow counter-intuitive. $n\\\\cdot \\\\max_iC_{A_i}(f_i, \\\\epsilon, \\\\delta)\\\\le M$. For example, the sample complexity scales linearly with the MLP modules' number $n$. It induces that increasing the depth of GNNs will not have a huge or dramatic influence on the generalizability or sample efficiency, which is counter-intuitive. \\n\\nI would suggest the authors to put the strong assumption in a more conspicuous position to avoid misunderstandings of readers (e.g. in the introduction). Otherwise, the authors could relax the bound or simply justify the intuition according to some results in the cognitive science field. It is still interesting to see that GNNs are experientially good at learning DP and the other experimental results as well.\", \"some_less_important_concerns_are_listed\": \"1. It may be better to show some failure cases of GNNs. For example, it's reasonable to see GNN7 behaves poorly on (a) and (b) tasks in Figure 3. \\n2. Sample complexity bound $\\\\neq$ sample complexity. Maybe should be less confident while comparing MLPs and GNNs.\\n3. The connections between problem/task-alignment and algorithm-alignment are not clear enough. \\n4. The first paragraph in Section 3 could be more accurate. 
For example, the performance difference of different modules may come from the data quality, optimizer and hyper-parameter tuning ability etc... \\n\\n[1] Battaglia, Peter W., et al. \\\"Relational inductive biases, deep learning, and graph networks.\\\" arXiv preprint arXiv:1806.01261 (2018).\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a framework, dubbed algorithmic alignment, based on PAC learning and sample complexity, with the aim to explain generalization on reasoning tasks for different neural architectures. The framework roughly states that in order for the model to be able to learn and successfully generalize on a reasoning task, it needs to be able to easily learn (to approximate) steps of the reasoning tasks. The authors use this framework to propose an increasingly difficult set of tasks, designed to showcase the type of models that would be fit or unfit to solve them. The resulting experiments corroborate the theory, showing the limits of MLPs, Deep Sets, and consequently Graph Neural Networks. The final claim that an NP-hard task needs an enumerative architecture, and then experimental validation of that claim is nice and fits into the theory.\\nThe added benefit of the paper is that the authors show as a side-effect that visual question answering and intuitive physics\\n\\nOverall, the paper presents a meaningful contribution to the theory of learning, formalizing the means of quantifying the capabilities of architectures to solve tasks of certain complexity. The paper, though dense, is well well written, and carries an interesting conclusion that better algorithmic alignment brings the sample complexity down, i.e. models with better algorithmic alignment to the task (function they want to approximate) should generalize better.\\nThe formalization presented in the paper, though remarkably intuitive, might be difficult to practically use for more elaborate models and it is not clear whether it can be numerically computed. The paper (i.e. the reader) would certainly benefit from more examples of algorithmic alignment comparison of different models, such as one done in Corollary 3.7.\", \"question\": [\"difference to Kolmogorov complexity is that any algorithmic alignment that yields decent sample complexity is good enough - how do you define decent?\", \"You state: \\u201cin Section 4, we will show that we can usually derive a near-optimal alignment by avoiding as many \\u201cfor loops\\u201d in algorithm steps as possible.\\u201d yet I did not see that there. Was that effectively shown in Corollary 3.7?\"], \"slightly_related_work\": \"On the Turing Completeness of Modern Neural Network Architectures\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work seeks theoretical and empirical proof of the reasoning capacity of neural networks. The authors build on a body of research that demonstrates the usefulness of different neural network architectures for different reasoning problems. For example, Deep Sets have been proposed to answer questions about sets (e.g., a summary statistic), and GNNs about graph related problems, such as shortest path.\", \"i_anticipate_that_readers_would_be_very_satisfied_with_the_intuition_behind_the_main_result\": \"neural networks that \\u201calign\\u201d with known algorithmic solutions are better able to learn the solutions. Many architectures have been proposed over the years, often with a high-level justification for the architecture\\u2019s form. For example, Relation Networks noted the difficulty with learning n^2 relations using an MLP, which is an observation reflected in this work\\u2019s explanation of the difficulty with learning a for loop.\\n\\nProvided here is a justification for these high-level design decisions. The authors provide some theory and experimental results to demonstrate their proposed notion of alignment, and show that NNs that align with known algorithmic solution do well, while those that do not align do not do well. In particular, I appreciate both the positive and negative evidence, since demonstrating lack of alignment (and poor performance) is a necessary condition to show alongside alignment (and good performance).\\n\\nI\\u2019d like to caution the authors regarding their main conclusion, which is stated a few times in the paper:\\n\\n\\u201cThis perspective suggests that whether a neural network can learn a reasoning task depends on whether there exists an algorithmic solution that the network aligns with\\u201d.\\n\\nI think this logic is not precisely correct, and I would modify this to:\\n\\n\\u201cIf the structure of a neural network aligns with a known algorithmic solution, then it can more easily learn a reasoning task than a neural network does not align\\u201d. \\n\\nThis is a subtle but important difference. In particular, the original logic does not capture situations where an algorithmic solution is not known, but a neural network can otherwise still learn a solution (consider object classification). I think even the corrected logic as I\\u2019ve spelled it out above might not be quite right either, since it does not consider situations where the algorithmic solution exists, but it obtuse. Would a neural network easily learn such a task? \\n\\nOverall I think the paper is clearly written, and the experiments are adequate. Unfortunately I am not well-versed in the theoretical literature on this topic, so my assessment of the proofs is limited, and I will need to defer to the other reviewers on these matters. My surface level assessment of them is that the logic seems generally sound, but I cannot make any strong statements placing them in the context of previous work, nor can I properly evaluate the nuances. Nonetheless, as a whole, I think this is a strong contribution and a nicely put together piece of work.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a measure of classes of algorithmic alignment that measure how \\\"close\\\" neural networks are to known algorithms, e.g. dynamic programming (DP). The measure is based on the number of samples needed such that the expected generalization error is less than epsilon with 1-delta probability, where epsilon and delta are free parameters.\\n\\nThe paper proves the link between several classes of known algorithms and neural network architectures by showing how their sample complexity varies. For instance the paper shows that Graph Neural Network (GNN), can approximate any DP algorithm in a sample efficient manner, whereas MLP and deep sets (permutation invariant NN) can't. The paper empirically verifies their claims on 4 toy datasets, each representing an increasingly complex algorithm needed to solve the problem. \\n\\nI recommend this paper be accepted, since I think it's an important direction of research, and it formalizes a lot of intuition about neural network architectures.\\n\\nIt would be very interesting if the authors could actually compute the number of samples, M, for different NN architectures on the toy datasets, and show how it matches empirical findings. This could be a powerful tool if it could be made easy to use for the common practitioner.\"}"
]
} |
B1e-kxSKDH | Structured Object-Aware Physics Prediction for Video Modeling and Planning | [
"Jannik Kossen",
"Karl Stelzner",
"Marcel Hussing",
"Claas Voelcker",
"Kristian Kersting"
When humans observe a physical system, they can easily locate components, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions. For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem. In this paper, we present STOVE, a novel state-space model for videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in a compositional manner and improves on previous work by reusing the dynamics model for inference, accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over hundreds of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample-efficient model-based control, in a task with heavily interacting objects. | [
"self-supervised learning",
"probabilistic deep learning",
"structured models",
"video prediction",
"physics prediction",
"planning",
"variational auteoncoders",
"model-based reinforcement learning",
"VAEs",
"unsupervised",
"variational",
"graph neural networks",
"tractable probabilistic models",
"attend-infer-repeat",
"relational learning",
"AIR",
"sum-product networks",
"object-oriented",
"object-centric",
"object-aware",
"MCTS"
] | Accept (Poster) | https://openreview.net/pdf?id=B1e-kxSKDH | https://openreview.net/forum?id=B1e-kxSKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9AvyA73Dd",
"r1lMMgfssr",
"H1lg7JLcsB",
"HkgCxPNusB",
"rJlPFpxdiH",
"HylUcrfQsr",
"ryg8vXzXjr",
"rJlOFefQjr",
"SJxYFRbQoS",
"BJgAmQbAFB",
"S1gNkPJnYr",
"rylOjQdDuS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798739185,
1573752842483,
1573703448322,
1573566197941,
1573551487065,
1573229965856,
1573229406437,
1573228672493,
1573228161233,
1571848998489,
1571710683789,
1570370463812
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2050/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2050/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2050/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2050/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2050/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2050/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2050/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2050/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2050/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2050/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2050/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper presents a method for modeling videos with object-centric structured representations. The paper is well written and clearly motivated. Using a Graph Neural Network for modeling latent physics is a sensible idea and can be beneficial for planning/control. Experimental results show improved performance over the baselines. After the rebuttal, many questions/concerns from the reviewers were addressed, and all reviewers recommend weak acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to updated review #3\", \"comment\": \"Thank you for updating your evaluation. We are glad that the revision addressed your concerns.\"}",
"{\"title\": \"Reviewer #3 response to author response\", \"comment\": \"Thank you for the clear response and updated document. I have updated my review in response as I now believe that the paper should be accepted.\"}",
"{\"title\": \"Response to updated review #1\", \"comment\": \"Thank you for updating your review. We will make sure to stick to the term \\\"graph neural network\\\" in the camera-ready version.\"}",
"{\"title\": \"Reviewer (#1) response to author response\", \"comment\": \"Thank you for your detailed response. My questions and comments are addressed and I think the revised version of the paper meets the bar for acceptance at ICLR.\", \"one_minor_note\": \"In the revised version of the paper, you use \\u201cgraph network\\u201d and \\u201cgraph neural network\\u201d interchangeably \\u2014 maybe you could consider consistently just using either one of the two terms to avoid potential confusion.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Dear Reviewer 1,\\n\\nthank you for your valuable feedback.\\nBelow, we give a detailed response to your questions and comments.\\nPlease also see the changes to the manuscript outlined in our top level comment.\\n\\n[Added a \\\"Model Figure\\\"]\\nWe have revised Figure 1 to include a visualisation of the latent space and the corresponding recognition distributions. We hope this clarifies the model structure.\\n\\n[Introduction to SuPAIR]\\nWe chose to omit details on SuPAIR as they are not required for understanding STOVE - in principle, any image model delivering a likelihood p(x | z_where) based on location information z_where could be used in its stead, including AIR. As said, we mainly chose SuPAIR due to its fast training times. If you have specific suggestions for what should be clarified about SuPAIR, we will be glad to do so.\\n\\n[Color vs. Grayscale]\\nFor the video modeling task, we use grayscale images in which all objects are the same shade of white. Color has been added to Figure 2 to make it more readable. For the RL task, we use colored images such that the models may recognize the object which is controlled by the agent. The mean values per color channels are added to each objects state, as a simple encoding of appearance. We clarified this in the revision.\\n\\n[RL experiments]\\nThe main motivation of our RL experiments is to demonstrate planning based on an object-aware dynamics model learned on purely visual input, which to our knowledge has not been done in prior work. Wang et al. use GNNs very differently from us, by employing them in a model-free policy network. Sanchez-Gonzalez et al., like us, use GNNs as a dynamics model for planning, but assume access to the ground truth states as opposed to inferring them from images.\\n\\n[Realistic Rollouts]\\nWe find that STOVE significantly improves upon prior work in that it predicts physical behavior across long timeframes, instead of stopping or teleporting objects. We quantify this in the revision by plotting the conservation of kinetic energy in the rollouts, which STOVE achieves up to at least 100,000 steps, while DDPAE and SQAIR break down after less than 100. See (1) of our top level comment and the animated GIFs in our anonymized GitHub [1].\\n\\n[Ablations]\\nIn the revision, we provide results for three ablations (see (3) in our general comment and Table 1), including two with an ablated state representation. We did not explore AIR as an alternative object detector, since we chose SuPAIR for its faster training times. We do not claim, or even expect, that AIR would perform worse.\\n\\n[Steenkiste et al.]\\nFor a visual evaluation, please compare our animated rollouts [1] with the ones presented by Steenkiste et al. [2, very bottom]. We find that STOVE more accurately captures object permanence and energy conservation. We decided against a quantitative comparison due to qualitative differences:\\n(a) R-NEM requires around 10 given observations before the iterative inference procedure converges to a good segmentation,\\n(b) it does not explicitly model object positions, and\\n(c) it requires noisy input to avoid local minima.\\nWe have instead added DDPAE as a baseline. See (2) in our top level comment.\\n\\n[1] https://github.com/ICLR20/STOVE\\n[2] https://sites.google.com/view/r-nem-gifs/\\n\\n[RL Performance]\\nThe performance of MCTS+STOVE was very close to the performance of MCTS on the ground truth environment. 
This indicates that the weak point of the agent was not the model (STOVE), but rather the planner, and that more thorough planning would allow it to match PPO's performance. Since the goal of our RL experiments was to highlight the applicability and sample efficiency of our model in the RL domain, we opted for an off-the-shelf planner instead of tuning for final performance. \\n\\n[Suggested Related Work]\\nThank you for the references, we have added them.\\n\\n[Reuse of Dynamics Model]\\nPrevious models, such as SQAIR and DDPAE, use an inference distribution $q(z_t | x_t, z_{t-1})$ which is entirely separate from the generative dynamics model $p(z_t | z_{t-1})$. We argue that this is wasteful, as much of the knowledge captured by the generative dynamics model is also relevant for the inference network. We therefore reuse it in our formulation of the inference network (Eq. 2), saving model parameters and regularizing training. We explore the benefits of this in one of the new ablations (\\\"double dynamics\\\").\\n\\n[Failure Modes]\\nThe main failure mode is that the inductive bias in the image model is insufficient to reliably detect objects. See Stelzner et al. for a discussion of noisy backgrounds in SuPAIR. In addition, our matching procedure assumes that objects move continuously.\\n\\n[Occlusion]\", \"occlusion_is_explicitly_modelled_in_supair\": \"If objects overlap, the hidden parts of the occluded object are treated as unobserved, and therefore marginalized during the evaluation of the object appearances' likelihood.\\n\\nWe hope that the changes made will address your concerns and look forward to further discussion.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Dear Reviewer 3,\\n\\nthank you for your valuable feedback.\\nBelow, we give a detailed response to your questions and comments.\\n\\n[Realistic Rollouts]\\nWe have quantified the notion of realistic rollouts by adding a plot of the kinetic energy in the billiards ball system across prediction timesteps. This energy should be conserved, as collisions are fully elastic and energies thus remain constant in the training data. For STOVE, the mean energy remains constant even over extremely long timeframes (we checked up to 100,000 steps), whereas for the baselines, it quickly diverges (after less than 100 steps). While in chaotic systems like the billiards environment, model predictions will necessarily differ from the ground truth after a number of timesteps, it is a desirable property of STOVE to continue to exhibit physical behavior. In contrast, all baselines predict overlapping, stopping, or teleporting objects after a short period. This can be observed visually in our animated GIFs [1].\\n\\n[1] https://github.com/ICLR20/STOVE\\n\\n[Unsupervised Learning]\\nWe agree that 'self-supervised' is a good term for STOVE. However, we do not view the end-to-end learning approach of STOVE as equivalent to decomposing the task into two distinct steps, one for feature extraction and one for supervised prediction. Kosiorek et al. (SQAIR) have shown that training dynamics and recognition models jointly can significantly improve object detection performance through the incorporation of a temporal consistency bias. We therefore believe that maintaining this coupling is a valuable feature of STOVE. In any case, the successive training of SuPAIR and dynamics model is more brittle and raises the need for additional auxiliary losses (as in Watters et al. (2017)), such as a carefully tuned discounted rollout error.\\n\\n[Contribution]\\nAs requested, we have added detailed information on the graph neural network and other components of STOVE to the appendix. We disagree with the assessment that our paper's main contribution is the graph network architecture. The benefits of relational architectures for multi-object dynamics tasks have previously been demonstrated, e.g. by Battaglia et al. (2016) and Watters et al. (2017). What has not been done before is to employ them in a setting in which state information is entirely latent, and only raw video is available. Our main contributions are to show how to do this (structured latent space, reuse of the dynamics model, joint variational inference), and to demonstrate that this enables predictions of comparable quality to the supervised setting with observed states. This comparison does not merely evaluate SuPAIR, but rather the techniques we proposed for connecting image and dynamics models.\\n\\n[Ha & Schmidhuber]\\nWe compare to VRNN, which belongs to the same class of model as the one Ha & Schmidhuber propose. Both encode input images via a VAE, and model the dynamics of the latent state via an RNN. It has been repeatedly demonstrated in the literature that models with object-factorized state representations such as STOVE outperform models with unstructured states, and our results support this, too. See e.g. the papers on SQAIR (Kosiorek et al., (2018)), and DDPAE (Hsieh et al., (2018)). 
We therefore deem a comparison to VRNN as a representative of unstructured models sufficient.\\n\\n[Diverse Number of Objects]\\nEven though we did not explore this in this paper, one of the main appeals of both GNNs and AIR-based models is the ability to handle a variable number of objects. This is enabled by the GNNs focus on pairwise interactions. STOVE can thus be easily extended to handle a variable number of objects. As an ad-hoc demonstration, we provide an animated rollout with 6 objects on our GitHub [1].\\n\\n[Game Engine Learning]\\nBoth Ersen & Sariel and Guzdial & Riedl share our motivation of learning the rules of games from video, we have therefore added the references. However, they explore a very different setting, since they assume access to a curated set of sprites to handle object detection, and use logical rules instead of continuous dynamics to model interactions. We find it misleading to credit these works with being able to handle more complex visual environments, as the a-priori knowledge of pixel-perfect object appearances trivializes the detection task. The goal of the field of representation learning, including AIR and all of its derivatives, is to extract meaningful, potentially discrete information from noisy and continuous input data without relying on domain specific knowledge. While hand-engineered approaches to object detection would certainly work on the domains we considered here, the techniques we present in this paper generalize to different image models and different environments. It is our hope that models like ours will make it possible to apply logical reasoning to domains where it was previously impossible, because of their continuous and noisy nature, and the absence of domain-specific knowledge.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Dear Reviewer 2,\\n\\nthank you for your valuable feedback.\\nBelow, we give a detailed response to your questions and comments.\\nPlease also see the changes to the manuscript outlined in our top level comment.\\n\\n[Ablations]\\nWe have added results for three different ablations of STOVE, including the suggested one in which two separate dynamics nets are used for generation and inference, demonstrating the value of reusing the dynamics net. Please see (3) in our general comment and Table 1 in the manuscript. We have chosen not to explore black-box MLPs as dynamics models, as the benefits of graph neural networks for multi-object dynamics tasks are well documented in the literature, see e.g. Battaglia et al. (2016) and Watters et al. (2017). We therefore do not believe this to be a crucial baseline.\\n\\n[Appearance-Based Matching]\\nWe agree, and have tried matching procedures which involve object appearance encodings. However, one of the main features of SuPAIR in contrast to AIR is that it does not necessitate a latent encoding of the object appearance. This means that an encoder network would have to be 'tacked on' to the model in order to allow for appearance based matching, as mentioned in Section 2.4. We did not find this necessary, since for the settings we considered, STOVE precisely inferred object centers with a mean error of less than 1/3 of a pixel, which suffices even during collisions or in scenarios with partial overlap. We therefore leave the exploration of appearance-based matching to future work.\\n\\n[Visual Complexity]\\nThe visual complexity of scenes and robustness of SuPAIR with respect to visual noise has been explored by Stelzner et al.. We expect that these results translate to STOVE, i.e., that STOVE is able to handle background noise better than AIR (and, by extension, DDPAE and SQAIR). Figure 5 in the appendix shows that we are able to model scenes of differently shaped object sprites. However, we did not focus on this in this paper, as its main contributions are the techniques presented to combine image and dynamics models, as opposed to the performance of the specific image model used. Due to the compositional nature of STOVE, more sophisticated image models may easily be plugged in in place of SuPAIR. Finally, we note that the complexity of the experiments is in line with previous work (DDPAE, R-NEM). We choose to extend them by exploring the RL domain, which brings additional challenges, such as dynamics depending on object identities and actions.\\n\\n[Meaningful Improvement]\\nPlease see (1) of our top level comment, as we believe the energy conservation plot clearly demonstrates the stark performance improvements achieved with STOVE over prior work. While previous approaches break down after less than 100 frames of rollout, STOVE predicts trajectories with constant mean energy trajectories for 100,000 frames or more. Additionally, DDPAE and SQAIR predict overlapping, stopping, or teleporting objects after a short period. Apart from the added conservation plot, this is also apparent from the animated GIFs in our anonymized GitHub [1].\\n\\n[1] https://github.com/ICLR20/STOVE\"}",
"{\"title\": \"Revision\", \"comment\": \"We thank the reviewers for their valuable feedback and have revised the manuscript accordingly.\", \"the_main_changes_are\": \"1) We quantify the notion of 'realistic' rollouts by plotting the kinetic energy of rollouts on the billiards task (Fig. 4). This energy should be conserved, as collisions are fully elastic and no friction is applied. We find that the energy in STOVE's rollouts remains constant over very long timeframes (we checked up to 100,000 steps), whereas it quickly diverges for the baselines (SQAIR and DDPAE) after less than 100 steps. Additionally, all baselines predict overlapping, stopping, or teleporting objects after a short period. This stark difference in quality can be observed visually in our animated GIFs [1]. \\n\\n2) We have added DDPAE as a baseline for the video prediction tasks. According to its authors, DDPAEs is capable of handling complex interactions on the billiards task. In our experiments, it performs better than SQAIR, but significantly worse than STOVE.\\n\\n3) We have added results for three of the suggested ablations to table 1. They are:\\n a) Double Dynamics Networks (two separate dynamics nets for inference and generation),\\n b) No Velocity (no explicit modelling of the velocity within the state),\\n c) No Latents (no unstructured latent variables in the dynamics state, only positions and velocities).\\nWe find that they all perform consistently worse than full STOVE, demonstrating each component's value.\\n\\n4) We fixed a bug in our RL environment and adjusted the results accordingly. PPO now converges slightly faster (4M instead of 5M steps), but all high-level observations remain the same.\\n\\n5) We have improved the clarity of the writing.\\n\\n6) We have added a detailed description of the model architectures, hyperparameters, and baselines to the appendix.\\n\\n7) We have included the suggested additional references.\\n\\nWe hope these changes address the concerns expressed by the reviewers and look forward to further discussion.\\n\\n[1] github.com/ICLR20/STOVE\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents STOVE, an object-centric structured model for predicting the dynamics of interacting objects. It extends SuPAIR, a probabilistic deep model based on Sum-Product Networks, towards modeling multi-object interactions in video sequences. Compared to prior work, the model uses graph neural networks for learning the transition dynamics and reuses the dynamics model for the state-space inference model, further regularising the learning process. The approach has been tested on simple multi-body physics tasks and performs well compared to other unsupervised and supervised baselines. Additionally, an action-conditional version of STOVE was tested on a visual MPC task (using MCTS for planning) and was shown to learn significantly faster compared to model-free baselines.\\n\\nThe paper is well written and clearly motivated but comes across as an incremental improvement on top of prior work. Here are a few comments:\\n1. The idea of reusing the dynamics model for inference is neat as it helps to regularise the learning process and remove the costly double recurrence, potentially speeding up learning. It would be great if this could be evaluated experimentally via an ablation study \\u2014 this can be done by using two separate instances of the transition model with separate weights. \\n2. A keys step that allows to reconcile the transition model and the object detection network is the matching process. Currently, this is done via choosing the pair with the least position and velocity difference between subsequent time steps. This could give erroneous results in the case of object interactions when objects are fairly close to each other (or colliding). A potentially better way could be to additionally use the content/latent codes for this matching process \\u2014 as long as the object\\u2019s appearance stays similar these can provide good signal that disambiguates different objects.\\n3. The experiments presented in the paper are quite simplistic visually \\u2014 it is not clear if this approach can generalise to more complicated visual settings. Additionally, it would be good to see further comparisons and ablations that quantifies the effect of the different components \\u2014 e.g. comparing to a combination of image model + black-box MLP dynamics model can quantify the effect of the graph neural network. These results can add further strength to the paper. \\n\\nOverall, the approach presented in the paper is a bit incremental and the experiments are somewhat simplistic. Further comparisons and ablation experiments can significantly\\tstrengthen the paper. I would suggest a borderline accept.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper the authors present a graph neural network for modeling the dynamics of objects in simple environments from video. The intuition of the presented system is that it first identifies the different objects from the image using Sum-Product Attend-Infer-Repeat (SuPAIR), which gives the objects positions and sizes. The system uses a \\u201csimple matching procedure\\u201d to map objects between frames, which allows for the system to extra the object\\u2019s velocities. Then a graph neural network is employed to model the dynamics of the particular environment (whether objects bounce, whether there are other forces at play like gravity, etc.). The authors present two environments (Billiards and Gravity) and two evaluations, one focused on predicting future states, and the second focused on using these predictions to play the game.\\n\\nI think that this paper presents an interesting approach and I agree with the authors of the importance of developing approaches that allow AI to make good predictions of future environments. However, I\\u2019m not convinced of many of the technical details in the paper. \\n\\nI am not certain whether I would classify this work as unsupervised learning. While it\\u2019s certainly true that there are no labels in the raw video, the object-finding can be understood as a preprocessing step after which the data is in fact in a fairly standard supervised learning framework. The authors use the term \\u201cself-supervised\\u201d in the first section, which I believe describes the work more clearly. \\n\\nThe primary technical contributions of the work appear to be the graph network, the experiments, and their results. While I would have preferred more detail on the graph network in an appendix, it\\u2019s acceptable to instead have access to the code. However, the experiments seem set up primarily to evaluate the system as a whole. For example, the inclusion of a supervised learning version of the system where the object\\u2019s positions are given exactly sheds light on the quality of SuPAIR. However, SuPAIR is taken from prior work. I would have thought that an entirely different approach, like that used by Ha and Schmidhuber in their World Models paper would have been more appropriate as a comparison as it represents an alternate approach entirely. \\n\\nThere is a repeated claim made in the paper that the system presents output that is \\u201cconvincing\\u201d and \\u201crealistic\\u201d over hundreds of time steps. There is no clear definition given for what this means. Figure 1 only presents pixel and positional error for 80 frames, and the error appears to go pretty large (~15%) after only forty frames. The results presented in Figure 4 suggests a much larger timescale, but it\\u2019s unclear the quality of the output predictions from it. Some clarity on this or scaling back the claims would improve the paper. \\n\\nIn terms of related work Guzdial and Riedl\\u2019s 2017 \\u201cGame Engine Learning from Gameplay Video\\u201d appear to use a very similar approach (but with OpenCV instead of SuPAIR and search instead of a graph network) as does Ersen and Sariel\\u2019s 2015 \\u201cLearning behaviors of and interactions among objects through spatio\\u2013temporal reasoning\\u201d. 
These approaches also function over much more complex environments with variable numbers of objects. It would be helpful for the authors to continue adding some discussion of this and related papers. \\n\\n---\", \"edit\": \"In response to the author's changes I have increased my rating to a weak accept. This is in large part due to Figure 4, which provides a great deal of additional support to the author's claims and clarity on the technical value of the results.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a structured deep generative model for video frame prediction, with an object recognition model based on the Attend, Infer, Repeat (AIR) model by Eslami et al. (2016) and a graph neural network as a latent dynamics model. The model is evaluated on two synthetic physics simulation datasets (N-body gravitational systems and bouncing billiard balls) for next frame prediction and on a control task in the billiard domain. The model can produce accurate predictions for several time steps into the future and beats a variational RNN and SQAIR (sequential AIR variant) baseline, and is more sample-efficient than a model-free PPO agent in the control task.\\n\\nOverall, the paper is well-structured, nicely written and addresses an interesting and challenging problem. The experiments use simple domains/problems, but give good insights into how the model performs.\\n\\nRelated work is covered to a satisfactory degree, but a discussion of some of the following closely related papers could improve the paper:\\n* Chang et al., A Compositional Object-Based Approach To Learning Physical Dynamics, ICLR 2017\\n* Greff et al., Neural Expectation Maximization, NeurIPS 2017\\n* Kipf et al., Neural Relational Inference for Interacting Systems, ICML 2018\\n* Greff et al., Multi-object representation learning with iterative variational inference, ICML 2019\\n* Sun et al., Actor-centric relation network, ECCV 2018\\n* Sun et al., Relational Action Forecasting, CVPR 2019\\n* Wang et al., NerveNet: Learning structured policy with graph neural networks, ICLR 2018\\n* Xu et al., Unsupervised discovery of parts, structure and dynamics, ICLR 2019\\n* Erhardt et al., Unsupervised intuitive physics from visual observations, ACCV 2018\\n\\nIn terms of clarity, the paper could be improved by making the used model architecture more explicit, e.g., by adding a model figure, and by providing an introduction to the SuPAIR model (Stelzner et al., 2019) \\u2014 the authors assume that the reader is more or less familiar with this particular model. It is further unclear how exactly the input data is provided to the model; Figure 2 makes it seem that inputs are colored frames, section 3.1 mentions that inputs are grayscale videos (do all objects have the same appearance or different shades of gray?), which is in conflict with the statement on page 5 that the model is provided with mean values of input color channels. Please clarify.\\n\\nIn terms of novelty, the proposed modification of SQAIR (separating object detection and latent dynamics prediction) is novel and likely leads to a speed-up in training and evaluation. Using a Graph Neural Network for modeling latent physics is reasonable and has been shown to work on related problems before (see referenced work above and related work mentioned in the paper). Similarly, using such a model for planning/control is interesting and adds to the value of the paper, but has in related settings been explored before (e.g. Wang et al. (ICLR 2018) and Sanchez-Gonzalez (ICML 2018)).\\n\\nExperimentally, it would be good to provide ablation studies (e.g. a different object detection module like AIR instead of SuPAIR, not splitting the latent variables into position, velocity, size etc.) 
and run-time comparisons (wall-clock time), as one of the main contributions of the paper is that the proposed model is claimed to be faster than SQAIR. The overall model predictions are (to my surprise) somewhat inaccurate, when looking at e.g. the billiard ball example in Figure 2. In Steenkiste et al. (ICLR 2018), roll-outs appear to be more accurate. Maybe a quantitative experimental comparison could help?\\n\\nWhy does the proposed model perform worse than a model-free PPO baseline when trained to convergence on the control task? What is missing to close this gap?\\n\\nDo all objects have the same appearance (color/greyscale values) or are they unique in appearance? In the second case, a simpler encoder architecture could be used such as in Jaques et al. (2019) or Xu et al. (ICLR 2019).\\n\\nOverall, I think that this paper addresses an important issue and is potentially of high interest to the community. Nonetheless I think that this paper needs a bit more work and at this point I recommend a weak reject.\", \"other_comments\": [\"This sentence is unclear to me: \\u201cAn additional benefit of this approach is that the information learned by the dynamics model is reused for inference \\u2014 [\\u2026]\\u201d\", \"What are the failure modes of the model? Where does it break down?\", \"How does the model deal with partial occlusion?\", \"---------------------\", \"UPDATE (after reading the author response and the revised manuscript): My questions and comments are addressed and the additional ablation studies and experimental results on energy conservation are convincing and insightful. I think the revised version of the paper meets the bar for acceptance at ICLR.\"]}"
]
} |
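
The STOVE reviews above repeatedly probe the model's frame-to-frame matching step, which Review #2 notes pairs detections by the least position/velocity difference and suggests augmenting with content/latent codes. The sketch below illustrates one way such a matching step could look; it is an illustrative assumption, not the authors' implementation, and the optional `appearance_weight` term corresponds to the reviewer's suggested extension.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(prev, curr, appearance_weight=0.0):
    """Align detections in the current frame with those in the previous one.

    prev, curr: dicts with 'pos' and 'vel' arrays of shape (N, 2) and,
    optionally, latent 'code' arrays of shape (N, D).
    Returns indices that reorder curr's objects to match prev's.
    """
    # Constant-velocity prediction gives a cheap motion prior.
    pred_pos = prev["pos"] + prev["vel"]
    cost = np.linalg.norm(pred_pos[:, None] - curr["pos"][None], axis=-1)
    cost += np.linalg.norm(prev["vel"][:, None] - curr["vel"][None], axis=-1)
    if appearance_weight > 0.0:
        # Reviewer-suggested extension: penalize appearance mismatch too.
        cost += appearance_weight * np.linalg.norm(
            prev["code"][:, None] - curr["code"][None], axis=-1)
    # Globally optimal one-to-one assignment (Hungarian algorithm),
    # more robust than greedy nearest-neighbour pairing during collisions.
    rows, cols = linear_sum_assignment(cost)
    return cols[np.argsort(rows)]
```

Solving the assignment globally, rather than greedily picking the closest pair per object, is one cheap way to reduce the identity swaps the reviewer worries about when objects are close or colliding.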
r1glygHtDB | A multi-task U-net for segmentation with lazy labels | [
"Rihuan Ke",
"Aurélie Bugeau",
"Nicolas Papadakis",
"Peter Schuetz",
"Carola-Bibiane Schönlieb"
] | The need for labour intensive pixel-wise annotation is a major limitation of many fully supervised learning methods for image segmentation. In this paper, we propose a deep convolutional neural network for multi-class segmentation that circumvents this problem by being trainable on coarse data labels combined with only a very small number of images with pixel-wise annotations. We call this new labelling strategy ‘lazy’ labels. Image segmentation is then stratified into three connected tasks: rough detection of class instances, separation of wrongly connected objects without a clear boundary, and pixel-wise segmentation to find the accurate boundaries of each object. These problems are integrated into a multi-task learning framework and the model is trained end-to-end in a semi-supervised fashion. The method is demonstrated on two segmentation datasets, including food microscopy images and histology images of tissues respectively. We show that the model gives accurate segmentation results even if exact boundary labels are missing for a majority of the annotated data. This allows more flexibility and efficiency for training deep neural networks that are data hungry in a practical setting where manual annotation is expensive, by collecting more lazy (rough) annotations than precisely segmented images. | [
"multi-task learning",
"weak labels",
"semisupervised learning",
"image segmentation"
] | Reject | https://openreview.net/pdf?id=r1glygHtDB | https://openreview.net/forum?id=r1glygHtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oZfRXJSQw0",
"S1eydI-nor",
"Bket8AJcoS",
"rygpIbp4or",
"ryguveaEsH",
"SkxEis2VoS",
"H1garQn4sS",
"r1xY-m3VoH",
"B1g4ENhV9r",
"HkgaxjChYH",
"Hyg3vLW9Fr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798739156,
1573815911318,
1573678673423,
1573339477196,
1573339232125,
1573338012092,
1573335876612,
1573335809267,
1572287532454,
1571773172838,
1571587684126
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2049/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2049/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2049/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2049/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2049/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2049/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2049/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2049/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2049/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2049/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes an architecture for semantic instance segmentation learnable from coarse annotations and evaluates it on two microscopy image datasets, demonstrating its advantage over baseline. While the reviewers appreciate the details of the architecture, they note the lack of evaluation on any of popular datasets and the lack of comparisons with baselines that would be more close to state-of-the-art. The authors do not address this criticism convincingly. It is not clear, why e.g. the Cityscapes or VOC Pascal datasets, which both have reasonably accurate annotations, cannot be used for the validation of the idea. If the focus is on the precision of the result near the boundaries, then one can always report the error near boundaries (this is a standard thing to do). Note that the performance of the baseline models is far from saturated near boundaries (i.e. the errors are larger than mistakes of annotation).\\n\\nAt this stage, the paper lacks convincing evaluation and comparison with prior art. Given that this is first and foremost application paper, lacking some very novel ideas (as pointed out by e.g. Rev1), better evaluation is needed for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Acknowledging further comments\", \"comment\": \"Thank you very much. We really appreciate the further comments and suggestions.\\n\\nWe agree that it is possible to make the mainstream datasets smaller in order to examine the weak supervision methods\\u2019 performance in the scenario of less training data. In this context, a performance loss is to be expected with all weak supervision methods (including image level, or count level supervision that uses a few labels per image), but it would be interesting to see the quantitative results. \\n\\nAs we mentioned in the response to review #1, the suggested datasets are not always accurate enough, given that one of our main objectives is to find a precise segmentation. Selecting samples with accurate masks in these datasets or refining their masks will require a lot of manual work for such experiments. \\n\\nMoreover, we are concerned with accurate methods for images with poor contrast and for which finding the object contours is crucial (as in our response to reviewer #3). We are keen to explore other relevant datasets in such condition or create large datasets of this kind as benchmarks.\"}",
"{\"title\": \"Acknowledging the rebuttal\", \"comment\": \"I have read the author's response to all comments by other reviewers. My current remaining concern is -- we need to have more datasets so that the comparison with other papers is easier. The authors argue that one of their motivations to not use some of the mainstream datasets is that they are large. However the larger datasets can be pruned to make them small if they want to show their method's relevance.\\nI have updated my overall rating accordingly.\"}",
"{\"title\": \"Response to Review #2 (Part 1)\", \"comment\": \"We thank the reviewer for the comments and feedback. \\n\\n>> The motivation for the specific structure of the multi-task blocks is not clear\\n\\nThe design of the multi-task blocks is inspired by the architecture of U-net, which refines the feature maps in a sequence of increasing resolution on its expanding path. Let us go through the main structural ideas: \\n- In task 1 (detection), for each object instance, only a partial mask is learned (see for instance the bottom left of Figure 3 and the middle of Figure 6). The remaining part of the object instances that is not detected in task 1 can sometimes be large - many pixels in diameter. This remaining part should be detected in task 3.\\n- Each convolutional layer performs only local operations (typically on a few neighbourhood pixels). This motivates us to add residual blocks on the lower resolution levels in order to manage the step that is required to go from task 1 to task 3 labels. \\n- As the task 3 labels (segmentation) contain more information of the object boundaries than the task 1 labels (detection), we concatenate the feature maps for task 3 directly with the feature maps of task 2 (separation) in each multi-task block. This makes the boundary information of task 3 be shared with task 2. \\n\\nSince much fewer samples are annotated for task 3, we let task 1 and task 3 share parameters in each multi-task block in order to reduce some degrees of freedom for task 3. We also tried to apply multi-task blocks with separated parameters for task 1 and 3 (keeping the other parts of the network unchanged), but this suffers from a reduction in the segmentation accuracy. \\n\\n>> The object boundaries labels can be noisy (i.e s(2) can have noise). How does model deal with this?\\nThis is one of the challenges of learning with weak labels. In the creation of $s^{\\\\rm (2)}$ we allow a certain amount of noise. This is practical as this makes the user\\u2019s job easier when doing the annotations.\\n\\nThe label $s^{\\\\rm (2)}$ only focuses on certain parts of the boundaries, and instead of determining the boundaries up to a single pixel we pick wider regions (scribbles) which are likely to contain the boundary pixels. Therefore, the labels $s^{\\\\rm (2)}$ for training the model are already averaged over several neighbouring pixels around the boundary and as such render the trained model robust to small perturbations / noise in the labels. We do observe the capability of the network for doing so (see the right of Figure 5).\\n\\n>> Is it the case that every image in I_3 is completely labeled - i.e all segments/classes marked?\\n\\nYes, the set $\\\\mathcal{I}_3$ consists of images that are fully annotated. These annotations are derived by a combination of standard image processing methods using the weak labels $\\\\mathcal{I}_1$ as a starting point and manual corrections of the so-derived full segmentation mask by a user. This process does require more effort than annotating images from $\\\\mathcal{I}_1$ and $\\\\mathcal{I}_2$. However, we emphasize that the set $\\\\mathcal{I}_3$ is small (for example the size of $\\\\mathcal{I}_3$ is 2 for dataset 1 and 8 for dataset 2).\\n\\n>> The assumption that s(3) is independent of s(1) and s(2) is not true. Instead of constraining the model to learn masks that respect the various types of labels, it seems they learn from each source independently. 
It is not clear how the sharing of parameters in the multitask block helps.\\n\\n- The assumption is that they are conditionally independent rather than independent. In fact, given the image $I$, $s^{\\\\rm (3)}$ can be created independently to $s^{\\\\rm (1)}$ and $s^{\\\\rm (2)}$.\\n\\n- We trained our model for all tasks jointly. The effectiveness of transferring knowledge from the weak labels (WL) to the segmentation is reflected by the improvement from the U-net trained on the WL or the strong labels (SL) only (cf. Table 1). \\nWe propose to learn the segmentation in the form of multiple tasks instead of learning a mask that is constrained by the various sources of labels, as this explicitly exposes the network to the statistical information of the WL which is not necessarily reproducible from the masks. For instance, the segmentation mask is not sufficient for predicting the touching objects without clear boundaries between them. We also show that the approach works well with the unbalanced WL classes. In fact, no background pixels are labeled for the detection task, whereas those background pixels can be ignored if we only learn for a mask respecting the WL, given that the contrast between the background and the foreground can be poor.\\n\\n- Regarding the multi-task blocks, we referer the reviewer to the explanation of the motivation we gave at the beginning of the post. For task 3, overfitting can be an issue if the images that are labeled are extremely sparse. Sharing of labels between the task helps to prevent this.\"}",
"{\"title\": \"Response to Review #2 (Part 2)\", \"comment\": \">> Can they comment on the applicability of the prior work suggested above?\\n\\nThe Label super-resolution networks assume that the label is given at the low-resolution level (in the form of one label per block of the image) and known distribution of high-resolution labels is conditioned on the label of the low-resolution block. Our work also introduces a certain kind of low resolution labels, in the sense that the boundaries of the WL are noisy and inaccurate. We make use of the observation that the pixels that are in the partial mask (WL) set a strict constraint for the values of the SL, and do not assume that the distribution of the SL given the WL is known.\\n\\nIn practice, the annotation strategy is very different for this method compared to ours. Our WL gives more information as it includes details on separating and touching objects. This is not the case with the low-resolution level labels used by the Label super-resolution network. Therefore the mentioned prior work is not directly comparable to our approach. \\n\\n>> How do the rough labeling tools work on biomedical data where the objects are more heterogenous patterns\\n\\nIndeed, in the ice crystal and air bubble problem, the distributions of pixels are similar in the different classes, which makes this problem difficult to handle. Since our method provides accurate results on this problem, we believe the segmentation task should adapt easily to different distributions of pixels. Given the results we show on the Gland dataset (Subsection 4.2), we believe that our method should also generalize well to textured objects. At the same time, the detection should not be affected by heterogeneous patterns, but more extensive experiments would have to be conducted to validate this.\\n\\n>> Can this work be used for segmentation and prediction on crop data?\\n\\nWe are not familiar with prediction problems associated to crop data. Our work could be relevant in the case of crop image datasets containing small training sets.\\n\\n>> It seems as if the improvement over the PL baseline (pseudo labels) is incremental? Can the authors provide error bars so the reader knows what the significance of the results is?\\n\\nWe have added error bars to compare the two methods (see Figure 4). Thanks for the suggestion. In addition to the comparisons provided in Table 1 and Figure 4, we have demonstrated qualitative results in Figure 5, which shows that our approach gives accurate predictions on the contours of the objects, and this is one of our main objectives in the work.\\n\\n>> Can they give a more thorough comparison in terms of human effort? It is interesting to note that only 2 images give 0.82. Would 3 images give 0.94?\\n\\nWe included a comparison of the supervision from different amounts of SL in Table 2. The $20\\\\%$ SL ($4$ images), $30\\\\%$ SL ($6$ images) and $75\\\\%$ SL ($15$ images) give scores of $0.882$, $0.913$, $0.940$ respectively. Given 6x speedup on WL annotation time, the creation of $10\\\\%$ SL + WL is $3$ times faster than $75\\\\%$ SL which gives similar accuracy. \\n\\n>> What is the performance of MT U-net without the SL images (i.e without task-3)?\\n\\nThe MT U-net without SL (i.e., without task 3) degenerates into the U-net with WL. In fact, according to the loss function (5) we used, the loss on task 3 is always zero provided no SL available. 
This contributes to nothing in backpropagation during the network training, and therefore MT U-net degenerates into the U-net with WL. We have provided the results on U-net with WL (second row of table 1), which therefore covers the results of MT U-net without task 3.\\n\\n>> Table-3: How well does MDUnet do with 9.4% SL data?\\n\\nThe MDUnet is proposed in a fully supervised setting, and it is limited by the amount of SL, so is the single task U-net. We have demonstrated the relatively lower performance of the single task U-net (in Table 3, and also in Table 1), and similarly a significant performance reduction could be expected for the MDUnet if one reduces the SL from $100\\\\%$ to only $9.4\\\\%$.\"}",
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"We thank the reviewer for the comments and feedback. \\n\\n>> it is not completely clear, however, if the mentioned scribbles need to capture each instance in the training set, or if some can also be left out.\\n\\nWe used different subsets of images (i.e., $\\\\mathcal{I}_1$, $\\\\mathcal{I}_2$ and $\\\\mathcal{I}_3$) labeled for different tasks. Therefore, the scribbles (WL) do not need to capture every instance in the training sets. However, we do have an assumption that most of the images are weakly labelled. This is motivated by the fact that the WL are collected in a much cheaper way than the SL. As we mentioned in Section 3, the missing labels (either SL or WL) are incorporated using a weighted loss function (given by Equation (5)).\\n\\n\\n>> Experimental evaluation does not leave the low-number-of-classes regime, and I\\u2019m left wondering how the method might compare on a semantically much richer data set, e.g. Cityscapes\\n\\nThe successful examples we have shown make the proposed model relevant for a wide range of applications with a low number of classes (for example biomedical imaging data) and namely the practical problems we are working on.\\nIt would indeed be interesting to try the approach for a problem with a larger number of classes, but it would require specific data. Our method has been designed to address problems in which datasets have fewer images, not highly contrasted images and where finding very accurate contours is essential. Most semantically rich and large datasets, such as Cityscapes, have better contrast and the annotations are not always accurate enough.\\n\\n\\n\\n>> Finally, unmodified U-Net is by now a rather venerable baseline, so I\\u2019m also wondering how the proposed multi-task learning could be used in other (more recent) architectures, i.e. whether the idea can be generalized sufficiently.\\n\\nWe have not explored all possible architectures in our main segmentation network, but the idea of decomposition of the segmentation problem into the mentioned smaller tasks is to a certain extent agnostic to the underlying architecture and readily applicable to a lot of different settings. The choice of network architectures has some flexibility depending on the problem at hand, and the multi-task learning block could be used in other architectures that have multiple levels of resolution design (such as SegNet).\"}",
"{\"title\": \"Response to Official Blind Review #1 (Part 1)\", \"comment\": \"We thank the reviewer for the feedback and suggestions. \\n\\n>> The method is trained on 2 datasets: air bubbles, and ice crystals.\\n- In this work we tested our method on two datasets, namely the SEM dataset (having three classes, air bubbles, ice crystals and the background) and H&E dataset (gland histology images).\\n\\n>> Authors should compare with current methods such as Where are the Masks: Instance Segmentation with Image-level Supervision ...\\n\\nWe thank the referee for the suggestions. \\nGiven the data, however, we consider point-level, count-level or image-level supervision - as in the works listed by the referee - not well adapted due to various reasons that we will explain in the subsequent discussions.\\n\\nIn paragraph \\u201cWeakly/semi-supervised segmentation learning\\u201d in our paper, we already included a review on papers which contain these kinds of supervision. The mentioned papers have the same limitations with respect to our approach than the approaches in the papers mentioned by the referee: \\n- First notice that our weak labels can be easily converted into labels in the form of points, counting, or image-level with a certain loss of information. However, our aim is to get a good trade-off between the annotation effort and the segmentation performance.\\n- The image level or count level labels do not directly encode the location of an instance or information for separating touching instances. Therefore, supervision based on these labels requires in general a much larger number of images, which is not feasible for applications with datasets of limited size.\\n- Our method is applicable to image where instance counting or image-level labelling is not a trivial task. Indeed, at the border of the images, most of the objects are partially visible (cf. Figure 3), and instance counting might be less straightforward. Also it is not very easy to manually count the number of objects for datasets in the paper where a single image contains hundreds of densely distributed objects. \\n- In point-level supervision, more information about the object location is available. Based on this, we move a step further by incorporating the rough regions into the label with a slightly larger cost. This information helps the network to predict coarse regions of the instances apart from their location as illustrated in Figure 5.\\n\\nWe claim that our approach provides a good alternative to the current weak supervision strategies. The form of WL allows the user some freedom for making the annotations, and the raw WL (with noise) can be directly used in an end-to-end training setting.\\n\\nRegarding the papers mentioned by the reviewer, they are interesting pieces of work. Nevertheless, we also observed some major differences between them and our paper. \\n- The method in the 2nd paper does not directly tackle our main issue. (i) The proposed network only predicts localization of pixels and embedding vectors rather than masks. (ii) The resulting masks, however, are selected from a set of pseudo masks generated by an object proposal method (SharpMask, ECCV 2016). The segmentation results, as shown in Figure 1 and Figure 6 of the paper, do not fit the object contours well. Our work, in contrast, is dedicated to obtaining accurate contours which is not addressed by this paper. 
\\n- The method in the 1st paper trains a segmentation network using pseudo masks generated by a classifier trained with image-level labels. We noticed that the image level supervision might not extend well to our experiments as we discussed above. Furthermore, the segmentation results (Figure 4) do not have a good match of object contours, but this is important for our problems. \\n- The 3rd to the 6th papers uses image-level supervision. However, it is not clear how to adapt the classifiers such that they are efficient for datasets with a limited number of images and each image containing hundreds of objects.\"}",
"{\"title\": \"Response to Official Blind Review #1 (Part 2)\", \"comment\": \">> The method should be compared on standard and challenging datasets like Cityscapes, PASCAL VOC 2012, COCO, KITTI\\u2026\\n\\nThere are various reasons why we think the current datasets are more suitable for this work. \\n1). The mentioned standard datasets are about images from natural scenes and they generally have better contrast between the objects and the background compared to the datasets that we used. Indeed, one key objective of our work is to obtain accurate object contours for images in poor contrast (for instance. images in Figure 9).\\n2). We wish to develop methods that are applicable for application domains where only relatively small training sets are available, as the ones demonstrated in the paper. \\n3). We are focusing on problems where finding a precise segmentation is important. On the large datasets mentioned by the reviewer, the annotations are not always accurate enough:\\n - Cityscapes Dataset: The legs of the man on the motorcycle in Example Weimar 1 in https://www.cityscapes-dataset.com/examples/ \\n - COCO 2019: Legs of the tennis player in http://cocodataset.org/#detection-2019 \\n - In KITTI, cars are often not segmented when touching image boundaries (second line of Fig. 1 page 16 of https://arxiv.org/pdf/1604.05096.pdf, Fig 9 (e) in https://arxiv.org/pdf/1908.11656.pdf )\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper addresses the problem of learning segmentation models in the presence of weak and strong labels of different types. This is an important problem that arises while learning in data-scarce settings. The presented approach optimizes jointly over the various types of labels by treating each one as a task. They extend the U-net architecture to incorporate the different tasks.\", \"prior_work\": \"There has been other work on incorporating multi-resolution or different types of labels. Here is one that can be cited:\\nLabel super-resolution networks (https://openreview.net/forum?id=rkxwShA9Ym)\", \"major_comments\": [\"The motivation for the specific structure of the multi-task blocks is not clear\", \"The object boundaries labels can be noisy (i.e s(2) can have noise). How does model deal with this?\", \"Is it the case that every image in I_3 is completely labeled - i.e all segments/classes marked?\", \"The assumption that s(3) is independent of s(1) and s(2) is not true. Instead of constraining the model to learn masks that respect the various types of labels, it seems they learn from each source independently. It is not clear how the sharing of parameters in the multitask block helps.\", \"Can they comment on the applicability of the prior work suggested above?\"], \"minor_comments\": [\"How do the rough labeling tools work on biomedical data where the objects are more heterogenous patterns where different labels can have very different distribution of pixels. How well will their method generalize in such settings?\", \"Can this work be used for segmentation and prediction on crop data?\"], \"results\": [\"It seems as if the improvement over the PL baseline (pseudo labels) is incremental? Can the authors provide error bars so the reader knows what the significance of the results is?\", \"Can they give a more thorough comparison in terms of human effort? It is interesting to note that only 2 images give 0.82. Would 3 images give 0.94? They need to show the trade-off between additional effort vs gains in performance.\", \"What is the performance of MT U-net without the SL images (i.e without task-3)? Table-2 does give some intuition, but authors should add another row with multitask WL\", \"Table-3: How well does MDUnet do with 9.4% SL data?\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes a method for semantic segmentation using \\\"lazy\\\" segmentation labels. Lazy labels are defined as coarse labels of the segmented objects. The proposed method is a UNET trained in a multitask fashion whit 3 tasks: object detection, object separation, and object segmentation. The method is trained on 2 datasets: air bubbles, and ice crystals. The proposed method performs better than the same method using only the weakly supervised labels and the one that only uses the sparse labels.\", \"The novelty of the method is very limited. It is a multitask UNET. The method is compared with one method using pseudo labels. However, this method is not SOTA. Authors should compare with current methods such as:\", \"Where are the Masks: Instance Segmentation with Image-level Supervision\", \"Instance Segmentation with Point Supervision\", \"Object counting and instance segmentation with image-level supervision\", \"Weakly supervised instance segmentation using class peak response\", \"Soft proposal networks for weakly supervised object localization\", \"Learning Pixel-level Semantic Affinity with Image-level Supervision for Weakly Supervised Semantic Segmentation\", \"These methods can use much less supervision (point-level, count-level or image-level) and may work even better.\", \"The method should be compared on standard and challenging datasets like Cityscapes, PASCAL VOC 2012, COCO, KITTI...\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The submission presents a neural network for multi-task learning on sets of labeled data that are largely weakly supervised (in this case, partially segmented instances), augmented by comparatively fewer fully supervised annotations.\\nThe multiple tasks are designed to make use of both the weak as well as as full (\\u2018strong\\u2019) labels, such that performance on fully annotated machine-generated output is improved.\\n\\nAs noted in the related work section (Section 2), multi-task methods aim to use benefits from underlying common information that may be ignored in a single-task setting. The network presented here is quite similar to most of these multi-task approaches: a common feature encoder, and partially distinct feature decoding and classification parts.\\nThe (minor) novelty mainly comes from the distinct types of weak/strong annotation data fed here: instance scribbles, boundary scribbles, and (some or few) full segmentations. \\n\\nThe submission is overall well written and provides sufficient clarity and a good overview of the approach.\\nSection 3 presents a probabilistic decomposition of the proposed architecture. With some fairly standard assumptions and simplifications, the loss in Eq. 3 becomes rather straightforward (weighted cross entropy)\\nThe actual network architecture described in Section 3.2 takes a standard U-Net as a starting point and modifies it in a fairly targeted way for the different expected types of annotations. These annotations (Section 3.3) are cheaper than full labels on a same-size dataset; it is not completely clear, however, if the mentioned scribbles need to capture each instance in the training set, or if some can also be left out. Without this being explicitly mentioned, I will assume the former.\\n\\nThe experimental evaluation is done reasonably well, although I am not familiar with any of the presented data sets. The SES set seems to be specific to the submission, while the H&E data set has been used at least one other relevant publication (Zhang et al.). My main issue here is that at least on the SES set, which does not seem to be that large, the score difference is not that big, so dataset bias could play some part (which is unproven, but so is the opposite).\\nExperimental evaluation does not leave the low-number-of-classes regime, and I\\u2019m left wondering how the method might compare on a semantically much richer data set, e.g. Cityscapes. Finally, unmodified U-Net is by now a rather venerable baseline, so I\\u2019m also wondering how the proposed multi-task learning could be used in other (more recent) architectures, i.e. whether the idea can be generalized sufficiently.\\n\\nWhile I think the ideas per se have relatively minor novelty, the combination seems novel to me, and that might warrant publication.\"}"
]
} |
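
The exchanges above repeatedly hinge on the lazy-labels paper's weighted loss (its Equation (5)), under which, as the authors note, a task's term is exactly zero for samples lacking labels for that task, so the MT U-net without strong labels degenerates into the WL-only U-net. A generic sketch of such a masked multi-task cross-entropy is given below; the three-task split and the per-sample availability mask follow the discussion, while the tensor shapes and task weights are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def lazy_multitask_loss(logits, labels, available, task_weights=(1.0, 1.0, 1.0)):
    """Masked multi-task cross-entropy in the spirit of the paper's Eq. (5).

    logits: list of three tensors, one per task, each of shape (B, C_t, H, W).
    labels: list of three long tensors of shape (B, H, W) with class indices;
            entries are arbitrary wherever the task is unavailable.
    available: (B, 3) float mask, 0 where a sample has no labels for a task,
               so that task's term vanishes and contributes no gradient.
    """
    total = logits[0].new_zeros(())
    for t, (out, lab, w) in enumerate(zip(logits, labels, task_weights)):
        # Per-pixel cross-entropy, averaged to one value per sample.
        per_sample = F.cross_entropy(out, lab, reduction="none").mean(dim=(1, 2))
        # Unlabeled samples are masked out before averaging over the batch.
        total = total + w * (available[:, t] * per_sample).mean()
    return total
```

With `available[i, 2] = 0` for every sample lacking pixel-wise annotation, the task-3 term contributes nothing to the gradient, which matches the authors' statement that the loss on task 3 is zero when no SL is available.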
Bklg1grtDr | Neural Design of Contests and All-Pay Auctions using Multi-Agent Simulation | [
"Thomas Anthony",
"Ian Gemp",
"Janos Kramar",
"Tom Eccles",
"Andrea Tacchetti",
"Yoram Bachrach"
] | We propose a multi-agent learning approach for designing crowdsourcing contests and all-pay auctions. Prizes in contests incentivise contestants to expend effort on their entries, with different prize allocations resulting in different incentives and bidding behaviors. In contrast to auctions designed manually by economists, our method searches the possible design space using a simulation of the multi-agent learning process, and can thus handle settings where a game-theoretic equilibrium analysis is not tractable. Our method simulates agent learning in contests and evaluates the utility of the resulting outcome for the auctioneer. Given a large contest design space, we assess through simulation many possible contest designs within the space, and fit a neural network to predict outcomes for previously untested contest designs. Finally, we apply mirror descent to optimize the design so as to achieve more desirable outcomes. Our empirical analysis shows our approach closely matches the optimal outcomes in settings where the equilibrium is known, and can produce high quality designs in settings where the equilibrium strategies are not solvable analytically. | [
"Auctions",
"Mechanism Design",
"Multi-Agent",
"Fictitious Play"
] | Reject | https://openreview.net/pdf?id=Bklg1grtDr | https://openreview.net/forum?id=Bklg1grtDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"02YqasPKDg",
"ryeXiUjdsS",
"S1lvK8j_jH",
"HyxRIUouoH",
"SJgLHLjOjr",
"HkegGTPZcS",
"r1g0zMbRKS",
"BJg_1FfTtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798739127,
1573594779006,
1573594751280,
1573594709838,
1573594685644,
1572072711615,
1571848726184,
1571789023840
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2048/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2048/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2048/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2048/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2048/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2048/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2048/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper demonstrates a framework for optimizing designs in auction/contest problems. The approach relies on considering a multi-agent learning process and then simulating it.\\n\\nTo a large degree there is agreement among reviewers that this approach is sensible and sound, however lacks substantial novelty. The authors provided a rebuttal which clarified the aspects that they consider novel, however the reviewers remained mostly unconvinced. Furthermore, it would help if the improvement over past approaches is demonstrated in a more convincing way, for example with increased scope experiments that also involve richer analysis.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Novelty & Relevance\", \"comment\": \"Thank you very much for taking the time to review our paper. Your review calls out two main areas of improvement: 1) the need to better highlight what in our approach is novel, and 2) better explaining how our approach and the topic of mechanism design fits into the conference.\\n\\nThank you for identifying these. We feel like we can address them and that our paper will be better for it.\\n\\nThe main concern you have identified is the \\u201clack of novelty\\u201d, a criticism shared by R3 as well. We now realize we should have done a better job calling out which aspects of our work are novel, and which are just novel applications of existing methods. We will update our manuscript shortly to make sure the distinction is clear; please see our detailed response to R1 to find a list of contributions which we hope you will find helpful.\\n\\nA key component of this paper is learning a representation of different mechanism designs. To do this, we need to simulate agent learning. This is novel for the automated mechanism design literature, where analytic solutions have previously been used, and is also an interesting perspective for the wider representation learning community: in our setting data cannot be taken for granted, and instead depends on modelling assumptions.\\n\\nWe believe Game Theory is highly relevant to ICLR, indeed several works on and related to GT have been presented at ICLR. For example, Stable Opponent Shaping in Differentiable Games by Letcher et al \\u201819) discussed RL from a Game Theoretic perspective.\", \"game_theory_is_of_increasing_importance_to_the_iclr_community\": \"constructing multi-agent games for agents to play in training has proven an important tool (Large-Scale Study of Curiosity-Driven Learning, Burda et al. ICLR \\u201819, Large Scale GAN Training for High Fidelity Natural Image Synthesis, Brock et al. ICLR \\u201819). Designing these systems involves choosing the incentive structure of the agents, and so is itself a Mechanism Design problem.\", \"we_feel_this_rebuttal_addresses_your_two_concerns_in_detail\": \"1) lack of novelty and 2) relevance of the game theoretic topic to ICLR. We would appreciate it if you would please consider raising your score.\"}",
"{\"title\": \"Methodology & Experiments\", \"comment\": \"Thank you very much for reviewing our paper and for your feedback. Also, thank you for raising your concerns regarding a comparison of our work to Dutting et. al. We mention this work (among others) at the end of the paper, however, maybe it deserves a more pointed comparison.\", \"your_review_calls_out_the_marginal_improvement_relative_to_previous_work_in_two_main_ways\": \"1) the methodology and 2) the experimental results.\\n\\nThank you for identifying these concerns, we will comment on the methodology and experimental results both relative to Dutting et. al. specifically and in general.\\n\\nRegarding methodology, our main contribution is to see both sides of a mechanism (both the auctioneer and the bidders) as adaptive. Dutting et. al. on the other hand uses a dominant strategy incentive compatible argument and models the bidders as truth-tellers -- the bidders are static, i.e., not adaptive learning agents. To our knowledge, designing mechanisms based on the behavior of adaptive learning agents is novel and more general than previous approaches in the literature. We did not emphasize this distinction as much as we should in the original submission. We appreciate you bringing this to our attention.\\nAs you pointed out, we consider auctions without any known solution. In contrast, previous approaches including Dutting et. al. focus on specific classes of mechanisms where optimal behavior is already known. Therefore, our approach enables the design of mechanisms for a much wider class of games for which computing equilibrium strategies is intractable. For example, note that our approach places no restriction on the number of bidders in the auction (as opposed to theoretical analyses). Again, this is made possible by bringing adaptive learning agents into the pipeline.\\n\\nRegarding experimental results, your review states that we only considered 3 and 4 bidder auctions, however, we point out that we also presented results for a noisy 10 bidder auction in Figure 6 and Table 1. We will restructure the paper to avoid confusion and make the noisy 10 bidder auction stand out more.\\nWe also note that Dutting et al. considers a different class of auctions (truthful auctions vs allpay auctions in our paper) so our results cannot be readily compared in a quantitative way. We will highlight this difference in the paper so it is more clear and add a more in depth comparison to that work.\\nRegarding learned experimental baselines, we only assume differentiable models. For example, we could have fit a Gaussian process instead. We chose neural networks as they are known to be strong function approximators (and generalizers), and we felt they would be most familiar to the ICLR community. However, our choice of neural networks is not critical to the general applicability of our approach or our novel perspective of both sides of the mechanism as adaptive. We will include experiments with other differentiable models in the appendix.\\nWe point out though that Figure 3 and 7 both show the neural network\\u2019s ability to fit the data. Figure 3 displays a near perfect fit suggesting the MLP model class was sufficient for the noiseless 2 bidder setting. Figure 7 shows that there is potentially room for improvement in the noisy 10 bidder setting. 
We are considering alternative architectures for this setting, but per your suggestion, we will consider other differentiable models as well.\\n\\nLastly, thank you for pointing out the typo in Algorithm 2. The correct reference should be M-EMA.\", \"we_feel_this_rebuttal_addresses_your_two_concerns_in_detail\": \"1) strength of methodology and 2) experimental results. We would appreciate it if you would please consider raising your score.\"}",
"{\"title\": \"Novelty & Generality (continued)\", \"comment\": \"Your second concern regards making a better distinction between which parts of our pipeline are generally applicable, and which are specific to the all-pay auction we consider for our experimental evaluation. We will update our manuscript shortly to include this information in the discussion. Here is a detailed answer which we hope helps.\", \"stage_1\": \"The purpose of the first stage is simply to generate data samples, specifically tuples of the form (design, auctioneer utility). The rest of our pipeline is agnostic to how this data is obtained, but we took the novel approach of training learning agents in a simulated auction again because we are the first to view both sides of the mechanism (bidders/players and auctioneer/game) as adaptive. We explored both agents trained via fictitious self-play and reinforce. One specific choice we made was our desire to find a symmetric Nash equilibrium. This led to our training of agents with tied weights to maintain symmetry. This restriction can be released without repercussions to the general pipeline.\", \"stage_2\": \"Any differentiable model can be used to approximate the mapping from designs to utilities here. We chose to use a multi-layer perceptron (neural network).\", \"stage_3\": \"M-EMA was designed with the allpay auction in mind, however, we mentioned above other settings where it may be useful. A sophisticated optimization algorithm is not required by the pipeline though. We only make the assumption that the model that is fit in the second stage of the pipeline be differentiable with respect to its inputs (designs). This condition enables the use of gradient descent in the third stage. If the design space adheres to complex constraints, a flexible approach would be to use Lagrange multipliers to penalize deviation of iterates from the feasible set. We leave the choice of how to incorporate the constraints into the optimization process up to the user though. We simply provide this example of Lagrange multipliers to show that our pipeline is not limited to using only M-EMA.\", \"we_feel_this_rebuttal_addresses_your_two_concerns_in_detail\": \"1) lack of novelty and 2) generality / specificity of approach. We would appreciate it if you would please consider raising your score.\"}",
"{\"title\": \"Novelty & Generality\", \"comment\": \"Thank you very much for taking the time to review our paper and for your feedback, which we think will help us write a better manuscript.\", \"your_review_calls_out_two_main_areas_of_improvement\": \"1) the need to better highlight what in our approach is novel, and 2) the discussion of which aspects of our approach are generally applicable to any mechanism design problems, and which are specific to all-pay auctions.\\n\\nFirst off, thank you for identifying these, we feel like we can address them relatively easily and that our paper will be better for it.\\n\\nThe main concern you have identified is the \\u201clack of novelty\\u201d, a criticism shared by R1 as well. We now realize we should have done a better job calling out which aspects of our work are novel, and which are just novel applications of existing methods. We will update our manuscript shortly to make sure the distinction is clear; here is a list of contributions which we hope you will find helpful.\\n\\n1) Given this view, the first contribution of our paper, which we feel is a substantial one, is that we are the first to view both sides of a mechanism as adaptive.\\nIn classic mechanism design work, researchers use either equilibrium behavior analysis, or appeal to dominant strategy arguments to model the behavior of the bidders (e.g. Nash equilibrium or envy-free equilibria, and dominant strategy incentive compatible mechanisms). This is also true of modern machine learning approaches to mechanism design such as Dutting et al. 2017, Feng et al. 2018, Manish et al. 2018 and Tacchetti et al. 2019.\", \"restricting_to_mechanisms_for_which_calculating_the_equilibrium_behavior_is_tractable_is_a_substantial_limitation_which_shrinks_the_space_of_mechanisms_and_designs_one_can_consider_to_a_relatively_small_subset\": \"as we highlight in the paper, mechanisms that have sufficient complexity, or disruptive enough levels of noise are excluded.\\nClassic mechanism design would throw in the towel here, and conclude that one cannot properly evaluate designs in these settings, or use models chosen for their tractability rather than their accuracy. This is important: in this work we show that a mechanism designed for the tractable noiseless setting is far from optimal in the noisy setting.\\nIn this paper we present a novel way around this limitation. We view both the bidders and the auctioneer as adaptive, and use the behavior at convergence of learning agents as a stand-in for equilibrium behavior. We validate this idea when the equilibrium can be computed, and show that our learning agents converge to very similar strategies (Sec. 3.1), and then show that our method extends to situations where the equilibrium behavior is unknown (Sec. 3.2).\\nBecause we are the first to view both sides of the mechanism as adaptive, we can apply this general pipeline, which as you pointed out uses standard machine learning techniques, to the challenging domain of mechanism design.\\n\\nThe second contribution of our paper concerns the specific optimization method we employ. Entropic Mirror Descent [Beck and Teboulle \\u201803] is a non-Euclidean first-order optimization method tailored for optimization problems where the feasible set is a simplex. In our setting, we are maximizing the auctioneer\\u2019s utility over designs (feasible designs lie on a subset of the simplex), which is why we refer to it as Entropic Mirror Ascent (EMA) (simply flip the sign of the learning rate). 
However, not all designs on the simplex are feasible. We only want to consider designs with monotonically decreasing prizes (1st prize > 2nd prize > \\u2026). Our trick of introducing new variables that represent the marginals between the prizes (e.g., 1st prize - 2nd prize) transforms the feasible set into a new one in which a linear transformation of the marginals must lie on a simplex. By introducing this transformation and applying EMA in this new space, we effectively constrain gradient ascent search to the desired feasible design set (monotonically decreasing, positive prizes with constant sum). We name the novel application of EMA on this transformed space Monotonic EMA. An algorithm for optimizing over the subset of the simplex with monotonically decreasing values may be of general interest, not just to the allpay auction setting. For instance,\\nClassical Economics often studies monotonically increasing production functions (i.e., positive marginal returns). A company way want optimize over the space of possible production functions to see which maximizes its profit given its role in the market.\\nSome tree search algorithms such as A* require monotonic heuristic functions. Searching over the space of possible heuristic functions for a single path from root to a leaf node of a given value implies the feasible set is an (n-1)-simplex where n is the number of nodes on the path (excluding the leaf node).\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"After reading the rebuttal, I increased my score to weak accept, since it addressed my concern.\\n----------------------------------------\\nSummary\\nThis paper presents a general machine learning method for contest / auction problems. The underlying idea is to collect data pairs (i.e., [design, utility]), fit a model to the data, and then optimize over all the designs to figure out the best one. The authors mainly applied their method on an auction design problem, and finished a few experiments. However, due to lack of novelty, I lean to vote for rejecting this paper.\\nWeaknesses\\n- My major concern of this paper is the lack of novelty. As the authors stated in the introduction, the contribution of this paper is a machine learning method for designing crowdsourcing contest. However, as the authors demonstrated in Figure 1, the main idea of this approach is: collect the data, fit a model, and finally optimize the objective, which is a pretty common approach. I do not see something special or interesting in this approach.\\n- The authors spend a lot of space discussing how to deal with the auction, but I do not see their relationship with the machine learning algorithm, or how can these tricks be generalizable to other scenarios. It seems all these discussions are specific to this auction scenario, and there is almost no relationship between these tricks with the machine learning algorithm. However, if these tricks can be applied to other scenarios, these discussions will make sense.\\nPossible Improvements\\nI am very happy to increase my score if the authors could demonstrate why their approach in Figure 1 is novel, and how their discussion about the auction can be generalized to other scenarios.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"1. Summary\\n\\nThe authors employ a multi-agent learning approach for learning how to set payoffs optimally for crowdsourcing contests\\nand auctions. Optimality means e.g. incentive alignment (the principal problem) between the principal (e.g. the organizer) and participants (e.g. bidders), assuming e.g. that participants can be strategic about their behavior. In this work the principal uses ReLU-log utility.\\n\\nFirst, the authors use fictitious play and multi-agent RL to train agents on a distribution of payoffs. Then, a neural net is fitted to the samples (payyoffs, expected principal utility), and finally iteratively attempts to improve the payoffs using mirror ascent within the convex set of admissable payoffs.\\n\\nThe authors compare the payoffs with theoretically known solutions and in situations where the optimal solution is not known.\\n\\n3, 4-agent all-pay auction (Nash eq known).\\nSame as above, but with noise added to bids (Nash eq not known).\\nThe authors analyze in some detail how the principal's utility and bidder ranking behave as the participants' bids change.\\n\\n1. Decision (accept or reject) with one or two key reasons for this choice.\\n\\nReject. Although the high-level approach is interesting (use learning to design auctions for cases where no theoretical solution is known), the actual experimental results and methodological improvement over e.g. Dutting 2017 are weak. The authors only consider 3, 4-agent auctions. There are no other learned baselines (e.g., constrained optimization without neural nets) that the authors could consider.\\n\\n3. Supporting arguments\\n\\nSee above.\\n\\n4. Additional feedback with the aim to improve the paper. Make it clear that these points are here to help, and not necessarily part of your decision assessment.\", \"m_ema_and_m_emd\": \"ascent and descent? in Algo 1, 2?\\n\\n\\n--- \\nI've read the rebuttal, but still lean towards reject. The scope/analysis of the experiments (e.g. auction type), still seems limited, even though both agents and mechanism are adaptive.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers a scenario of bidding contest where the goal os to find optimal allocation w = (w_1, w_2, ..., w_n) of the total prize that maximizes the principal\\u2019s expected utility function. The problem is formulated into an optimization task within the simplex where the total allocation is fixed at w. Then the authors proposed simulation methods to solve this problem and use experiments to demonstrate the method's advantages. The paper is sound an clear, but it's not clear to me which part is novel and which part is from existing work, hence I doubt the contribution level of this paper. Furthermore, I'm not quite sure whether the topic fits ICLR as it's more related to game theoretic society and not related to representation learning.\\n\\nThanks for the response from the authors. I have read it carefully, especially regarding the novelty part. My review remains unchanged based on the author feedback.\"}"
]
} |
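The reviews above outline a fit-then-optimize pipeline: collect [design, utility] pairs, fit a surrogate model, then improve the payoffs by mirror ascent within the simplex of admissible payoffs. Below is a minimal sketch of the mirror-ascent step only. It is an illustration, not the paper's implementation; `surrogate_grad`, `eta`, and `n_steps` are hypothetical names, and the callable stands in for the gradient of the fitted utility model.

```python
import numpy as np

def mirror_ascent_on_simplex(surrogate_grad, w0, eta=0.1, n_steps=200):
    """Entropic mirror ascent: multiplicative updates keep the prize
    allocation w on the probability simplex at every step."""
    w = np.asarray(w0, dtype=float)
    w = w / w.sum()
    for _ in range(n_steps):
        g = surrogate_grad(w)      # gradient of the fitted utility model at w
        w = w * np.exp(eta * g)    # exponentiated-gradient (mirror) step
        w = w / w.sum()            # re-normalize onto the simplex
    return w

# Toy usage with a concave surrogate utility u(w) = -||w - target||^2.
target = np.array([0.5, 0.3, 0.2])
w_star = mirror_ascent_on_simplex(lambda w: -2.0 * (w - target), np.ones(3) / 3)
```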
H1gy1erYDH | CaptainGAN: Navigate Through Embedding Space For Better Text Generation | [
"Chun-Hsing Lin",
"Alvin Chiang",
"Chi-Liang Liu",
"Chien-Fu Lin",
"Po-Hsien Chu",
"Siang-Ruei Wu",
"Yi-En Tsai",
"Chung-Yang (Ric) Huang"
] | Score-function-based text generation approaches such as REINFORCE, in general, suffer from high computational complexity and training instability problems. This is mainly due to the non-differentiable nature of discrete-space sampling; as a result, these methods treat the discriminator as a reward function and ignore its gradient information. In this paper, we propose a novel approach, CaptainGAN, which adopts the straight-through gradient estimator and introduces a ”re-centered” gradient estimation technique to steer the generator toward better text tokens through the embedding space. Our method is stable to train and converges quickly without maximum likelihood pre-training. On multiple metrics of text quality and diversity, our method outperforms existing GAN-based methods on natural language generation. | [
"Generative Adversarial Network",
"Text Generation",
"Straight-Through Estimator"
] | Reject | https://openreview.net/pdf?id=H1gy1erYDH | https://openreview.net/forum?id=H1gy1erYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Ss0rG1TIp7",
"B1elMaV2oS",
"rylorjQ2jr",
"S1gEyFXnjH",
"rygzfExniB",
"H1lbVbx2iH",
"HkeF9zh9jS",
"HJxoXOKqoB",
"HkxM_SnKjr",
"BkxpntgtoB",
"BylGeUlKiS",
"rygN2IGOiB",
"S1e_Gax_sS",
"BJxaPddyjB",
"rkl9nFTF9H",
"HkxIxHwaFB",
"HJgZb4nntr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798739098,
1573829895630,
1573825347189,
1573824731804,
1573811210424,
1573810472903,
1573728913398,
1573718050869,
1573664106256,
1573616053239,
1573615081843,
1573557932403,
1573551376303,
1572993125211,
1572620721858,
1571808493709,
1571763193311
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2047/Authors"
],
[
"~Dianqi_Li1"
],
[
"ICLR.cc/2020/Conference/Paper2047/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2047/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2047/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method to train generative adversarial nets for text generation. The paper proposes to address the challenge of discrete sequences using straight-through and gradient centering. The reviewers found that the results on COCO Image Captions and EMNLP 2017 News were interesting. However, this paper is borderline because it does not sufficiently motivate one of its key contributions: the gradient centering. The paper establishes that it provides an improvement in ablation, but more in-depth analysis would significantly improve the paper. I strongly encourage the authors to resubmit the paper once this has been addressed.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply\", \"comment\": \"Reply to the first question: Yes.\\n\\nThe main purpose of constraining Lipschitz constant is to make the gradient informative even if the supports of real samples and generated samples are completely disjoint (or, make the loss surface between two supports smooth). In this case, the discriminator can still be confident about its output.\\n\\nWe have cited Arjovsky & Bottou (2017) & Zhou et al. (2019). We hope the detailed discussion about gradient vanishing and gradient uninformativeness of these works can help us explain it.\\n\\nAfter that, the main problem with continuous relaxation is that the generator is led to produce spiky outputs since it must force it's distribution over tokens to be like the real data, which is a one-hot encoded token.\\nSpectral normalization can help continuous relaxations but overall the discriminator will still influence the generator to produce spiky outputs (alternatively, we could let the word embeddings non-trainable to make the input of discriminator be word embeddings instead of probability).\", \"references\": [\"Mart\\u00edn Arjovsky and L\\u00e9on Bottou. Towards principled methods for training generative adversarial networks. ArXiv, abs/1701.04862, 2017.\", \"Zhiming Zhou, Jiadong Liang, Yuxuan Song, Lantao Yu, Hongwei Wang, Weinan Zhang, Yong Yu, and Zhihua Zhang. Lipschitz generative adversarial nets, 2019.\"]}",
"{\"title\": \"Summary of major changes\", \"comment\": [\"Summary of major changes\", \"We thank all the reviewers for their insightful comments. Your suggestions have helped us to make important revisions to our paper. Major changes are as follows:\", \"A table of notation has been added in Appendix A.\", \"A new section (2.1 - Continuous Relaxations) has been added. We hope it is helpful to demonstrate the difference between approaches to non-differentiability.\", \"Section 5.5 has been rewritten to better explain the unusually high perplexity in response to Reviewer #3.\", \"Section 5.7 has been revised to clarify the contribution of our work.\"], \"additions_that_will_be_made_in_the_final_submission\": [\"error-bar: More experiments will be conducted (using different random seeds) for providing an average performance and confidence intervals for CaptainGAN.\", \"perplexity distribution: A perplexity distribution will be plotted for showing the severity of mode dropping.\"]}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for your response.\\n\\nI was mistaken in my understanding that the straight-through estimator proposed in this submission is the same as the Gumbel straight-through estimator introduced in Jang et al. (2016); thank you for clarifying. I now understand that the difference between the two is that the former uses \\u201cd/d\\\\theta softmax(logits)\\u201d, whereas the latter uses \\u201cd/d\\\\theta softmax(logits + gumbel_noise)\\u201d; is this correct?\\n\\nI think the newly-added Section 2.1 and your justification for the use of spectral normalization contradict each other. On one hand, Section 2.1 dismisses continuous relaxations on the basis that the difference between a one-hot encoded token and a distribution over tokens is easy to spot by the discriminator, which then becomes very certain of its predictions. On the other hand, the proposed approach requires the discriminator\\u2019s Lipschitz constant to be bounded for the generator to use the gradient of the discriminator effectively. It seems to me that this line of reasoning could just as well apply to continuous relaxations, because bounding the Lipschitz constant of the discriminator limits how \\u201ccertain\\u201d it can be of its prediction. Would that not nullify the objection raised in Section 2.1?\"}",
"{\"title\": \"Reply to Reviewer #3 about section 5.5\", \"comment\": \"Sorry for the late reply.\\n\\nWe've rewritten the section 5.5 and have a response at https://openreview.net/forum?id=H1gy1erYDH¬eId=HkeF9zh9jS .\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Regarding the contribution:\\n\\nYes, the straight-through estimator helps avoid the drawback of the score-function estimator by providing extra information, which is the gradient of the discriminator, to the generator from the discriminator. There is technically an infinite number of possible \\u201cstraight-through estimators\\u201d. The one we think of as the straight-through estimator is just the most obvious way to define the backward gradient (by pretending the activation is an identity function) - but there is no reason to think that it is the best. Thus it is worth searching for modifications.\\n\\nSpectral normalization is not so much an \\u201caddon\\u201d as it is a crucial prerequisite for our method. Since we want to incorporate the gradient of the discriminator in the generator, we need to bound the Lipschitz constant of the discriminator (using spectral normalization) and makes it possible for the generator to use gradient of the discriminator effectively. For more details, please see the revised version of section 5.7. \\n\\nOur experiments show that the recentering trick increases 20% FED compared to the baseline straight-through estimator (both using spectral normalization), which is why we feel it is worth incorporating.\"}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"Regarding the explanation of unusually high perplexity:\\n\\nWe\\u2019ve rewritten Section 5.5 to better explain the unusually high perplexity. The main reason is CaptainGAN minimizes different objective from MLE models. This objective assigns an extremely low cost to mode dropping and does not force the generator to mimic all aspects of real data. This can result in poor modeling of the likelihood but does not necessarily lead to poor sample generation. For more details, please see the revised version of section 5.5. Moreover, we are planning to add more experiments to measure the severity of mode dropping. Due to the time constraint, we will report the result at the final submission.\"}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"Regarding the lack of information about RelGAN:\\n\\nRelGAN is a GAN architecture using the Gumbel-Softmax estimator. In our revision, we\\u2019ve added a subsection (2.1 Continuous Relaxations) to explain the drawback of using the Gumbel-Softmax estimator. Also, we\\u2019ve added RelGAN scores to Figure 3 and Figure 6. We show that CaptainGAN is competitive with RelGAN (without additional pretraining before adversarial training) in terms of Bleu/SelfBleu and outperforms RelGAN\\u2019s FED score.\"}",
"{\"title\": \"Regarding multiple random seeds\", \"comment\": \"Thank you, please include if possible.\\nIt would strengthen the evaluation and the results.\\n\\nAre there any plans of backing some of 5.5 with additional experiments? (see comment above)\"}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"Regarding the report of an average performance with confidence interval using different random seeds:\\n\\nThank you for the suggestion. Providing average performance and confidence intervals is important. However, due to time constraints, we will not be able to add the results before 11/15. Instead, we will try to include the results in our final submission.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Regarding the question about the difference between the work of Kusner & Hern\\u00e1ndez-Lobato (2016) and our work:\\n\\nWe\\u2019ve added a subsection (2.1 Continuous Relaxations) which explains the difference between the work of Kusner & Hern\\u00e1ndez-Lobato (2016) and our work. We specifically do not use Gumbel-Softmax Estimator for text generation. As explained in our revision, using the Gumbel-Softmax Estimator leads to a training interaction with potentially pathological quirks which is not reflective of the actual text generation process.\\n\\nKeeping training and inference consistent (both using sampled tokens) is why we use the Straight-Through estimator (note: not a \\u201cStraight-Through Gumbel estimator\\u201d). However, that choice by itself doesn\\u2019t guarantee good results. Our main contribution is how to apply the straight through estimator in an ideal way so that performance is acceptable, and that requires ensuring that a useful gradient is available during the backward pass.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"We appreciate your rigorous review of our work. We\\u2019ve made several revisions with your feedback in mind.\", \"regarding_notation\": [\"We have added an acknowledgment for the usage of notation from Jang et al. (2016). We have also simplified and clarified the notation as follows:\", \"V = {x_1, \\u2026, x_v} stands for a predefined vocabulary of size v.\", \"x stands for a discrete token in V.\", \"\\\\hat{x} stands for a discrete token sampled from V by the generator G.\", \"\\\\mathbf{x} stands for a sequence of discrete tokens belong to V.\", \"\\\\hat{\\\\mathbf{x}} stands for a sequence of discrete tokens sampled from V.\", \"\\\\top stands for the transpose operation.\", \"We hope these adjustments improve the clarity of our paper.\"]}",
"{\"title\": \"Reply to Dianqi Li\", \"comment\": \"Thank you for the friendly reminder. We've uploaded a revision with added citations for RankGAN and MaliGAN.\"}",
"{\"title\": \"Missing paper reference\", \"comment\": \"Hi, Thanks for the good work. Just a minor comment: your experiment uses the results of MaliGAN and RankGAN. However, you didn't cite these two papers in your reference.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The submission proposes to train a GAN on discrete sequences using the straight-through Gumbel estimator introduced in Jang et al. (2016) in combination with gradient centering. The proposed approach is evaluated on COCO and EMNLP News in terms of BLEU and Self-BLEU scores, Fr\\u00e9chet Embedding Distance, Language Model Score, and Reverse Language Model Score.\\n\\nMy assessment is that the submission is below the acceptance bar, mainly due to clarity and novelty concerns. The proposed approach does have empirical backing, but I would argue that it is a very straightforward application of the straight-through Gumbel estimator to GANs, which is itself similar to existing work on applying the Gumbel-softmax estimator to GANs (Kusner & Hern\\u00e1ndez-Lobato, 2016). Detailed comments can be found below.\\n\\nThe submission does not feel self-contained. For instance, it borrows notation from Jang et al. (2016) without explicitly acknowledging it, and my personal experience is that reading Jang et al. (2016) beforehand makes a big difference in terms of clarity in Section 2.2.\\n\\nThe notation is inconsistent and confusing, and gets in the way of understanding the proposed approach. Here\\u2019s a (non-exhaustive) list of examples:\\n\\n- The reward function is first introduced as f_\\\\phi(\\\\mathbf{x}) above Equation 3, but all subsequent mentions of the reward function use f_\\\\phi(\\\\hat{\\\\mathbf{x}}).\\n- The \\\\mathbf{m}_\\\\theta variable is introduced in Equation 5 and is immediately replaced with \\\\mathbf{p}_\\\\theta, which adds notational overhead without any benefit.\\n- The difference between \\\\hat{\\\\mathbf{x}} and \\\\hat{x} is not explained in the text. From the context I understand that \\\\hat{x} is a categorical scalar in {1, \\u2026, V}; is this correct?\\n- In Equation 6, x_1, \\u2026, x_V are used to denote the *values* that \\\\hat{x} can take. This clashes with the previous convention that \\\\mathbf{x} is a sequence sampled from p_{data} (Equation 1). Given that convention and the difference between bolded and non-bolded variables discussed above, I would have expected that x_1, \\u2026, x_V would correspond to the categorical values of elements of the \\\\mathbf{x} sequence. That contributes to confusion in Equation 9, where \\\\mathbf{e}_{x_t} and p_\\\\theta(x_t) are *not* time-dependent.\\n- Equation 8 sums over time steps, but the first summation that appears in Equation 8 does not make use of the temporal index. There is also a symbol collision for T, which is used both as the sequence length and as the \\\"transpose\\\" symbol.\\n\\nAs a result, the proposed centering method and the rationale for it is still not entirely clear to me. In particular, is the gradient centering approach necessary to avoid the drawback of score function-based approaches (i.e. the generator is only given feedback on the tokens it samples), or does the non-centered, straight-through variant of the proposed approach also avoid this drawback?\\n\\nI\\u2019m also not convinced that the centering heuristic is a crucial component of the proposed approach when the biggest improvement observed over the straight-through baseline is obtained by adding spectral normalization. 
I would argue that the proposed approach is a straightforward application of the straight-through Gumbel gradient estimator to GAN training, which is similar in spirit to work by Kusner & Hern\\u00e1ndez-Lobato (2016) (not cited in the submission) -- the main difference being that the latter uses the Gumbel-softmax distribution directly and anneals the temperature parameter over the course of training. A comparison between the two would be warranted.\", \"references\": [\"Kusner, M. J., & Hern\\u00e1ndez-Lobato, J. M. (2016). GANs for sequences of discrete elements with the Gumbel-softmax distribution. arXiv:1611.04051.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper attempts to solve the problem of non-differentiable connection between the generation and discriminator of a GAN. The authors come up with an estimator of the gradient for the generator from the gradient of the discriminator, which was disconnected previously. With this change, the model should be able to select better tokens than random selection, which could leads to more robust training. The experiment results on both COCO Image Captions and EMNLP 2017 News datasets justify the authors' argument.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose CaptainGAN, a method using the straight-through gradient estimator to improve training of the generator for text generation.\\n\\nThe paper is well-written and the evaluation seems thorough, comparing to relevant baselines.\", \"comments\": \"\", \"figure_3\": \"the caption refers to Caccia et al. for results on LeakGAN, MaliGAN and seqGAN, but unless I\\u2019ve missed it, RelGAN hasn\\u2019t yet been introduced by name as a baseline? The citation is given in the opening part of the introduction, in an enumeration, but isn\\u2019t revisited later in the text - not even here where the results of the model are introduced. Given that it seems, according to the presented results, to be the most competitive of the GAN models that the authors are comparing to, maybe it\\u2019s worth adding more contextual information on RelGAN to the Background section?\\n\\nFor their method, the authors should report an average performance over several random seeds and provide the standard deviation / confidence intervals, for the readers to be able to assess the stability of the method and the significance of the improvement reported in the results.\\n\\nI find Section 5.5. particularly interesting, as well as the reported perplexity in Table 2. The authors provide 3 bullet points to explain the unusually high perplexity of the generator on the training and validation data. I feel that the explanations that are given are at the moment vague and not visibly backed by data, therefore being speculative. Obviously, point 1) is hard to quantify - but point 2) could possibly be at least partially quantified - if the hypothesis is that names, places, punctuation marks etc play an important role in the reported perplexity score, then maybe the authors could test this by correlating model perplexity on sentences with whether those sentences contain these types of words?\"}"
]
} |
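The discussion thread above hinges on the difference between the plain straight-through estimator (backward pass through softmax(logits)) and the straight-through Gumbel estimator of Jang et al. (2016) (backward pass through softmax(logits + gumbel_noise)). The PyTorch sketch below illustrates only the plain variant; it is a generic illustration of the trick, not CaptainGAN itself, and omits the re-centering and spectral normalization the paper adds.

```python
import torch
import torch.nn.functional as F

def straight_through_sample(logits):
    """logits: (batch, vocab). The forward pass returns a sampled hard one-hot
    token; the backward pass routes gradients through softmax(logits)."""
    probs = F.softmax(logits, dim=-1)
    idx = torch.multinomial(probs, num_samples=1).squeeze(-1)
    hard = F.one_hot(idx, num_classes=logits.size(-1)).to(probs.dtype)
    # Value equals `hard`; the gradient w.r.t. logits equals d(probs)/d(logits).
    return hard + probs - probs.detach()
```

The Gumbel variant would instead perturb the logits with -log(-log(u)) noise, u ~ Uniform(0, 1), before the softmax.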
HyxJ1xBYDH | Learning-Augmented Data Stream Algorithms | [
"Tanqiu Jiang",
"Yi Li",
"Honghao Lin",
"Yisong Ruan",
"David P. Woodruff"
] | The data stream model is a fundamental model for processing massive data sets with limited memory and fast processing time. Recently Hsu et al. (2019) incorporated machine learning techniques into the data stream model in order to learn relevant patterns in the input data. Such techniques were encapsulated by training an oracle to predict item frequencies in the streaming model. In this paper we explore the full power of such an oracle, showing that it can be applied to a wide array of problems in data streams, sometimes resulting in the first optimal bounds for such problems. Namely, we apply the oracle to counting distinct elements on the difference of streams, estimating frequency moments, estimating cascaded aggregates, and estimating moments of geometric data streams. For the distinct elements problem, we obtain the first memory-optimal algorithms. For estimating the $p$-th frequency moment for $0 < p < 2$ we obtain the first algorithms with optimal update time. For estimating the $p$-th frequency moment for $p > 2$ we obtain a quadratic saving in memory. We empirically validate our results, also demonstrating our improvements in practice. | [
"streaming algorithms",
"heavy hitters",
"F_p moment",
"distinct elements",
"cascaded norms"
] | Accept (Poster) | https://openreview.net/pdf?id=HyxJ1xBYDH | https://openreview.net/forum?id=HyxJ1xBYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"gdm5DfP-yk",
"Syem-mAisB",
"SyxcJXRjjr",
"Hkl2wG0joB",
"Sye1Z22R5B",
"HJxwv99GqS",
"BJloMuTnFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798739069,
1573802746945,
1573802722463,
1573802595549,
1572944887493,
1572149855334,
1571768338790
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2046/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2046/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2046/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2046/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2046/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2046/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper theoretically analyzes the use of an oracle to predict various quantities in data stream models. Building upon Hsu et al., (2019), the overriding goal is to examine the degree to which such an oracle is can provide memory and time improvements across broad streaming regimes. In doing so, optimal bounds are derived in conjunction with a heavy hitter oracle.\\n\\nAlthough the rebuttal and discussion period did not lead to a consensus in the scoring of this paper, two reviewers were highly supportive. However, the primary criticism from the lone dissenting reviewer was based on the high-level presentation and motivation, and in particular, the impression that the paper read more like a STOC theory paper. In this regard though, my belief is that the authors can easily tailor a revision to increase the accessibility to a wider ICLR audience.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": [\"We thank the reviewer for the comments on our paper.\", \"We have included the result concerning a noisy oracle for the F_p moment estimation problem in the paper.\", \"We like the question of minimizing the number of oracle calls. This is an interesting open problem and we intend to explore it in future work.\"]}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the comments on our paper. Designing more efficient streaming algorithms with machine learning techniques is a relatively new research topic and we have included more related work in our updated version of the manuscript (highlighted in the blue color).\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the comments on our paper. A prior work of Hsu et al. (ICLR'18) showed that heavy hitter oracles exist and that they can be constructed using machine learning techniques. We are using the same type of oracles in our current submission. Similar oracles have been studied in previous works too, e.g., membership oracles for Bloom filters in Kraska et al. Both the previous works and the experiments in the current submission demonstrate that it is reasonable to make such an oracle assumption.\\n\\nWe thank the reviewer for the questions on optimization ability, generalization error, etc. These are interesting research directions. The answers are very likely to depend on the application, data sets, etc., which we plan to study in the future.\\n\\nThe prior work by Hsu et al. showed that the oracle trained by deep learning has high accuracy (see Section 5.3 in their paper): for Internet traffic data, the AUC score is 0.9, and for search query data, the AUC score is 0.8. The performance of a simple online algorithm would likely depend on the type of classifier used and input feature representation. Linear classifiers with IP addresses represented as individual bits are unlikely to work well because their expressive power is limited. For instance, at the very least, we would like our classifier to express a DNF hypothesis of the form:\\n(IP address = a1) or (IP address = a2) or ...\\n\\nWe have updated the introduction to rephrase and clarify the lower bound claims. The added/modified text are highlighted in the blue color.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents algorithms for solving computational problems in a datastream model augmented with an oracle learned from data. The authors show that under this model, there exist algorithms that have significantly better time and space complexity than the current best known algorithms that do not use an oracle. The authors support their theoretical analysis with experiments in which the oracle is represented by a deep neural network and demonstrate improvement over classical algorithms that do not use machine learning.\\n\\nOverall, this paper seems like a solid contribution to the literature. However, in its current state it does seem to be presented and motivated in a way that is appropriate for the audience of ML researchers at ICLR. It reads very much like a STOC theory paper, and a lot of the key ML details that would be relevant to audience at this conference seem to have been shoved under the rug in a way. Therefore my score for now is a weak reject, but I am very happy to increase the score if the authors address my presentations concerns.\", \"major_comments\": [\"The oracle-augmented datasteam model needs to be contextualized better. I don't have a good sense of whether this is a reasonable theoretical model to explore and a lot of very basic questions remain unanswered for me. For example, how do I even know that the oracle in question exists? What are the particular assumptions under which it exists? What are the requirements on the training data, optimization ability, generalization error, etc. How do we know that we can create in practice ML learning models that are sufficiently accurate to serve as an oracle?\", \"The connections to deep learning seem arbitrary in some of the experiments. In one of the experiments, the authors train neural networks over a concatenation of IP address embeddings. Why do we need to use deep learning here? What is the benefit of using DL algorithms within the oracle-augmented datastream model? Is a simple algorithm enough? What algorithms should we ideally use in practice? What if you used simpler online learning algorithms with formal accuracy guarantees?\"], \"minor_comments\": [\"I thought there was a bit over-selling in intro. The authors say that they match the theoretical lower bounds for several problems. However, you are in a different computational model in which you now have access to an oracle. This needs to be made more explicitly, and language could be a bit toned down (e.g. in this model, we can obtain runtime that match or improve over lower bounds...)\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Algorithms for Streaming data using a machine learning oracle is analyzed theoretically and empirically.\\n\\nThe idea is to build on some recent work (Hsu 19) which used RNNs to predict heavy hitters in streaming data. The purpose of this paper is to analyze whether such an oracle can help streaming algorithms to obtain improved bounds. I am not very familiar with this line of research so my comments will be more general in this case. The idea of improved bounds for streaming algorithms using machine learning oracle seems to be very appealing to me. The authors present novel theoretical results supporting this.\\n\\nExperiments are performed on real as well as synthetic datasets using Hsu et al.\\u2019s method as an oracle. Two real-world problems are selected, i.e., distinct packets in a network flow, Number of occurrences of each type of search query, and it is shown that using a oracle improves performance as compared to methods that do not use the oracle. Overall, I think the paper seems to be an interesting direction which has both formal guarantees and experiments validating them in real-world datasets. One issue is perhaps, very little in terms of related work. I am not sure if this is the first work in this direction of proving bounds assuming an oracle or if there is some background work that the authors could provide to put this into context.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The paper talks about calculating various statistics over data streams. This is a very popular topic and is very relevant in big data analysis. A lot of work has been done in this general area and on the problems that are discussed in the paper. The new idea in the paper is better streaming algorithms under the assumption that there is a \\u201cheavy hitters\\u201d oracle that returns data items that have a lot of representation in the stream. The authors give provably better algorithms for the distinct elements problem, F_p moment problem (p > 2), and some more problems. These are important problems in streaming data analysis. They improve the space bounds and interestingly in some cases the bounds are better than what is possible without the oracle assumption. This also shows the power of such an oracle. There are experimental results to demonstrate the efficiency of the algorithms. At a high level the work seems good and interesting for a large audience interested in streaming data analysis. I have not gone over the proofs in detail (much of which is in the appendix).\", \"Even though oracle results are interesting, to make it practical it may make sense to talk about a more realistic, weaker oracle where some of the queries may be incorrect.\", \"It may even make sense to minimise the number of oracle calls which can be thought of as a resource and discuss the relationship between number of oracle calls and other resources such as space.\"]}"
]
} |
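The reviews above describe the general recipe of pairing a learned heavy-hitter oracle with a classical sketch, following Hsu et al.: items the oracle predicts to be heavy get exact counters, and the remaining items share a small count-min sketch. The sketch below is an illustration of that recipe under stated assumptions (items are hashable; `oracle` is any callable returning True for predicted-heavy items), not the specific algorithms analyzed in the paper for distinct elements, frequency moments, or cascaded aggregates.

```python
import random
from collections import defaultdict

class LearnedCountMin:
    """Count-min sketch with an oracle-routed exact table for heavy hitters."""

    def __init__(self, oracle, width=256, depth=4):
        self.oracle = oracle
        self.exact = defaultdict(int)          # exact counts for predicted-heavy items
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]
        self.seeds = [random.randrange(1 << 30) for _ in range(depth)]

    def update(self, item, count=1):
        if self.oracle(item):
            self.exact[item] += count
        else:
            for table, seed in zip(self.tables, self.seeds):
                table[hash((seed, item)) % self.width] += count

    def estimate(self, item):
        if item in self.exact:
            return self.exact[item]
        # Standard count-min estimate: minimum over the hashed counters.
        return min(table[hash((seed, item)) % self.width]
                   for table, seed in zip(self.tables, self.seeds))
```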
HkxARkrFwB | word2ket: Space-efficient Word Embeddings inspired by Quantum Entanglement | [
"Aliakbar Panahi",
"Seyran Saeedi",
"Tom Arodz"
] | Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words. A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors. Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors. However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory. Here, we used approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing the word embedding matrix during training and inference in a highly efficient way. Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks. | [
"word embeddings",
"natural language processing",
"model reduction"
] | Accept (Spotlight) | https://openreview.net/pdf?id=HkxARkrFwB | https://openreview.net/forum?id=HkxARkrFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"8cT-gD6ic1",
"B1lpmFXhoH",
"B1xfxOX2oH",
"rkg2avmniH",
"HJlANh2NqH",
"r1lY-EZJqH",
"Hyl12frTtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798739040,
1573824805232,
1573824489818,
1573824452352,
1572289589879,
1571914753501,
1571799719468
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2045/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2045/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2045/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2045/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2045/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2045/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes quantum-inspired methods for increasing the parametric efficiency of word embeddings. While a little heavy in terms of quantum jargon, and perhaps a little ignorant of loosely related work in this sub-field (e.g. see the work of Coecke and colleagues from 2008 onwards), the majority of reviewers were broadly convinced the work and results were of sufficient merit to be published.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your detailed comments! We address the main points below:\\n\\n>> In my opinion the terminology from quantum computing and entanglement is an unnecessary complication. It would be better to simply talk about the special parametric form of the embeddings, which allows efficient storage. \\n\\nWe removed some of the quantum computation connections from the introduction, keeping only enough to justify the title. \\n\\n>> Tensor product representations have been used for embeddings before (but not with the goal of efficiency) (e.g. Arora et al 2018) https://openreview.net/pdf?id=B1e5ef-C-\\n\\nWe have expanded the \\u201crelated work\\u201d section to include:\\nIn more distantly related work, tensor product spaces have been used in studying document embeddings, by using sketching of a tensor representing $n$-grams in the document \\\\cite{arora2018compressed}.\\n\\n>> The paper covers related work briefly and does not compare experimentally to any other work aiming to reduce memory usage for embedding models (e.g. using up-projection from lower-dimensional embeddings, or e.g. this paper: Learning Compact Neural Word Embeddings by Parameter Space Sharing by Suzuki and Nagata.\\n\\nWe have added a reference to Suzuki and Nagata\\u2019s (N&S) very interesting work. We note their experiments show substantial drop in quality on downstream tasks when the space-saving rate increases past 64. For a |U| x D embedding, their PS-SGNS method uses |U| B log K + C B K F bits (N&S, section 3.3), where F is the number of bits (e.g. 32), C B and K are parameters, chosen to meet the assertion C B = D, and log K >=1 . Thus, their embeddings use |U| B log K + D K F, PS-SGNS method cannot use less then |U| + D memory. For the DrQA / SQuAD experiments, we have |U|=118655, D=300, yet our method stores the embedding using just 380 floating numbers, a 380-fold reduction over the theoretical limit of PS-SGNS, with little impact on solution quality. Other existing methods, e.g. Uniform Quantization and K-means Compression cannot offer more than 32 fold space reduction. K-means Compression \\u201cCompressing word embeddings\\u201d by Andrews et al. also has the same limit. \\u201cOn the Downstream Performance of Compressed Word Embeddings\\u201d shows that PCA, which also has |U| + D memory, shows a big drop in performance after 4 fold compression. On the other hand, the minimum space requirement of our method is only 4 log |U|, if we use 2x2 matrix F_j and tensor of order n=log |U|. This logarithmic dependence on |U| translates to savings that grow higher with higher dictionary sizes. \\n\\n>> The experimental results seem to conflate the issues of the dimensionality of the word embeddings versus that of the higher layers. \\n\\nIn all experiments, we kept the dimensions of the higher layers constant. The LSTM layers used the same 256 hidden size dimensions, but the embedding had a varying size. To clarify this, on page 6, we added \\\"we also explored 400, and 8000 for the embedding size but kept the dimensionality of other layers constant.\\u201d\\n\\n>> In addition, the activation memory is often the major bottleneck and not the parameter memory. These issues are not discussed or made explicit in the experiments.\\n\\nWe have added two paragraphs following the experimental results to discuss the total memory breakdown during training and inference, and clarify where the savings are. 
During inference, there is no need for storing all the activations so embedding and other model\\u2019s parameters are the major bottlenecks. During training, one can decrease the memory required for storing activations with a method e.g. `gradient checkpointing` used recently in \\u201cGenerating Long Sequences with Sparse Transformers\\u201d by Child et al., 2019.\\n\\n>> Ideally, an experimental comparison to a prior method for space-efficient embeddings.\\n\\nWe reduced the quantum part. We added classification to the description of the models:\\nFor the first two tasks, we now have \\u201cIn both the encoder and the decoder we used internal layers with dimensionality of 256\\u201d. For the third task, we have \\u201cWe used the DrQA's model, a 3-layer bidirectional LSTMs with 128 hidden units for both paragraph and question encoding.\\u201d. In terms of experimental comparison, as noted above, existing methods offer <64-fold embedding reduction, and the main goal of our method is to provide higher reduction rates. \\n\\n>> Question: What is the role of pre-trained Glove embeddings in the word2ket models? Was any pre-training done on unlabeled text?\\n\\nWe did not use any pre-training of the models, and did not use pre-trained embeddings. To clarify this, we added \\u201cWe trained the model for 40 epochs, starting from random weights and embeddings\\u201d to the experiments descriptions. \\n\\n>> Section 3.2 F_j: R^t -> R^p , do you mean R^q \\nIndeed. Apologies for this and other typos.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your review and comments!\\n\\n>> Cons: 1. (Minor) While this is not the focus of the paper, it would be useful to have at least one experiment with a state-of-the-art model on any of these tasks to further strengthen the results (most of the baseline models used currently seem to be below SOTA).\\n\\nIndeed, these models have been surpassed by transformer-based models. We explored several models, and ultimately settled on training Bert.base, but it was not possible to advance far into the run given our computational budget. On V100 GPU at our disposal, it would take a month to train it. After two days of running it using regular embeddings and using word2ketXS 4/1 embedding, there training loss curves were indistinguishable. But at that stage, the optimization is still in its early stages, so this is not a conclusive finding.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your detailed comments! We address the main points below:\\n\\n>> The choice of tasks to evaluate on is broad, which is a strength, but is missing simpler tasks that one would expect to see, such as a text classification dataset, or simple bag-of-vectors style models. \\n\\nFollowing the suggestion, we trained GloVe using regular embedding and word2ketXS 4/1 for 600K steps on enwiki8 dataset. The evaluation loss started at 0.75 and flattened to 0.03 for word2ketXS and 0.01 for regular embedding.\\n\\n>> authors do not run experiments with the now-ubiquitous Transformer.\\n\\nTraining transformers from scratch is not possible with our current computational budget as it takes more than a month to train using a single V100 GPU. We ran an experiment with Bert.base for 2 days using regular and word2ketXS 4/1 embedding. The difference at this stage is infinitesimal. \\n\\n>> much more work could be done to understand the best way to mitigate for longer training and inference times. \\n\\nFrom our experiments with pre-training Bert, which is larger model than the ones we used in our experiments, the increase in time dropped to 7%. \\n\\n>> Generally current limitations for training these kinds of models are the long training times and being able to fit large batches onto our hardware, and the vocabulary matrix is only a constant factor here. \\n\\nWe have added two paragraphs following the experimental results to discuss the total memory breakdown during training and inference, and clarify where the savings are. During inference, there is no need for storing all the activations so embedding and other model\\u2019s parameters are the major bottlenecks. During training, one can decrease the memory required for storing activations with a method e.g. `gradient checkpointing` proposed in \\u201cTraining deep nets with sublinear memory cost.\\u201d by Chen et. al. 2016 and used recently in \\u201cGenerating Long Sequences with Sparse Transformers\\u201d by Child et al., 2019. Other approaches, such as \\u201cALBERT: A Lite BERT for Self-supervised Learning of Language Representations\\u201d by Lan et. al. use matrix factorization that gives 5 to 30 fold reduction for the base and xxlarge model. \\u201cLow-Memory Neural Network Training: A Technical Report\\u201d by Sohoni reports 8 to 60 fold reduction in the peak memory required to train a model for a DynamicConv Transformer and WideResNet model by combining methods such as (1) imposing sparsity on the model, (2) using low precision, (3) microbatching, and (4) gradient checkpointing. \\n\\n>> Here are some questions for the authors that come to mind when reviewing:\\n\\n>> How does your method compare to other published methods on your benchmarks? \\n\\nThe obstacle to comparing published methods for word embedding compression empirically with outs is that existing methods have hard limits on the compression rate. E.g. bit-reductions techniques can only reduce 32bits to 1bit. Other methods also have hard limits on their storage requirement, for example PS-SGNS method cannot use less then |U| + D memory. For the DrQA / SQuAD experiments, we have |U|=118655, D=300, yet our method stores the embedding using just 380 floating numbers, a 380-fold reduction over the theoretical limit of PS-SGNS, with little impact on solution quality. We have expanded Related Work section to comment on this issue. \\n\\n>> which choices for r and k lead to the best time/memory/performance tradeoff? 
how does this compare to other compression methods (on your tasks)\\n\\nThe compression rate depends on the tensor product rank and order. Increasing the order leads to logarithmic compression, while increasing the rank reduces the compression by a linear rate. Increasing order and reducing rank both lead to lower flexibility in what the compressed model can approximate. In the Giga experiment, we reported embedding dim of 400 with order of 2 and rank 10. We also investigated ranks ranging from 1 to 128. Reducing rank below 8 leads to observable drop in accuracy, of about e.g. RG-1 drops from 35.17 to about 34. Increasing the rank past 10 does not increase accuracy. \\n\\n>> Seq2Seq models usually involve multiplying the the output hidden state with a vocab matrix before softmaxing over all the vocabulary produce word probabilities - did you account for this? Does your method work for the output vocab matrix? \\n\\n\\nNo. Neither our method nor other methods aim at compressing this matrix. We added two paragraphs at the end of experimental results section to clarify this and other memory considerations for training and inference. In Transformer, this matrix is shared with the embedding matrix, in principle we can use the same lazy tensor approach to utilize the transposed embeddings matrix without explicitly reconstructing it.\\n\\n\\n>> Did you investigate pre-training word2ket like word2vec or Glove?\\nNo, we trained all models from random initializations. We added a clarification highlighting that to the manuscript.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper explores two related methods to reduce the number of parameters required (and hence the memory footprint) of neural NLP models that would otherwise use a large word embedding matrix. Their method, inspired by quantum entanglement, involves computing word embeddings on-the-fly (or by directly computing the output of the \\\"word embedding\\\" with the first linear layer of network). They demonstrate their method can save an impressive amount of memory and does not exhibit big performance losses on three nlp tasks that they explore.\\n\\nThis paper is clearly written (with only a couple of typos) but does not yet reach publication standard. Whilst the empirical performance of their approach is promising from the perspective of saving reducing memory requirements, more experiments are required and more careful comparisons to baselines and other methods in the literature for saving memory/parameters. In general the related work and experimental sections are weak and brief, with only superficial analysis. There is lack of careful analysis and insight into their results, as well as a careful comparisons to other work in this area.\\n\\nThe choice of tasks to evaluate on is broad, which is a strength, but is missing simpler tasks that one would expect to see, such as a text classification dataset, or simple bag-of-vectors style models. In addition, the choice of models are somewhat outdated baselines. It seems that transformers would be an ideal setting for their approach, as transformers have rather high dimensional word embedding matrices, but the authors do not run experiments with the now-ubiquitous Transformer.\\n\\nThe quantum inspiration is largely a distraction, and I think the paper would benefit from this element being scaled back or removed in order to free up space for more experiments.\\n\\nThe authors acknowledge one key weakness of their approach, that both training and inference time are increased (by 28% or 55% longer for DocQA depending on compression) but much more work could be done to understand the best way to mitigate for longer training and inference times.\\n\\nThe authors argue that reducing the memory footprint of models is vital to address hardware limitations for training and inference for large models like BERT or ROBERTA, but this argument is not particularly strong. Generally current limitations for training these kinds of models are the long training times and being able to fit large batches onto our hardware, and the vocabulary matrix is only a constant factor here. And since training time is a bottleneck, the added value of saving memory vs slowing the training speed by 30-50% is debatable.\", \"here_are_some_questions_for_the_authors_that_come_to_mind_when_reviewing\": \"How does your method compare to other published methods on your benchmarks? \\n\\nwhich choices for r and k lead to the best time/memory/performance tradeoff? how does this compare to other compression methods (on your tasks)\\n\\nSeq2Seq models usually involve multiplying the the output hidden state with a vocab matrix before softmaxing over all the vocabulary produce word probabilities - did you account for this? Does your method work for the output vocab matrix? 
\\n\\nDid you investigate pre-training word2ket like word2vec or Glove?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes word2ket - a space-efficient form of storing word embeddings through tensor products. The idea is to factorize each d-dimensional vector into a tensor product of much smaller vectors (either with or without linear operators). While this results in a time cost for each word lookup, the space savings are enormous and can potentially impact several applications where the vocabulary size is too large to fit into processor memory (CPU or GPU). The experimental evaluation is done on several tasks like summarization, machine translation and question answering and convincingly demonstrates that one can achieve close to original model performance with very few parameters! \\n\\nThis approach would be very useful due to growing model sizes in many areas of NLP (e.g. large pre-trained models) and more broadly, deep learning.\", \"pros\": \"1. Novel idea, clear explanation of the method and the tensor factorization scheme. \\n2. Convincing experiments on a variety of NLP tasks that utilize word embeddings.\", \"cons\": \"1. (Minor) While this is not the focus of the paper, it would be useful to have at least one experiment with a state-of-the-art model on any of these tasks to further strengthen the results (most of the baseline models used currently seem to be below SOTA).\", \"minor_comments\": \"\", \"abstract\": \"stain -> strain\", \"page_2\": \"$||u|| \\\\rightarrow ||w||$\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents two methods to learn word embedding matrices that can be stored in much less space compared to traditional d x p embedding matrices, where d is the vocabulary size and p is the embedding size. Two methods are proposed: the first method estimates a p-dimensional embedding for a word as a sum of r tensor products of order n (tensor product of n q-dimensional embeddings). This representation takes rnq parameters which can be much less than p, since p = q^n. The second method factorizes a full d x p embedding matrix jointly as a tensor product of much smaller t x q matrices and can obtain even larger space savings. Algorithms for efficiently computing full p-dimensional representations are also included. When only dot products are needed, the p-dimensional representations do not need to be explicitly constructed.\\n\\nIn my opinion the terminology from quantum computing and entanglement is an unnecessary complication. It would be better to simply talk about the special parametric form of the embeddings , which allows efficient storage. Tensor product representations have been used for embeddings before (but not with the goal of efficiency) (e.g. Arora et al 2018) https://openreview.net/pdf?id=B1e5ef-C-\\nThe paper covers related work briefly and does not compare experimentally to any other work aiming to reduce memory usage for embedding models (e.g. using up-projection from lower-dimensional embeddings, or e.g. this paper: Learning Compact Neural Word Embeddings by Parameter Space Sharing by Suzuki and Nagata.\\n\\nThe experimental results on summarization, machine translation, and QA show that the methods can obtain comparable results to models using traditional word embeddings while obtaining savings of up to one-thousand fold decrease in space needed for the embeddings.\\n\\nThe experimental results seem to conflate the issues of the dimensionality of the word embeddings versus that of the higher layers. For example, in the summarization experiments, word2ketXS embeddings corresponding to 8000-dimensional embeddings are compared to a standard model with embeddings of size 256. The LSTM and layers for the word2ketXS model would become quite large but their size is not taken into account. In addition, the activation memory is often the major bottleneck and not the parameter memory. These issues are not discussed or made explicit in the experiments.\\n\\nOverall the paper can be a strong contribution if the methods are stated with less quantum computing jargon, the overall parameter size and speed of the different models is specified in the experiments, and more specific connections to related work are made. Ideally, an experimental comparison to a prior method for space-efficient embeddings.\", \"question\": \"What is the role of pre-trained Glove embeddings in the word2ket models? Was any pre-training done on unlabeled text?\", \"some_typos\": \"Section 1.1\\n\\n\\u201cmatrix, as the cost ..\\u201d -> \\u201cmatrix, at the cost\\u201d\\n\\nUnder Eq (2)\\nI think you mean w instead of u \\n\\nSection 3.2\", \"f_j\": \"R^t -> R^p , do you mean R^q\"}"
]
} |
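The storage scheme that the word2ket abstract and Review #1 describe, an embedding of dimension p = q^n kept as a sum of r tensor products of n small q-dimensional factors, is easy to illustrate in NumPy. This is a toy reconstruction sketch with hypothetical names, not the authors' implementation; with r = 2, n = 4, q = 5 it stores 40 numbers yet reconstructs a 625-dimensional vector.

```python
import numpy as np
from functools import reduce

def reconstruct(factors):
    """factors: array of shape (r, n, q). Returns the length-q**n embedding
    sum_i u_{i,1} (x) u_{i,2} (x) ... (x) u_{i,n} via Kronecker products."""
    return sum(reduce(np.kron, list(f)) for f in factors)

rng = np.random.default_rng(0)
factors = rng.standard_normal((2, 4, 5))  # rank r=2, order n=4, q=5: 40 numbers
v = reconstruct(factors)                  # reconstructed 5**4 = 625-dim vector
assert v.shape == (5 ** 4,)
```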
HJgRCyHFDr | On Weight-Sharing and Bilevel Optimization in Architecture Search | [
"Mikhail Khodak",
"Liam Li",
"Maria-Florina Balcan",
"Ameet Talwalkar"
] | Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search. However, its success is poorly understood and often found to be surprising. We argue that, rather than just being an optimization trick, the weight-sharing approach is induced by the relaxation of a structured hypothesis space, and introduces new algorithmic and theoretical challenges as well as applications beyond neural architecture search. Algorithmically, we show how the geometry of ERM for weight-sharing requires greater care when designing gradient-based minimization methods and apply tools from non-convex non-Euclidean optimization to give general-purpose algorithms that adapt to the underlying structure. We further analyze the learning-theoretic behavior of the bilevel optimization solved by practical weight-sharing methods. Next, using kernel configuration and NLP feature selection as case studies, we demonstrate how weight-sharing applies to the architecture search generalization of NAS and effectively optimizes the resulting bilevel objective. Finally, we use our optimization analysis to develop a simple exponentiated gradient method for NAS that aligns with the underlying optimization geometry and matches state-of-the-art approaches on CIFAR-10. | [
"neural architecture search",
"weight-sharing",
"bilevel optimization",
"non-convex optimization",
"hyperparameter optimization",
"model selection"
] | Reject | https://openreview.net/pdf?id=HJgRCyHFDr | https://openreview.net/forum?id=HJgRCyHFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"x795jnCPK",
"Syx3ZB43oH",
"BygN0VE2jH",
"B1lBMGw2KB",
"rJevgbYIFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798739013,
1573827843801,
1573827787911,
1571742221046,
1571356910576
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2044/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2044/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2044/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2044/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Since there were only two official reviews submitted, I reviewed the paper to form a third viewpoint. I agree with reviewer 2 on the following points, which support rejection of the paper:\\n1) Only CIFAR is evaluated without Penn Treebank;\\n2) The \\\"faster convergence\\\" is not empirically justified by better final accuracy with same amount of search cost; and\\n3) The advantage of the proposed ACSA over SBMD is not clearly demonstrated in the paper.\\n\\nThe scores of the two official reviews are insufficient for acceptance, and an additional review did not overturn this view.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response 1\", \"comment\": [\"Response: Thank you for your comments. We hope to address your issues below:\", \"1) Novelty and relevance of SBMD and ASCA to NAS:\", \"Novelty: We respectfully disagree with your comment. In fact, our work is the first to introduce ASCA and it is *not* an existing generic algorithm.\", \"Beta parameter: The beta parameter depends on the activation functions used and on the data. As we acknowledged at submission, this restricts the cases where the theory applies to smooth activation functions (sigmoid, tanh).\", \"2) Contribution of work on top of existing results for mirror descent:\", \"While mirror descent is indeed a well-known approach in the optimization literature, its connection to NAS has not been explored. Our theoretical guarantees are largely motivated by this connection and provide significant improvements over existing analysis for NAS (Akimoto et al., 2019; Carlucci et al., 2019; Nayman et al., 2019; Noy et al., 2019; Yao et al., 2019).\", \"The guarantees we provide for the ASCA variant of mirror descent are *new* and not previously known in any form outside the Euclidean case.\", \"3) Generalization bounds for NAS\", \"We *do* provide a theoretical bound for NAS. The main generalization result (Theorem 4.1) can be applied to non-convex inner objectives, including for NAS. We discuss what the result means for NAS starting at the bottom of page 7, with reference to existing theoretical work on complexity of the set of local minima of deep nets and a discussion of what further understanding can be gained.\"]}",
"{\"title\": \"Response 2\", \"comment\": \"Response: Thank you for your comments. We hope to address your issues below:\", \"theoretical_analysis\": \"We would like to emphasize that the convergence guarantees improve significantly upon several previous NAS analyses (Akimoto et al., 2019; Carlucci et al., 2019; Nayman et al., 2019; Noy et al., 2019; Yao et al., 2019). To our knowledge they are the first results that are both non-asymptotic (finite-time convergence) and optimize a quantity of direct interest (empirical risk objective).\", \"validity_of_exponentiated_gradient_update\": \"(1) NAS experiments: While we agree that the experiments would benefit from an additional dataset, we decided to focus on CIFAR-10 due to the high computational cost associated with running these experiments. Similar to Li & Talwalkar 2019, we have also observed that the variance associated with stage 3 evaluation of architectures is much higher on the Penn Treebank dataset and chose to instead focus our resources in thoroughly evaluating EDARTS on the lower variance CIFAR-10 benchmark. As stated in the last paragraph of the paper, we follow a higher bar for reproducibility than many other NAS publications (e.g., DARTS, SNAS, XNAS, ASAP, ProxylessNAS, etc) and report results for EDARTS for 3 different sets of seeds on CIFAR-10; EDARTS reaches ~2.70% test error on 2 out of the 3 runs. We have not seen similar broad reproducibility results for other NAS methods.\\n(2) Same search cost as first-order DARTS: the search cost is the same since we train for the same number of epochs. The faster convergence rate is reflected in the better resulting architecture. \\n(3) Kernel experiments: Please note that the kernel experiments were motivated by understanding weight-sharing and its generalization guarantees on a simpler problem (kernel ridge regression), not as a test of the performance of our optimizer. As a result, the successive halving method that exceeds exponentiated-gradient on those experiments is also an algorithm proposed in this paper, and it may be viewed as a hard-cutoff version of exponentiated-gradient. Furthermore, successive halving would be difficult to apply directly in the larger NAS search space.\\n\\nASCA vs. SBMD:\\n(1) Need for such an alternative: The motivation behind our paper is to theoretically understand NAS methods. Several NAS methods have found it useful to run many iterations on both the shared-weights and architecture-weights before switching (e.g. ENAS by Pham et al., 2018 and MdeNAS by Zheng et al., 2019). This approach is reflected in the ASCA algorithm and not in SBMD.\\n(2) Respective advantages: While most (but not all, as discussed above) NAS methods prefer an SBMD-style approach, ASCA may be preferable when fast solvers are available for strongly-convex relaxations of the problem at hand.\", \"wording\": \"Thank you for pointing these out - they will be corrected.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"I have not worked in the optimization filed and I am only gently followed the NAS field. I might under-valued the theoretical contribution.\", \"this_work_provides__theoretical_analysis_for_the_nas_using_weight_sharing_in_two_aspects\": \"1) The authors give non-asymptotic stationary-point convergence guarantees (based on stochastic block mirror descent (SBMD) from Dang and Lan (2015)) for the empirical risk minimization (ERM) objective associated with weight-sharing. Based on this analysis, the authors proposed to use exponentiated gradient to update architecture parameter, which enjoys faster convergence rate than the original results in Dang and Lan (2015). The author also provided an alternative to SBMD that uses alternating successive convex approximation (ASCA) which has similar convergence rate. \\n2) The author provide generalization guarantees for this objective over structured hypothesis spaces associated with a finite set of architectures.\\n\\nMy biggest concern is the validity of the proposed exponentiated gradient update, at least empirically. We indeed observed slightly improvement in test error over DARTS on the CIFAR10 benchmark but how reproducible the results are? Can you compare at least on the other benchmark (PENN TREEBANK) used in Liu et al 2019? Also, comparing to first order DARTS, search cost is the same and this is hard to justify the better convergence rate for EDARTS. In addition, the results on feature map selection is not very encouraging as the gap to the successive halving is significant.\\n\\nThe author proposed ASCA, as an alternative method to SBMD. Why we need such alternative? What is the advantage of ASCA comparing to SBMD? When should I use ASCA and when SBMD? How do they empirically different? \\n\\nThen I feel some wording can be improved. For example, \\\"while requiring computation training \\u2026\\u201d, \\u201c\\u2026which may be of independent interest\\u201d.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work proposes an algorithm for handling the weight-sharing neural architecture search problem. It also derives generalization bound for this problem.\", \"the_reviewer_has_several_concerns\": \"1) the SBMD and ASCA algorithms are existing generic algorithms. The analysis in this work also looks very generic. There is a sense of disconnection with the considered training problems. The reviewer would like to see more discussions on how to connect the algorithms with specific NAS problems. For example, what is the beta parameter when training a NAS problem?\\n\\n2) The convergence rate improvement brought by using mirror descent has been long known. It is not easy to see what is the contribution of this work.\\n\\n3) The generalization part seems to be meaningful. But it may be much stronger if the NAS problem can also have a theoretical bound. It is less appealing to only discuss cases with strongly convex objectives.\"}"
]
} |
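The exponentiated-gradient update debated in this record (the basis of EDARTS) is, in its textbook mirror-descent form, a multiplicative update on simplex-constrained architecture weights. The sketch below illustrates only that update; the gradient is a placeholder for what would, in NAS, come from the validation loss of the shared-weights network, and nothing here reproduces the authors' actual EDARTS code.

```python
import numpy as np

def eg_step(theta, grad, lr=0.1):
    # Mirror descent with an entropic regularizer on the simplex:
    # multiplicative update followed by renormalization.
    theta = theta * np.exp(-lr * grad)
    return theta / theta.sum()

theta = np.full(4, 0.25)               # uniform weights over 4 candidate ops
grad = np.array([0.9, 0.1, 0.5, 0.3])  # placeholder per-op gradient signal
for _ in range(100):
    theta = eg_step(theta, grad)
print(theta.round(3))  # mass concentrates on the op with the smallest gradient
```

Unlike a Euclidean projected-gradient step, the multiplicative update keeps the weights strictly positive and normalized by construction, which is the alignment with the underlying optimization geometry that the abstract and rebuttal refer to.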
BkxRRkSKwr | Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models | [
"Xisen Jin",
"Zhongyu Wei",
"Junyi Du",
"Xiangyang Xue",
"Xiang Ren"
] | The impressive performance of neural networks on natural language processing tasks is attributed to their ability to model complicated word and phrase compositions. To explain how the model handles semantic compositions, we study hierarchical explanation of neural network predictions. We identify non-additivity and context independent importance attributions within hierarchies as two desirable properties for highlighting word and phrase compositions. We show some prior efforts on hierarchical explanations, e.g. contextual decomposition, do not satisfy the desired properties mathematically, leading to inconsistent explanation quality in different models. In this paper, we start by proposing a formal and general way to quantify the importance of each word and phrase. Following the formulation, we propose the Sampling and Contextual Decomposition (SCD) algorithm and the Sampling and Occlusion (SOC) algorithm. Human and metric evaluations on both LSTM models and BERT Transformer models on multiple datasets show that our algorithms outperform prior hierarchical explanation algorithms. Our algorithms help to visualize semantic composition captured by models, extract classification rules and improve human trust in models. | [
"natural language processing",
"interpretability"
] | Accept (Spotlight) | https://openreview.net/pdf?id=BkxRRkSKwr | https://openreview.net/forum?id=BkxRRkSKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_AKqEaHJuY",
"r1xK3FH2sS",
"SyeYa8TisS",
"S1lcdXmDor",
"ByljpJ7wiS",
"SJeysJXPor",
"ryx68k7wsr",
"S1x7E1QwoH",
"BJey9CJQqB",
"r1gOVjXf5H",
"HJgBoPd0Kr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738981,
1573833137449,
1573799617448,
1573495666435,
1573494723173,
1573494679417,
1573494613437,
1573494571055,
1572171398582,
1572121392441,
1571878813427
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2043/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2043/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2043/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2043/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2043/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2043/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2043/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2043/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2043/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2043/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The authors present a hierarchical explanation model for understanding the underlying representations produced by LSTMs and Transformers. Using human evaluation, they find that their explanations are better, which could lead to better trust of these opaque models.\\n\\nThe reviewers raised some issues with the derivations, but the author response addressed most of these.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for responding\", \"comment\": \"The authors comments have addressed my concerns. I will raise the score.\"}",
"{\"title\": \"Thanks for responding to the comments.\", \"comment\": [\"Thanks to the authors for responding to my comments in detail.\", \"Thanks for clarifying the concerns around the performance of \\\"Statistic\\\" approach.\", \"Thanks for mentioning the reasons behind not using \\\"Statistic\\\" in the human-studies. My concern was primarily motivated by the performance of \\\"Statistic\\\" in the results. It seems reasonable to assume one would consider including \\\"Statistic\\\" in the human-studies as well. My only basis as of now is the hypothesis provided by the authors regarding this.\"]}",
"{\"title\": \"Paper revised\", \"comment\": [\"We would like to thank all the reviewers for their efforts and valuable comments. We have revised the paper to address the questions from reviewers, and also added additional ablation study. The major updates are:\", \"Added more related works in section 5, as suggested by reviewer #4.\", \"Added additional ablation study in section 4.4. We show padding the context gets inferior performance compared to sampling the context.\", \"Improved clarity in section 3.3-3.4, in response to reviewer #4.\"]}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you very much for your comments! We will carefully revise and improve the draft for the final version. We believe the major contributions of our paper is to propose a formulation for measuring context independent importance, and proposed two explanation algorithms derived from the formulation. Please also refer to our responses to the other two reviewers if you have any further questions.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for your careful evaluation of the paper and encouraging comments. Here are our responses to the questions.\\n\\n-------------------------------\", \"q1\": \"SOC and SCD don\\u2019t always end up outperforming other approaches (specifically Statistic) on the SST-2, Yelp and TACRED datasets (Table. 1).\", \"a1\": \"\\u201cStatistic\\u201d is a direct approximation of the context independent importance defined in Eq.3 by sampling over the dataset. Therefore, the good performance of Statistic is actually an evidence for the effectiveness of our proposed formulation. However, one drawback of the Statistic based approach is that it works only for words and phrases that appear frequently in the dataset, which is usually not the case especially for long phrases. It is verified by the evaluation results shown in Table 1, where Statistic performs competitively on word level explanations, but perform poorly on phrase level explanations, with only a small improvement over input occlusion. It motivates us to sample the contexts of a phrase from a trained language model.\\n\\nQ1 (cont.): For the human evaluation experiments, dothe authors have any insights on how well does Statistic perform on the human-evaluation set of experiments?\\n\\nRegarding human evaluation experiments, we limited the number of presented explanations to 4, mainly in consideration of reducing the difficulties for evaluators. Therefore, we did not include Statistic in human experiments. Nevertheless, according to the evaluation results and analysis based on Table 1, we hypothesis that Statistic based approach would perform inferiorly compared to SOC and SCD in phrase level explanations.\\n\\n-------------------------------\", \"q2\": \"Another possible aspect to look into human evaluation could be -- \\u2018\\u2019Do the generated explanations help humans predict the output of the model?\\u2019\\u2019 This captures reliability in a very explicit sense. Do the authors have any thoughts on this and potential experiments that might address this? I don\\u2019t think not addressing this is necessarily detrimental to the paper but I\\u2019m curious to hear the thoughts of the authors on the same.\", \"a2\": \"Yes, we agree that we may explore other human evaluation protocols. We find the suggested human evaluation protocol quite consistent with a popular definition of interpretability: \\u201cthe degree to which a human can consistently predict the model\\u2019s result\\u201d[1]. Thank you for the suggestion and we will consider this protocol in our future research.\\n\\n[1] Molnar, Christoph. \\\"Interpretable machine learning\\\", 2019.\"}",
"{\"title\": \"Response to Review #4 (1/2)\", \"comment\": \"We very much appreciate your careful and valuable comments! Please find our detailed response is as follows.\\n\\n-------------------------------\", \"q1\": \"In Eq. 3 (context independent importance), the expectation is taken over the difference between the prediction on the sampled sentence and the one with the phrase removed. Will it lack of sensitivity when there are multiple evidences saturating the prediction? For example, consider the input: \\\"The movie is the best that I have ever seen. It is remarkable!\\\". Removing the word \\\"best\\\" alone doesn't alter the prediction much.\", \"a1\": \"It is a great question. We agree that importance measured by removing a phrase lacks sensitivity when there are multiple evidences saturating the prediction *given one specific input*. However, the problem is greatly alleviated in Eq.3 by sampling input sequences $x$ from $p(x| p \\\\subseteq x)$, as these drawn samples may have non-saturating predictions. For the given example in the question, our algorithms may evaluate the importance of the word \\u201cbest\\u201d at some sampled input sequences where the word \\u201cbest\\u201d is the only evidence, such as \\u201cthe movie is the best that I have ever seen\\u201d. In this way, the proposed context-independent importance is robust to saturation.\\n\\nWe also note that our formulation is robust to saturation in a similar way to other explanation algorithms, such as Shapley values. These algorithms average word importance given different subsets of context words in a specific input, to cover the case when the prediction is not saturated; our formulation is more general as it evaluates word/phrase importance given all possible contexts, weighted by their probability at the input space.\\n \\nThen, for the N-context independent importance, we utilized the assumption that a phrase usually strongly interacts with its neighboring contexts. The parameter analysis on N in section 4.4 shows the assumption generally holds true. Meanwhile, we also acknowledged at the end of section 3.4 that it is possible to extend SOC by applying other measures of phrase importance that are less affected by saturation in place of the input occlusion. We think it would potentially be helpful for longer input sequences. We have expanded the discussion at the end of section 3.4 in the new version of the paper.\\n\\n-------------------------------\", \"q2\": \"In eq 5, the expectation is computed over P(h | beta). It is NOT THE SAME as sampling words p(x_{\\\\delta} | x_{-\\\\delta}) and then consider their hidden states.\", \"a2\": \"We appreciate the reviewer's careful look into the formulation. However, we did not mean $h$ is drawn from $p(h | \\\\beta)$, and we believe the text part has caused this confusion. In the updated paper, at the beginning of Section 3.3 (after Eq. 5), we explicitly state $h$ is calculated from the sampled input sequences conditioned on $x_{-\\\\delta}$. 
To clarify, in CD, $\\\\beta^\\\\prime$ terms are calculated as average activation differences after subtracting $\\\\beta$, for the activation values $h$ computed on the specific input $x$ and that when $h=\\\\beta$; SCD follows general protocols of CD and also calculates $\\\\beta^\\\\prime$ terms as average activation difference after subtracting $\\\\beta$ terms, but for the activation values $h$ computed on each sampled input sequence.\\n\\n-------------------------------\", \"q3\": \"How does SCD deal with element-wise multiplication?\", \"a3\": \"We treat element-wise multiplication the same as other activation functions, as stated at the end of section 3.3. More specifically, for $h = h_1 * h_2 $, the $\\\\beta^\\\\prime$ term is computed as $E_{\\\\gamma_1, \\\\gamma_2}[(\\\\beta_1 + \\\\gamma_1) * (\\\\beta_2 + \\\\gamma_2) - \\\\gamma_1 * \\\\gamma_2]$. We have added the formulation at the end of section 3.3 to improve the clarity.\\n\\n-------------------------------\", \"q4\": \"What's the difference between Eq. 3 and Eq. 8?\", \"a4\": \"To clarify, Eq.3 introduces a general form of context independent importance, where the masking operation $x\\\\backslash p$ should be defined in specific explanation algorithms. Eq.8 shows our SOC algorithm, where the masking operation implemented as replacing the given phrase with padding tokens.\"}",
"{\"title\": \"Response to Review #4 (2/2)\", \"comment\": \"Q5: What contributes to the differences between the reported results in CD paper and ours?\\n\\nThank you for your careful look into the experimental results. For the CD algorithm, we use the official code released at [1] to ensure the correctness of the implementation. We also use standard data splits for training, validation, and testing, and evaluate our explanation algorithms on test set predictions. Therefore, we believe the difference is caused by the difference between the models used for reporting their results and ours. Nevertheless, we find that our algorithms perform competitively on different models: for example, by using the released code of CD for training the models on SST-2 dataset, we get a word correlation score of ~0.567 for CD, and ~0.697 for SOC.\\n\\n-------------------------------\", \"q6\": \"Regarding metrics selected in Table 1\\n\\nQuantitative evaluations of explanations are believed to be challenging as there are hardly any ground truths for explanations; the evaluations have to rely on hypothesis on what a neural network may have captured. We believe a good practice to evaluate neural network explanations is to compare it with multiple reference word or phrase importance. We chose coefficients of linear models as one of these references, because these coefficients are believed to be representative of word importance when the linear model is sufficiently accurate. We select the evaluation protocol also in consideration that it is reported in the CD paper.\\n\\nIt is notable that in addition to the correlation with the linear model coefficients, we also tested phrase importance correlation with human annotations, as well as performed human evaluations and qualitative studies. Having all these results showing great consistency, we believe the effectiveness of our algorithms is justified by our experiments.\\n\\n-------------------------------\", \"missing_related_references\": \"Thank you for your pointers on related works. We have included them with some discussion in the updated related works section. Please check the updated version.\\n\\n [1] https://github.com/jamie-murdoch/ContextualDecomposition\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a hierarchical decomposition method to encode the natural language as mathematical formulation such that the properties of the words and phrases can encoded properly and their importance be preserved independent of the context. This formulation is intuitive and more efficient compared to blindly learning contextual information in the model. The proposed method is a modification of contextual decomposition algorithm by adding a sampling step. They also adapt the proposed sampling method into input occlusion algorithm as another variant of their method. The proposed method is tested on LSTM and BERT models over sentiment datasets of Stanford Sentiment Treebank-2 and Yelp Sentiment Polarity and TACRED relation extraction dataset and showed more interpretable generated hierarchical explanations compared to baselines.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\nThe authors proposed a method for generating hierarchical importance attribution for any neural sequence models (LSTM, BERT, etc.) Towards this goal, the authors propose two desired properties: 1) non-additivity, which means the importance of a phrase should be a non-linear function over the importance of its component words; 2) context independence, which means that the attribution of any given phrase should be independent of its context. For example, in the sentence \\\"the film is not interesting\\\", the attribution of \\\"interesting\\\" should be positive while the attribution of \\\"not interesting\\\" should be negative.\\n\\nFollowing these two properties, the authors designed three algorithms to post-hoc analysis the importance of a given phrase p.\\n1. [Sec 3.2] eq 4. expected differences in model predictions between the a sentence that contains p and the same sentence with p removed. The expectation is computed over the conditional probability Prob(sentence | p in sentence). In practice, the authors use eq 3 as a proxy to eq 4.\\n2. [Sec 3.3] eq 5. expected differences in the activation values of each layer. The expectation is computed over the conditional probability Prob(context-dependent representations | phrase-dependent representations).\\n3. [Sec 3.4] eq 8. similar to 1 but we replace the phrase p with padded tokens.\\n\\nThe authors conducted experiments on SST and Yelp. Results show that their proposed context-independent attribution correlates better with a trained linear model's coefficient, achieves higher human trust.\", \"decision\": \"reject.\\n\\nWhile I found the idea of marginalizing out the local context interesting, I think the paper still needs more work on its formulation, experiments and writing.\", \"formulation\": \"1. In eq 3, the expectation is taken over the difference between the prediction on the sampled sentence and the one with the phrase removed. This may be problematic for longer inputs (a pargraph), where the overall prediction may not change a lot when you remove a single phrase (since the evidence is everywhere). For example, consider the input: \\\"The movie is the best that I have ever seen. It is remarkable!\\\". Removing the word \\\"best\\\" alone doesn't alter the prediction much. \\n2. In eq 5, the expectation is computed over P(h | beta). It is NOT THE SAME as sampling words p(x_{\\\\delta} | x_{-\\\\delta}) and then consider their hidden states.\\n3. In Sec 3.1, you mentioned that CD is limited since the decomposition of activation sigma evolves context information gamma, and you resolved this by marginalization. But it seems to me that the computation of element wise multiplication also evolves context information. How do you deal with these?\\n3. What's the difference between eq3 and eq8? Are you just changing from remove the phrase completely to replace it by mask?\", \"experiments\": \"1. The performance of CD in Table 1 seems very different to the original CD paper (which is 0.758 for SST and 0.520 for Yelp). I am not sure what contributes to this big difference. Is it the trained model or data splits?\\n2. Table 1 shows that your methods achieves higher correlation to linear model's coefficients. 
But why shall we consider linear model's coefficients as the ground truth for the learned neural model? For example, the fine-tuned BERT achieves lower correlation corresponding to the LSTM. Does that mean the BERT model performs worse than LSTM?\", \"missing_related_references\": \"1. Explaining Image Classifiers by Counterfactual Generation\\n2. Rationalizing Neural Predictions\\n3. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation\\n4. L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary - The paper addresses the problem of hierarchical explanations in deep models that handle compositional semantics of words and phrases. The paper first highlights desirable properties for importance attribution scores in hierarchical explanations, specifically, non-additivity and context independence, and shows how prior work on additive feature attribution and context decomposition doesn\\u2019t accurately capture these notions. After highlighting the said properties in context of related work, the authors propose an approach to calculate the context-independent importance of a phrase by computing the difference in scores with and without masking out the phrase marginalized over all possible surrounding word contexts (approximated by sampling surrounding context for a fixed radius under a language model). Furthermore, based on the above, the authors propose two more score attribution approaches -- based on integrating the above sampling step with (1) the contextual decomposition pipeline and (2) the input occlusion pipeline. Experimentally, the authors find that the attribution scores assigned by the proposed approach are more correlated with human annotations compared to prior approaches and additionally, the generated explanations turn out to be more trustworthy when humans evaluate their quality.\\n\\nStrengths\\n\\n- The paper is well-written and generally easy to follow. The authors do a good job of motivating and highlighting the desired properties of importance attribution scores and developing the proposed scoring mechanism. The proposed scoring mechanism ties in seamlessly with the existing contextual decomposition and occlusion pipelines and leads to improved performance when the generated explanations are evaluated.\\n\\n- The proposed approach involving masking out the phrase and marginalizing over possible surrounding word-concepts is novel and offers an interesting perspective on how to approach context independent scoring of phrases -- (1) phrases don\\u2019t exist independent of the surrounding context and therefore marginalizing over all possible surrounding concepts makes sense and (2) replacing the intractable enumeration over all possible surrounding concepts with samples from a language model makes the score attribution process faster and more scalable modulo the learnt language model.\\n\\n- Sec. 4.4 offers interesting insights. I like that the authors performed this ablation given that the expectation over surrounding contexts is computed approximately via samples under a language model. There\\u2019s a clear increase in terms of the attribution scores as the number of samples increases and the neighborhood size is increased. It is interesting to note that there is an approaching plateau region where increasing the neighborhood size won\\u2019t affect the assigned scores. This experiment provides a holistic picture of the behavior of the interpretability toolkit (manifesting in terms of attribution scores) given the approximations involved. I would encourage the authors to flesh this out even more. \\n\\nWeaknesses\\n\\nHaving said that, there are some minor comments that I\\u2019d like to point out / get the authors\\u2019 opinion on. 
Highlighting these below:\\n\\n- While SOC and SCD don\\u2019t always end up outperforming other approaches (specifically Statistic) on the SST-2, Yelp and TACRED datasets (Table 1), for the human evaluation experiments, the authors only compare with CD, Direct Feed, ACD and GradSHAP. Do the authors have any insights on how well Statistic performs on the human-evaluation set of experiments?\\n\\n- While inspiring trust in users is one aspect of evaluating explanations via humans, it\\u2019s slightly unclear what \\u2018trust\\u2019 in this sense inherently identifies. Although it might implicitly capture some notion of reliability (and predictability of the explanations by humans), asking users to rank explanations across a spectrum of \\u2018best\\u2019 to \\u2018worst\\u2019 doesn\\u2019t explicitly capture that. Another possible aspect to look into could be -- \\u2018\\u2019Do the generated explanations help humans predict the output of the model?\\u2019\\u2019 This captures reliability in a very explicit sense. Do the authors have any thoughts on this and potential experiments that might address this? I don\\u2019t think not addressing this is necessarily detrimental to the paper but I\\u2019m curious to hear the thoughts of the authors on the same. \\n\\nReasons for rating\\n\\nBeyond the above points of discussion, I don\\u2019t have major weaknesses to point out. I generally like the paper. The authors do a good job of identifying the sliver in which they make their contribution and motivate the same appropriately. The proposed phrase attribution scoring mechanism is motivated from a novel perspective and has a reasonable approximation characterized appropriately by the ablations performed. The strengths and weaknesses highlighted above form the basis of my rating.\"}"
]
} |
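The SOC algorithm at the center of this record (Eq. 8 in the paper, per the responses) scores a phrase by averaging its occlusion effect over contexts resampled from a language model. The following Python sketch paraphrases that procedure; `model` (a scalar scorer, e.g. the logit of the predicted class) and `resample_context` (the LM-based neighbourhood sampler) are assumed stand-ins, not the authors' released code.

```python
def mask_span(tokens, span, pad="<pad>"):
    # Occlusion: replace the phrase at span = (i, j) with padding tokens.
    i, j = span
    return tokens[:i] + [pad] * (j - i) + tokens[j:]

def soc_importance(model, resample_context, tokens, span, n_samples=20, radius=10):
    # Context-independent importance: average of
    #   score(sampled input) - score(sampled input with phrase masked),
    # where resample_context(tokens, span, radius) is assumed to redraw the
    # words within `radius` of the phrase from a trained language model.
    total = 0.0
    for _ in range(n_samples):
        xs = resample_context(tokens, span, radius)
        total += model(xs) - model(mask_span(xs, span))
    return total / n_samples
```

The resampling step is what separates SOC from plain input occlusion; per the authors' reply to Review #4, it is also why the score stays sensitive even when one specific input is saturated with evidence.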
H1lTRJBtwB | Compositional Transfer in Hierarchical Reinforcement Learning | [
"Markus Wulfmeier",
"Abbas Abdolmaleki",
"Roland Hafner",
"Jost Tobias Springenberg",
"Michael Neunert",
"Tim Hertweck",
"Thomas Lampe",
"Noah Siegel",
"Nicolas Heess",
"Martin Riedmiller"
] | The successful application of flexible, general learning algorithms to real-world robotics applications is often limited by their poor data-efficiency. To address the challenge, domains with more than one dominant task of interest encourage the sharing of information across tasks to limit required experiment time. To this end, we investigate compositional inductive biases in the form of hierarchical policies as a mechanism for knowledge transfer across tasks in reinforcement learning (RL). We demonstrate that this type of hierarchy enables positive transfer while mitigating negative interference. Furthermore, we demonstrate the benefits of additional incentives to efficiently decompose task solutions. Our experiments show that these incentives are naturally given in multitask learning and can be easily introduced for single objectives. We design an RL algorithm that enables stable and fast learning of structured policies and the effective reuse of both behavior components and transition data across tasks in an off-policy setting. Finally, we evaluate our algorithm in simulated environments as well as physical robot experiments and demonstrate substantial improvements in data-efficiency over competitive baselines. | [
"Multitask",
"Transfer Learning",
"Reinforcement Learning",
"Hierarchical Reinforcement Learning",
"Compositional",
"Off-Policy"
] | Reject | https://openreview.net/pdf?id=H1lTRJBtwB | https://openreview.net/forum?id=H1lTRJBtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"LBNnCAztZ",
"Bylosox5ir",
"B1x4J2XDsB",
"Byg9Co7woH",
"r1lZcsQwoS",
"BJl2vi7DsB",
"H1ezMRHRtS",
"HJxjYHATtr",
"BkxMSq-nKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738952,
1573682083286,
1573497819680,
1573497810000,
1573497737380,
1573497699562,
1571868170146,
1571837315365,
1571719738295
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2041/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2041/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2041/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2041/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2041/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2041/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2041/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2041/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper is concerned with improving data-efficiency in multitask reinforcement learning problems. This is achieved by taking a hierarchical approach, and learning commonalities across tasks for reuse. The authors present an off-policy actor-critic algorithm to learn and reuse these hierarchical policies.\\n\\nThis is an interesting and promising paper, particularly with the ability to work with robots. The reviewers did however note issues with the novelty and making the contributions clear. Additionally, it was felt that the results proved the benefits of hierarchy rather than this approach, and that further comparisons to other approaches are required. As such, this paper is a weak reject at this point.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your clarifications\", \"comment\": \"Thank you for your clarifications\\n\\nThese comments have helped clear up my understanding of some important details.\"}",
"{\"title\": \"Official Blind Review #3: Author feedback\", \"comment\": \"Thank you very much for the detailed review and constructive feedback. In particular, we are glad to see recognition for the paper\\u2019s clarity, the complexity of tasks and real world experiments.\\n\\nAs requested, we extended our discussions of the results to provide further insights. In general, we also recommend the additional ablation studies in the appendix which sadly do not fit into the main paper and provide additional insight into the method.\\n\\nRegarding point 1)\\nWe are investigating transfer to new domains in Appendix 9 and are able to demonstrate significantly accelerated training on these new domains. We investigate 2 different methods for using pre-trained low level policies: one with only the pre-trained ones and one with an additional randomly initialised low level policy. If the new task is very similar to the previous domain, only the old sub-policies suffice for good performance while domains with significantly different final tasks require additional sub-policies to perform well.\\n\\nRegarding point 2)\\nWe provide a substantial description of all tasks in the appendix including quantitative description of their reward functions. To provide a better understanding of these tasks and learned solutions, we also provide videos on the paper\\u2019s website https://sites.google.com/corp/view/rhpo/\\n\\nPlease let us know if there are remaining open questions.\"}",
"{\"title\": \"Official Blind Review #1: Author feedback\", \"comment\": \"Thank you very much for the detailed feedback. We\\u2019re glad that the complexity of tasks and real world experiments are recognised and worked to address open questions and clarify contributions and goal of the paper in the following sections.\\n\\nOur main contributions are focused on improving data-efficiency in multitask RL in complex, real-world domains. We show that the overall approach enables solving of tasks directly on a robot that go far beyond what has previously been demonstrated in this setting. First, we extend existing investigations into hierarchical RL with a focus on robustness (see points below regarding trust-region) and benefits and challenges for compositionality (see single and multi task). \\n\\nWe focus on a simple method and show strong performance gains based on a small number of key improvements. To minimise compounding aspects, this work does not include any constraints on low level policies regarding the interface (no sub-goals inputs common to HIRO, FuN and HAC which can be seen as orthogonal to our work) and to focus on per time-step mixture distributions to investigate composition rather than temporal abstraction. Here, we do not claim that 1-step options, or the lack of intrinsic rewards are our contribution but that the conceptually simple setup is good enough to achieve dramatic improvements in data efficiency, without the other ingredients. \\n\\nWe focus on performing complex, long-term real world and simulated experiments requiring robust, data-efficient algorithms. Therefore, we develop a robust (trust region), off-policy AC algorithm, that allows data sharing across tasks (\\u201coff-policy\\u201d is used here in a strong sense since data is shared across tasks). This setting (both regarding off-policy and trust region improvement) is rather different from the typical near on-policy settings studied often in the literature and in particularly differs from typical application domains for the mentioned work on OC and FuN. In additional ablations, we show the importance of using a trust region constraint for the high-level controller, which has not been investigated in prior work to our knowledge. Hopefully, these algorithmic advances can additionally be transferred to other hierarchical schemes in future work. \\n\\nWe extend two policy optimisers MPO & SVG[1] (with SVG described in Appendix 10). The extension of MPO is used for our main experiments as it is empirically more robust (see Pile1, Pile2 and Cleanup2 domains) and conceptually simpler by allowing to directly maximize probabilities under a mixture distribution in the policy improvement step without relying on MC gradient estimation via likelihood ratio or reparametrisation trick. This form of actor-critic algorithm enables us additionally to train all options and not just the executed ones based on each trajectory. Finally the generation of rewards for all tasks, common with e.g. scheduled auxiliary control or hindsight experience replay, enables to utilise the exploration capabilities for one task to benefit learning all other tasks.\\n\\nWe have improved the submission to better point out connections to prior work by moving additional results into figures in the main paper to clarify connections. Our extension of SVG [1] trains mixture of Gaussians policies by using the Gumbel Softmax trick (in the Appendix). 
This leads to increased performance compared to flat Gaussian policies and actually is conceptually similar to the option-critic. Simplified, SVG is an actor-critic algorithm which builds on the reparametrisation instead of likelihood ratio trick (commonly leading to lower variance). Since we do not build on temporally extended sub-policies we can work with a single critic, thereby simplifying the algorithm. \\n\\nThe idea of using hierarchy for multitask (reinforcement) learning is indeed not new and has not been new for many years. However there has been process towards improved data efficiency and better understanding of algorithms and mechanisms (for example in the mentioned related work). Our experiments include both complex simulated and real-world tasks and we demonstrate the data-efficiency benefits and robustness of our algorithm as well as provide further insights via an extensive set of ablations. We believe that our experiments represent a relevant step towards the deployment of RL algorithms to the real world, a fundamental, open line of research. \\n\\nFinally, we are of course happy to extend our literature review by the suggested references.\\nPlease let us know if there are remaining open questions. \\n\\n[1] Heess, Nicolas, et al. \\\"Learning continuous control policies by stochastic value gradients.\\\" Advances in Neural Information Processing Systems. 2015.\"}",
"{\"title\": \"Official Blind Review #2: Author Feedback pt2\", \"comment\": [\"Minor questions (in the order of questions asked):\", \"Section 3.2 reference policy: as described on the same page under the first step of policy improvement, we use the target policy as reference policy and have improved clarity when introducing the term. In our implementation, the corresponding target network is fixed for a certain number of learning steps.\", \"Rewards for all tasks simultaneously: there are clear limitations of when rewards cannot be determined for tasks in hindsight, but most commonly we work with domains where rewards simply can be computed based on state, action and observation data for a wide range of tasks. Even when we do not know about a task's existence when generating trajectories you can imagine sampling transition data later and assigning rewards for new tasks (as long as the stored data suffices for this computation). Finally, the paper includes real world tasks where exactly this is the case.\", \"Single task domains: As correctly observed , the single task domains are unable to benefit from cross-task transfer. In this context, these domains are used to investigate compositionality and show that additional incentives were required for sub-policy specialisation and the resolution performance gains via composition. Please also see section 4.1 for more details.\", \"Section 4.1 different initial means: as described we distribute the mixture\\u2019s initial means equally between minimum and maximum action range (resulting in means of -1, 0, 1 in our environments). The different initialisations provide additional incentive for the specialisation of different mixture components, which is not needed in the multitask domains. We have clarified this in the paper.\", \"Tasks in Pile1 domain: These tasks (including reaching, grasping, lifting etc) are related. The goal of evaluating on 3 different domains (Pile1, Pile2, Cleanup2) is to show how compositional policies become more relevant for domains with more variation and less overlap between tasks. In this context Pile1 is, as correctly identified, the most similar and simple and Cleanup2 the most complex and varied domain.\", \"Improvement between domains: Similarly to the point above, we increase the complexity in multitask domains by learning to solve more tasks and less similar tasks. In this context, we were able to show that with increasing complexity (in particular regarding task similarity), compositional models become more relevant.\", \"Across all environments, \\u2018reach\\u2019 is a comparably easy task with dense reward; as soon as the agent receives any reward, all baselines and RHPO quickly learn to solve this. Please see page 20-24 of the appendix for all details regarding the task definitions. Intuitively, in the Pile1 domain, reaching gets a reward for getting close to the blocks, grasping for contact with the blocks, lifting for having the blocks above a specific height of the group, placing and stack depends on the positions between two blocks and stack-and-leave depends on having one brick on top of the other with the gripper further away from the stack (which is the hardest configuration). We hope this also clarifies the difference between reach+grasp and stack. For all other domains, please do have a look at the appendix of the paper.\", \"Components: we have improved clarity about this aspect throughout the paper. 
Components are the mixture components from the mixture of Gaussians (which is the policy distribution under RHPO).\", \"Algorithmic details: We worked hard on providing the most concise presentation of our algorithm in this paper. Section 3.2 includes the description of policy evaluation and improvement steps which is sufficient for understanding the training procedure. Finally, the appendix should provide for all necessary aspects for reproduction of the work with the algorithm described in detail in section A.2. In this paper, we do not aim to investigate temporal abstraction and focus on training low and high level jointly by maximising the probability of actions under a mixture distribution (which consists of the high and low level policies). However, evaluating temporal abstraction under this data-efficient and robust framework for real-world robotics tasks can be a valuable future direction.\"]}",
"{\"title\": \"Official Blind Review #2: Author Feedback\", \"comment\": \"Thank you for the detailed and constructive feedback. We\\u2019re glad that the complexity of tasks, real world experiments and contributions are recognised. In the following, we aim to provide clarity to open questions in the review in particular regarding choice of methods, extensions and requirements to solve the described tasks.\\n\\nOur main contributions are focused on improving data-efficiency in multitask RL in complex, real-world domains. We show that the overall approach enables solving of tasks directly on a robot that go beyond what has previously been demonstrated in this setting. We focus on a simple method and show strong performance gains based on a small number of key improvements. To minimise compounding aspects, this work does not include any constraints on low level policies regarding the interface (no sub-goals inputs common to HIRO, HAC which can be seen as orthogonal to our work) and to focus on per time-step mixture distributions to investigate composition rather than temporal abstraction. \\n\\nWe focus on performing long-term, complex real world and simulated experiments requiring robust, data-efficient algorithms. Therefore, we develop a robust (trust region), off-policy AC algorithm, that allows data sharing across tasks (\\u201coff-policy\\u201d is used here in a strong sense since data is shared across tasks). In additional ablations, we show the importance of using a trust region constraint for the high-level controller, which has not been investigated in prior work to our knowledge. Hopefully, these algorithmic advances can additionally be transferred to other hierarchical schemes in future work. \\n\\nWe extend two policy optimisers MPO & SVG [1] (with SVG described in Appendix 10). The extension of MPO is used for our main experiments as it is empirically more robust (see Pile1, Pile2 and Cleanup2 domains) and conceptually simpler by allowing to directly maximize probabilities under a mixture distribution in the policy improvement step without relying on MC gradient estimation via likelihood ratio or reparametrisation trick. This form of actor-critic algorithm enables us additionally to train all options and not just the executed ones based on each trajectory. Finally the generation of rewards for all tasks, common with e.g. scheduled auxiliary control or hindsight experience replay, enables to utilise the exploration capabilities for one task to benefit learning all other tasks.\\n\\nThe extension of SVG can be seen as conceptually similar to the option-critic. Simplified, SVG is an actor-critic algorithm which builds on the reparametrisation instead of likelihood ratio trick (commonly leading to lower variance). Since we do not build on temporally extended sub-policies, we can work with a single critic thereby simplifying the algorithm. We moved some of the results into figures in the main paper to clarify this connection between both approaches.\\n\\nOur experiments include both complex simulated and real-world tasks and we demonstrate the data-efficiency benefits and robustness of our algorithm as well as provide further insights via an extensive set of ablations. We believe that our experiments represent a relevant step towards the deployment of (H)RL algorithms to the real world, a fundamental, open line of research. \\n\\n\\nPlease let us know if there are remaining open questions.\\n\\n[1] Heess, Nicolas, et al. 
\\\"Learning continuous control policies by stochastic value gradients.\\\" Advances in Neural Information Processing Systems. 2015.\\n\\n(Minor questions are addressed in the next comment)\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"While this paper has some interesting experiments. I am quite confused about what exactly the author are claiming is the core contribution of their work. To me the proposed approach does not seem particularly novel and the idea that hierarchy can be useful for multi-task learning is also not new. While it is possible that I am missing something, I have tried going through the paper a few times and the contribution is not immediately obvious. The two improvements in section 3.2 seem quite low level and are only applicable to this particular approach to hierarchical RL. Additionally, it is very much not clear why someone, for example, would select the approach of this paper in comparison to popular paradigms like Option-Critic and Feudal Networks.\\n\\nThe authors mention that Feudal approaches \\\"employ different rewards for different levels of the hierarchy rather than optimizing a single objective for the entire model as we do.\\\" Why reward decomposition at the lower levels is a problem instead of a feature isn't totally clear, but this criticism does not apply to Option-Critic models. For Option-Critic models the authors claim that \\\"Rather than the additional inductive bias of temporal abstraction, we focus on the investigation of composition as type of hierarchy in the context of single and multitask learning while demonstrating\\nthe strength of hierarchical composition to lie in domains with strong variation in the objectives such as in multitask domains.\\\" First of all, I should point out that [1] looked at applying Option-Critic in a many task setting and found both that there was an advantage to hierarchy and an advantage to added depth of hierarchy. Additionally, it is well known that Option-Critic approaches (when unregularized) tend to learn options that terminate every step [2]. So, if you generically apply Option-Critic, it would in fact be possible to disentangle the inductive bias of hierarchy from the inductive bias of temporal abstraction by using options that always terminate. \\n\\nIn comparison to past frameworks, the approach of this paper seems less theoretically motivated. It certainly does not seem justified to me to just assume this framework and disregard past successful approaches even as a comparison. While the experiments show the value of hierarchy, they do not show the value of this particular method of creating hierarchy. The feeling I get is that the authors are trying to make their experiments less about what they are proposing in this paper and more about empirical insights about the nature of hierarchy overall. If this is the case, I feel like the empirical results are not novel enough to create value for the community and too tied to a particular approach to hierarchy which does not align with much of the past work on HRL. \\n\\n[1] \\\"Learning Abstract Options\\\". Matthew Riemer, Miao Liu, and Gerald Tesauro. NeurIPS-18. \\n[2] \\\"When Waiting is not an Option: Learning Options with a Deliberation Cost\\\" Jean Harb, Pierre-Luc Bacon, Martin Klissarov, and Doina Precup. AAAI-18.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a hierarchical policy structure for use in both single task and multitask reinforcement learning. The authors then assess the usefulness of such a structure in both settings on complex robotic tasks. These tasks include the stacking and reaching of blocks using a robotic hand, as an example. In addition to carrying out these experiments on simulated robots, the authors have also carried out experiments on a real Sawyer robotic arm.\\n\\nThe particular form of their hierarchical policy for the multitask case is as follows. The policy, which is conditioned on the current state and task index consists of a gaussian mixture, where the individual gaussian densities are conditioned on the state and a context variable. The weights of this mixture are then dependent on a density on this context variable, which is conditioned on the state and task index. The intuition behind this is that the weight portion, which is called the high level component identifies task specific information, while the low level policy learns general, shareable knowledge of the different problems. \\n\\nThe authors adapt the Multitask Policy Optimisation algorithm for their use by introducing an intermediate non-parametric policy, which is derived by setting KL bounds on the policy w.r.t to a reference policy. Having derived a closed-form solution to this, they go on to learn the parametric policy of interest. \\n\\nThe authors consider 3 settings of experiments. Firstly, they assess the benefits of the hierarchical structure for single task settings in a simulated environment. For the most part, they find that compared to a flat policy, the hierarchical structure shows benefits only if the initial means of the high-level components are sampled to be different. While the experimental results are shown to support this, further discussion of why this is the case would have been welcome. \\n\\nThe main benefits of the hierarchical policy are shown in the multitask case, in both simulated and real situations. In fact, the authors have shown that the hierarchical case often shows major benefits in difficult, more complicated tasks (reach vs stacking for example).\\n\\nI think that the paper was very well written. It is nicely structured, with easy to read language, and without unnecessary jargon or clutter. Where necessary, the relevant extra details were provided in the Appendices.\", \"the_following_are_some_additional_notes\": \"1) It would have been interesting to see how the hierarchal policy faired in new tasks that were not a part of the original training set, compared to a flat multitask policy. \\n2) Further details about how each task is differentiated from each other in the experiments. That is, what are their different goals, which are reflected by the reward functions. \\n\\nAs such I recommend this paper to be weak accepted.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper is rather interesting and is able to solve some difficult tasks. A combination of different learning techniques for acquiring structure and learning with asymmetric data are used. While the combination of methods is new I am not sure that this particular combination of methods to train an HRL policy is sufficiently novel. Is the authors can highlight the effects and contribution of how these methods are combined to indicate this better it would be good.\", \"more_detailed_comments\": [\"In section 3.2 you mention a reference policy? Can you provide more details on this reference policy?\", \"IN the paper it is mentioned that the method collects data, including the reward for each task on a single state, action transition. This assumption seems rather strong. Earlier in the paper, the authors discussed the motivation for learning transferable sub-policies. In the real world, it may not be possible to collect the reward for every kind of task simultaneously.\", \"The first evaluation in Section 4.1 uses two humanoid environments. While these environments can be considered difficult that does not seem like the multi-task type environment the method is motivated to work well on. There is little sub-task transfer in this task.\", \"In section 4.1 it is noted that the version that initialized with different policy means works best. How are these means initialized?\", \"Is the Pile1 collection of tasks really separate tasks? It would be good to have some more details on how these are organized. There may not be a clear definition in the community what is considered a specific task but I am not overly convinced that these \\\"different\\\" tasks are separate. Most of them look like a similar version of pick and place.\", \"In Figure 2 the hierarchical method is similar in performance on the stack and leave (Pile1) set of tasks and marginally better in the Pile2 set yet does far better on the Cleanup2 set. While these are all simulations with multiple tasks is there some reasoning to why each method looks similar on the Pile1 set of tasks?\", \"For the robotic tasks, it is noted that again the baseline methods do well on the \\\"Reach\\\" task. It is shown that the RHPO does much better on the Stack task. It would be great if the authors can describe the interesting differences between the tasks. It is not clear how difficult the Stack task is and why it is largely different from Reach + Grasping.\", \"For the images on the right of Figure 4, It shows a comparison between the tasks and some \\\"components\\\" Are these components the states? The \\\"components' are not explained well in the papers.\", \"There is no algorithm in the main paper which makes it a little difficult to understand the operation of the learning method. For example how exactly is the learning of the two different levels compared? It seems like they are trained together. If they are they should be compared to HIRO or HAC. How is temporal abstraction handled between the two policy layers if they are trained together?\"]}"
]
} |
HJlnC1rKPB | On the Relationship between Self-Attention and Convolutional Layers | [
"Jean-Baptiste Cordonnier",
"Andreas Loukas",
"Martin Jaggi"
] | Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis. Our code is publicly available. | [
"self-attention",
"attention",
"transformers",
"convolution",
"CNN",
"image",
"expressivity",
"capacity"
] | Accept (Poster) | https://openreview.net/pdf?id=HJlnC1rKPB | https://openreview.net/forum?id=HJlnC1rKPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mmtcopdgMs",
"rJg607FnsS",
"Bkxt7WL2jr",
"S1gMv2aYoS",
"rkxRCMjKoS",
"SJe23bsYsH",
"ryxN-estjr",
"ryxtiK90Fr",
"H1eGmNvsFr",
"HkgXKBVXtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738923,
1573848020987,
1573835040730,
1573669978198,
1573659349557,
1573659059769,
1573658619767,
1571887520809,
1571677209944,
1571140987461
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2040/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2040/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2040/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2040/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2040/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2040/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2040/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2040/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2040/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies the relationship between attention networks such as Transformers and convolutional networks. The paper shows that a special case of attention can be cast as convolution. However this link depends on using relative positional embeddings and generalization to other encodings are not given in the paper. The reviewers found the results correct, but we caution that the writing should better reflect the caveats of the approach.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Authors' Comment about Experiment Datasets\", \"comment\": \"We would like to thank the reviewers again for their useful comments that allowed us to add some remarks on the theory and extend the experimental part. We believe that we addressed the raised concerns by updating the main text and conducting more experiments.\", \"we_would_like_to_address_one_last_point\": \"The reviewers pointed out through the discussion that our experiments merely cover CIFAR-10. We would like to argue that experiments on ImageNet/MSCOCO would not complement well the theory of the paper. Due to the quadratic memory footprint of the attentions scores (pixels to pixels), full Self-Attention cannot process large images without introducing spatial downsampling layers (ResNet backbone) and/or Local-Attention (Ramachandran et al., 2019). As the goal of the Section 4.3 is to study to what extent Self-Attention learns to behave like a CNN, enforcing locality to a 7x7 patch on 200x200 images as in (Ramachandran et al., 2019) already forces the model to adopt a receptive field similar to CNNs, hence defeating the purpose of our experiments.\\n\\nWe are also curious if our findings transfer to the Local-Attention setting but it is outside of the scope of our experimental section. However, we will include the visualization of Local-Attention scores in our interactive website once Ramachandran et al. (2019) release their model.\"}",
"{\"title\": \"Discussion with Reviewer #3\", \"comment\": \"We would like to thank the reviewer for actively participating in the discussion.\\n\\n\\n\\n\\\"these localised attention patterns that emerge when you slide the queries are interesting, and suggest in fact that the output of each head acts like a convolution, and NOT the aggregation of the heads (which the theory suggests).\\\"\\n\\nThis seems to be a key misunderstanding that we wish to clarify. A single head does not operate as a convolution, though it acts on one of the pixels/patches of the receptive field of the convolutional kernel. (Figures 6, 7 and the GIF display the attention probabilities used to compute the weighted average of the input, not the convolution kernel weights themselves.) To convince you, we present three arguments showing that a single head SA cannot perform general convolution:\\n\\n1. To compute the output of a convolution at a given location, the $C_{in}$ input channels of the neighboring pixels can be linearly transformed independently by the different slices of the weight tensor of shape $K\\\\times K \\\\times C_{in} \\\\times C_{out}$. This cannot be done by a single head of self-attention.\\n\\n For example, a convolutional layer can transform an RGB image ($C_{in}=3$) into a grey-scale image ($C_{out}=1$) with a $(1,2)$ kernel by summing the red channel of each pixel with the green channel of the pixel at its right. A single-head self-attention layer cannot simulate such convolution because filter matrices applied at each shift position have to be the same (up to a positive rescaling allowed by the attention probabilities).\\n\\n2. Due to the constraint imposed by the softmax, even for $C_{in} = C_{out} = 1$ a single head multi-attention cannot express a derivative kernel. Such kernels require positive and negative weights, which violates the constraint imposed on the attention probabilities.\\n\\n3. Finally, we present a more formal argument why a single head self-attention (SA) layer cannot act like convolution. Let us momentarily consider an arbitrary convolution layer and suppose that\\n\\n$$\\nY_q^{(\\\\text{conv})} = \\\\sum_{k} X_{k,:} W_{q-k} \\\\in \\\\mathbb{R}^{D_{out}}\\n$$\\n\\nis its output at some pixel $q$. Further, let\\n\\n$$\\nY_q^{(\\\\text{SA})} = \\\\sum_{k} \\\\operatorname{softmax}(A_{q,:}^{(1)})_k X_{k,:} W^{(1)} \\\\in \\\\mathbb{R}^{D_{out}}\\n$$\\n\\nbe the output of the single-head SA layer. Now, it is not hard to see that the equation $Y_q^{(\\\\text{conv})} = Y_q^{(\\\\text{SA})}$ does not have a general solution w.r.t the unknown parameters $W^{(1)} \\\\in \\\\mathbb{R}^{D_{in} \\\\times D_{out}}$ and $\\\\operatorname{softmax}(A_{q,:}^{(1)})_{:} \\\\in \\\\mathbb{R}^{H\\\\times W}$. This follows by a degrees of freedom argument: on the convolutional layer, each $X_{k,:}$ within the kernel support (there are $K^2$ such pixels) is multiplied by a *different* $D_{in} \\\\times D_{out}$ matrix $W_{q-k}$---thus, in total, we have $K^2 D_{in} D_{out}$ variables. On the SA side, each $X_{k,:}$ within the kernel support is multiplied by the *same* $D_{in} \\\\times D_{out}$ matrix $W^{(1)}$ and the only thing that changes across $k$ are the non-zero scalar attention coefficients $\\\\operatorname{softmax}(A_{q,:}^{(1)})_{k}$---yielding a total of $D_{in} D_{out} + K^2$ parameters (which is less than $K^2 D_{in} D_{out}$). 
It directly follows that a single-head SA layer cannot by-itself simulate all possible convolution layers.\\n\\n\\n\\n\\nThe figures presenting the computed attention probabilities show that some layers of MHSA (mainly 2 and 3) exploit relative positional encoding to attend to distinct pixels at a fixed shift from the query pixel reproducing the receptive field of a localized convolutional kernel. Other layers use the input content to compute attention probabilities (visible for the first layer of non-averaged attention probabilities, Figures 7-9 in Appendix). Interestingly, this aligns with the finding from (Bello et al. 2019) that convolution and self-attention combined outperform each taken separately. Such combination is learned in practice when optimizing an unconstrained fully-attentional model as shown in Figure 5.\\n\\nIn light of the updated findings (with $W_{key},W_{qry} \\\\neq 0$), we agree to nuance our claim that \\\"learned fully-attentional models do behave similar to CNNs in practice\\\" to \\\"fully-attentional models learn to combine local behavior (similar to convolution) and global attention based on input content\\\".\\n\\n\\n\\n\\nAs suggested by the reviewer, we have moved the experiment on learned positional encodings with $W_{key}=W_{qry}=0$ to Appendix to shorten the manuscript. The experiment section now covers:\\n\\n1. the quadratic encoding (with $W_{key}=W_{qry}=0$) verifying that the theoretical construction can be learned,\\n2. a \\\"pure\\\" fully-attentional model similar to (Ramachandran et al., 2019) to study if some MHSA layers behave like CNN layers in practice.\"}",
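A numerical sketch of the construction under discussion (our illustration, not the authors' released code): with K^2 heads, quadratic positional scores saturated by a large alpha so that head (i, j) attends to the pixel at relative shift (i - K//2, j - K//2), and per-head value projections playing the role of the kernel slices, the aggregated multi-head output matches a K x K convolution with stride 1 away from the image border (the saturating softmax cannot emit exact zeros at the border, so we compare interiors only).

```python
import numpy as np

# Toy sizes; Wconv doubles as the per-head value projection, one slice per head.
H = W = 6; C_in, C_out, K, alpha = 3, 4, 3, 100.0
rng = np.random.default_rng(0)
X = rng.normal(size=(H, W, C_in))
Wconv = rng.normal(size=(K, K, C_in, C_out))

pos = np.array([(r, c) for r in range(H) for c in range(W)], dtype=float)
Xf = X.reshape(H * W, C_in)

# Multi-head SA with quadratic positional scores: head (i, j) puts (almost)
# all softmax mass on the key at position query + delta for large alpha.
Y_sa = np.zeros((H * W, C_out))
for i in range(K):
    for j in range(K):
        delta = np.array([i - K // 2, j - K // 2], dtype=float)
        d2 = ((pos[None, :, :] - pos[:, None, :] - delta) ** 2).sum(-1)
        A = np.exp(-alpha * d2)
        A /= A.sum(-1, keepdims=True)        # softmax over keys
        Y_sa += A @ Xf @ Wconv[i, j]         # per-head value projection
Y_sa = Y_sa.reshape(H, W, C_out)

# Plain K x K convolution (stride 1, zero padding) for comparison.
Y_conv = np.zeros((H, W, C_out))
for q1 in range(H):
    for q2 in range(W):
        for i in range(K):
            for j in range(K):
                k1, k2 = q1 + i - K // 2, q2 + j - K // 2
                if 0 <= k1 < H and 0 <= k2 < W:
                    Y_conv[q1, q2] += X[k1, k2] @ Wconv[i, j]

c = K // 2
assert np.allclose(Y_sa[c:-c, c:-c], Y_conv[c:-c, c:-c])  # interior agreement
```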
"{\"title\": \"Reviewer's reply to the author rebuttal\", \"comment\": \"\\\"Following your review, we added a paragraph in Section 3 (page 4) to explain that our proof holds for \\\"ZERO\\\" padding and any stride by downsampling.\\\"\\n- Thanks, this makes sense.\\n\\n\\\"We disagree that our experiments heavily depend on the quadratic encoding, which we only employ to illustrate our theory. Section 4.2 shows that the reparametrization we do in the proof is learnable (and performs reasonably well). This demonstrates that our argument for expressivity is not contrived (i.e., only works in theory), but it connects to what works in practice.\\\"\\n- I'd argue that even the 'learned relative positional encoding' results aren't very relevant since W_key=W_qry=0 there as well. Claiming that \\\"learned fully-attentional models do behave similar to CNNs in practice\\\" requires showing results on settings of learned fully-attentional models used in practice, i.e. learning W_key and W_qry (this leads to the next comment).\\n\\n\\\"we conducted more experiments with learned matrices W_key and W_qry: Figure 6\\\"\\n- Hence I think the addition of these experiments makes the paper much more interesting and relevant, but also think the analysis of the results and how it relates to the theory should be further clarified: the theory suggests that when W_key=W_qry=0 and fixed and alpha is sufficiently large, each attention head gives weight exclusively to a pixel, and the aggregation of information of these pixels is achieved by W_out that combines the output of each head. Figure 5 for W_key=W_qry=0 indeed shows sparse attention weights, somewhat in line with the theory. However Figure 6 where W_key and W_qry are learned, the attention weights are sparse only for a few heads in layers 2 and 3, and for others the pattern is fairly dense, at least very far from giving weight exclusively to a pixel. Hence here, the connection of the results to the theory is weak, and I disagree with your statement in the rebuttal that \\\"The hypothesis that some attention heads focus on pixels at a fixed shift from the query pixel is confirmed.\\\" You've shown this for very few attention heads, and the rather differing behaviour of the rest suggest that the theory is not being shown to hold in practice.\\n\\n\\\"Other heads tend to use more content-based attention (see Figure 8-10 in Appendix for non-averaged probabilities) leveraging the advantage of Self-Attention over CNN (which does not contradict our theory).\\\"\\n- Indeed it doesn't contradict the theory, but it suggests that in practice self-attention doesn't behave like CNNs, at least not for the reasons that were given in the theory.\\n\\n\\\"reviewers can consider this demo GIF (animation of Figure 6 and 7) and appreciate the translation of the attended patches when sliding the query pixel (similar to sliding a convolutional kernel).\\\"\\n- these localised attention patterns that emerge when you slide the queries are interesting, and suggest in fact that the output of each head acts like a convolution, and NOT the aggregation of the heads (which the theory suggests). 
This was in fact what I was intuitively expecting when I first read the abstract of the paper, being more aligned with personal hands-on experience with self-attention, and I was disappointed to find out in the derivation that each head corresponds to a single pixel, a much more unrealistic scenario.\\n\\nSo in summary, I think there is still some work that can be done regarding the link between the theory and the experiments, either by modifying the theory to better align with the experiments for the general, realistic case (learning W_qry and W_key) or weakening the claim that \\\"learned fully-attentional models do behave similar to CNNs in practice\\\" to \\\"learned fully-attentional models can behave similar to CNNs\\\". \\n\\nI also still think the analysis would be much more interesting and relevant had you explored the attention patterns that arise in fully-attentional models that are comparable with SOTA convolutional models, such as those in Ramachandran et al. (2019), applied to more realistic datasets such as ImageNet and MSCOCO. Even without any theory, it would at least help understand the behaviour of attention in these realistic models with a competitive edge.\"}",
"{\"title\": \"Answer to Official Blind Review #1\", \"comment\": \"We thank the reviewer for their time assessing our work and their constructive feedback.\\n\\nAs a minor correction to what you wrote, we would like to highlight that we do not claim quadratic positional encoding to be superior to what is used in practice. We conducted experiments with quadratic encoding out of curiosity to examine if this positional encoding--crafted solely for the proof of our main theorem--could actually be learned in practice. The fact that this encoding yields decent practical performance indicates that our proof by construction is not superficial.\\n\\nWe agree that this formal result of expressivity is a first step towards better understanding the relationship between self-attention and convolution.\"}",
"{\"title\": \"Answer to Official Blind Review #3\", \"comment\": \"We thank the reviewer for their time assessing our work and their constructive feedback.\\nWe address your concerns about the theory and the experiments and we have updated the submitted paper accordingly.\\n\\n\\n\\n1. About the theory\\n\\nOur theoretical claim is that Multi-Head Self-Attention is at least as expressive as convolution, which is compatible with setting $W_{qry}$ and $W_{key}$ to zero in the proof. As the input pixel positions of a convolutional kernel do not depend on the input image, setting attention scores based on the input data to 0 is coherent.\\n\\nReviewer #2 and you share a common concern about setting the softmax temperature arbitrary close to 0 to attain hard-attention. We suggest you to consult our answer to their review and the remark we added in Section 3 about the scale of $\\\\alpha$.\\n\\nThank you for pointing out that we did not mention padding and stride. We considered the general convolution operator (defined in eq. (5)), however, the Conv2d layers implemented in deep learning frameworks also use stride, padding, dilation and padding_mode options. Following your review, we added a paragraph in Section 3 (page 4) to explain that our proof holds for \\\"ZERO\\\" padding and any stride by downsampling.\\n\\n\\n\\n2. About the experiments\\n\\nWe disagree that our experiments heavily depend on the quadratic encoding, which we only employ to illustrate our theory. Section 4.2 shows that the reparametrization we do in the proof is learnable (and performs reasonably well). This demonstrates that our argument for expressivity is not contrived (i.e., only works in theory), but it connects to what works in practice. At the same time, we would like to stress that we do not claim that the quadratic encoding can replace standard SA with learned $W_{qry}$ and $W_{key}$.\\n\\nWe also would like to highlight that we do not claim that MHSA with quadratic encoding should replace CNN in practice, but only that it can learn to behave like CNNs. You are right that the time and memory complexity of full Self-Attention are indeed deceptively costly and we made it more precise in the text. To avoid misleading the reader, we removed the vague word \\\"powerful\\\" from the paper (two occurrences) to clarify that we meant expressive power.\\n\\nConcerning ImageNet and MSCOCO.\\nAs you mention, full attention is not feasible on larger images without leveraging local attention. The experiments on more challenging datasets conducted by Ramachandran et al. (2019) are impressive but serve a different purpose: showing that local MHSA achieves state-of-the-art performance with competitive number of parameters and number of FLOPS on classification/segmentation of large images. Local Attention would force the self-attention heads to attend only to local patches, hence defeating our goal to show that Self-Attention behaves like CNN by attending to neighboring pixels at fixed shifts.\\n\\nConcerning attention based on data (learned $W_{key}$ and $W_{qry}$).\\nDisentangling position and content attention was a first step toward better understanding of how MHSA processes images in practical settings. To connect our findings with the full-blown MHSA in practice, we conducted more experiments with learned matrices $W_{key}$ and $W_{qry}$: Figure 6 (added to the paper) is the counterpart of Figure 5 when content-content attention is enabled, i.e. $q^\\\\top k + q^\\\\top r$ attention as in (Ramachandran et al., 2019). 
We averaged the attention probabilities over a batch of 100 test images to remove the dependence on the input content and observe if some heads probabilities are very localized to some pixels around the query pixels. The hypothesis that some attention heads focus on pixels at a fixed shift from the query pixel is confirmed. Other heads tend to use more content-based attention (see Figure 8-10 in Appendix for non-averaged probabilities) leveraging the advantage of Self-Attention over CNN (which does not contradict our theory). We will share (after deanonymization) an interactive website to visualize attention maps (with/without data content) for different images/batch/query pixels. In the meantime, reviewers can consider this demo GIF (animation of Figure 6 and 7) and appreciate the translation of the attended patches when sliding the query pixel (similar to sliding a convolutional kernel). https://drive.google.com/file/d/1METSetroUA2qd2slol9wt7YxucJslAmF/view?usp=sharing\"}",
"{\"title\": \"Answer to Official Blind Review #2\", \"comment\": \"We thank the reviewer for their time assessing our work and their constructive feedback.\\n\\nConcerning the magnitude of $\\\\alpha$, we distinguish two cases:\\n1. With infinite precision, the exact representation of one pixel (hard attention) requires $\\\\alpha$ to be arbitrary large, despite that the attention probabilities of all other pixels converge exponentially to 0 as $\\\\alpha$ grows.\\n2. With finite precision (i.e., in practice), the smallest positive float32 is approximately $10^{-45}$. As such, setting $\\\\alpha=46$ is enough to obtain hard attention (which seems reasonable).\\nFollowing your suggestion, we have added a remark in Section 3 to clarify this point.\\n\\nThank you for pointing out the typo in the introduction.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the recent application of attention based Transformer networks for image classification tasks and asks the question as to the similarity of functions learned by these attention networks with the standard convolutional networks.\\n\\nFirst the paper theoretically proves that a multi head self attention layer (appropriately defined for a 3 dimensional input) can represent a convolutional filter. The proof is based on constructing weights for the attention layers that results in a convolution operation. This construction uses rather crucially the relative positional encodings for the self attention layer. The paper claims that the results can be extended to other forms of positional encodings. \\n\\nIt looks like the construction is correct as far as I can tell. One caveat is that, It looks like, the weights of the attention layer need to be arbitrarily large (\\\\alpha in Lemma 2) to exactly represent the convolution layer. I think this is not possible to avoid for exact representation. A comment on this after the results will be nice.\\n\\nFinally the paper presents experiments on the Cifar10 dataset. The paper shows that the multihead attention units in the lower layers learn to attend on grid like structures on pixels, similar to a Conv filter. I find the experiments to be nicely complementing the theoretical results, even though they are limited to the Cifar10 dataset.\\n\\nOverall I think this paper takes a nice step towards understanding the similarities and differences between the Attention and Conv layers, and I suggest acceptance.\", \"minor\": \"First sentence in intro raise -> rise.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper claims that 1. multi-head self-attention(MHSA) is at least as powerful as convolutions by showing that a CONV can be cast as a special case of MHSA and 2. that in practice, MHSA often mimic convolutional layers.\\n\\nThese claims are interesting and timely, given that there has been a fair amount of recent work that have explored the use of self-attention(SA) on image tasks, either by composing SA with convolutions or replacing convolutions altogether with self-attention (examples of each are referenced in the paper). So should these claims be true, they would give theoretical evidence that SA can completely replace convolutions.\\n\\nHowever, I think that the claims are exaggerated and misleading.\\n1. The theory shows an arguably contrived link between self-attention and convolution. Theorem 1 says that a convolution can be seen as a special case of MHSA, and the constructive proof (that chooses SA parameters to derive a convolution) shows a correspondence between the output of each head of MHSA and a D_out by D_in linear transform applied to the D_in features of a single pixel, with attention weight given entirely to this pixel (i.e. hard attention). The derivation relies heavily on the use of a relative encoding scheme that sets W_qry=W_key = 0 (usually referred to as W^Q, W^K in the self-attention literature, the linear maps applied to the queries and keys) i.e. the attention weights do not depend on the key/query values, but only their relative positions. Moreover, the softmax temperature (an interpretation of 1/alpha) is set arbitrarily close to 0 to make the softmax saturate and attain hard-attention. With these two constraints, I am sceptical as to whether you can really say that you are implementing self-attention. In standard practice when MHSA is used, W^Q and W^K are never set to zero, and the scale of the logits for the self-attention weights are controlled by normalising them with sqrt(D_k) (or sqrt(D_k/N_h), depnding on how you choose to deal with multiple heads). Furthermore, the derivation only holds for when stride=1 and padding=\\u201cSAME\\u201d, such that the spatial dimensions of the input (H & W) remain unchanged. In fact the padding is not really dealt with in the derivation, and it is unclear whether the result can generalise to convolutions with stride > 1, making the claim \\u201cMHSA layer \\u2026 is at least as powerful as any convolutional layer\\u201d problematic. Hence although I think the derivation is mathematically correct, I think that the link that the derivation makes between convolutions and MHSA is somewhat contrived and not a useful observation in practice. I expect MHSA with learned W_qry and W_key will behave differently to when they are set to 0, and it would be much more interesting/relevant to see how their behaviour compares with convolutions in this more realistic setting.\\n\\n2. The heavy dependence of the experiments on the quadratic encoding, the aforementioned contrived form of MHSA that was used to derive the link between convolutions and MHSA, makes the results not very relevant and the claim that \\\"MHSA often mimic convolutional layers\\\" rather misleading. 
It could be more relevant if quadratic encoding can replace standard MHSA parametererisations with learned W_qry and W_key, but I\\u2019m not convinced that this is the case. Although Figure 4 suggests that this SA with quadratic encoding gives similar test performance to ResNet18, I think that CIFAR-10 classification is too simple a task to claim that quadratic encoding can replace standard SA with learned W_qry and W_key, and I think results can look very different for harder problems e.g. ImageNet, MSCOCO - explored in Ramachandran et al - made possible because they use local SA as opposed to full SA. Experiments on these problems would be much more interesting and relevant. Note that the experiments using the learned relative positional encoding have \\u201cattention scores (are) computed using only the relative positions of the pixels and not the data\\u201d (I\\u2019m guessing this means W_qry=W_key=0 again). Hence the qualitative similarities between MHSA and convolutions only hold for the rather restricted case where I get the impression that self-attention has been unrealistically constrained only to increase its chance of behaving similarly to convolutions. Also the comparison in Figure 4 and Table 1 is being used to support the claim that self-attention can be as \\u201cpowerful\\u201d as convolutions, but I think this is misleading because both quadratic and learned SA uses full SA, where each pixel attends to all pixels - this means the time & memory complexity of the algorithm is O((HW)^2), whereas for convolutions it is O(HW). So the expressiveness of SA that matches convolutions for this particular problem comes at a significant cost, to the extent that for bigger problems (ImageNet, MSCOCO) full SA is not feasible due to its quadratic memory requirement, whereas convolutions don\\u2019t face this problem. I think this should be pointed out more explicitly in the text, and think the claim that \\u201cself-attention is at least as powerful as convolutions\\u201d should be replaced with a more moderate statement such as \\u201cself-attention defines a family of functions that contains convolutions (of stride 1)\\u201d\", \"summary\": \"Although the writing of the paper is clear and the derivation is mathematically correct as far as I can see, the link between self-attention and convolutions in the paper are fairly contrived, hence the contribution of the paper to the field is not so significant in my honest opinion.\\n\\n********************\\nI appreciate the authors' response, and understand that the maths suggests a single head of MHA (in the original form) cannot exactly emulate a general convolution. But empirically, the localised attention patterns do seem to suggest that each head can behave similarly to a restricted form of convolution, where similar weights are given to the receptive field (the local patch) in the neighbourhood each input pixel. Perhaps an analysis of what special case of convolution each head can emulate would be interesting, given the empirically observed similarities in the qualitative behaviour. \\n\\nWith the more justified nuance of the findings of the paper, and together with the authors' significant efforts to make the evaluation more relevant and thorough, I will increase my score to \\\"weak accept\\\".\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper shows both theoretically and in practice that self-attention can learn to act as convolutions. The main intuition is that every attention head can learn to attend individually to a given relative offset around each pixel. Given enough heads (K**2) such a layer can imitate a convolution with kernel size (K,K). This leads to the conclusion that self-attention is at least as powerful as CNNs are. This fact has been acknowledged by (at least part of) the community for a while (following a similar intuition) but as far as I know has never been formalized. Hence, although incremental I consider this an important contribution. The derivation of quadratic relative encoding is a nice theoretical construction. Experiments show improvements over learned relative attention, however, experiments are merely conducted on Cifar.\\n\\nFinally, even though the contributions are somewhat marginal and the experiments are not quite enough to establish the new relative attention mechanism as being superior, I like this paper and consider its contributions valuable. The message of the paper deserves a larger audience and I therefore lean to accept despite some shortcomings.\"}"
]
} |
SyxiRJStwr | Dynamic Scale Inference by Entropy Minimization | [
"Dequan Wang",
"Evan Shelhamer",
"Bruno Olshausen",
"Trevor Darrell"
] | Given the variety of the visual world there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field. Rather than enumerate variations across filter channels or pyramid levels, dynamic models locally predict scale and adapt receptive fields accordingly. The degree of variation and diversity of inputs makes this a difficult task. Existing methods either learn a feedforward predictor, which is not itself totally immune to the scale variation it is meant to counter, or select scales by a fixed algorithm, which cannot learn from the given task and data. We extend dynamic scale inference from feedforward prediction to iterative optimization for further adaptivity. We propose a novel entropy minimization objective for inference and optimize over task and structure parameters to tune the model to each input. Optimization during inference improves semantic segmentation accuracy and generalizes better to extreme scale variations that cause feedforward dynamic inference to falter. | [
"unsupervised learning",
"dynamic inference",
"equivariance",
"entropy"
] | Reject | https://openreview.net/pdf?id=SyxiRJStwr | https://openreview.net/forum?id=SyxiRJStwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"LhxIGI1UJB",
"SkeTFntooH",
"HygQr3tosr",
"HyxxXnYssB",
"ryxrlQ_Lcr",
"S1x7EQ7AFr",
"rJeSrHy0FB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738895,
1573784709191,
1573784634567,
1573784600246,
1572401900639,
1571857195060,
1571841340810
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2038/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2038/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2038/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2038/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2038/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2038/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper constitutes interesting progress on an important problem. I urge the authors to continue to refine their investigations, with the help of the reviewer comments; e.g., the quantitative analysis recommended by AnonReviewer4.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Multi-scale Training and Deformation are Included and Improved On (Plus: How to End Optimization)\", \"comment\": \"Thank you for your feedback and consideration of alternatives for addressing scale.\\n\\n> this issue can be addressed through multi-scale training/testing or the deformable kernels\\n\\nWe do multi-scale training with data augmentation (see \\\"w/ aug\\\" results). The improvement by optimization during inference is on top of a base model trained this way. Multi-scale testing requires knowledge of the testing scale, or enumeration of several scales, while the purpose of our method is to select the unknown scale.\\n\\nThe base method \\\"scale regression\\\" does include deformable kernels, in particular scale deformations with Gaussian structure (Shelhamer et al. 2019). The improvement due to optimization is on top of this adaptation by prediction, showing the need for further adaptation by optimization. Please see Figure 4 for the comparison of scale deformation by prediction and optimization.\\n\\n> optimization process may take [more time] when compared to other scale processing methods like deformable kernels\\n\\nWe appreciate the accuracy and efficiency of deformable kernels. Our experiments show that they are however limited, in that the predictor for the deformation can only generalize so far. Optimization takes more time, but it does so to improve the results.\\n\\nNote that multi-scale testing also takes more time. Since computation scales as the product of input dimensions,, a 2x larger input would cost ~4x the time, and be comparable or more expensive than our optimization.\\n\\n> The number of optimization steps may be hard to control\\n> [is there] a more elegant way to decide when to end the optimization process?\\n\\nWe agree that a more adaptive rule for ending optimization would be preferable. In a rebuttal experiment on a simpler ResNet-50 FCN, we achieved similar results by choosing the number of iterations or choosing a relative tolerance on the change in the entropy across iterations. This tolerance could be more transferrable than a fixed number of steps.\"}",
"{\"title\": \"Optimizing Scale and Score, State-of-the-Art, and Simple Parameter Transformations\", \"comment\": \"Thank you for the feedback and careful consideration of the optimization problem.\\n\\n> the prior reason for adapting \\\\theta_{score} is less obvious and would require further explanation\\n\\nFor completeness, we try adapting both scale and score. The purpose of adapting score is to further fit the model to the appearance of each test input. This can be seen as a way to add modeling capacity, since every test input has its own classifier parameters, instead of sharing the same parameters across all inputs.\\n\\n> current or recent state-of-the-art for the considered dataset\\n\\nThe current state-of-the-art for PASCAL VOC is DeepLabv3+ [1] at a commanding 89% IoU. Many factors contribute to its high accuracy, including supervision, optimization, and inference-time augmentation that are independent of scale.\\n\\nWe choose DLA for the relevance of its skip connections and multi-scale feature pyramid, to show that further scale adaptation still helps.\\n\\n> Wouldn\\u2019t simply multiplying \\\\theta_{score} by a large number decrease the entropy\\n> Similarly, couldn\\u2019t the adversary simply rotate \\\\theta_{score} to reduce IoU?\\n\\nWhile there are simple parameter transformations to decrease entropy or reduce accuracy, that does not mean they are simple to optimize, and empirically these cases do not happen. We do not regularize by weight decay during inference to keep from multiplying the score parameters. We do optimize the adversary until the loss converges (but note that IoU might not drop to zero becauses ties are broken by class order, with background first, so if the parameters are driven to zero the output will be fully background and be partially correct).\\n\\n> improvement is ~2 points on average, not ~2 points for each scale\\n\\nThank you for your precision. We will rephrase this accordingly.\\n\\n[1] https://arxiv.org/abs/1802.02611\"}",
"{\"title\": \"Multi-scale Testing, Novelty of Entropy for Inference-Time Optimization, and Task Parameters\", \"comment\": \"Thank you for your feedback and careful consideration of inference across scales.\\n\\n> Multiscale test-time inference is already standard in state-of-the-art architectures\\n\\nThe choice of enumeration by pyramid or selection by adaption is both practical and philosophical. Practically speaking, pyramids are common for deep learning in vision, but selection is not, so a point of this work is to show that selection (especially by optimization) is worthy of more attention. In the paper referenced by the review, Table 3 shows a 1-2 point gain by pyramid, which is comparable to our 2 point gain by a substantially different route. More philosophically, a pyramid is discrete and fixed while our optimization is continuous and adaptive.\\n\\n> Using entropy as a measure of network uncertainty is a good idea, but also not a novel finding\\n\\nEntropy is certainly a well-established measure of uncertainty. The novelty of our method is its use for optimization during inference: to the best of our knowledge, ours is the first method to adapt to each testing input by entropy minimization.\\n\\n> what is the task parameter for?\\n\\nThe task parameters are the parameters of the output layer (also known as the \\\"score\\\" layer). By optimizing over scale and score parameters, the method can adjust geometry and appearance to minimize entropy. In Table 3 we evaluate optimizing only scale, only score, or both and find that both can help.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\nThe following work proposes a test-time optimization over scales to improve semantic segmentation. Specifically, at test time, they iteratively optimize over the score and scale parameters of Shellhamer et al 2019, where a Gaussian receptive field is used to allow for dynamic scale adaptation of each convolutional layer. They optimize the parameters with respect to an entropy minimization objective. Experiments on PASCAL VOC, reported at multiple scales, demonstrate improvements in IOU over the vanilla architecture.\", \"strengths\": \"-The work was well-motivated\\n-Formulation is pretty elegant and outperforms the baseline\", \"weaknesses\": \"While I liked the somewhat elegant formulation of dynamic test-time scaling proposed by the following work, I don't think this work introduced many novel results nor insights\\n-Multiscale test-time inference is already standard in state-of-the-art architectures such as DeepLab. Specifically, DeepLab runs inference at multiple scales, then max pools the logit responses across all scales. (see Table 3 of https://arxiv.org/pdf/1511.03339.pdf)\\n-Using entropy as a measure of network uncertainty is a good idea, but also not a novel finding.\\n-COCO and Cityscapes would probably have been better choices for datasets with larger variations in scale\", \"improvements\": \"-Try comparing against the multiscale logit-max-pooling inference procedure as a baseline -- or demonstrate that the proposed technique can further improve upon the results of the logit-pooling technique.\\n-Some details weren't very well explained. Specifically, what is the $\\\\theta_{score}$ task parameter for?\\n\\n** Post Rebuttal Response\\nI'd like to thank the authors for clarifying some points in their response. Overall, I maintain that I think the optimization-based scale inference solution they present is interesting from an implementation standpoint, but the findings in this work did not yield sufficient new insight for the task. While I agree that their approach is fairly different from common approaches such as discrete image pyramids, a thorough quantitative comparison of these differences would make this work a lot stronger. As such, I maintain my original rating.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a method to dynamically adapt some structural features of a semantic image segmentation model at inference time based on the entropy of the predictions.\\n\\nUsing a model that explicitly controls the size of the filters at each layer, they show that running a small number of SGD steps on the scale and final prediction parameters in the last layer to minimize the entropy of least confident predictions for a specific example leads to better performance overall, and especially better generalization when there is a size discrepancy between training and test set.\", \"strengths\": \"The method is inspired, and leads to significant improvements. The dynamic inference setup is clearly explained, and well motivated for the case of the scale parameters. Extensive ablation experiments and the inclusion of an oracle system help understand the contributions of each component of the setup, and the potential of inference-time optimization of the considered parameters.\", \"weaknesses\": \"Some information is missing from the description of the experimental setting. A quick review of the DLA model would be welcome, to get a better sense of the roles of \\\\theta_{scale} and \\\\theta_{score}. The authors should include published numbers for a relevant baseline and the current or recent state-of-the-art for the considered dataset (Table 1should also report 1x numbers in both settings). Finally, while the authors make a strong case for dynamic adaptation of the scale parameters, the prior reason for adapting \\\\theta_{score} is less obvious and would require further explanation.\", \"questions_and_miscellaneous_remarks\": \"\\u201cAs reported in Table 1, our method consistently improves on the baseline by \\u223c2 points for all scales\\u201d > this statement is a little misleading, since the improvement is ~2 points on average, not ~2 points for each scale.\\n\\nWouldn\\u2019t simply multiplying \\\\theta_{score} by a large number decrease the entropy of the predictions? Do you do anything to prevent that from happening?\\n\\nSimilarly, couldn\\u2019t the adversary simply rotate \\\\theta_{score} to reduce IoU? Is the adversary optimized for long enough?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper focused on the problem of semantic segmentation. The author proposed to minimize the output entropy to dynamically predict the scales when doing inference. The entropy minimization strategy is achieved by iterative optimization. Experimental results are reported on the PASCAL VOC dataset.\", \"clarity\": \"I think this paper is moderate. The idea of dynamically predicting the scale or receptive field is interesting. However, this issue can be addressed through multi-scale training/testing or the deformable kernels. The experimental results are not that convincing. The method is only evaluated on one dataset and one backbone. The paper could be improved with more convincing experiments.\", \"limitations\": \"The optimization process may take a certain number of forward and backward steps. In Sec 3.2 the author shows this will introduce 3x more time, this will much decrease its popularity when compared to other scale processing methods like deformable kernels.\", \"experiments\": \"1. The proposed method is evaluated on the PASCAL VOC dataset with the DLA segmentation backbone. The chosen backbone is not as strong as the most popular frameworks like DeepLab and PSPNet. Thus the baseline results as shown in Table 1 are not that high. I would like to see the relative improvements introduced by the proposed method over a stronger baseline.\\n\\n2. The experimental dataset is PASCAL VOC only. I would be more convinced with more datasets like Cityscapes or ADE20K.\\n\\n3. The reported experimental results are with models trained on a narrow range of scales. What the results and relative improvements would be if trained with regular multi-scales like [0.5, 2.0]? Will the scale issue be easily addressed by a multi-scales training strategy?\\n\\n4. The number of optimization steps may be hard to control, 32 is used for DLA on PASCAL VOC. Will this number be changed for different models on different datasets? If yes, can the author find a more elegant way to decide when to end the optimization process?\", \"misc\": \"It is better to give a brief introduction of structure parameters scale and dynamic Gaussian receptive fields as in Sec 2.3.\"}"
]
} |
rkxs0yHFPH | SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes | [
"Johannes C. Thiele",
"Olivier Bichler",
"Antoine Dupret"
] | Event-based neuromorphic systems promise to reduce the energy consumption of deep neural networks by replacing expensive floating point operations on dense matrices by low energy, sparse operations on spike events. While these systems can be trained increasingly well using approximations of the backpropagation algorithm, this usually requires high precision errors and is therefore incompatible with the typical communication infrastructure of neuromorphic circuits. In this work, we analyze how the gradient can be discretized into spike events when training a spiking neural network. To accelerate our simulation, we show that using a special implementation of the integrate-and-fire neuron allows us to describe the accumulated activations and errors of the spiking neural network in terms of an equivalent artificial neural network, allowing us to largely speed up training compared to an explicit simulation of all spike events. This way we are able to demonstrate that even for deep networks, the gradients can be discretized sufficiently well with spikes if the gradient is properly rescaled. This form of spike-based backpropagation enables us to achieve equivalent or better accuracies on the MNIST and CIFAR10 datasets than comparable state-of-the-art spiking neural networks trained with full precision gradients. The algorithm, which we call SpikeGrad, is based on only accumulation and comparison operations and can naturally exploit sparsity in the gradient computation, which makes it an interesting choice for spiking neuromorphic systems with on-chip learning capacities. | [
"spiking neural network",
"neuromorphic engineering",
"backpropagation"
] | Accept (Poster) | https://openreview.net/pdf?id=rkxs0yHFPH | https://openreview.net/forum?id=rkxs0yHFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"HroVTEjk8I",
"HklkGczhsS",
"BylBV_MhiB",
"HylXmDzhjB",
"S1x7zEO9oB",
"SJgz3mV7oS",
"rJlKh0QmoB",
"r1liPFXmiH",
"rJgnwZSTYH",
"rJlYxLacKr",
"BkgT1iYcYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738866,
1573820935060,
1573820461449,
1573820187211,
1573712906606,
1573237673725,
1573236401394,
1573235043448,
1571799396016,
1571636720538,
1571621604574
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2037/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2037/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2037/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2037/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2037/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2037/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2037/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2037/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2037/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2037/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a learning framework for spiking neural networks that exploits the sparsity of the gradient during backpropagation to reduce the computational cost of training. The method is evaluated against prior works that use full precision gradients and shown comparable performance. Overall, the contribution of the paper is solid, and after a constructive rebuttal cycle, all reviewers reached a consensus of weak accept. Therefore, I recommend accepting this submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper update with more detailed discussion of hardware considerations\", \"comment\": \"Based on your review, we updated the discussion section and added a more detailed treatment of the points you raised. We added references to current work on neuromorphic hardware and explain that hardware overheads have to be taken into account when assessing system level improvements in event-based SNN systems.\"}",
"{\"title\": \"Thanks\", \"comment\": \"We are happy that we could provide answers to your questions. We updated the paper to clarify points that you addressed.\"}",
"{\"title\": \"Paper update and additional results on CIFAR100\", \"comment\": \"Based on your review, we clarified the content in some sections and added additional results.\\n\\nAs you requested, we added inference scores for the CIFAR100 dataset. Using the same architecture as for CIFAR10, we obtain a maximal classification score of 64.40%. Running the same experiment with only the forward pass encoded in\\nspikes, but floating point gradients, we obtain 64.69%. This result could probably be improved by\\nadapting the architecture better to the dataset or adding additional regularization, but seems acceptable for a simple CNN without BatchNorm or residual connections. The error on the training set converges to around 98% over 300 epochs, so convergence of the optimization algorithm does seems to be a major problem. It therefore should demonstrate that coding the gradient into spikes does not lead to a large precision loss even in this scenario. \\n\\nWe additionally extended the discussion section to treat better certain points that you raised regarding the efficiency of the spike coding scheme in ANNs with optimized parameter number.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for your detailed discussion and response to my question. This has helped me better understand the contributions of your paper and the missing gaps in my understanding of the paper.\"}",
"{\"title\": \"Clarifications regarding the hardware efficiency and runtime of our approach\", \"comment\": \"Thank you very much for your helpful review.\\n\\nWe are happy to give our response to your main concern. \\n\\nSNNs require dedicated event-based digital or analog hardware implementations of the integrate-and-fire neuron, such as the Loihi chip by Intel or TrueNorth by IBM. The wall clock time of our algorithm therefore depends strongly on the exact hardware that is used to implement SpikeGrad. This paper presents algorithmic work on SNNs and we do at the moment not have access to such a specialized neuromorphic chip. The SNN in our case is simulated with a clock-based simulation of the spike dynamics, as it is common pratice in similar theoretical work on SNNs, and effectively done by the previous works that we refer to. In general, such a simulation of an SNN on standard hardware will be much slower than that of an ANN of similar topology, since GPUs are not suitable for an efficient implementation of the event-based, sparse dynamics of SNNs. Simulating a large SNN on standard hardware is therefore very time consuming.\\nOne of the main advantages of our framework is that these lengthy simulations can be avoided. We demonstrate that a special version of the integrate-and-fire neuron has equivalent accumulated responses as an integer ANN with the same weights. This means that we can be sure that an SNN that implements this integrate-and-fire neuron model in dedicated hardware will have exactly the same response as the ANN. The fact that we can simulate training of an SNN without the explicit need of a functioning SNN hardware system or a lengthy simulation is a great practical advantage of our framework. It is the main reason why our paper is the first work that is able to show that an SNN using only integrate-and-fire dynamics for BP can be trained to the same precision as SNNs that are trained using floating point gradients (floating points gradients are difficult to implement in integrate-and-fire SNN hardware). \\nYou are right that for a practical applications it remains to show that the integrate-and-fire dynamics can be implemented efficiently in a dedicated chip, and that the system can profit from the sparsity in computation, without too much overhead. This is indeed an ongoing question in the field of SNN hardware design, and a large number of competing approaches try to address this problem. The aim of this paper was to focus on learning algorithms that are compatible with the integrate-and-fire neuron (that is the most common SNN model), and we wanted to be agnostic with respect to the exact hardware implementation. We therefore used spike operation count as a metric, since it applies to a large number of possible SNN implementations. How efficiently these operations can be performed in a particular SNN hardware, given the additional overhead of event-based computing, is an interesting direction for future work. Your concern addresses a problem that is very relevant for the justification of a large number of algorithmic research papers on SNNs. It is for the moment however out of scope for this theoretical paper, which assumes that such a hardware may indeed be built. If desired, we will clarify this point in the updated version of our paper, and identify more clearly the assumptions we make on the hardware implementation. \\n\\nWe are happy to reply to any further questions or remarks that you might have.\"}",
"{\"title\": \"Clarifications demanded by the reviewer\", \"comment\": \"Thank you very much for your helpful review.\\n\\nAs you have remarked correctly, the integrate-and-fire (IF) model is usually used for the forward propagation phase in an SNN. Our work uses the integrate-and-fire neuron model additionally to implement backpropagation, and is the first work to do so successfully on the CIFAR10 dataset using a large network. The main interest in using spikes to implement backpropagation is that also training can be implemented \\\"on-chip\\\" in an event-based SNN hardware system that implements the IF neuron model (please note that SNNs are always implemented in dedicated, neuromorphic hardware, such as Loihi by Intel, that is optimized for the IF neuron model). SpikeGrad is therefore the only training method for SNNs so far that is able to perform spike-based training in a large network on the CIFAR10 dataset, and which yields performances comparable to SNN systems trained \\\"off-chip\\\" with floating point gradients.\\n\\nThis also relates to your question why we present an analysis of the gradient sparsity. Because our method implements backpropagation with the IF model, it is able to exploit this sparsity when training is performed in a dedicated hardware implementation of the IF neuron. The main advantage of the IF neuron is that events and therefore computation is only triggered in proportion to the integrated activation (in the forward pass) or the integrated error (in the backward pass). The IF model is typically only analyzed in the context of forward propagation, but our results demonstrate that it could be indeed also very interesting for backpropagation, because the gradient becomes extremely sparse, and values become very low, as soon as the system response is approximately correct. This could be an interesting property for an embedded system with continuous learning, since fine tuning the network could be performed with very little events, and therefore little computation.\\n\\nThis implementation of backpropagation with the IF model is not our only contribution. An important property of SpikeGrad is that both forward and backward propagation can be described by an equivalent ANN. This is the reason why also the forward processing pass is part of SpikeGrad. This has the advantage that we can simulate the behavior of the SNN efficiently on GPUs using the equivalent ANN, with the guarantee that the SNN that uses the IF model will give the same results. Usually simulation of SNNs uses clock-based simulation of the IF neuron model. This is however inefficient, since the event-based computation of the IF neuron model is not well suited to standard computing hardware, in particular GPUs. This is one of the main reasons why other training algorithms for SNNs cannot be tested on large networks, since researchers do often not have access to the necessary dedicated hardware (often ASICs that are produced in small numbers), and simulation on standard hardware becomes intractable. SpikeGrad offers a practical solution to this problem.\\n\\nAs you requested, we provide a clarification of our notion of equivalence: when implementing the SNN using the SpikeGrad model described in the paper, the accumulated responses (the sum of all emitted spikes) of each IF neuron at the end of a propagation phase (forward and backward) will be equivalent to the corresponding integer values in the ANN. 
Since learning and inference is performed on these accumulated quantities in the SNN, the SNN learns and infers equivalently to the ANN. An SNN using the same initial conditions and that is trained on the same data will therefore learn the same weights as the ANN and give the same inference responses. An example of such an equivalence can be found in our response to reviewer #2.\\n\\nWe hope that we could clarify the points you raised in your review and we are happy to provide additional details if you have further questions. Your are right that the labeling of the sections might be not very clear. We will upload an updated version of the article that addresses these points as soon as we have obtained your feedback to our first response.\"}",
"{\"title\": \"Responses to your main concerns\", \"comment\": \"Thank you very much for your helpful review.\\n\\nRegarding your concerns, here are our responses:\\n\\n1. The second compartment is in principal necessary to discretize the gradient into spikes, in analogy to feedforward propagation. However, if the derivative of the activation function in equation (5) is calculated and saved before the backpropagation phase, the value of V is no longer required and the same compartment could be reused for U. This is however more of an implementation detail, and we used two compartments here to clarify that both forward and backward propagation use the integrate-and-fire model.\\n \\n2. In our framework, the SNN is guaranteed to have the same accumulated response as the ANN. However, for each value, there is in principle a large number of combinations of +1 and -1 that lead to the same response. For instance the value 5 in the ANN can be represented in the SNN by 5 times +1 or by 10 times +1 and 5 times -1. In the first case there are only five spikes that have to be processed, in the second case there are 15. Since the SNN can always propagate additional spikes to correct its current response, the final sum of all spikes will always be equivalent to the output of the ANN. How many spikes the SNN will use to encode a number depends on the order of the input spikes. If the input values are encoded in an irregular fashion, for instance the input value 5 is encoded by 5 negative spikes followed by 10 positive spikes, also the neurons in the network will first propagate spikes of one sign, and then spikes of another sign that may cancel each other out. This therefore produces useless computation. However, if the input is coded in a regular fashion, and 5 is coded by 5 positive spikes, also the neurons will respond more regularly and little corrective spikes are necessary. Empirically, we observe that for this regular input encoding, the number of spikes necessary to encode a number is close to the optimum, and on average there are only approximately 3.5% more spikes than the absolute value of the ANN activation (e.g. the value 100 in the ANN needs on average 103.5 spikes to be encoded in the SNN). \\nYou are correct that many ANNs are too large for the task they are used for, and possibly the number of neurons and parameters can be reduced, resulting in lower sparsity. However, SNNs are not only potentially more efficient in cases where a large number of values is exactly zero. It is sufficient that a large number of values is typically small. For instance, if 80% of activations in a 8-bit integer quantized ANN are 1, 15% are 2, and only the remaining 5% are much larger, almost all values can be represented by only 1 or 2 spikes. This means these neurons will only trigger 1 or 2 accumulation operations in the receiving neurons (and no multiplications). On the other hand, a standard 8-bit integer ANN will perform a 8-bit MAC operation for each of these neuron outputs, regardless of their numeric value.\\n\\n3. We compared our results to MNIST and CIFAR10 because these are still the most common benchmarks in SNN training. We will try to obtain results on CIFAR100 for the final version of the paper, we are however not sure if the time constraints will allow us to give reliable results by the end of the review period.\\n\\nWe will add these points in the updated version of the paper as soon as we have obtained your feedback. 
We are happy to reply to any further questions or remarks that you might have.\"}",
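To make the accumulated-response equivalence and the spike-count discussion in this exchange concrete, here is a minimal self-contained sketch (an editorial illustration, not the paper's code) of a signed integrate-and-fire neuron with threshold 1. Under the stated assumptions (integer weights and integer-valued, regularly coded inputs), the sum of emitted spikes equals the integer dot product the equivalent ANN neuron would compute, while the total number of spikes depends on how regularly the input is coded:

```python
import numpy as np

def run_signed_if_neuron(input_spike_frames, weights, theta=1.0):
    # Integrate weighted input spikes; emit a +1/-1 output spike whenever the
    # membrane potential crosses +theta/-theta, subtracting theta per emission.
    v, out = 0.0, []
    for frame in input_spike_frames:      # one frame = vector of {-1, 0, +1} spikes
        v += float(np.dot(weights, frame))
        while v >= theta:
            v -= theta
            out.append(+1)
        while v <= -theta:
            v += theta
            out.append(-1)
    return out

w = np.array([1.0, -2.0, 1.0, 3.0])       # integer weights shared with the ANN
x = np.array([3, -1, 2, 0])               # integer ANN input activations

# Regular coding: the integer x_i becomes |x_i| unit spikes of sign(x_i).
frames = [np.where(np.abs(x) > t, np.sign(x), 0)
          for t in range(int(np.abs(x).max()))]

spikes = run_signed_if_neuron(frames, w)
assert sum(spikes) == int(w @ x)          # accumulated response matches the ANN
print("ANN output:", int(w @ x), "| spikes emitted:", len(spikes))
```

With an irregular coding of the same integers, `sum(spikes)` stays unchanged but `len(spikes)` grows, which is exactly the wasted-computation effect described in point 2 of the authors' response.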
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper shows how SNNs can integrate backpropagation with a second accumulation module that discretizes errors into spikes. In other words, the authors show how to translate event-based information propagation (used by SNNs) into a backpropagation method, which is the main contribution. The description of establishing the equivalence between an ANN and SNN in Section 3 is mostly well done. They perform empirical studies on MNIST and CIFAR-10 to demonstrate the effectiveness of SpikeGrad.\\n\\n\\n= Main Concerns =\\n\\n1. It seems not clear to me why it is a good idea to introduce a second compartment with a threshold in each neuron as described in Eqn. 6. \\n2. I very much like the idea of \\\"translating\\\" an SNN into an ANN. I'm a bit confused about the computational complexity estimation of the SNN. In particular, it is not clear to me what is the practical implication of {n-n_min}/n_min < 0.035. Furthermore, in https://openreview.net/forum?id=rkg6PhNKDr, for ANNs on CIFAR-10, freezing 80% of the VGG19 parameters from the third epoch onwards only results in 0.24% drop in accuracy. I wonder if the advantage of SNN over ANN is still huge in this case. \\n3. I do not think experiments on MNIST are very useful, as the task is a toy task. I would suggest running at least one more experiment on CIFAR-100 or TinyImagenet.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"EDIT After Rebuttal: My understanding of the contributions of this paper has improved. I now increase my score to a weak accept.\\n\\nThis paper proposes a new backpropagation algorithm learning algorithm \\\"SpikeGrad\\\" for Spike-based neural network paradigm. Simulating this algorithm on a classical hardware would require a lot of time-steps. To circumvent this, they show how to construct a corresponding artificial neural net (that can be trained using the traditional gradient based algorithms) which is equivalent to the spiking neural net. Using this equivalence they simulate a large scale SNN on many real-world dataset (first paper to do so). In particular, they use MNIST and CIFAR-10 for this purpose. They show that training a fixed architecture using their method is comparable to other prior work which uses high-precision gradients to train them. They also show how to exploit sparsity of the gradient in the back propagation for SNN.\\n\\nThis paper is hard-to-follow for someone not familiar with the background material. In particular, without looking at prior literature it was hard to understand that \\\"integrate and fire neuron model\\\" is essentially the feedforward mechanism for the SNN. I would suggest the authors make this a bit more explicit. Moreover, it would serve the structuring of the paper to have a formal \\\"Preliminaries\\\" section, where all known stuff goes. It was hard to discern what is new in this paper, and what is from prior work and these are mixed in section 2. For instance, section 2 states \\\"SpikeGrad\\\" algorithm; but the main contribution (ie., the back propagation algorithm) only appears in the middle of this section. Likewise, I think section 3 can be arranged better. In particular, the equivalence is a \\\"formal\\\" statement and thus, could be stated as a theorem followed by a proof. It will also make it explicit as to what does it mean by an \\\"equivalent\\\" network. In fact, it is still not clear to me at this point what that statement means. Could you please elaborate this in the rebuttal? \\n\\nRegarding the conceptual contribution of this paper, if I understood things correctly, the main claim is that they give a new way to train SNN whose performance on MNIST and CIFAR-10 is comparable to other works. The second contribution is that they give the equivalence between ANN and SNN (point above). It is also unclear to me what the point regarding the sparse gradient in the backpropagation in the experimental section is trying to make? Could you please clarify this in the rebuttal as well?\\n\\nAt this point, the writing of this paper leaves me with many unanswered questions that needs to be addressed before I can make a more informed decision. Please provide those in the rebuttal and based on those will update my final score. But with my current understanding of this paper, I think this does not meet the bar. The contributions in this paper do not seem too significant.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a first framework of large-scale spiking neural network that exploits the the sparsity of the gradient during backpropagation, to save training energy.\\nLater, it provides detailed analysis to show the equivalence of accumulated response and the corresponding integer activation ANN.\\n\\nThe paper is clearly written. The forward and backward process with the spike activation and error activation function respectively to save energy is clearly demonstrated. The response equivalence of the proposed architecture and integer ANNs provides theoretical gurantee for the good performance in training accuracy. \\n\\nMy only concern is the lack of empirical support for the energy saving of the proposal. In order to show the effectiveness of the proposal, the authors should also provide time consumptions of the SNN and normal ANN. A mere comparison on sparsity doesn't really show the advantage of the proposal, since there is some computational overhead. For a system-level improvement, it's not sufficient to show the epoch-operation relation.\\nIf the authors could provide wall clock time comparisons, I will consider raising my score. \\n\\n==========\\nI find the response of the authors reasonable and address some of my concerns. Therefore I'm willing to raise my score to 6.\"}"
]
} |
ryl5CJSFPS | GENERALIZATION GUARANTEES FOR NEURAL NETS VIA HARNESSING THE LOW-RANKNESS OF JACOBIAN | [
"Samet Oymak",
"Zalan Fabian",
"Mingchen Li",
"Mahdi Soltanolkotabi"
] | Modern neural network architectures often generalize well despite containing many more parameters than the size of the training dataset. This paper explores the generalization capabilities of neural networks trained via gradient descent. We develop a data-dependent optimization and generalization theory which leverages the low-rank structure of the Jacobian matrix associated with the network. Our results help demystify why training and generalization are easier on clean and structured datasets and harder on noisy and unstructured datasets, as well as how the network size affects the evolution of the train and test errors during training. Specifically, we use a control knob to split the Jacobian spectrum into ``information" and ``nuisance" spaces associated with the large and small singular values. We show that over the information space learning is fast and one can quickly train a model with zero training loss that can also generalize well. Over the nuisance space training is slower and early stopping can help with generalization at the expense of some bias. We also show that the overall generalization capability of the network is controlled by how well the labels are aligned with the information space. A key feature of our results is that even constant width neural nets can provably generalize for sufficiently nice datasets. We conduct various numerical experiments on deep networks that corroborate our theoretical findings and demonstrate that: (i) the Jacobian of typical neural networks exhibits low-rank structure with a few large singular values and many small ones leading to a low-dimensional information space, (ii) over the information space learning is fast and most of the labels fall on this space, and (iii) label noise falls on the nuisance space and impedes optimization/generalization. | [
"Theory of neural nets",
"low-rank structure of Jacobian",
"optimization and generalization theory"
] | Reject | https://openreview.net/pdf?id=ryl5CJSFPS | https://openreview.net/forum?id=ryl5CJSFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"s3mlf1QDWY",
"HyxZzj92jH",
"S1lUTvchjH",
"Bklu7DqnsS",
"BygXbD0n5B",
"rkgb8JyCKr",
"Syer94DpFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738838,
1573853960936,
1573853117721,
1573852960075,
1572820731074,
1571839817285,
1571808396618
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2035/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2035/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2035/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2035/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2035/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2035/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This submission investigates the properties of the Jacobian matrix in deep learning setup. Specifically, it splits the spectrum of the matrix into information (large singulars) and ``nuisance (small singulars) spaces. The paper shows that over the information space learning is fast and achieves zero loss. It also shows that generalization relates to how well labels are aligned with the information space.\\n\\nWhile the submission certainly has encouraging analysis/results, reviewers find these contributions limited and it is not clear how some of the claims in the paper can be extended to more general settings. For example, while the authors claim that low-rank structure is suggested by theory, the support of this claim is limited to a case study on mixture of Gaussians. In addition, the provided analysis only studies two-layer networks. As elaborated by R4, extending these arguments to more than two layers does not seem straighforward using the tools used in the submission. While all reviewers appreciated author's response, they were not convinced and maintained their original ratings.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your time and efforts in reviewing our paper.\", \"re_point_1\": \"Our theory yields data-dependent bounds and does not require any assumptions on the data or the Jacobian. In short: If dataset has nice properties (quantified by theory e.g. low-rankness), bounds become stronger and theory works for any cutoff value of alpha as soon as the network is sufficiently wide. However, our result shows that when the Jacobian exhibits such low-rank structure one can use a large alpha and in turn small width networks to achieve good generalization (proportional to how aligned the labels are with top eigen-vectors of the Jacobian). As case study, we rigorously proved that the Jacobian indeed becomes low-rank when the input data obeys a mixture of Gaussian model. However, we note that for low-rank behavior to happen it is not necessary for the input data to be linearly separable or low rank. Indeed, we empirically demonstrated this low-rank/bimodal structure on CIFAR-10, a dataset that is clearly not linearly separable (unlike MNIST). The low-rankedness of the Jacobian originates from the representation power of the architecture for a given data set. It seems that when the network architecture can learn good data representations, the Jacobian becomes low rank. Stated differently, if the data features at the last hidden layer are approximately separable then we expect the Jacobian to be approximately low rank. It is difficult to see how a network can generalize without such low-rank or clusterable representations. Hence, we expect low-rank behavior to be prevalent and expect these observations to hold on larger datasets such as CIFAR 100 and ImageNet for typical networks that achieve good generalization performance. In fact, preliminary simulations on a subset of this data sets confirm this.\", \"re_point_2\": \"The low-rank structure of the Jacobian is strongly suggested by theory as demonstrated in Sec 2.3 when the data comes from a Gaussian Mixtures. Specifically, Thm 2.5 shows that if the noise level is small, the effective rank of the M-NTK is approximately K^2 C. To verify the effective low-rank property of the Jacobian on real networks and practical datasets (not generated by GMM) we empirically show that the Jacobian has bimodal structure on CIFAR-10 and MNIST. That being said, our generalization results hold for any spectrum cut-off and we don\\u2019t need any assumptions on the data distribution. Please see response to point 1 for further discussion about this. Thanks for the reference (https://arxiv.org/pdf/1910.05929.pdf). This is in line with our intuition and gives further credence to the observation that the Jacobian is low rank.\", \"re_point_3\": \"Great question! Cross-entropy term P will emphasize the points that are close to the decision boundary (small margin) and shrink the other ones (large margin). Hence, during training JP will be low-rank where low-rankness is not only due to J but also related to the number of points in the class boundaries. We used square loss to keep the theoretical analysis simpler, however we did make similar observations empirically on other loss functions. In particular, we indeed found that for both squared loss with softmax and cross-entropy loss the Jacobian has also low-rank structure and observed Jacobian adaptation on CIFAR10 with cross-entropy. 
Roughly stated for cross-entropy theory one has to replace the residual with the derivative of the loss and we expect that the general ideas presented in this paper will hold.\", \"re_point_4\": \"We are not completely sure if we understand your question on modeling the derivatives w.r.t each logit. As you pointed out, in the multi-class NTK case we have to differentiate each output w.r.t. each input feature and concatenate them to obtain J. In M-NTK, we still have a single Jacobian whose dimensions grow with number of classes and this is exactly what we assumed in theory and calculated in the numerical experiments.\", \"re_point_5\": \"The theoretical analysis on convergence and generalization in Jacot, 2018 hold for the infinite-width limit NTK, which we agree is restrictive in practice. Our theory does not make this assumption, and provides generalization results for finite width networks, where the width k can be even constant in some cases (in particular when the Jacobian is sufficiently low-rank). In particular, our results go beyond the NTK regime as we do not require all the eigenvectors of the Jacobian to remain approximately fixed. Rather we show that it is sufficient if only the few top ones remain approximately fixed. This is exactly why we can handle rather small widths not possible in the NTK analysis. Please see the case study in Section 2.3 which clearly demonstrates the advantages of our result with respect to the NTK regime. Furthermore, in Appendix D we go even further than this and begin to demystify the more mysterious adaptation behavior observed in our experiments which is clearly outside the purview of NTK style analysis.\"}",
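As a concrete companion to the information/nuisance discussion in these responses, the following is a minimal sketch on toy data (a hypothetical two-layer tanh network with fixed output weights v; this is an editorial illustration, not the authors' code or exact setup). It forms the training Jacobian row by row, splits its spectrum at a cutoff alpha, and measures how much of the label vector falls on the information space:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 40, 10, 20                       # samples, input dim, hidden width
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])                       # toy labels aligned with one direction
W = rng.standard_normal((k, d)) / np.sqrt(d)
v = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)

dphi = lambda z: 1.0 - np.tanh(z) ** 2     # derivative of the tanh activation

# Jacobian of f(x) = v^T tanh(W x) w.r.t. W, one row per training sample:
# d f(x) / d W_ij = v_i * phi'(w_i . x) * x_j
Z = X @ W.T                                # (n, k) pre-activations
J = ((dphi(Z) * v)[:, :, None] * X[:, None, :]).reshape(n, k * d)

U, s, _ = np.linalg.svd(J, full_matrices=False)
alpha = 0.1 * s[0]                         # spectrum cutoff (the "control knob")
r = int((s > alpha).sum())                 # dimension of the information space

y_info = U[:, :r] @ (U[:, :r].T @ y)       # label component on the information space
print("effective rank:", r, "of", min(n, k * d))
print("||y_info||^2 / ||y||^2 = %.2f" % ((y_info @ y_info) / (y @ y)))
```

The multi-class Jacobian (M-NTK) mentioned in point 4 is the same construction with one row per (sample, logit) pair, so its dimensions grow with the number of classes.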
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your time and efforts in reviewing our paper. First, many thanks for pointing out the template. Great catch! We fixed it.\\n\\nWe agree that a few of the high-level motivations of the paper is related to Arora et. al. and Jacot et al. However, our paper contains many new insights, studies completely new phenomena (adaptation, harnessing low-rank, rigorous early stopping analysis etc), new experiments and theoretical justifications. We have stated the similarities and differences between our paper and this work in various places in the paper. Let us point to a few (of many) of the novelties of our work w.r.t. the Arora et. al. paper.\\n\\n(1)\\tThe central premise of this paper is how one can utilize the low-rank nature of the Jacobian to provide generalization guarantees. Arora et. al. does not utilize low-rank structure at all. In particular, as the minimum eigenvalue of the NTK kernel goes to zero the required width would go to infinity. In the limit where the Jacobian is exactly low-rank the results of Arora et at are vacuous (requires $k\\\\tendsto +inf) for instance when sigma tends to zero in the simple case study of Section 2.1 or the simplest binary classification problem where (x,y) samples have the discrete distribution (1,1) or (-1,-1). The advantage of this becomes clear during the network size analysis: Our network width is data-dependent, and it can be as small as constant (or logarithmic) if the data is low-dimensional. In contrast, results of Arora et al. don\\u2019t even apply if the kernel matrix is rank deficient. Observe that kernel matrix is expected to have bad condition number for structured data (in contrast to random data).\\n(2)\\tThe concept of information and nuisance space also does not seem to appear in Arora et al.\\n(3)\\tAnother key aspect of the results of this paper is the use of early stopping. The results of Arora et al are based on iterating to convergence and do not have early stopping which is important for the results of this paper. In fact, earlier versions of Arora et al seemed to advocated that early stopping is completely unnecessary!\\n(4)\\tWe also note that as mentioned the paper in the extreme case where alpha_0=sqrt{lambda}=sqrt{lambda_min(Sigma(X))) with with K=1 we require k>= cn^4/lambda^2 whereas Arora et al requires k>= cn^8/lambda^6. So that in this very special case our results can prove the results of Arora et al with less stringent width requirements.\\n(5)\\tOur contributions go beyond the NTK regime as we do not require all the eigen directions of the Jacobian to be fixed across iterations rather we only need the very top ones to remain fixed. \\n(6)\\tWe provide a detailed experimental study of how neural networks learn better low-rank representations over time and provide theoretical justification for it (see Appendix D). Hence, our contributions go even beyond the NTK regime on the top eigen directions mentioned in (5).\\n(7)\\tWe also notably do not require random initialization (see Theorem 2.3). We are not familiar with any comparable generalization bound for such deterministic initialization which can be used for pretrained models.\\n(8)\\tFinally of minor importance we study K>1.\\n\\n\\nIn summary we believe that some similarities in the high-level motivations should not overshadow the novel new insights, phenomena, experiments and theoretical results in this paper and we hope that you reconsider your score.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for your effort and time for reviewing our paper.\\n\\nRe \\u201cThe faster convergence of the model in the information space is not surprising and was observed by Jacot 2018\\u201d and \\u201cThe results uncovered are not surprising and predicted by Jacot 2018 (granted, it is interesting to see that the result holds for finite width and non-continuous gradient flow).\\u201d\\n\\nFirst let us agree that we should have discussed Jacot 2018 in further detail. We were not aware that Jacot et al. contained similar insights to Arora et al. (which we compared to). We revised the paper and now properly refer to Jacot et al (see Prior Art section). Compared to Jacot et al., paper contains many new insights (adaptation, harnessing low-rank, early stopping generalization analysis etc), provides new experiments and theoretical justifications. Below we outline some of these novelties\\n\\n(1)\\tThe central premise of this paper (as clear by the title) is how one can utilize the low-rank nature of the Jacobian to provide generalization guarantees. Jacot et al. does not utilize low-rank structure at all. Jacot et al. does mention principal components and lower eigenvectors in one paragraph (see their page 7). This is obviously related but much closer to Arora et al.\\u2019s analysis (which is also followup on Jacot et al.) than ours. In contrast this paper quantifies the low-rankness of features, demonstrates real datasets exhibit Jacobian low-rankness, and states many provable benefits (generalization, small network width, convergence\\u2026) via low-rankness.\\n(2)\\tThe concept of info and nuisance space (which helps quantify low-rankness) is also new to this paper and does not seem to appear in Jacot et al. \\n(3)\\tThe result of Jacot et al. are for gradient flow on infinite width networks. Our results hold for gradient descent with finite width neural networks. In fact, in some cases we can handle constant width thanks to carefully quantifying the benefit of low-rankness. Any simple discrete variant of Jacot et al. would require the width to scale polynomially in the size of the training data (e.g. the Arora et al. paper). It will also get much worse for badly conditioned NTK see Section 2.3 on mixture models.\\n(4)\\tOur contributions go beyond the NTK regime as we do not require all the eigendirections of the Jacobian to be fixed across iterations rather we only need the very top ones to remain fixed. \\n(5)\\tWe provide a detailed experimental study of how neural nets learn better low-rank representations over time and provide theoretical justification for it (see Appendix D). Hence, our contributions go even beyond the NTK regime on the top eigen directions mentioned in (5).\\n(6)\\tAs also noted by the reviewer we focus on generalization and classification unlike Jacot et. al. 2018.\\n\\n\\nRe \\u201cthe setting of theoretical contributions is too restrictive. The model exposed in section 1 is extremely simplified, as only W can be learned, and V is fixed. As a result, the model is in essence completely linear...\\u201d \\n\\nWe respectfully disagree. First re \\u201cthe model is linear\\u201d. If W was fixed, then yes, the model would be linear but as a function of W the model is definitely not linear. We note that we have only focused on learning the first layer of the network for clarity of exposition. 
That said most of the arguments in the paper have been written in a way which extension to training of both layers is possible (including the Radamecher complexity arguments). Also, in Appendix B we already sketched how one would go about handling joint optimization of both layers.\\n\\nRe Nitpick \\u201cour results may shed light on the generalization with pre-trained models\\u201d seems stretch. While I agree that theories that require random initialization won\\u2019t work for transfer learning, the results of the authors don\\u2019t leverage anything about pre-training.\\n\\nOur contribution re pretraining is that we do not need random initialization unlike related works. Any initial Jacobian can be your NTK (applies to finite networks under smooth activations). Hence guarantees hold from arbitrary initialization. Numerical section also demonstrates network learns better low-rank representation over time. When you put these together, it is reasonable to highlight possible benefits on transfer learning. That said, we will tone down the statement.\\n\\nRe Model is too simple (even simpler than a standard one hidden layer network). \\n\\nPlease see response earlier above which demonstrates that the proof does extend to joint optimization of both layers.\\n\\nRe The paper is incremental, as the link between convergence rate and the projection of the desired outputs on the information space was already made in Jacot 2018\\n\\nPlease see response to question above clarifying the novelties with respect to Jacot et al.\\n\\n\\nIn conclusion we believe that some similarities in the high-level motivations should not overshadow the novel new insights, phenomena, experiments and theoretical results in this paper and we hope that you reconsider your score.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes new (data dependent) generalization guarantees based on the Jacobian of the model. The authors suggest that if the desired outputs lie into the information space (the subspace spanned by the largest eigenvectors of the NTK), the model will train faster and better generalization will be achieved.\\n\\nThe faster convergence of the model in the information space is not surprising and was observed by Jacot 2018. The authors make improvements over this result:\\n - They present a generalization result, whereas Jacot 2018 focuses only on the convergence on the training set. It is also formulated as a classification problem instead of a regression one.\\n- It doesn\\u2019t need JJ^t to stay constant during the training.\\n\\nHowever, the setting considered by the authors to derive their theoretical contributions is too restrictive. The model exposed in section 1 is extremely simplified, as only W can be learned and V is fixed. As a result, the model is in essence completely linear: the goal is, for a given V, to learn a \\u201cgood\\u201d hidden layer using a linear model and the loss L : h -> ||V phi(h) - y ||. \\n\\nThe experiment on cifar10 is interesting, especially the section regarding label corruption. A more extensive empirical investigation is this direction would be of great value. The results uncovered are not surprising and predicted by Jacot 2018 (granted, it is interesting to see that the result holds for finite width and non-continuous gradient flow).\\n\\nI think this paper in its current state is not good enough for two reasons. First, the major contribution is a generalization bound that is derived for a model that is too simple (even simpler than a standard one hidden layer network). Beside this result, the rest of the paper is incremental, as the link between convergence rate and the projection of the desired outputs on the information space was already made in Jacot 2018\", \"nb\": \"I did not check the derivation of the results in annexes.\", \"nitpick\": \"\", \"page_2\": \"\\u201cour results may shed light on the generalization capabilities of networks initialized with pre-trained models commonly used in meta/transfer learning\\u201d seems like a bit of a stretch. While I agree that theories that requires random initialization won\\u2019t work for transfer learning, the results presented by the authors don\\u2019t really leverage anything particular about pre-training.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors use the empirically supported assumption of the low-rank nature of the neural network Jacobian to provide new elements of data-dependent optimization and generalization theory. By modelling the data as a low-rank object itself, they analytically study the evolution of the train and test errors. The paper divides the space of weights and biases into the \\u201cinformation\\u201d and \\u201cnuisance\\u201d subspaces, spanned by the top largest and the remaining singular vectors of the Jacobian respectively. They use this division and its alignment with the low-rank structure of the data to talk about convergence speed. Finally, they provide numerical experiments to back their claims.\\n\\nI enjoyed the paper, however, there were many points where I was unclear on the precise nature of the assumptions used / the strength of the results.\", \"disclaimer\": \"I didn\\u2019t manage to read through the proofs in the appendix and cannot therefore vouch for its correctness.\\n\\n-- Point 1 --\\nLeveraging the data structure\\n\\nI am unclear on how exactly you were modelling the structure of the data. From your proofs, it seems that you have been dealing with the matrix X comprising the concatenated flatted vectors of the raw input features (e.g. pixels) of the input data [x1,x2,...,x_datasetsize]^T. In particular, the only place where I see data explicitly enter is in Definition 2.1, where you look at the X X^T and fi\\u2019(w X) fi\\u2019(w X)^T.\\n\\nIf the data is linearly separable in the raw input space on its own, then I see that the matrix X X^T will be low-rank (related to the number of classes). I also see your point about the connection of the y to the relevant (semantic) clusters. The same argument could by applied to fi\\u2019(w X) fi\\u2019(w X)^T -- provided that the features produced are again linearly separable, we will observe this object to have a low (number of classes - 1) rank. \\n\\nWhat is unclear to me is whether these assumptions are warranted. I understand that some simple datasets, e.g. MNIST, are essentially linearly separable in the raw pixels, and therefore the X X^T indeed is low rank. However, I doubt anything like that is true for big datasets, such as ImageNet. For deep networks that are used on these big datasets, such a modelling assumptions would likely not be true. I wonder how this relates to your results, since the low-rank nature of the Hessian is observed even for those networks, which is in turn related to the low rank nature of the Jacobian.\\n\\n-- Point 2 --\\nData implicitly present in the Jacobian tensor.\\n\\nI wonder how you modelled the Jacobian tensor that you started using on page 3. 
Since the Jacobian -- the derivative of the output logits with respect to the weights, has to be evaluated at a particular input X, the assumptions you make on the data are in turn having an effect on the Jacobian, and vice versa.\\n\\nI am unclear on exactly what assumptions you make about the object, and whether you are actually saying that its low rank structure comes from the data, is empirical observed and therefore assumed, or due to the network regardless of the data.\", \"i_recently_saw_a_new_arxiv_submission_that_seems_to_be_looking_into_this_on_real_networks\": \"https://arxiv.org/pdf/1910.05929.pdf Their model explicitly assumes that logit gradients cluster in the weight space in a particular way.\\n\\n-- Point 3 --\\nSquare loss vs softmax\\n\\nYou are using the square loss |f(X) - y|^2 throughout your work. Many of the empirical low-rank observations of the Hessian (related to the JJ^T) are performed on real networks with the cross-entropy loss. While the Hessian with the square loss is of the form JJ^T + terms, the softmax in the cross-entropy loss introduces an additional cross-term (let us call it P for now), which in turn makes it JP(PJ)^T + terms. Do you know how this relates to your results?\\n\\nMore generally, does the square loss you use make the results significantly different from what we would get for a softmax?\\n\\n-- Point 4 --\\nNeural Tangent Kernel (NTK) -- assumptions\\n\\nUnder the NTK assumption, you still need to model the derivatives of each logit with respect to each weight on each input in order to obtain the Jacobian matrix and in turn JJ^T. I am therefore very confused by \\u201cBased on our simulations the M-NTK indeed haslow-rank structure with a few large eigenvalues and many smaller ones. \\u201c on page 5. What assumptions exactly do you use in your model?\\n\\n-- Point 5 --\\nNeural Tangent Kernel (NTK) -- validity\\n\\nBy assuming the NTK holding, do you limit the validity of your results? I think it is believed that NTK might not, generally speaking, be enough to capture the complexity of DNNs, and therefore assuming it might limit the range of applicability of any results derived assuming it.\\n\\n-- Conclusion --\\nIn general, my points of confusion often stem from being unsure as to what parts of the argument were assumed and based on what empirical / theoretical evidence, and what parts were generically true. While the paper seems interesting, I am not sure what its novel contribution is and how broad the claims made actually are in their applicability.\", \"appendix\": \"I was not able to judge the proofs in the appendix.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Note: The template used in this paper is of ICLR 2019, not ICLR 2020.\\n\\nThis paper identifies the information space and nuisance space by thresholding the singular values of the network's jacobian and shows that generally the residuals projected to the information space can be effectively optimized to zero, thus leading to efficient optimization and good generalization.\\n\\nI believe this paper should be rejected because its motivation and technical framework are not novel enough in that 1) the motivation of decomposition along gradient matrix is already well-founded by a series of paper related to neural tangent kernel 2) the techniques used here also fall in a similar framework. The following is the detailed comments.\\n\\nFirst, this paper's motivation is to employ the singular decomposition of the jacobian. Actually, the motivation is essentially the same as (Arora et al. 2019) and many other works. The neural tangent kernel matrix defined there is exactly the inner product of two jacobian (or gradient) described here and to employ the singular decomposition is actually corresponding to employing the eigendecomposition of the neural tangent kernel, which appears first in (Arora et al. 2019). The logic behind dividing the singular space into information space and nuisance space is that gradient descending along different directions has different speeds, determined by the eigenvalues.\\n\\nThe framework presented in this paper is based on the assumption that the parameters will not be far away from the starting point. Such an assumption further guarantees the trajectory won't be far away from the linearized trajectory, leading to an optimization guarantee. This approach is widely used by many works, and well-known for a considerably long time. Also, the paper's proof is complicated and lengthy which hinders its clarity.\\n\\nTo summarize, this paper definitely contains some rigorous analysis which I appreciate, but it doesn't provide new insights into optimization and generalization for deep nets. The motivation and logic behind are not novel enough, the main theorem neither. So I suggest a weak rejection to this paper in its current form.\\n\\n[1] Arora, Sanjeev, et al. \\\"Fine-grained analysis of optimization and generalization for over-parameterized two-layer neural networks.\\\" arXiv preprint arXiv:1901.08584 (2019).\\n\\n****** Post-rebuttal response ******\\n\\nThanks to the authors' response. I have read the rebuttal and unfortunately, I feel it is still not strong enough to justify this paper's novelty issue and I will keep my rating unchanged.\"}"
]
} |
rke5R1SFwS | Learning to Remember from a Multi-Task Teacher | [
"Yuwen Xiong",
"Mengye Ren",
"Raquel Urtasun"
] | Recent studies on catastrophic forgetting during sequential learning typically focus on fixing the accuracy of the predictions for a previously learned task. In this paper we argue that the outputs of neural networks are subject to rapid changes when learning a new data distribution, and networks that appear to "forget" everything still contain useful representations of previous tasks. We thus propose that instead of enforcing the output accuracy to stay the same, we should aim to reduce the effect of catastrophic forgetting on the representation level, as the output layer can be quickly recovered later with a small number of examples. Towards this goal, we propose an experimental setup that measures the amount of representational forgetting, and develop a novel meta-learning algorithm to overcome this issue. The proposed meta-learner produces weight updates of a sequential learning network, mimicking a multi-task teacher network's representation. We show that our meta-learner can improve its learned representations on new tasks, while maintaining a good representation for old tasks. | [
"Meta-learning",
"sequential learning",
"catastrophic forgetting"
] | Reject | https://openreview.net/pdf?id=rke5R1SFwS | https://openreview.net/forum?id=rke5R1SFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"EZa-YwkGJY",
"Hye_UZHisB",
"B1lMX-HjoB",
"Skgd2xBojH",
"Skg8UxriiB",
"SJlCEgrsoH",
"B1xinJSiiB",
"HklvmbYRYS",
"SkxKwvdptB",
"rJgWaM12KS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738811,
1573765456173,
1573765401940,
1573765295907,
1573765198328,
1573765174303,
1573765043005,
1571881246994,
1571813216534,
1571709625049
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2034/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2034/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2034/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2034/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2034/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2034/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2034/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2034/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2034/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper addresses the setting of continual learning. Instead of focusing on catastrophic forgetting measured in terms of the output performance of the previous tasks, the authors tackle forgetting that happens at the level of the feature representation via a meta-learning approach. As rightly acknowledged by R2, from a meta-learning perspective the work is quite interesting and demonstrates a number of promising results.\", \"however_the_reviewers_have_raised_several_important_concerns_that_placed_this_work_below_the_acceptance_bar\": \"(1) the current manuscript lacks convincing empirical evaluations that clearly show the benefits of the proposed approach over SOTA continual learning methods; specifically the generalization of the proposed strategy to more than two sequential tasks is essential; also see R1\\u2019s detailed suggestions that would strengthen the contributions of this approach in light of continual learning;\\n(2) training a meta-learner to predict the weight updates with supervision from a multi-task teacher network as an oracle, albeit nicely motivated, is unrealistic in the continual learning setting -- see R1\\u2019s detailed comments on this issue. \\n(3) R2 and R3 expressed concerns regarding i) stronger baselines that are tuned to take advantage of the meta-learning data and ii) transferability to the different new tasks, i.e. dissimilarity of the meta-train and meta-test settings. Pleased to report that the authors showed and discussed in their response some initial qualitative results regarding these issues. An analysis on the performance of the proposed method when the meta-training and testing datasets are made progressively dissimilar would strengthen the evaluation the proposed meta-learning approach. \\nThere is a reviewer disagreement on this paper. AC can confirm that all three reviewers have read the rebuttal and have contributed to a long discussion. Among the aforementioned concerns, (3) did not have a decisive impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (2) make it very difficult to assess the benefits of the proposed approach, and were viewed by AC as critical issues. AC suggests, that in its current state the manuscript is not ready for a publication and needs a major revision before submitting for another round of reviews. We hope the reviews are useful for improving and revising the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to R3 (part 2)\", \"comment\": \"------\", \"re\": \"Writing: We thank R3 for pointing out. We have already updated our manuscript and fixed these typos. We will do another pass to make sure the writing is clear and reorganizing the paragraphs.\\n\\n------\", \"references\": \"Chelsea Finn, Pieter Abbeel, and Sergey Levine. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" In ICML 2017.\\nKhurram Javed, Martha White. \\u201cMeta-Learning Representations for Continual Learning.\\u201d In NeurIPS 2019 (to appear).\"}",
"{\"title\": \"Response to R3\", \"comment\": \"We thank R3 for the insightful and constructive feedback. We address individual comments below.\\n\\n------\", \"re\": [\"Baselines: We thank R3 for pointing out these baselines. During the past few days, we gathered initial experimental results of three new baselines, as requested by the reviewer.\", \"Random: This is a baseline suggested by R3. We use random feature projection of the images using ResNet-32 and learn a linear readout layer.\", \"Rep: This is representation learning on the meta-training set, as pointed out by R3 we don\\u2019t have a baseline that leverage meta-training data. We use the meta-training 50 classes plus CIFAR-5A original classes to pre-train a representation backbone using standard classification. Then we use linear readout to directly get the classification accuracy of old+new classes.\", \"MAML (Finn et al., 2017): We first pretrain with CIFAR-5A, then for each meta-learning step, we unroll the readout SGD steps for 10 steps, and then backprop through SGD from the \\u201cquery\\u201d set to learn a good representation of the backbone. During testing, we train the readout layer till convergence. Note that this is an attempt to replicate the MAML method proposed in Javed and White (2019) in our experimental settings. The OML method cannot be adapted to our setting, since we do many SGD steps per task instead of one per task.\", \"As shown in Table-2, even though learning a good representation on 50+5 classes can contribute some gain over the \\u201cFreeze\\u201d baseline, finetuning representation on the new classes are still necessary. Furthermore, Rep and MAML still suffer from catastrophic forgetting on the representation level on Task A.\"], \"table_1\": \"Results on CIFAR 5A -> Tiny ImageNet\\n | Task A | Task B |\\n---------------------------------------------\\n0 step of training Task B\\n---------------------------------------------\\nFreeze | 92.4 +/- 0.2 | 65.5 +/- 4.4 |\\n---------------------------------------------\\n500 steps of training Task B\\n---------------------------------------------\\nSGD | 78.3 +/- 2.5 | 77.0 +/- 1.6 |\\nSGD x0.1 | 87.9 +/- 0.4 | 72.1 +/- 2.6 |\\nLWF | 81.3 +/- 1.6 | 78.6 +/- 2.3 |\\nEWC | 79.6 +/- 1.6 | 79.2 +/- 1.7 |\\nOurs | 87.2 +/- 0.6 | 75.5 +/- 1.9 |\\n---------------------------------------------\\nTeacher | 89.3 +/- 0.5 | 76.0 +/- 3.1 |\\n\\n-------\", \"table_2\": \"Additional baselines for Experiment 2\\n | Task A | Task B |\\n---------------------------------------------\\n0 step of training Task B\\n---------------------------------------------\\nFreeze | 92.4 +/- 0.2 | 71.1 +/- 7.5 |\\nRandom | 24.6 +/- 3.5 | 34.3 +/- 7.2 |\\nRep* | 93.3 +/- 0.6 | 82.6 +/- 4.6 |\\nMaML* | 93.6 +/- 0.2 | 80.7 +/- 4.7 |\\n---------------------------------------------\\n500 steps of training Task B\\n---------------------------------------------\\nSGD | 78.4 +/- 1.9 | 88.2 +/- 1.3 |\\nSGD 0.1 | 84.8 +/- 0.5 | 81.5 +/- 2.5 |\\nLWF | 81.8 +/- 2.0 | 89.5 +/- 1.1 |\\nEWC | 80.6 +/- 0.8 | 88.8 +/- 1.1 |\\nRep* | 80.2 +/- 2.1 | 87.9 +/- 3.1 |\\nMaML* | 82.4 +/- 2.6 | 89.1 +/- 3.7 |\\nOurs^ | 88.0 +/- 0.6 | 86.7 +/- 1.3 |\\n---------------------------------------------\\nTeacher | 91.1 +/- 0.4 | 88.9 +/- 1.4 |\\n*: Main network parameters are pre-trained with all 50+5 meta-training classes instead of 5 classes.\\n^: Meta-learner parameters are trained with 50 meta-training classes, but the main network parameters are only trained with 5 classes.\"}",
"{\"title\": \"Response to R2\", \"comment\": \"Thank you for your valuable review. We provide detailed response below.\\n\\n-------\", \"re\": \"true derivative or first order approximation: We compute the true derivative since this is the definition for gradients. We acknowledge that there are prior literature that shows first order approximation can also work (e.g. \\u201cMAML\\u201d and \\u201clearning to learn by gradient descent by gradient descent\\u201d).\", \"table_1\": \"Results on CIFAR 5A -> Tiny ImageNet\\n | Task A | Task B |\\n---------------------------------------------\\n0 step of training Task B\\n---------------------------------------------\\nFreeze | 92.4 +/- 0.2 | 65.5 +/- 4.4 |\\n---------------------------------------------\\n500 steps of training Task B\\n---------------------------------------------\\nSGD | 78.3 +/- 2.5 | 77.0 +/- 1.6 |\\nSGD x0.1 | 87.9 +/- 0.4 | 72.1 +/- 2.6 |\\nLWF | 81.3 +/- 1.6 | 78.6 +/- 2.3 |\\nEWC | 79.6 +/- 1.6 | 79.2 +/- 1.7 |\\nOurs | 87.2 +/- 0.6 | 75.5 +/- 1.9 |\\n---------------------------------------------\\nTeacher | 89.3 +/- 0.5 | 76.0 +/- 3.1 |\\n\\n-------\"}",
"{\"title\": \"Response to R1 (part 2)\", \"comment\": \"------\\nWe also thank R1 for providing a detailed list of references. We have already cited most of them and will cite the last two. We carefully checked the reference R1 provided, in particular,\\n\\nEbrahimi, Sayna, et al. \\\"Uncertainty-guided Continual Learning with Bayesian Neural Networks.\\\" has not been published in a conference venue, and it may be too early to compare with it. \\n\\nLopez-Paz, David, and Marc'Aurelio Ranzato. \\\"Gradient episodic memory for continual learning.\\\" and Shin, Hanul, et al. \\\"Continual learning with deep generative replay.\\\" and Nguyen, Cuong V., et al. \\\"Variational continual learning\\u201d and Aljundi, Rahaf, et al. \\\"Online continual learning with no task boundaries.\\\" We argue that these paper have a different setting compared to ours since they require a buffer whereas our method has no storage of past data/gradient. Having past data storage can usually improve the performance and our method can potentially also get a boost. Having a data buffer can also cost a lot of memory storage depending on input/weight dimension. Therefore, we argue it won\\u2019t be a fair setting to compare with methods with data buffers.\\n\\nZenke, Friedemann, Ben Poole, and Surya Ganguli. \\\"Continual learning through synaptic intelligence.\\\" has very similar performance to EWC in their paper. We have cited this work already.\\n\\nSerr\\u00e0, J., Sur\\u00eds, D., Miron, M. & Karatzoglou, A.. (2018). Overcoming Catastrophic Forgetting with Hard Attention to the Task and other pruning based papers. Thanks for pointing out. We have cited and will compare to them in the future. One thing to note is that, in these works, the model needs to know which task ID it is currently dealing with, and thus can turn on the pruning procedure for the next session. This can potentially be a limitation for dynamic incoming tasks.\\n\\n------\", \"references\": \"Chelsea Finn, Pieter Abbeel, and Sergey Levine. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" In ICML 2017.\\nKhurram Javed, Martha White. \\u201cMeta-Learning Representations for Continual Learning.\\u201d In NeurIPS 2019 (to appear).\"}",
"{\"title\": \"Response to R1\", \"comment\": \"We thank R1 for valuable and constructive feedback. We address individual points below.\\n\\n-------\", \"re\": [\"Baselines and references: To address both R1 and R3\\u2019s comment, we added three more baselines.\", \"Random: This is a baseline suggested by R3. We use random feature projection of the images using ResNet-32 and learn a linear readout layer.\", \"Rep: This is representation learning on the meta-training set, as pointed out by R3 we don\\u2019t have a baseline that leverage meta-training data. We use the meta-training 50 classes plus CIFAR-5A original classes to pre-train a representation backbone using standard classification. Then we use linear readout to directly get the classification accuracy of old and new classes.\", \"MAML (Finn et al., 2017): We first pretrain with CIFAR-5A, then for each meta-learning step, we unroll the readout SGD steps for 10 steps, and then backprop through SGD from the \\u201cquery\\u201d set to learn a good representation of the backbone. During testing, we train the readout layer till convergence. Note that this is an attempt to replicate the MAML method proposed in Javed and White (2019) in our experimental settings. The OML method cannot be adapted to our setting, since we do many SGD steps per task instead of one per task.\", \"As shown in Table-1, even though learning a good representation on 50+5 classes can contribute some gain over the \\u201cFreeze\\u201d baseline, finetuning the representation on the new classes is still necessary. Furthermore, Rep and MAML still suffer from catastrophic forgetting on the representation level on Task A.\"], \"table_1\": \"Additional baselines for Experiment 2\\n | Task A | Task B |\\n---------------------------------------------\\n0 step of training Task B\\n---------------------------------------------\\nFreeze | 92.4 +/- 0.2 | 71.1 +/- 7.5 |\\nRandom | 24.6 +/- 3.5 | 34.3 +/- 7.2 |\\nRep* | 93.3 +/- 0.6 | 82.6 +/- 4.6 |\\nMaML* | 93.6 +/- 0.2 | 80.7 +/- 4.7 |\\n---------------------------------------------\\n500 steps of training Task B\\n---------------------------------------------\\nSGD | 78.4 +/- 1.9 | 88.2 +/- 1.3 |\\nSGD 0.1 | 84.8 +/- 0.5 | 81.5 +/- 2.5 |\\nLWF | 81.8 +/- 2.0 | 89.5 +/- 1.1 |\\nEWC | 80.6 +/- 0.8 | 88.8 +/- 1.1 |\\nRep* | 80.2 +/- 2.1 | 87.9 +/- 3.1 |\\nMaML* | 82.4 +/- 2.6 | 89.1 +/- 3.7 |\\nOurs^ | 88.0 +/- 0.6 | 86.7 +/- 1.3 |\\n---------------------------------------------\\nTeacher | 91.1 +/- 0.4 | 88.9 +/- 1.4 |\\n*: Main network parameters are pre-trained with all 50+5 meta-training classes instead of 5 classes.\\n^: Meta-learner parameters are trained with 50 meta-training classes, but the main network parameters are only trained with 5 classes.\"}",
"{\"title\": \"General response\", \"comment\": \"We thank all reviewers for their time and valuable comments. Here is a summary of the main points.\\n\\n1. We added transferability experiment to Tiny ImageNet as Task B (see R2, R3 response)\\n\\n2. We added more baselines that use meta training data (see R1, R3 response)\\n\\n3. We updated Figure 1 to illustrate the significance of doing measurement of forgetting on the representation level. The change in accuracy is less drastic and more stable (with less variance across runs). (see R3 response)\\n\\nPlease see individual responses below for details of the aforementioned items, as well as other minor points.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: This paper introduces a variation on measuring catastrophic forgetting in sequential learning at the representation level and attempts to resolve forgetting issue with the help of a meta-learner that predicts weight updates for previous tasks while it receives supervision from a multi-task learner teacher. The new method is evaluated on sequences of two tasks while task 1 data remains available at all times to the teacher.\", \"pros\": \"(+): This paper is very well-written and very well-motivated. \\n(+): Tackling continual learning from a meta-learning approach is novel and not yet well-explored. \\n(+): Literature review is done precisely well.\\n\\nCons that significantly affected my score and resulted in rejecting the paper are two-fold. \\n\\nFirst, based on my understanding from the paper, it appears that this work has a significant contradictory assumption with a regular continual learning setup and that is to provide access to the entire dataset from an old task while we learn a new task. This changes the problem from continual/sequential/lifelong learning to multi-task learning. All the prior work that were beautifully reviewed in section 1 and 2 obey this assumption where access to previous tasks\\u2019 data is either impossible (ex. [1,3,4,5,6,7,8] in the below list ) or is very limited (ex. [2]). \\n\\nSecond, is the experimental setting. The experiments are accurately described and performed but authors have only considered sequence of 2 tasks which is far from being considered as a continual learning setting. I would like to ask the authors to explain how this method can be extended to multiple tasks and how much of the past data they should provide while training? Another drawback in the experiments is about the baselines. Despite addressing the most recent papers in section 2, authors have only made comparison against two relatively old approaches (EWC by Kirkpatrickthat et al from 2016 as well as LwF by Li & Hoiem presented at ECCV 2016, I believe the authors have cited the journal version of the work published in 2018 but the work is actually from ECCV 2016). Although these methods are still included as baselines in the literature, more recent approaches which have outperformed these need to be provided as well. I have provided a list of papers which achieved superior performance to the current baselines below which is arranged chronologically and is indeed not limited to this list as it is not realistic to list all prior work since 2016 in here. \\n\\nI would be happy to change my score if authors can address the above concerns about considering distinguishing multi-task learning from continual learning and providing a realistic evaluation setup with more than 2 tasks and comparison with current state of the art methods.\\n\\n[1] Zenke, Friedemann, Ben Poole, and Surya Ganguli. \\\"Continual learning through synaptic intelligence.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\\n[2] Lopez-Paz, David, and Marc'Aurelio Ranzato. \\\"Gradient episodic memory for continual learning.\\\" Advances in Neural Information Processing Systems. 2017.\\n[3] Shin, Hanul, et al. 
\\\"Continual learning with deep generative replay.\\\" Advances in Neural Information Processing Systems. 2017.\\n[4] Nguyen, Cuong V., et al. \\\"Variational continual learning.\\\" arXiv preprint arXiv:1710.10628 (2017).\\n[5] Serr\\u00e0, J., Sur\\u00eds, D., Miron, M. & Karatzoglou, A.. (2018). Overcoming Catastrophic Forgetting with Hard Attention to the Task. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:4548-4557\\n[6] Schwarz, Jonathan, et al. \\\"Progress & compress: A scalable framework for continual learning.\\\" arXiv preprint arXiv:1805.06370 (2018). \\n[7] Mallya, Arun, and Svetlana Lazebnik. \\\"Packnet: Adding multiple tasks to a single network by iterative pruning.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n[8] Ebrahimi, Sayna, et al. \\\"Uncertainty-guided Continual Learning with Bayesian Neural Networks.\\\" arXiv preprint arXiv:1906.02425 (2019).\\n[9] Aljundi, Rahaf, et al. \\\"Online continual learning with no task boundaries.\\\" arXiv preprint arXiv:1903.08671 (2019).\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------\\n------------------------------------------------------------------------------------------------------------------------------------------------------\\n------------------------------------------------------------------------------------------------------------------------------------------------------\", \"post_rebuttal_review\": \"I disagree with the authors claiming that this work is continual learning (sequential learning + avoiding forgetting).\\nDespite introducing 9 recent continual learning work to authors in my initial review, they added 2 meta-learning baselines (MAML,REP), keeping 2 naive and old CL baselines is not acceptable. I reply to authors comment below regarding the baselines:\\n\\n[Authors' reply:] Lopez-Paz, David, and Marc'Aurelio Ranzato. \\\"Gradient episodic memory for continual learning.\\\" and Shin, Hanul, et al. \\\"Continual learning with deep generative replay.\\\" and Nguyen, Cuong V., et al. \\\"Variational continual learning\\u201d and Aljundi, Rahaf, et al. \\\"Online continual learning with no task boundaries.\\\" We argue that these paper have a different setting compared to ours since they require a buffer whereas our method has no storage of past data/gradient. Having past data storage can usually improve the performance and our method can potentially also get a boost. Having a data buffer can also cost a lot of memory storage depending on input/weight dimension. Therefore, we argue it won\\u2019t be a fair setting to compare with methods with data buffers.\\n\\n[Reviewer's reply:] GEM (Lopez-Paz et al., 2017) and its faster version (A-GEM) (Chaudhry, et al. 2018) and other memory based methods such as MER (Riemer et al. 2018), ER-RES (Chaudhry et al. 2019), they use memory sizes of at most 6MB to store samples but they only do a **single epoch** through the data. So if it is not fair, it would be for those methods given the computational expenses of this paper. VCL (Nguyen et al. 
2018) in its vanilla version does not use coreset memory if that is still your concern.\\n\\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n[Authors' reply:] Zenke, Friedemann, Ben Poole, and Surya Ganguli. \\\"Continual learning through synaptic intelligence.\\\" has very similar performance to EWC in their paper. We have cited this work already.\\n\\n[Reviewer's reply:] This method is an online version of EWC which is faster despite the on-par performance. So it has its own advantage.\\n\\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n[Authors' reply:] Serr\\u00e0, J., Sur\\u00eds, D., Miron, M. & Karatzoglou, A.. (2018). Overcoming Catastrophic Forgetting with Hard Attention to the Task and other pruning based papers. Thanks for pointing out. We have cited and will compare to them in the future. One thing to note is that, in these works, the model needs to know which task ID it is currently dealing with, and thus can turn on the pruning procedure for the next session. This can potentially be a limitation for dynamic incoming tasks.\\n\\n[Reviewer's reply:] Comparing with HAT paper (Serr\\u00e0 et al. 2018) is really easy using their provided code and is one of the strongest baselines in continual leaning literature. They do NOT do any pruning. Their approach simply learns an attention mask which regularizes weights and prevent changes on them without using any memory. Regarding the task number, this is indeed not an issue for your approach with 2 tasks.\\n\\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\n[Authors' reply:] Ebrahimi, Sayna, et al. \\\"Uncertainty-guided Continual Learning with Bayesian Neural Networks.\\\" has not been published in a conference venue, and it may be too early to compare with it. \\n\\n[Reviewer's reply:] I agree that this work is not published and hence can't be asked for comparison but I encourage authors to read it.\\n\\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"authors_are_neglecting_one_important_difference_between_meta_learning_and_continual_learning\": \"in MAML and REP, it is assumed that we have access to ALL tasks distributions from which we sample from in the beginning (look at page 3, algorithm 1, line 3). This is in contrast to continual learning where one cannot even assume how many tasks will be given. Moreover, the computational expense of this work which causes performing more than 2 tasks to be a future work is also not acceptable when there are significantly cheaper and are able to do a lot more than 2. (In all the references I mentioned, the length of the sequence in experiments is at least 5.)\\n\\nWhile this work might be interesting to meta-learning community, I think it is far from being introduced as a method that prevents catastrophic forgetting and hence be included in the CL literature. Therefore, I intend to keep my score as reject.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper explores learning without forgetting / the online learning setting. They employ a novel meta-learned learning algorithm to this end.\", \"writing\": \"For the most part the writing was clear and easy to follow. There where a couple typos on the top of page 2 that should be fixed.\", \"motivation\": \"The motivation for wanting meta-learning as well as various algorithmic choices are clear. \\nThe one piece of motivation I did not fully understand is why not forgetting on the feature space is so important. My understanding of the method is that it should be applicable in both settings (with and without relearning the last layer). Infact, I would expect the difference between the meta-learned method and the baselines to only increase in this setting.\\n\\nI find the distillation based learning to be a clever alternative to the computationally heavy optimizing over past performance.\", \"experiments\": \"This work provides a nice build up of experiments.\\nExperiment 1 demonstrates the principles. In my opinion you should caution the reader given the meta-train, meta-test split. D_{B_1} and D_{B_2} are the same distribution and thus it will be easy for the learned update rule to memorize features. Given your learned update rule \\narchitecture I doubt this will be the case though. I believe the authors are aware of this though as this issue is addressed in experiments 2 and 3.\\n\\nPlease include what the error bars are over in the captions.\\n\\nExperiments 2 and 3 are interesting and demonstrate the method on a more realistic setting. From the details it seems like this was difficult to get to work -- needing a complex schedule for example. Further elaboration or study of these details (e.g. ablations) would help the field. Also please include what the +- is for the experiments in table 1.\\n\\nFigure 6 is not referenced in the text. It was also difficult for me to understand though I finally got it.\\n\\nOverall, I believe the baselines could be made considerably stronger. Meta-learning expends considerable compute to find a good learned update rule. Spending similar amounts of compute tuning the baselines would be appreciated. Second, the meta-learned update rule presented here is essentially a learned optimizer and thus considerably more powerful than SGD. What optimizers did you use for LwF and EWC? Where the hyper parameters tuned here in an attempt to use similar compute? Where there learning rate schedules also tuned? \\n\\nQuestions / concerns:\\n \\nCost of running this not discussed. I would expect that both meta-training, and training are considerably more expensive. I am curious in particular \\n\\nOne motivation for meta-learning update rules in this way is that this cost can be amortized ahead of time and the learned update rule can transfer to new very different tasks. Without transfer like this, however, it's unclear if a method such as this is useful in general. Some discussion to this end I think would be helpful. I am not docking this work for not doing this type of generalization work though as we must start someplace and meta-training on similar data distributions is a logical place to do so.\\n\\nI am unclear as to your exact meta-training setup from algorithm 1. 
Does your meta-gradient (DL/dtheta) get computed every inner iteration (iteration of t)? If so how many steps do you back prop through? As of now it looks like your only backpropping a single iteration / application of f. Second, when computing this meta-gradient do you compute the true derivative or a first order approximation common in other work?\", \"overall\": \"I would recommend this paper for acceptance as it presents an interesting approach to solving the catastrophic forgetting issue with a compelling set of diverse experiments.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"### Summary\\n\\n- The paper demonstrates that neural networks that appear to have forgotten an old task still contain useful information of that task in their representation layers. \\n- The paper proposes to meta-learn an update rule (parameterized by an LSTM) that acts as a gating mechanism (or plasticity) for each learnable parameter at meta-test time. \\n- To meta-learn the update rule, the paper proposes minimizing the difference between representations of a teacher and student neural network. The teacher neural network learns from a batch of data sampled IID from the distribution of the complete dataset whereas the student neural network samples a batch only from the current task.\\n\\n### Decision with reasons \\n\\nI vote for rejecting the paper.\\n\\n1- The claim that neural networks forget mostly due to a miscalibration of the output layer is not well supported empirically (The drop in readout accuracy in Figure 1 is still significant). If the claim is only to the extent that the drop in readout accuracy is slower than original accuracy, then it's not interesting or new. (This is what I believed in before reading the paper as well). \\n\\n2- While the underlying idea in the paper for learning an update rule is promising and sound, the paper is missing baselines that also use the meta-training dataset in some way. Moreover, a meta-learned update rule is only useful if it can discover some general underlying learning principles. In this paper, the meta-train and meta-test settings are too similar to see if that is the case. \\n\\n\\n### Supporting arguments for the reasons for the decision.\\n\\n1- The paper claims that catastrophic forgetting in a neural network is partly due to miscalibration of the last layer, and the representation layer of the neural network still contain useful information. However, the only supporting evidence for this claim is that readout accuracy does not drop as quickly as the original accuracy (Figure 1). \\n\\nFirst, the drop in readout accuracy is still significant to term forgetting 'catastrophic.' Secondly, figure 1 only report results after 300 steps. A more interesting question is the difference between the accuracies when the network has been trained on Task B till convergence. Secondly, it is important to report the read-out accuracy for task A on a random Neural Network of the same architecture to see if the Neural network is maintaining information in the representation layer (as the authors claim), or if a linear classifier on a random CNN is just a strong baseline (Shown to be a strong baseline in many recent papers. One example is Anand et.al 2019 [1])\\n\\n2- The motivation behind meta-learning an update rule is to discover underlying learning principles that generalize to new settings. Metz et. al. 2019, for example, showed that their learned update rule could be applied to networks with different architecture, non-linearities, and datasets (They went as far as showing it worked on different data modalities.)\\n\\nAll the results in this paper, however, are for a fixed architecture (The authors do look at generalization to unseen classes, but we care about generalization to arbitrary architectures/problems when meta-learning an update rule). 
The data at meta-train and meta-test time are also very similar (Different parts of the same dataset). The empirical results, consequently, are not very convincing. Moreover, by reading between the lines, it can be inferred that the learned update rule is very finicky. For instance, to generalize just to unseen initializations, the authors had to use 100 different initializations at meta-training time. That does not instill a lot of confidence in me about the stability of the learned update rule. \\n\\nFinally, the paper proposes the complex student-teacher learning paradigm while skipping a simple baseline: training on the student model by using data from Task B in the support set and using data from A and B in the query set during meta-training. A similar procedure was proposed by Javed and White 2019 [3]. (Note that the current baselines in the paper do not use the meta-training data at all which makes the comparison extremely unfair. Moreover, even a simple baseline such as LwF that does not use meta-training performs almost as well (See Table 1).) \\n\\n### Additional evidence that can change my evaluation\\n \\n1- Showing that the meta-learned update rule can be applied to different architectures/non-linearities/datasets (Train on one dataset, test on another). \\n\\n### Minor comments that did not play a part in my decision, but should be addressed nonetheless. \\n\\nThe paper should cite the classic paper by Yoshua. et.al (1991) which proposed the idea of meta-learning an update rule [2]. \\n\\nThe paper, in its current form, needs to be proofread and reorganized. There are many errors in the grammar (For example just in the first paragraph, New borns -> Newborns, a same -> the same (or 'a distribution')). I find passing my writing through the free version of Grammarly very helpful in getting rid of most such errors. \\n\\nThe organization of the paper is also not very clear. For example, the third paragraph in \\\"Related Work\\\" is about the method proposed in the paper whereas the second and fourth are about related work. \\n\\nThe writing is also occasionally ambigious. For instance: \\n\\n\\\"In human language acquisition, it is found that children who lost their first language maintain similar brain activation to bilingual speakers (Pierce et al., 2014).\\n\\nInspired by this fact, we propose a novel meta-learning algorithm that tries to mimic a multi-task teacher network\\u2019s representation, an offline oracle in our sequential learning setup, since multi-task learning has simultaneous access to all tasks whereas our sequential learning algorithm only has access to one task at a time.\\\"\\n\\nIt is not clear how the method in the second paragraph is inspired from the statement in the first paragraph. \\n\\nI did not take writing quality in account when giving my score because openreview allows updating the paper during the review process. I hope that authors would fix these issues during the writing process. \\n\\nOn an unrelated note, the figures in the paper are well made and clear. It is possible to understand the proposed methodology just from the figures. \\n\\n[1] Unsupervised State Representation Learning in Atari https://arxiv.org/abs/1906.08226\\n\\n[2] Learning a Synaptic Learning Rule https://mila.quebec/wp-content/uploads/2019/08/bengio_1991_ijcnn.pdf\\n\\n[3] Meta-Learning Representations for Continual Learning https://arxiv.org/abs/1905.12588\\n\\n\\n#### UPDATE\", \"i_gave_the_paper_a_3_for_the_following_reasons\": \"1. 
The baselines do not use the meta-training dataset at all. This makes the comparisons unfair (Is the update rule learning some general learning principles or learning or induce good representations for the meta-training dataset?) \\n\\n2. The meta-train and meta-test settings are too similar. Learning an update rule only makes sense if we can discover some underlying learning principles. If the update rule is tied to a data-distribution, it is not extremely useful.\\n\\nIn the public discussion phase, the authors addressed both of my concerns. They added baselines which uses the meta-training dataset for learning a representation, and they added an experiment in which meta-testing is done on a different dataset. However: \\n\\n1. The baselines perform very close to the proposed method at a fraction of meta-training cost. Rep*, even before doing any steps on Task B, results in better average performance than the method proposed in the paper. Moreover, the authors do not combine these baselines with existing methods to mitigate interference (Such as LwF) which can easily be done (and would probably increase the performance of the baseline noticeably)\\n\\n2. The meta-training and meta-testing datasets are still too similar in the new experiment (A downscaled TinyImagenet is very similar to CIFAR). Even though the results are more promising given the added experiment, I don't think they answer if the LSTM is learning some general learning principle or some task-specific heuristic for performing slightly better. \\n\\nR2's review (and the response to the review) also highlights an important problem -- the authors didn't tune the optimizers used by the baselines on the meta-training dataset. I suspect that a well-tuned adaptative optimizer (Like Adam) would reduce the gap between baselines and the author's method significantly. \\n\\nIs hundreds-of-hours of GPU compute worth negligible (or no) performance improvement in a very restricted setting (Since the learning rule doesn't seem to generalize based on existing results)? I'm inclined to say that it is not. As a result, I'm keeping my initial score. \\n\\nI do encourage the authors to investigate the proposed method more (It is a reasonable method) and try to empirically demonstrate that the LSTM is discovering some general learning principle.\"}"
]
} |
ryxK0JBtPr | Gradient $\ell_1$ Regularization for Quantization Robustness | [
"Milad Alizadeh",
"Arash Behboodi",
"Mart van Baalen",
"Christos Louizos",
"Tijmen Blankevoort",
"Max Welling"
] | We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on-demand to different bit-widths as the energy and memory requirements of the application change. Unlike quantization-aware training using the straight-through estimator, which only targets a specific bit-width and requires access to the training data and pipeline, our regularization-based method paves the way for ``on the fly'' post-training quantization to various bit-widths. We show that by modeling quantization as an $\ell_\infty$-bounded perturbation, the first-order term in the loss expansion can be regularized using the $\ell_1$-norm of the gradients. We experimentally validate our method on different vision architectures on the CIFAR-10 and ImageNet datasets and show that regularizing a neural network using our method improves robustness against quantization noise. | [
"quantization",
"regularization",
"robustness",
"gradient regularization"
] | Accept (Poster) | https://openreview.net/pdf?id=ryxK0JBtPr | https://openreview.net/forum?id=ryxK0JBtPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"D6996H75Y",
"r1evGIc2jr",
"r1lcj_tDor",
"SJgXpIKwsB",
"ryxSorYvjH",
"r1gvs0imcr",
"r1eFtky0tr",
"HyxmIYPEYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738783,
1573852686966,
1573521569791,
1573521082868,
1573520796575,
1572220575014,
1571839872714,
1571219787185
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2033/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2033/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2033/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2033/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2033/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2033/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2033/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Reviewers uniformly suggest acceptance. Please take their comments into account in the camera-ready. Congratulations!\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Follow-up Response to Review #3\", \"comment\": \"Given the reviewer's comment on deeper models we have started experimenting with MobileNet-v2 on ImageNet as well. While not necessarily a deeper architecture, MobileNet has a more demanding backward path and could be a good test for our proposed regularization. Our initial experiments do show promising results of 50.2% top-1 accuracy in the (8-bit weights, 4-bit activations) configuration as opposed to 0.07% for vanilla post-training quantization, and 59% for fine-tuning using STE. However, given that MobileNet architecture is very sensitive to choices such as per-channel quantization vs. per-layer quantization, and BatchNorm folding, we would like to run more extensive tests before including results in the final revision.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We would like to thank the reviewer for the constructive comments.\\n\\nYour question about capital N is fair. N, in this case, is (roughly) the number of elements w.r.t. which we are computing the gradient (e.g. weights in the case of regular backprop). The point we were trying to make is that, while this is a second-order method, we do not need to compute the full Hessian w.r.t. the weights, which would have O(N^2) time and space complexity in the number of weights.\", \"to_be_more_exact\": \"auto-differentiation [1] of a function \\\"f: R^n --> R^m\\\", where \\\"f\\\" contains \\\"E\\\" elementary operations, requires O(m x C x E) time, where \\\"C\\\" is a fixed constant. The gradient_L1 penalty is a function \\\"p: R^N --> R\\\", where N is the number of elements in the gradient. This function contains O(N) elementary operations to compute the L1 norm. The function to compute the loss gradient contains O(N) elementary operations as well, one for every node in the original forward computation graph. Thus, from the formula above, the complexity of computing the gradient w.r.t. the gradient L1 norm is O(2xCxN). Since 2xC is a constant that does not depend on the input, the complexity is O(N). We have updated the paper (Section 4.1) to make all of this clearer and provide more details on the complexity of the algorithm.\\n\\nWe have also added the new \\\"Appendix E\\\" to provide justification for enabling the regularization only in the final stages of the training. The appendix depicts the progression of the regularization objective in unregularized networks. We show that the regularization loss becomes smaller (up to a point) during training with no regularization and therefore we can apply the regularization when the regularization loss has plateaued and is oscillating. We have also added wall-time timing measurements of the overhead in Section 4.1of the draft.\\n\\nWith regards to performance at (4,4) bits and comparison to STE you are absolutely right that this is related to the strength of the regularization. As we discuss in our reply to Review #1 our main criteria for choosing lambda was maintaining the accuracy of the unquantized model. We have now run more experiments with larger values for lambda and that indeed results in improved performance in the (4,4) case, albeit at the cost of overall lower accuracy across all bit-width configurations. We have updated the paper to include this result (Table 2, Section 4.2).\\n\\n[1] Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. \\\"Automatic differentiation in machine learning: a survey.\\\", Journal of machine learning research\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We would like to thank the reviewer for the careful review and useful comments.\\n\\nRegarding the comparison to quantization-aware training, we would like to emphasize that our proposed method is not meant to serve as a direct substitute for quantization aware fine-tuning. As the reviewer rightly pointed out quantization-aware training/fine-tuning can often achieve better results for the specific target bit-width. Having said that, we believe there are interesting practical applications where quantization-robust models are more appropriate. For example, we can consider the task of using a neural network as part of a mobile application. In such cases, one might be interested in automatically constraining the computational complexity of the network such that it conforms to specific battery consumption requirements, e.g. using a 4-bit variant when the battery is less than 20\\\\% but the full precision model when the battery is over 80\\\\%. In these cases, we can quantize to a specific bit-width on-the-fly without worrying about fine-tuning and without having to store multiple (potentially large) quantized models on device. Another challenge with post-training fine-tuning of models is access to the training data which can be challenging in some scenarios e.g. due to GDPR regularizations. We have uploaded an updated version of the paper to clarify such potential use-cases and applications.\\n\\nRegarding quantization schemes other than uniform symmetric quantization, it should be noted that our proposed method works equally well for asymmetric quantization schemes. Our theoretical derivations hold as long as the quantization noise has bounded support, even if it is not uniformly distributed. We have revised our text in Section 2.3 and the supplementary materials to reflect this issue. We have also moved some of the discussion in Section 2.3 to the supplementary materials as suggested by the reviewer. Non-uniform quantization schemes are currently less hardware-friendly and have limited applicability. Therefore, the focus of our research has been on uniform quantization schemes. We have updated Appendix A to discuss non-uniform cases. Our analysis holds for certain situations in which non-uniform quantization is used. However, a general answer to these questions will require more research and is left for future work.\\n\\nThe reviewer's comment on the hyper-parameter selection is a fair point and should have been made clearer in the paper. Our criteria for choosing $\\\\lambda$ was: the highest value of $\\\\lambda$ within our search space that does not affect the accuracy of the \\\\emph{unquantized} model, i.e. we did not want regularization to cause any degradation in the accuracy of the model in the normal mode, while maximally regularizing the model. We did not do any quantization for validation or hyper-parameter tuning. Furthermore, as discussed in Section 4.1 we only apply regularization in the final stages of the training (we have now updated the paper with the new Appendix E to include evidence and justification for this), however, we do track the regularization term during the training. This enabled us to have a rough estimate of the scale of regularization term with respect to the cross-entropy term. We then performed the grid search over a few points above and below the scale value that would bring regularization term to the same level as the cross-entropy. 
We have now updated Section 4.1 to make this clearer.\\n\\nLastly, since the initial submission of the paper, we have been running more experiments with different values of lambda. One motivation for these experiments was to see if we can recover the (4,4) performance in ResNet-18 on ImageNet. We have updated the Table 2 to include this additional result for the same architecture but with larger lambda. It shows that larger values for lambda do indeed allow much better performance at (4,4) but at the cost of overall accuracy degradation across all quantization targets.\\n\\nRe. the notation: This is a good suggestion. Our updated draft uses adapts the suggestion notation.\\n\\nRe. the redundant section: We have incorporated your comment by moving most of Section 2.3 into an appendix.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We would like to thank the reviewer for the encouraging words and comments.\\n\\nWhile sparsity of the gradients is not something that we explicitly target, it is indeed a by-product of the objective. Intuitively our regularization imposes a constraint on the model that encourages it to be insensitive to bounded perturbations, in the first order sense. It should be noted that we recover weight sparsity when we adopt a linear model, as the L1 norm of the gradient is equivalent to the L1 norm of the weights.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper shows that if we add L1 regularization on gradient in the training phase, the obtained model can achieve better post-training quantization performance since it is more robust to Linf perturbation on the weights. I like the intuition of the paper but there are several weaknesses:\\n\\n1. The main concern is that the proposed method cannot outperform quantization-aware fine-tuning. This probably limits the application of the method --- it will only be used when there's not enough time budget for quantization-aware fine tuning for each specific choice of #bits. It will be good if the authors can discuss in what practical scenario their algorithm can be applied. \\n\\n2. The method is only tested under uniform symmetric quantization. I believe to demonstrate that the L1 regularized models are indeed easier to be quantized, we need to test it on several different kinds of quantizations. \\n\\n3. I have concerns about the hyper-parameter selection for lambda. The authors mentioned that lambda is chosen by grid-search, but what's the grid search criteria? In other words, are the hyper-parameters trying to minimize the validation error of the \\\"unquantized model\\\", or they are minimizing the validation error of the \\\"post-quantized model\\\"? \\n\\n4. Some minor suggestions: \\n\\n- The current paper uses boldfaced n as perturbation which is quite confusing (since small n is the dimension). I would suggest to replace it by something else, e.g, \\\\Delta. \\n\\n- Section 2.3 seems redundant. It's clearly that L1 regularization is better given it's the dual norm of Linf, so clearly it's better than L2 norm. You have proved L2 is not good anyway in experiments. \\n\\n===========\\n\\nAfter seeing the rebuttal, my concerns about the parameters have been well addressed. Also, I agree with the authors that there are use cases for post quantization, and personally I think post quantization is much easier to do in practice than quantization-aware training. However, this is quite subjective so the fact that the proposed method doesn't outperform quantization-aware training is still a weakness of the paper. \\n\\nI would like to slightly raise the score to borderline/weak-accept. I hope the authors can have some experiments on non-uniform quantization if the paper is being accepted; I really think that will demonstrate the strength of the method. People will likely to use this method if it can consistently improve many different kinds of post quantization.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper models the quantization errors of weights and activations as additive l_inf bounded perturbations and uses first-order approximation of loss function to derive a gradient norm penalty regularization that encourage the network's robustness to any bit-width quantization. The authors claim that this method is better than previous quantization-aware methods because those methods are dedicated to one specific quantization configuration.\\n\\nThe derivation of the proposed method is not complex but I like the idea that models quantization error as additive perturbation in this context and how it eventually connects with gradient penalty that's widely used in GAN training and adversarial robustness.\", \"questions\": \"1. What is the capital N in the time complexity of gradient computation in Sec. 4.1? The authors should discuss in details the time complexity of the proposed regularization well because this is an essential problem of the regularization, which involves double back-propagation and should be computationally heavy. For the same reason, I'd like to see the training time comparison, and more results with deeper networks.\\n\\n2. Compared to STE, one of the quantization-aware methods, the proposed method is not very competitive even in the setting when a STE network, which is specially trained for 6,6 bits but quantized to 4,4 bits, can outperforms the proposed method. This contradicts with the claimed strength of the proposed method. Will it be better when we regularize more, if we want the model to perform well when quantized to 4,4 bits? It would be better if there is a set of experiments of different regularization hyperparameters.\\n\\n***********************\", \"update\": \"I'd like to keep my score after reading the authors' response to all reviewers. I think the authors do address some questions but the paper still has some weakness in terms of performance.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: the authors propose a regularization scheme that is applied during regular training to ease the pose-training quantization of a network. Modeling the quantization noise as an additive perturbation bounded in \\\\ell_\\\\inf norm, they bound from above the first-order term of the perturbations applied to the network by the \\\\ell_1 norm of the gradients. Their claims are also supported by experiments and qualitative illustrations.\", \"strengths_of_the_paper\": [\"The paper is clearly written and easy to follow. In particular, section 2.1 clearly motivates the formulation of the regularization term from a theoretical point of view (reminiscent of the formulation of adversarial examples) and Figures 1 and 2 motivate the regularization term from a practical point of view. I found Figure 5 particularly enlightening (the regularization term \\\"expands\\\" the decision cells).\", \"The method is clearly positioned with respect to previous work (in particular using \\\\ell_2 regularization of the gradients)\", \"Experiments demonstrate the effectiveness fo the method.\"], \"weaknesses_of_the_paper\": [\"The link between the proposed objective and the sparsity could be made clearer: does this objective enforce sparsity of the gradients, the weights, and how does this affect training?\"], \"justification_of_rating\": \"The paper clearly presents a regularization method to improve post-training quantization. The approach is motivated both from a theoretical point of view and from a practical point of view. The latter aspect is of particular interest for the community. The claims are validated by a limited set of experiments that are seem nonetheless well executed.\"}"
]
} |
rJxt0JHKvS | Coloring graph neural networks for node disambiguation | [
"George Dasoulas",
"Ludovic Dos Santos",
"Kevin Scaman",
"Aladin Virmaux"
] | In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows us to extend well-chosen neural networks into universal representations. Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets. | [
"Graph neural networks",
"separability",
"node disambiguation",
"universal approximation",
"representation learning"
] | Reject | https://openreview.net/pdf?id=rJxt0JHKvS | https://openreview.net/forum?id=rJxt0JHKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"zR_yRwryMQ",
"SygeSfGuoB",
"H1ltBTpZiH",
"HkgBzaabiB",
"S1x72spZjH",
"BJlp8spWsS",
"Bye8QHt0tH",
"rylnm81RKr",
"SJxoqd7TtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738754,
1573556792161,
1573145920651,
1573145868957,
1573145514947,
1573145429480,
1571882270197,
1571841571660,
1571793042715
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2032/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2032/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2032/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2032/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2032/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2032/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2032/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2032/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents an extension of MPNN which leverages the random color augmentation to improve the representation power of MPNN. The experimental results shows the effectiveness of colorization. A majority of the reviewers were particularly concerned about lacking permutation invariance in the approach as well as the large variance issue in practice, and their opinion stays the same after the rebuttal. The reviewers unanimously expressed their concerns on the large variance issue during the discussion period. Overall, the reviewers believe that the authors has not addressed their concerns sufficiently.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Comment on the new submission\", \"comment\": \"Dear reviewers, as discussed in our answers to your comments, we have updated the submission with the following main changes:\\n\\n1. Section 6 was replaced by a short comment in Section 5.\\n\\n2. The main document is now 8 pages long.\\n\\n3. We included new experiments on the Circular skip links problem of [1,2], and significantly outperform their algorithms (CLIP obtains a (max,min) accuracy over 20 runs of (98.7, 76.0) compared to (80, 10) for Ring-GNN and (53.3, 10) for RP-GIN).\\n\\n4. We provide an ablation study on the benchmark graph classification datasets by providing the results of CLIP with the coloring mechanism (named 0-CLIP), as well as results for a varying number of colorings in the appendix.\\n\\n[1] Relational Pooling for Graph Representations, Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak Rao, Bruno Ribeiro, ICML 2019\\n\\n[2] Chen, Z., Villar, S., Chen, L. and Bruna, J., 2019. On the equivalence between graph isomorphism testing and function approximation with GNNs. NeurIPS 2019, arXiv preprint arXiv:1905.12560.\"}",
"{\"title\": \"Review answer (2)\", \"comment\": \"8. For the real-world datasets, we examined values of the parameter $k$ in $\\\\{1,2,4,8,16\\\\}$. Indeed, we omitted to clarify which value of $k$ we use in Table 1, as it was set as a hyper-parametrization result and the best accuracy results were not achieved for the same value of $k$.\\nOn real world datasets the number of colors did not have a large impact on the accuracy (as for other competitors such as RP-GIN), and we thus decided not to report these results to gain a little extra space. However, it is true that this is an important information for readers, and we will add a short discussion to the experimental section as well as the results of 0-CLIP (without any coloring), 1-CLIP and 16-CLIP to the appendix. Moreover, we will also add another experiment used by a recent related paper [4,5] to assess the quality of universal graph representations (see point 5.).\\n\\nWe use bold in order to better clarify the statistically insignificant difference between the different algorithms used in the benchmark. We use the same standard statistical test than the one used for example in [2].\\n\\n9. We took the standard definition from complexity theory [1]: a function f is said to be of exponential growth if log(f) is of polynomial growth.\\n\\n[1] Computational Complexity: A Modern Approach, Papadimitriou\\n[2] How powerful are graph neural networks, Xu et al., ICLR 2019, https://arxiv.org/pdf/1810.00826.pdf\\n[3] Invariant and Equivariant Graph Networks, Haggai Maron, Heli Ben-Hamu, Nadav Shamir, Yaron Lipman, ICLR 2019, https://openreview.net/forum?id=Syx72jC9tm\\n[4] Chen, Z., Villar, S., Chen, L. and Bruna, J., 2019. On the equivalence between graph isomorphism testing and function approximation with GNNs. arXiv preprint arXiv:1905.12560\\n[5] Relational Pooling for Graph Representations, Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak Rao, Bruno Ribeiro, ICML 2019\"}",
"{\"title\": \"Review answer (1)\", \"comment\": \"Thank you very much for your review! We will update the paper in the next few days to address your concerns.\\n\\n1. In this work, we aim to give a complete and thorough study of universality in the context of GNNs. Hence the theoretical framework that allows a precise definition of CLIP is needed, also for pedagogical reasons. However, in the coming revised version we will give a better balanced version between the generalities and the clear description of our proposed algorithm. \\n\\n2. There are 3 steps in order to compute the graph representation from the node representations. Firstly, for each color $c$ among the $k$ colorings of the graph we compute a color dependent graph representation (which correspond to the sums, given $c$, in Eq. (5)). Secondly, to be color independent, we compute a vector from the $k$ previous ones by taking the coefficient-wise maximum among them. Finally, we use an MLP $\\\\psi$ that outputs the final graph representation $x_G$. We will make that clearer in the paper.\\nConcerning your second remark, indeed, as you correctly understood when $k=1$, the max operator becomes an identity and we append a random coloring to the node attributes vector. That does not mean that we add a random color to every node attribute, but that we assign only once a different color randomly for the nodes with identical attributes.\\n\\n3. We apologies for the strong claim of novelty in Section 6, and agree that most of the ideas of NeighborNet are already present in the literature. We thus decided to replace this section by a comment in Section 5 that shows that the universality of this known architecture for permutation invariant sets is a straightforward consequence of Corollary 1.\\nConcerning your second comment, $\\\\psi(x,y)$ indeed stands for the application of the function $\\\\psi$ to the concatenation of $x$ and $y$.\\n\\n4. We only kept one of the state of the art MPNN variants which is Graph Isomorphism Network (GIN) [2]. It has superior performance among other standard variants of MPNNs.\\nLooking at the experiments, the large variances are not specific to our algorithm. However, our algorithm is the only one consistent across all datasets and, contrary to GIN, we are statistically better than WL on 2 out of 3 datasets (none for the state of the art MPNN can achieve this).\\n\\n5. Thank you very much for pointing out this missing reference we were not aware of (it will be published at NeurIPS 2019). This work shares some similarities with ours and we are definitely going to cite and discuss it in our article.\\nThey introduce Ring-GNN wich is an equivariant neural network method based on [3]. While Ring-GNN has more expressive power than WL-1, one cannot make it straightforwardly a universal GNN.\\nNote that CLIP also only uses 2-tensors (and inf-CLIP reaches universality).\\n\\nWe will add the benchmark on Circular Skip Links to compare with our method as it is done in [4,5]. Looking at their reported experiments, early results show that CLIP significantly outperforms both RP-GIN and Ring-GNN. We will introduce this benchmark in the soon revised version of our paper.\\n\\n6. k-CLIP has universal representation theoretically for k large enough. In section 5.3 we give a precise bound.\\n\\n7. Thank you very much for pointing out these unclear notations.\\nIndeed in 5.1 there may be some confusion as we first use 'c' in an example before using it more generally in Eq. (4). 
The chosen set of color $C = \\\\{c_1, ..., c_n\\\\}$ is ordered in all possible ways for the purpose of CLIP as stated in Eq. (4).\\nIn Eq. (8), $x$ is a vector and S a set of vectors. In the case of CLIP, x will be a feature vector of a node and S the set of feature vectors of its neighbors.\\n\\n[2] How powerful are graph neural networks, Xu et al., ICLR 2019, https://arxiv.org/pdf/1810.00826.pdf\\n[3] Invariant and Equivariant Graph Networks, Haggai Maron, Heli Ben-Hamu, Nadav Shamir, Yaron Lipman, ICLR 2019, https://openreview.net/forum?id=Syx72jC9tm\\n[4] Chen, Z., Villar, S., Chen, L. and Bruna, J., 2019. On the equivalence between graph isomorphism testing and function approximation with GNNs. arXiv preprint arXiv:1905.12560\\n[5] Relational Pooling for Graph Representations, Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak Rao, Bruno Ribeiro, ICML 2019\"}",
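A minimal sketch of the $k$-coloring readout described in points 2 and 7 above may help readers; it assumes nodes already carry attribute vectors and that an MPNN maps a colored graph to a graph-level vector. The one-hot color encoding, the grouping of identical attributes, and all helper names are our illustrative assumptions, not the authors' implementation.

```python
# Sketch of CLIP's readout: for each of k random colorings, nodes with identical
# attributes receive distinct one-hot colors; an MPNN produces one graph vector
# per coloring; a coefficient-wise max makes the result coloring-independent.
import torch

def clip_readout(node_feats, groups, mpnn, k=4):
    # node_feats: [n, d]; groups: index tensors over nodes sharing identical attributes
    n = node_feats.size(0)
    num_colors = max(len(g) for g in groups)  # enough colors for the largest group
    graph_vecs = []
    for _ in range(k):
        colors = torch.zeros(n, num_colors)
        for g in groups:
            colors[g, torch.randperm(len(g))] = 1.0  # one distinct color per clashing node
        graph_vecs.append(mpnn(torch.cat([node_feats, colors], dim=1)))
    return torch.stack(graph_vecs).max(dim=0).values  # coefficient-wise max over colorings
```

With k=1 the max is an identity, and the scheme reduces to appending a single random coloring to the node attributes, as discussed in point 2.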
"{\"title\": \"Review answer\", \"comment\": \"Thank you very much for your review! We will update the paper in the next few days to address your concerns.\\n\\n1. The universal representation and isomorphism problem are indeed closely related (Reviewer #1 pointed out [1] which may be of interest for you). More precisely, as stated in Proposition 1, a universal representation of a graph G is able to separate points in the domain space (here a space of graphs). K-regular graphs are a simple example of a class of graphs that cannot be distinguished using classical MPNNs (there are many non-isomorphic k-regular graphs, see e.g. [2]). The property testing example from Section 7.2 highlights this point.\\nOur paper shows in particular that inf-CLIP is able to separate, i.e. distinguish, all graphs up to a permutation (i.e. non-isomorphic) and so is our relaxed 1-CLIP algorithm in expectation.\\n\\n2. The source of randomness of 1-CLIP comes from the fact that we add a single color to every node, the color being chosen randomly. It is thus similar to the addition of noise to every node attribute of the graph. Theorem 3 states that 1-CLIP is universal in expectation, although its variance may be large in practice. However, our experiments in Section 7 indicate that the variance remains relatively small, while being more expressive than classical MPNNs (see Section 7.2 and the property testing task).\\n\\n3. In our experiments on real datasets, we think that the variance of the 10-fold cross validation accuracy is probably larger than the improvement due to an increase in colors. There was thus no visible improvement to using more colors, and we decided not to display these results in the paper to gain a little extra space. We will add the results of 0-CLIP (without any coloring), 1-CLIP and 16-CLIP to the appendix.\\n\\n4. Thank you for pointing out these conflicting notations, we will take care of this in the updated version of the paper.\\n\\n[1] Chen, Z., Villar, S., Chen, L. and Bruna, J., 2019. On the equivalence between graph isomorphism testing and function approximation with GNNs. arXiv preprint arXiv:1905.12560.\\n[2] https://oeis.org/A051031\"}",
"{\"title\": \"Review answer\", \"comment\": \"Thank you very much for your review! We will update the paper in the next few days to address your concerns.\\n\\n1. While k-CLIP is indeed not, strictly speaking, permutation invariant due to its randomness, note that its probability distribution is permutation invariant. Hence, k-CLIP can be seen as a permutation invariant representation with added (unbiased) noise. A discussed in Remark 1, the variance of k-CLIP may be reduced (and thus come closer to a deterministic representation) by averaging over multiple independent samples of the representation. Of course, this incurs an additional computation cost, and experimental evaluations suggest that the noise of k-CLIP remains sufficiently small in practice.\\nThe difficult question of finding deterministic representations that are both computationally tractable and provably universal is very interesting and left for future work. However, we discuss at the end of Section 5.4 the fact that such a universal and tractable graph representation may not exist, as it would also solve the graph isomorphism problem in polynomial time: a notoriously difficult problem in graph theory.\\n\\n2. In the experiments on real graph classification datasets, the number of colors did not have a large impact on the accuracy (as for other competitors such as RP-GIN), and we thus decided not to report these results to gain a little extra space. However, it is true that this is an important information for readers, and we will add a short discussion to the experimental section as well as the results of 0-CLIP (without any coloring), 1-CLIP and 16-CLIP to the appendix. Moreover, we will also add another experiment used by a recent related paper [1] to assess the quality of universal graph representations.\\n\\n3. The main novelty of this paper is to provide a theoretical analysis of universality for graph neural networks. The proposed algorithm in Section 5 is a direct application of the theory of separable neural networks (a novel concept defined in Section 3.3) that we develop in Section 3 and 4 as well as in the supplementary material. We give another use case of our theoretical framework in Section 6 for permutation invariant sets (this section will be removed). To the best of our knowledge, Theorem 2 with this level of generality is a novel result and does not appear in the GNN literature (as well as the following propositions and corollaries).\\nOur experiments on graph classification and property testing benchmarks show that, unlike most deep learning SOTA algorithms, our method is able to reach state of the art results in these two relatively different tasks.\\n\\n4. We apologies for the additional page, and decided to replace Section 6 by a remark in Section 5 in order to meet the eight page limit. This section was, as correctly noted by Reviewer #1, too long and mostly containing already known concepts.\\n\\n[1] Chen, Z., Villar, S., Chen, L. and Bruna, J., 2019. On the equivalence between graph isomorphism testing and function approximation with GNNs. arXiv preprint arXiv:1905.12560.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an extension of MPNN which leverages the random color augmentation to improve the representation power of MPNN. Authors also prove that two variants of the proposed method have universal representation power (one is exact and the other holds in expectation) from the separability perspective. Experiments on some small graph benchmark datasets and structural property tests are reported.\\n\\nOverall, the paper seems to make a good contribution on advocating a new perspective of representation power of GNNs, i.e., separability, and proposes a variant to empirically improve representation power. However, I do have quite a few concerns listed as below which impedes my understanding and prevents me from giving a high score.\", \"pros\": \"1, The separability perspective of representation power seems novel.\\n\\n2, The coloring based method is interesting and simple to implement.\\n\\n3, The graph property test experiments are good testbeds to verify the representation power of various GNNs.\\n\\nCons & Questions:\\n\\n1, The overall paper seems lack of focus in a sense that section 3 and 4 discuss too much on general universality whereas the main contribution, i.e., section 5 is not explained clearly.\\n\\n2, If I understood correctly, the max operator in Eq. (5) only aggregates the \\u201ccolored\\u201d representation within the group of nodes which shared the same attributes. How do you further get the representation of the whole graph? When k=1, the max operator in Eq. (5) becomes identity, wouldn\\u2019t 1-CLIP method be equivalent to augmenting random color as extra node features to GNNs? \\n\\n3, The whole section 6 is just a very common GNN aggregation operator, I do not understand why authors claim it as \\u201ca novel universal neighborhood representation\\u201d. Also, the notation in Eq. (8) is not rigorous, what do you exactly mean by psi(x, y) as an MLP? Do you mean concatenating x and y as an input to MLP?\\n\\n4, The experimental results on the benchmark datasets are less impressive as the mean performances are close to the WL-test results and the variances are considerably large. Moreover, why is the MPNN baseline missing, not mentioning other state-of-the-art GNNs? Same GNN baselines are missing in the structural property tests as well.\\n\\n5, A closely relevant reference [1] is missing. The equivalence between universal approximation and graph isomorphism testing is studied in [1]. I think it is necessary to discuss the relationship. A comparison with [1] both theoretically and empirically would be make the paper more convincing.\\n\\n6, Since k-CLIP with some k such that 1 < k < infinity achieves the best performance in the experiments, does k-CLIP still have universal representation theoretically? \\n\\n7, Many notations are introduced without clear explanation. For example, what does lower-case c stand for? If it stands for the color per node, why does permutation appears in the definition of Eq. (4)? If I understood correctly, Eq. (4) is the set of all colorings which does not depend on permutation anyway. What does S refer to in Eq. (8)?\\n\\n8, What are variants of CLIP reported in Table 1? Are they 1-CLIP? 
Also, the multiple bold numbers in Table 1 are quite confusing. \\n\\n9, Wouldn't Eq. (6) indicate factorial growth rather than the claimed exponential one?\", \"typos\": \"CDNN in table 1 should be DCNN\\n\\n[1] Chen, Z., Villar, S., Chen, L. and Bruna, J., 2019. On the equivalence between graph isomorphism testing and function approximation with GNNs. arXiv preprint arXiv:1905.12560.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents an interesting work, called Colored Local Iterative Procedure (CLIP), to improve the expressive power of Message Passing Neural Networks (MPNNs). Considering the expressive power from the concept of universal representations, the authors introduced the concept of separability and combine the separable representation with MLP to achieve the universal representation for graphs. They then developed a coloring scheme to improve the MPNN, and obtained superior performance on benchmark graph classification datasets as well as in the graph property testing experiments. In general, I like the paper, but I have the following concerns:\\n\\nAlthough we can easily get the idea that universal representation is more expressive, however, I did feel a small conceptual gap between isomorphism test and universal representation. For example, in Section 4.2, when the authors talked about the fact that MPNN is not expressive to construct isomorphism tests for a k-regular graph, it is expected to have a more explicit explanation of how universal representations can solve this and how it is connected to isomorphism test. It seems that there is no such explanation in the paper. \\n\\nI am not very clear about how 1-CLIP gets the randomness. To my understanding, 1-CLIP uses one color, so the identical node attributes still have the same node attributes after coloring, and it is essentially equivalent to just concatenating extra node features to an MPNN? It also does not change the expressive power of MPNN.\\n\\nIntuitively k-CLIP should be better than p-CLIP if k>p, and it is also demonstrated in the graph property testing experiment. However, why do the authors use k as a hyperparameter to select the best results in classical benchmark datasets? Does it say sometimes the smaller k can also get a better result? Why not also just show the results of 1-CLIP and 16-CLIP?\\n\\nIt seems $k$ has different meanings in different places of the paper. For example, $k$ in $C_k$ is different the $k$ in Eq. (4). Maybe it is better to use a different variable to avoid confusion.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a coloring scheme that can increase the expressive power of GCNs. Based on this coloring scheme, a colored local iterative procedure is built. Experimental studies are performed and demonstrate the effectiveness of the methods.\\n\\n1. A major concern for this method is the permutation invariant in coloring scheme. In this work, nodes in a group is colored randomly. This means the graph will change with different coloring patterns. In section 5.3, inf-CLIP is claimed to be permutation invariant. However, this property can not be guaranteed for a normal k.\\n\\n2. The experimental studies are weak. There should be some ablation studies to evaluate the effectiveness of the coloring scheme. In section 7.2, the ablation studies are performed on synthetic datasets. Why not use real data?\\n\\n3. This paper exceeds 8 pages which means higher requirements are needed. The novelty of this paper is incremental and not technically sound.\", \"suggestions\": \"Figure out ways to ensure permutation invariant would be a great plus.\"}"
]
} |
H1l_0JBYwS | Spectral Embedding of Regularized Block Models | [
"Nathan De Lara",
"Thomas Bonald"
] | Spectral embedding is a popular technique for the representation of graph data. Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix. Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers. We illustrate these results on both on both synthetic and real data, showing how regularization improves standard clustering scores. | [
"Spectral embedding",
"regularization",
"block models",
"clustering"
] | Accept (Spotlight) | https://openreview.net/pdf?id=H1l_0JBYwS | https://openreview.net/forum?id=H1l_0JBYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"FbV1U9mBU",
"BJgNvYBgiH",
"Bke7Sj6f9r",
"rJlOcA2CKS"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798738725,
1573046620123,
1572162362833,
1571896975988
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2031/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2031/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2031/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper proposes a nice and easy way to regularize spectral graph embeddings, and explains the effect through a nice set of experiments. Therefore, I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": [\"Thanks for your comments and suggestions.\", \"We have detailed the derivation of Eq (7) (see the revised version).\", \"Selecting good values for alpha is an interesting question, that is indeed not addressed in our paper. We simply recommend to use the relative value of alpha with respect to the total weight of the graph, as in our experiments. We have selected a representative range of magnitudes (0, 0.1, 1, 10) to illustrate the sensitivity of the results to this relative parameter.\", \"Our main result (Theorem 1) gives the structure of the spectral embedding, independently of the number of blocks. The computation of the spectral embedding of the block model requires to solve an eigenvalue problem in dimension K (the number of blocks), whose complexity depends on the chosen solver.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper analyzes the effect of regularization on spectral embeddings in a deterministic block model and explicitly characterizes the spectra of the Laplacian of the regularized graph in terms of the regularization parameter and block sizes. To my knowledge, this has not been done before. Prior work either derives sufficient conditions for the recovery of all blocks in the asymptotic limit of an infinite number of nodes in the case of (Joseph & Yu, 2016), or lower bounds the number of small eigenvalues of the Laplacian of the unregularized graph on random graphs in expectation (therefore arguing in favor of regularization) in the case of (Zhang & Rohe, 2018). This paper, on the other hand, gives a precise characterization of the eigenvalues and eigenvectors (albeit in the case of a deterministic graph); the results are elegant and the analysis uses simple elementary techniques, which is very satisfying and seems to be easy to build on. The authors mention that they would like to extend this analysis to stochastic block models, which would indeed be interesting. The paper is also well written and the results are clearly presented. Overall, this is a nice contribution to spectral graph theory and so I recommend acceptance.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper explains through a block model the impact of the complete graph regularization, intended as adding to all the entries of the adjacency matrix a constant. The paper is a nice balance between theory and practical effect, since it shows that at the end spectral embedding has an impact on larger connected block units of the graph, discarding isolated nodes.\\n\\nIt also introduces the problem in a gentle way, so that the range of possible readers is wide.\\n\\nIn general I'm happy with the paper, no major lacks on my side\", \"suggestions\": \"it is not clear how to get to Eq.7), the authors should explain the last passage before that equation a little more? How the values of the noise alpha have been selected? How the approach scales with the number of blocks, in term of complexity?\"}"
]
} |
SJeOAJStwB | On Federated Learning of Deep Networks from Non-IID Data: Parameter Divergence and the Effects of Hyperparametric Methods | [
"Heejae Kim",
"Taewoo Kim",
"Chan-Hyun Youn"
] | Federated learning, where a global model is trained by iterative parameter averaging of locally-computed updates, is a promising approach for distributed training of deep networks; it provides high communication-efficiency and privacy-preservability, which allows to fit well into decentralized data environments, e.g., mobile-cloud ecosystems. However, despite the advantages, the federated learning-based methods still have a challenge in dealing with non-IID training data of local devices (i.e., learners). In this regard, we study the effects of a variety of hyperparametric conditions under the non-IID environments, to answer important concerns in practical implementations: (i) We first investigate parameter divergence of local updates to explain performance degradation from non-IID data. The origin of the parameter divergence is also found both empirically and theoretically. (ii) We then revisit the effects of optimizers, network depth/width, and regularization techniques; our observations show that the well-known advantages of the hyperparameter optimization strategies could rather yield diminishing returns with non-IID data. (iii) We finally provide the reasons of the failure cases in a categorized way, mainly based on metrics of the parameter divergence. | [
"Federated learning",
"Iterative parameter averaging",
"Deep networks",
"Decentralized non-IID data",
"Hyperparameter optimization methods"
] | Reject | https://openreview.net/pdf?id=SJeOAJStwB | https://openreview.net/forum?id=SJeOAJStwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BSRPRuBYvQ",
"HJe-HfWnor",
"BkgAzMW2ir",
"Skedlfbnsr",
"HJgLu-bnjH",
"B1goUWZ3jr",
"HyxWNZ-njH",
"rJeyG-WhiB",
"SyxrReZnsB",
"HkeZ3gb2iB",
"H1lJwlb2iH",
"HyeLHgZhir",
"Bkex-eZniS",
"HJg9jyWnsr",
"rkxsOJ-nir",
"rJg1DkWhoB",
"r1lHGdF0FH",
"SJebxqCTtB",
"ByguveDTFS",
"r1eGGqEkuS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1576798738696,
1573814840795,
1573814806290,
1573814768508,
1573814637632,
1573814611254,
1573814568545,
1573814535236,
1573814476548,
1573814441102,
1573814358960,
1573814334291,
1573814263857,
1573814177690,
1573814131481,
1573814103312,
1571883021469,
1571838441469,
1571807328111,
1569831433536
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2030/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2030/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2030/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2030/Authors"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies the problem of federated learning for non-i.i.d. data, and looks at the hyperparameter optimization in this setting. As the reviewers have noted, this is a purely empirical paper. There are certain aspects of the experiments that need further discussion, especially the learning rate selection for different architectures. That said, the submission may not be ready for publication at its current stage.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author response to reviewer #1 (3/3)\", \"comment\": \"4. There are some missing details (maybe they are already in the paper but I didn't find them):\\n\\n4.1. What is the definition of Adam-A and Adam-WB? And, what are the differences between Adam-A, Adam-WB, and vanilla Adam? (and also, what is the \\\"A\\\" in NMom-A?)\\n=====(Answer)=====\\nWe apologize for your confusion. As mentioned at the second paragraph of \\u201cSteep fall phenomenon\\u201d of Section 4.2 (in the original version of the paper), \\n\\u201c(optimizer name)-A\\u201d: under the certain optimizer, the parameter averaging being performed for all the variables;\\n\\u201c(optimizer name)-WB\\u201d: under the certain optimizer, the parameter averaging being performed only for weights & biases.\\nTherefore, we conducted an analysis of Adam-A vs Adam-WB; and NMom-A vs NMom-WB (NMom: Nesterov momentum SGD optimizer). In addition, as mentioned in Footnote 3 and the first paragraph of Appendix C, \\u201cvanilla\\u201d training refers to non-distributed training with a single machine, using the whole training examples; for the vanilla training, we trained the networks for 100 epochs.\\nPlease note that in the revised version, the location of the mention about the definition of \\u201c(optimizer name)-A\\u201d and \\u201c(optimizer name)-WB\\u201d has been changed to inside Section 4.1.\\n\\n4.2. When using Adam in federated learning, how are the variables synchronized? Note that for Adam, there are 3 sets of variables: model parameters, 1st moment, and 2nd moment. Due to the local updates, all the 3 sets of variables are not synchronized. When the authors use Adam in FL, did they only synchronize/average the model parameter and ignore the 1st and 2nd moments, or did they synchronize all the 3 sets of variables?\\n=====(Answer)=====\\nAs we stated above, we experimented with both the \\u201c(optimizer name)-A\\u201d and \\u201c(optimizer name)-WB\\u201d cases. To the best of our knowledge, so far there have been no studies about Adam to synchronize all the 3 sets of variables under federated learning. However, in the momentum SGD case, there have been some literatures; for instance, (Lin et al., 2018) presented methods with \\u201cLocal Momentum\\u201d, \\u201cGlobal Momentum\\u201d, and \\u201cHybrid Momentum\\u201d. In our experiments, \\u201cAdam-A\\u201d and \\u201cNMom-A\\u201d take the simple averaging strategy for all the 3 sets (i.e., model parameters, 1st moment, and 2nd moment for Adam) and 2 sets (i.e., model parameters and the momentum for momentum SGD) of variables, respectively; it has the similar philosophy with the \\u201cLocal Momentum\\u201d method. One can see from Table 4 in (Lin et al., 2018) that the simple averaging strategy can yield still competitive results compared to \\u201cGlobal Momentum\\u201d or \\u201cHybrid Momentum\\u201d method. This answer was reflected in Footnote 7 of the revised version of the paper. Please also refer to Table 7 in the appendix of the revised version the paper.\\n\\n(Lin et al., 2018) Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, and Martin Jaggi. Don\\u2019t use large mini-batches, use local SGD. arXiv preprint arXiv: 1808.07217, 2018.\"}",
"{\"title\": \"Author response to reviewer #1 (2/3)\", \"comment\": \"**Regarding Section 4.2:\\nIn many previous literatures, e.g., (Zhao et al., 2018), inordinate magnitude of parameter divergence is regarded as a direct response to learners\\u2019 local data being non-IID sampled from the population distribution; thus they explained that the consequent parameter averaging with the highly diverged local updates could lead to bad solutions far from the global optimum. Likewise, in our experiments, for many of the failure cases under the non-IID data setting, we observed that the inordinate magnitude of parameter divergence could become one of the internal causes of the diminishing returns.\\nHowever, under the non-IID data setting, some of the failure cases have been observed where the test accuracy is still low but the parameter divergence values decrease (rapidly) over rounds; as the round goes, even the values were sometimes seen to be lower than those of the comparison targets. For the failure cases, we concluded that these (unexpected abnormal) sudden drop of parameter divergence values indicate going into poor local minima (or saddles); this can be supported by the behaviors that test accuracy increases plausibly at very early rounds, but the growth rate quickly stagnates and eventually becomes much lower than the comparison targets.\\nIn relation, we provided Figure 5 (in the revised version of this paper) as the evidence of the steep fall phenomenon; as depicted in the figure, the loss landscapes of the failure cases (i.e., \\u201cAdam-WB\\u201d and \\u201cw/ BN\\u201d under the non-IID setting) show sharper minima and the minimal value in the bowl is relatively greater. Here \\u201csharp\\u201d minima is broadly known to lead to poorer generalization ability (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017); it is observed from the figure that going into a sharp minima happens even in early rounds (e.g., 25th). \\nIt is expected that the discovery of these steep fall phenomena provides a new insight into the relationship between test accuracy and parameter divergence; we believe that the steep fall phenomenon should be considered as the cause of diminishing returns of the federated learning with non-IID data, along with the inordinate magnitude of parameter divergence.\\n\\n(Zhao et al., 2018) Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data. arXiv preprint arXiv: 1806.00582, 2018.\\n(Hochreiter & Schmidhuber, 1997) Sepp Hochreiter and Jurgen Schmidhuber. Flat minima. Neural Computation, 9(1), 1997.\\n(Keskar et al., 2017) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR, 2017.\\n\\n\\n3. Since this is nearly a pure empirical paper, I hope the authors can make the experiments thorough. However, there are some experiments I expect to see but not yet included in this paper:\\n\\n3.1. The authors only studies Nesterov momentum in this paper. However, in practice, it is more common to use Polyak momentum. I hope the authors can also study FL SGD with Polyak momentum in this paper.\\n=====(Answer)===== \\nWe appreciate the valuable suggestion. We conducted the corresponding experiments; for the details, please see Tables 7-13 in the revised version of the paper.\\n\\n3.2. 
In this paper, the authors assume that different workers has the same number of local data samples (in Definition 1). However, due to the heterogeneous setting, it is very likely that different workers have different numbers of local data samples, which could be another source of divergence. Furthermore, different numbers of local data samples also results in different numbers of local steps, which may also cause divergence.\\n=====(Answer)=====\\nAs you point out, since the federated learning do not require centralizing local data, data unbalancedness (i.e., each learner has various numbers of local data examples) would be also naturally assumed in the federated learning along with the data non-IIDness (McMahan et al., 2017). We appreciate the valuable suggestion. We conducted the corresponding experiments; for the details, please see Appendix C.8 in the revised version of the paper.\\n\\n(McMahan et al., 2017) H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agu \\u0308era y Arcas. Communication-efficient learning of deep networks from decentralized data. In AISTATS, 2017. \\n\\n3.3. [1] proposes a regularization mechanism (FedProx) to deal with the heterogeneity. Instead of studying weight decay, it is more reasonable to study the regularization technique proposed by [1].\\n=====(Answer)=====\\nWe appreciate the valuable suggestion. We conducted the corresponding experiments; for the details, please see Appendix C.4 in the revised version of the paper.\"}",
"{\"title\": \"Author response to reviewer #1 (1/3)\", \"comment\": \"We first appreciate the valuable comments. We carefully looked through all the comments; the following describes our answers.\\n\\n1. The paper is nearly pure empirical. There is no theoretical analysis supporting the observations proposed in Section 4.1, which weaken the contribution of this paper.\\n2. This paper only raises some issues in federated learning with non-IID data, and discusses the potential causes. No suggestions or potential solutions is proposed in this paper, which weaken the contribution of this paper.\\n =====(Answer for Question 1 and 2)=====\\nWe appreciate the valuable comments, and we admit your concerns. \\nNevertheless, we believe that focusing on federated learning with non-IID data, our work provides the meaningful exploratory analysis breaking the existing common wisdom about the considered hyperparameter optimization methods. In relation, here we intend to emphasize our contributions.\", \"our_distinct_contributions_can_be_highlighted_as_follows\": \"**Regarding Section 3: \\nIn many previous literatures, e.g., (Zhao et al., 2018), parameter divergence is regarded as a direct response to learners\\u2019 local data being non-IID sampled from the population distribution. In relation, it was reported that as the probabilistic distance (e.g., earth mover\\u2019s distance) of learners\\u2019 local data becomes farther away from the population distribution, bigger parameter divergence might appear; this is correlated with the degradation of performance such as test accuracy (please refer to Section 3.2 of (Zhao et al., 2018)). Also, we added our analysis of the relationship among the three factors (i.e., probabilistic distance, parameter divergence, and performance) in the rebuttal period; the relevant description can be found in Section 3 of the revised version of this paper.\\n\\nRegarding the parameter divergence, our distinct contribution can be summarized in two-fold: \\nFirst, for the first time we identified the mechanism by which data non-IIDness affects the parameter divergence: \\u201cif data distributions in each local dataset are highly skewed and heterogeneous over classes, subsets of neurons, which have especially big magnitudes of the gradients in back propagation, become significantly different across learners; this leads to inordinate parameter divergence between them\\u201d. It has been analyzed in both empirical and theoretical way.\\nSecond, many of the related literatures usually handle the parameter difference of each learner\\u2019s local model parameters from one computed with the population distribution (this philosophy is connected to the definition of PD-VL); meanwhile, in our study we also considered the parameter diversity between the local updates as well (this is connected to the definition of PD-Ls). The reason of probing parameter divergence being important is that the federated learning are performed based on iterative parameter averaging. 
That is, investigating how local updates are diverged can give a clue whether the subsequent parameter averaging yields positive returns; the proposed divergence metrics provide two ways for it.\\n\\n**Regarding Section 4.1:\\nIn this study, we focused on the well-known hyperparameter optimization strategies (i.e., hyperparametric strategies) to improve learning performance: (i) using momentum SGD or Adam than pure SGD, (ii) network deepening/widening (until a proper level), (iii) Batch Normalization, (iv) weight decay, (v) data augmentation, and (vi) Dropout. Their positive effects have been reported in a variety of literatures; practically, they are being broadly used in deep net training. Also in our experiments, the hyperparametric methods yielded better outcome under vanilla training (i.e., non-distributed training) and under the considered federated learning algorithm with the IID decentralized data setting.\\nHowever, under the non-IID data setting, we newly identified that the hyperparametric methods could rather give negative/diminishing effects on performance of the federated learning algorithm; we believe that these findings can be highly impactful to the upcoming works or industrial implementations.\"}",
"{\"title\": \"Author response to reviewer #3 (6/6)\", \"comment\": \"- page 4, effect of optimizers: what do you refer to as \\u201call model parameters\\u201d? \\n=====(Answer)===== We apologize for your confusion; we clarified this in the revised version of the paper as follows:\\n\\u201cEffects of optimizers. Unlike non-adaptive optimizers such as pure SGD and momentum SGD (Polyak, 1964; Nesterov, 1983), Adam (Kingma & Ba, 2015) could give poor performance from non-IID data if the parameter averaging is performed only for weights and biases, compared to all the model variables (including the first and second moment) being averaged.\\u201d\\n\\n- why Dropout yields bigger parameter divergence if on Fig 2, right it actually helps? \\n=====(Answer)===== At the initial steps of this study, we had expected that the dropped nodes (or neurons), randomly selected, are different across learners, and its impact on \\udbff\\udc11the parameter divergence would be much stronger than under the IID setting; the experimental results were also shown that the Dropout yield bigger parameter divergence. However, it was observed that the generalization effect of the Dropout could be still valid in test accuracy in some cases. Regarding this, we expect that the positive effects of the Dropout become weaker as difficulty level of learning tasks goes to be higher (e.g., CIFAR-100). \\n\\n- Last line of the page 5. Where was this observed? \\n=====(Answer)===== We apologize for your confusion. In the revised version of the paper, it can be found at Table 13 of the appendix; we specified this.\"}",
"{\"title\": \"Author response to reviewer #3 (5/6)\", \"comment\": \"8. Why for different experiments different baseline models are used? (NetA, NetB, NetC)\\n=====(Answer)=====\\nBasically, we considered NetA-Baseline as our baseline network architecture; it was used in the investigation of the effects of (i) optimizers, (ii) weight decay, (iii) Batch Normalization, (iv) data augmentation, and (v) Dropout.\\nIn order to study the effects of network depth, we also used its two variants, i.e., NetA-Deeper and NetA-Deepest.\\nAlso, in order to study the effects of network width in relation of convolutional layers, we also used its other two variants, i.e., NetA-Narrower and NetA-Narrowest. While the first convolutional layer of NetA-Baseline has 64 output channels, those of NetA-Narrower and NetA-Narrowest have 16 and 32 output channels, respectively.\\nIn addition, regarding network width in relation of fully-connected layers, we also wanted to investigate the effects of the global average pooling. Therefore, we used two baseline networks (i.e., NetB-Baseline and NetC-Baseline), of which the number of fully-connected layers is 3 and 1; they also use the global average pooling after the last convolutional layer. One might regard the NetB-Baseline and the NetC-Baseline as a VGG-type and a ResNet-type fully-connected layers. \\nWe also then constructed its max pooling variants, i.e., NetB-Wider and NetB-Widest; and NetC-Wider and NetC-Widest.\\nWe additionally remark that the NetC-Baseline network can be regarded as a shallow ResNet-type network, and it was compared with ResNet-14 and ResNet-20 in our study (see Table 8 in the appendix of the revised version of the paper).\\n\\n\\n- Appendix B, first equation on page 13. (d_q)^t -> (d_q)^t_k; The size of gradient \\\\nabla_w [E ...] is different from the size of (d_q)_k. They cannot be added together.\\n=====(Answer)===== We appreciate the thankful comment; we corrected this in the revised version of the paper. Please see Appendix B.\\n\\n- page 7, last sentence of the first paragraph: what is the accuracy achieved with Batch Renormalization? \\n=====(Answer)===== We apologize for your confusion. In the revised version of the paper, it can be found at Table 3.\\n\\n Why the reason for accuracy gap is \\u201csignificant parameter divergence\\u201d? on fig. 3 \\u201cparameter divergence\\u201d is smaller than for the baseline.\\n=====(Answer)===== We clarified this in the revised version of the paper as follows:\\n\\u201cBatch Normalization yields not only big parameter divergence (especially before the first learning rate drop) but also the steep fall phenomenon; the corresponding test accuracy was seen to be very low (see Table 3).\\u201d\\n\\n- Why the name of the section on page 7 is \\u201cexcessively high training loss of local updates\\u201d if later it is stated that it is actually smaller than for the IID case? \\n=====(Answer)===== This is because the comparison target of \\u201cexcessively high\\u201d here is the baseline cases under the non-IID data setting. 
Also, as we remarked in the paper, please additionally note that the training loss being high is much more critical under non-IID data setting than under IID cases; this is because local updates are extremely easy to be overfitted to each training dataset under non-IID data environments.\\n\\n- section 3: \\u201cA pleasant level of parameter divergence can help to improve generalization\\u201d -> where was it shown?\\n=====(Answer)===== We appreciate the valuable comment; In the revision of the paper, we corrected/clarified the sentence as follows:\\n\\u201cA pleasant level of parameter divergence could rather imply exploiting rich decentralized data\\u201d\\n\\n- section 4.2, paragraph 2: what is meant by \\u201chyperparametric methods\\u201d?\\n=====(Answer)===== We apologize for your confusion; in our paper, \\u201chyperparametric methods\\u201d is used interchangeably with \\u201chyperparameter optimization methods\\u201d. We specified this at Footnote 2 in the revised version of the paper.\\n\\n- section 4.2, paragraph 3: \\u201cquantitative increase in a layer level\\u201d -> not clear what does it mean.\\n=====(Answer)===== Our parameter divergence metrics make normalized (qualitative) measures possible since they used cosine distance instead of Euclidean distance. Please refer to also the third paragraph in Section 3 of the revised version of the paper.\"}",
"{\"title\": \"Author response to reviewer #3 (4/6)\", \"comment\": \"7. Better re-prase the definition of the steep fall phenomena, now it is not very clear: in the IID setting parameter divergence values are also sometimes reducing sharply; in the network width study parameters divergence doesn\\u2019t experience sudden drop. Also, how does this phenomena (and parameter divergence too) connects to the training loss? \\n=====(Answer)=====\\nWe appreciate the thankful suggestion. As you point out, in the original version of this paper, the definition of the steep fall phenomenon might be described somewhat ambiguously.\\nInstead of simply describing sudden drop of the parameter divergence values in the last fully-connected layer, the philosophy behind the steep fall phenomenon is as follows:\\nIn many previous literatures, e.g., (Zhao et al., 2018), inordinate magnitude of parameter divergence is regarded as a direct response to learners\\u2019 local data being non-IID sampled from the population distribution; thus they explained that the consequent parameter averaging with the highly diverged local updates could lead to bad solutions far from the global optimum. Likewise, in our experiments, for many of the failure cases under the non-IID data setting, we observed that the inordinate magnitude of parameter divergence could become one of the internal causes of the diminishing returns.\\nHowever, under the non-IID data setting, some of the failure cases have been observed where the test accuracy is still low but the parameter divergence values decrease (rapidly) over rounds; as the round goes, even the values were sometimes seen to be lower than those of the comparison targets. For the failure cases, we concluded that these (unexpected abnormal) sudden drop of parameter divergence values indicate going into poor local minima (or saddles); this can be supported by the behaviors that test accuracy increases plausibly at very early rounds, but the growth rate quickly stagnates and eventually becomes much lower than the comparison targets.\\nIn relation, we provided Figure 5 (in the revised version of this paper) as the evidence of the steep fall phenomenon; as depicted in the figure, the loss landscapes of the failure cases (i.e., \\u201cAdam-WB\\u201d and \\u201cw/ BN\\u201d under the non-IID setting) show sharper minima and the minimal value in the bowl is relatively greater. Here \\u201csharp\\u201d minima is broadly known to lead to poorer generalization ability (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017); it is observed from the figure that going into a sharp minima happens even in early rounds (e.g., 25th). \\nIt is expected that the discovery of these steep fall phenomena provides a new insight into the relationship between test accuracy and parameter divergence; we believe that the steep fall phenomenon should be considered as the cause of diminishing returns of the federated learning with non-IID data, along with the inordinate magnitude of parameter divergence.\\nThis answer was reflected in \\u201cSteep fall phenomenon\\u201d of Section 4.2 in the revised version of the paper.\\n\\n(Zhao et al., 2018) Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data. arXiv preprint arXiv: 1806.00582, 2018.\\n(Hochreiter & Schmidhuber, 1997) Sepp Hochreiter and Jurgen Schmidhuber. Flat minima. 
Neural Computation, 9(1), 1997.\\n(Keskar et al., 2017) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR, 2017.\"}",
"{\"title\": \"Author response to reviewer #3 (3/6)\", \"comment\": \"5. In experiments on Fig. 2. and Fig.3 (middle) what is the accuracy for IID baseline? Is the observed phenomena connected to the poor network architecture or to the non-iid data? \\n=====(Answer)===== \\nWe apologize for your confusion. Each test accuracy values (under both IID and Non-IID(2) setting) is found in the tables in Appendix C. In the revised version of the paper, we also provided the corresponding tables in Section 4.2 (see Tables 2-4).\\n\\nIn this study, we focused on the well-known hyperparameter optimization strategies (i.e., hyperparametric strategies) to improve learning performance: (i) using momentum SGD or Adam than pure SGD, (ii) network deepening/widening (until a proper level), (iii) weight decay, (iv) Batch Normalization, (v) data augmentation, and (vi) Dropout. Their positive effects have been reported in a variety of literatures; practically, they are being broadly used in deep net training to improve learning performance. Also in our experiments, the hyperparametric methods yielded better outcome under vanilla training (i.e., non-distributed training) and under the considered federated learning algorithm with the IID decentralized data setting.\\nHowever, under the non-IID data setting, we newly identified that the hyperparametric methods could rather give negative/diminishing effects on performance of the federated learning algorithm; we believe that these findings can be highly impactful to the upcoming works or industrial implementations.\\n\\n\\n6. In table 5 of the appendix, why experiments use Adam optimizer, but not Momentum SGD as in the main paper to compare the performance of ResNet14 and ResNet20?\\n=====(Answer)===== \\nWe appreciate the valuable suggestion. We conducted the corresponding experiments; for the details, please see Table 8 in the appendix of the revised version of the paper. We additionally note that regarding the ResNet results, there was some error in the original version of the paper; we corrected this in the revised version.\"}",
"{\"title\": \"Author response to reviewer #3 (2/6)\", \"comment\": \"3. Why the divergence of parameters is considered only at the last layer? It seems to hide many important interactions in the other layers. \\n=====(Answer)===== \\nAt first, please remind that regarding each experimental trial, in Figures 9-34 (of the revised version of the paper) we provide the PD-Ls and the PD-VL graphs for each four layers. From the figures of the experimental results in Appendix C, we can identify that in most cases the parameter divergence values of the first convolutional layer and the last fully-connected layer would be more dominant than those of the other layers, judging from their difference of magnitude between under the IID and the non-IID data setting (please also note that log scale is used for the y-axis). We additionally remark that the results of other related studies also show the dominance of the first convolutional layer and the last fully-connected layer (e.g., see Figure 2 in Zhao et al. (2018)). Therefore, our discussion here was primarily described based on the results of the first convolutional layer and the last fully-connected layer. This answer was reflected in Footnote 8 of the revised version of the paper.\\n\\n(Zhao et al., 2018) Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data. arXiv preprint arXiv: 1806.00582, 2018.\\n\\n\\n4. Some important experimental details --- should be added:\\n\\nWe apologize for your confusion, and we made them clear in the revised version.\\n\\n- At which moment the parameter divergence is computed in the plots? Is it computed at the end of the local iterations right before synchronization? \\n=====(Answer)===== \\nAccording to the definition of w^t_k in Algorithm 1, the values of PD-VL and PD-Ls are computed at the end of the local iterations right before synchronization. \\n\\n- How the training loss was computed in the plots? before or after synchronization? on the local only or the global data?\\n=====(Answer)=====\\nTraining loss values in the plots such as Figure 6 (of the revised version of the paper) are mean of each learner\\u2019s training loss before each synchronization; each learner\\u2019s training loss values were calculated on their local data. In the revised version of the paper, we added this information at the caption of Figure 6.\\n\\n- Which batch size was used? \\n=====(Answer)=====\\nAs stated in Table 1 (of the original version of the paper), the minibatch size was set to be 50 for the considered federated learning algorithm. => Please note that in the revised version, the location of the mention about the minibatch size has been changed to inside the text in \\u201cEnvironmental configuration\\u201d of Section 2.2. In addition, in Tables 7-13 (of the revised version of the paper), the results under vanilla training include both when the minibatch size is 50 and 500.\\n\\n- Improve the figure caption to detail the experimental setup. (e.g. in fig 3. the network architecture was mentioned only for one of the figures, include which optimized was used, etc)\\n=====(Answer)=====\\nAs stated in \\u201cBaseline network model\\u201d, we used NetA-Baseline as our baseline network architecture; without the specific mention of the network architecture, the NetA-Baseline network is considered for Figures 3-6 in the revised version of the paper. 
To be more clear, we provided the corresponding mention again through Tables 2-4.\\nIn addition, we clarified the remaining setups in the first paragraph of Section 4.2 in the revised version of the paper as follows:\\n\\u201cNote that our discussion in this subsection is mostly made from the results under Nesterov momentum SGD and on CIFAR-10; the complete results including other optimizers (e.g., pure SGD, Polyak momentum SGD, and Adam) and datasets (e.g., SVHN) are given in Appendix C.\\u201d\"}",
"{\"title\": \"Author response to reviewer #3 (1/6)\", \"comment\": \"We first appreciate the valuable comments. We carefully looked through all the comments; the following describes our answers.\\n\\n1. The initial learning rates were not tuned properly. It is set to be the same for different neural network topologies, which might significantly affect the results. What did the choice of initial learning rates is based on? \\n=====(Answer)=====\\nAs you remark, the initial learning rates were set to be the same for different model architectures. Therefore, the best results might not have been obtained with regard to the learning rates. Nevertheless, the choice of the initial learning rates was conducted based on the follows:\\n(i) Based on the results of Appendix B as well as the intuitive thoughts, (especially before the first learning rate drop) learning rates may highly affect the values of the parameter divergence. Therefore, we set the initial learning rates the same for the compared cases (e.g., NetA-Baseline vs NetA-Deeper vs NetA-Deepest) so that the corresponding parameter divergence values could be compared under the same conditions.\\n(ii) In addition, one of the main objective in the paper is to show that the considered hyper parameter optimization strategies (which have been reported that they yield better outcome under \\u201cvanilla\\u201d training or under the federated learning with IID data) could rather result in the diminishing returns under non-IID data setting. \\nAs described in Tables 7-13 of Appendix C (in the revised version of the paper), we can see that under \\u201cvanilla training\\u201d (especially for batch size: 50) and under the federated learning with the IID data setting, most of the results are shown to be similar with what we already know (e.g., the advantages of deeper network architectures, global average pooling, Batch Normalization, and so on). However, under the federated learning with the Non-IID(2) data setting, we can see that some of the hyperparameter optmization methods rather yield the highly conflicted results (i.e., the diminishing returns).\\nTherefore, in summary, our setting of the initial learning rates could be rather far from the best results; nevertheless, from Tables 7-13 the results can be interpreted as still valid (since the results under \\u201cvanilla training\\u201d and under the federated learning with the IID data setting follow the similar trends to those well known). In addition, we believe that our setting also provides the fair comparison of parameter divergence.\\n\\n\\n2. Why the parameter divergence metric in Definition 1 is not the same as in the theoretical study (Appendix B)? What is the intuition behind Definition 1?\\n=====(Answer)=====\\nWe first remark that PD-Ls in Definition 1 and ||(d_q)^(t+1)_i - (d_q)^(t+1)_j|| are related. In the case of Figure 1 (and Appendix B), we used the same network architecture and training methods. Manipulated variables here is only data distributions (i.e., IID, Non-IID(2), and Non-IID(1)). Therefore, ||(d_q)^(t+1)_i - (d_q)^(t+1)_j|| can be validly utilized. However, in most of our experiments, we compared the different network architectures (e.g., NetC-Baseline, NetC-Wider, and NetC-Widest) or the effects of the different training settings (e.g., various weight decay factors) together in a set. 
Therefore, for instance, in the case of NetC-Baseline, NetC-Wider, and NetC-Widest, the number of neurons in the output layer becomes different (i.e., 2560, 10240, and 40960, respectively); in the case of various weight decay factors, the degree to which the model parameters from the previous iteration are reflected in the current parameters highly depends on the factor values. Therefore, we thought that we need a normalized (qualitative) metric rather than simply considering the magnitude of parameter (weight) differences; consequently, instead of the euclidean distance, we used cosine distance-based metrics in Definition 1. This answer was reflected in the third paragraph of Section 3 in the revised version of the paper.\"}",
"{\"title\": \"Author response to reviewer #2 (3/3)\", \"comment\": \"5. In the setting described as \\\"IID\\\" in Table 1 is not, the subsampled for each learner are not IID subsamples of the full dataset because they are class-balanced (if I'm understanding correctly)\\n=====(Answer)=====\\nAs you point out, in our \\u201cIID\\u201d data setting, we cannot say that each learner\\u2019s data examples are practically IID sampled.\\nIn order to each learner\\u2019s data examples being practically IID sampled, we should have conducted the followings: (i) shuffling the full dataset, and (ii) partitioning the data examples into learners in that order.\\nHowever, we did (i) sorting the full dataset by class, and (ii) partitioning the data examples into learners to be class-balanced.\\nNevertheless, the CIFAR-10 dataset consists of 50000 training data examples of 5000 images each for 10 classes; thus, statistically we believe that our \\u201cIID\\u201d data setting can be regarded as one of the ideal IID settings.\\nNote that, for the SVHN dataset (consisting of 73257 training data examples), we reconstructed the full dataset to have 50000 training data examples of 5000 images each for 10 classes.\"}",
"{\"title\": \"Author response to reviewer #2 (2/3)\", \"comment\": \"3. Regarding steep fall phenomenon\\n=====(Answer)=====\\nWe apologize for your confusion; the philosophy behind the steep fall phenomenon is as follows:\\nIn many previous literatures, e.g., (Zhao et al., 2018), inordinate magnitude of parameter divergence is regarded as a direct response to learners\\u2019 local data being non-IID sampled from the population distribution; thus they explained that the consequent parameter averaging with the highly diverged local updates could lead to bad solutions far from the global optimum. Likewise, in our experiments, for many of the failure cases under the non-IID data setting, we observed that the inordinate magnitude of parameter divergence could become one of the internal causes of the diminishing returns.\\nHowever, under the non-IID data setting, some of the failure cases have been observed where the test accuracy is still low but the parameter divergence values decrease (rapidly) over rounds; as the round goes, even the values were sometimes seen to be lower than those of the comparison targets.\\nFor the failure cases, we concluded that these (unexpected abnormal) sudden drop of parameter divergence values indicate going into poor local minima (or saddles); this can be supported by the behaviors that test accuracy increases plausibly at very early rounds, but the growth rate quickly stagnates and eventually becomes much lower than the comparison targets.\\nIn relation, we provided Figure 5 (in the revised version of this paper) as the evidence of the steep fall phenomenon; as depicted in the figure, the loss landscapes of the failure cases (i.e., \\u201cAdam-WB\\u201d and \\u201cw/ BN\\u201d under the non-IID setting) show sharper minima and the minimal value in the bowl is relatively greater. Here \\u201csharp\\u201d minima is broadly known to lead to poorer generalization ability (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017); it is observed from the figure that going into a sharp minima happens even in early rounds (e.g., 25th). \\nIt is expected that the discovery of these steep fall phenomena provides a new insight into the relationship between test accuracy and parameter divergence; we believe that the steep fall phenomenon should be considered as the cause of diminishing returns of the federated learning with non-IID data, along with the inordinate magnitude of parameter divergence.\\n\\n(Zhao et al., 2018) Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data. arXiv preprint arXiv: 1806.00582, 2018.\\n(Hochreiter & Schmidhuber, 1997) Sepp Hochreiter and Jurgen Schmidhuber. Flat minima. Neural Computation, 9(1), 1997.\\n(Keskar et al., 2017) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR, 2017.\"}",
"{\"title\": \"Author response to reviewer #2 (1/3)\", \"comment\": \"We first appreciate the valuable comments. We carefully looked through all the comments; the following describes our answers.\\n\\n1. Regarding the lack of quantitative analysis of the trends the paper identifies\\n=====(Answer)=====\\nWe appreciate the valuable comments. As you point out, we admit that the paper lacks a quantitive analysis of the findings.\\nHowever, please remind that even under \\u201cvanilla\\u201d training, it is not easy to generally quantify the gains of the considered hyperparameter optimization methods since they highly depend on the training dataset or the remaining training strategies. Therefore, we were afraid to conclude the general quantification of the effects of the methods.\\nInstead, by also providing the results under \\u201cvanilla\\u201d training and the federated learning with IID data, we tried to emphasize the negative effects of the hyperparametric methods; we think our results show the severity of performance degradation of each method, even indirectly. \\n\\n\\n2. Regarding the relationship between test accuracy and parameter divergence\\n=====(Answer)=====\\nIn many previous literatures, e.g., (Zhao et al., 2018), parameter divergence is regarded as a direct response to learners\\u2019 local data being non-IID sampled from the population distribution. In relation, it was reported that as the probabilistic distance (e.g., earth mover\\u2019s distance) of learners\\u2019 local data becomes farther away from the population distribution, bigger parameter divergence might appear; this is correlated with the degradation of performance such as test accuracy (please refer to Section 3.2 of (Zhao et al., 2018)). \\nAlso, we added our analysis of the relationship among the three factors (i.e., probabilistic distance, parameter divergence, and performance) in the rebuttal period; the relevant description can be found in Section 3 of the revised version of this paper.\\n\\nThe reason of probing parameter divergence being important is that the federated learning are performed based on iterative parameter averaging. That is, investigating how local updates are diverged can give a clue whether the subsequent parameter averaging yields positive returns; the proposed divergence metrics provide two ways for it.\\n\\n(Zhao et al., 2018) Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data. arXiv preprint arXiv: 1806.00582, 2018.\"}",
"{\"title\": \"We posted the revised version of the paper\", \"comment\": \"Thanks to the valuable comments of reviewers, we were able to improve our paper better.\\n\\nIn the revision, we aimed to improve the clarity by addressing the concerns/questions of reviewers.\\n\\nIn the process we strengthened the description of the reason why the proposed parameter divergence metrics are important, and we added the new content to establish the relationship among probabilistic distance, parameter divergence, and performance. We also provide the additional experiment results about (i) ResNet with Nesterov momentum SGD, (ii) Polyak momentum SGD, (iii) unbalanced non-IID data settings, and (iv) FedProx (Li et al., 2019).\\n\\n(Li et al., 2019) Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization for heterogeneous networks. In ICML Workshop, 2019. \\n\\nHow and where the reviewers\\u2019 comments have been addressed in the revised version is described in the responses to each comment.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Summary of our distinct contributions (2/2)\", \"comment\": \"**Regarding Section 4.2:\\nIn many previous literatures, e.g., (Zhao et al., 2018), inordinate magnitude of parameter divergence is regarded as a direct response to learners\\u2019 local data being non-IID sampled from the population distribution; thus they explained that the consequent parameter averaging with the highly diverged local updates could lead to bad solutions far from the global optimum. Likewise, in our experiments, for many of the failure cases under the non-IID data setting, we observed that the inordinate magnitude of parameter divergence could become one of the internal causes of the diminishing returns.\\nHowever, under the non-IID data setting, some of the failure cases have been observed where the test accuracy is still low but the parameter divergence values decrease (rapidly) over rounds; as the round goes, even the values were sometimes seen to be lower than those of the comparison targets. For the failure cases, we concluded that these (unexpected abnormal) sudden drop of parameter divergence values indicate going into poor local minima (or saddles); this can be supported by the behaviors that test accuracy increases plausibly at very early rounds, but the growth rate quickly stagnates and eventually becomes much lower than the comparison targets.\\nIn relation, we provided Figure 5 (in the revised version of this paper) as the evidence of the steep fall phenomenon; as depicted in the figure, the loss landscapes of the failure cases (i.e., \\u201cAdam-WB\\u201d and \\u201cw/ BN\\u201d under the non-IID setting) show sharper minima and the minimal value in the bowl is relatively greater. Here \\u201csharp\\u201d minima is broadly known to lead to poorer generalization ability (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017); it is observed from the figure that going into a sharp minima happens even in early rounds (e.g., 25th). \\nIt is expected that the discovery of these steep fall phenomena provides a new insight into the relationship between test accuracy and parameter divergence; we believe that the steep fall phenomenon should be considered as the cause of diminishing returns of the federated learning with non-IID data, along with the inordinate magnitude of parameter divergence.\\n\\n(Zhao et al., 2018) Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data. arXiv preprint arXiv: 1806.00582, 2018.\\n(Hochreiter & Schmidhuber, 1997) Sepp Hochreiter and Jurgen Schmidhuber. Flat minima. Neural Computation, 9(1), 1997.\\n(Keskar et al., 2017) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR, 2017.\"}",
"{\"title\": \"Summary of our distinct contributions (1/2)\", \"comment\": \"We believe that focusing on federated learning with non-IID data, our work provides the meaningful exploratory analysis breaking the existing common wisdom about the considered hyperparameter optimization methods. In relation, here we intend to emphasize our contributions.\", \"our_distinct_contributions_can_be_highlighted_as_follows\": \"**Regarding Section 3: \\nIn many previous literatures, e.g., (Zhao et al., 2018), parameter divergence is regarded as a direct response to learners\\u2019 local data being non-IID sampled from the population distribution. In relation, it was reported that as the probabilistic distance (e.g., earth mover\\u2019s distance) of learners\\u2019 local data becomes farther away from the population distribution, bigger parameter divergence might appear; this is correlated with the degradation of performance such as test accuracy (please refer to Section 3.2 of (Zhao et al., 2018)). Also, we added our analysis of the relationship among the three factors (i.e., probabilistic distance, parameter divergence, and performance) in the rebuttal period; the relevant description can be found in Section 3 of the revised version of this paper.\\n\\nRegarding the parameter divergence, our distinct contribution can be summarized in two-fold: \\nFirst, for the first time we identified the mechanism by which data non-IIDness affects the parameter divergence: \\u201cif data distributions in each local dataset are highly skewed and heterogeneous over classes, subsets of neurons, which have especially big magnitudes of the gradients in back propagation, become significantly different across learners; this leads to inordinate parameter divergence between them\\u201d. It has been analyzed in both empirical and theoretical way.\\nSecond, many of the related literatures usually handle the parameter difference of each learner\\u2019s local model parameters from one computed with the population distribution (this philosophy is connected to the definition of PD-VL); meanwhile, in our study we also considered the parameter diversity between the local updates as well (this is connected to the definition of PD-Ls). The reason of probing parameter divergence being important is that the federated learning are performed based on iterative parameter averaging. That is, investigating how local updates are diverged can give a clue whether the subsequent parameter averaging yields positive returns; the proposed divergence metrics provide two ways for it.\\n\\n**Regarding Section 4.1:\\nIn this study, we focused on the well-known hyperparameter optimization strategies (i.e., hyperparametric strategies) to improve learning performance: (i) using momentum SGD or Adam than pure SGD, (ii) network deepening/widening (until a proper level), (iii) Batch Normalization, (iv) weight decay, (v) data augmentation, and (vi) Dropout. Their positive effects have been reported in a variety of literatures; practically, they are being broadly used in deep net training. 
Also in our experiments, the hyperparametric methods yielded better outcome under vanilla training (i.e., non-distributed training) and under the considered federated learning algorithm with the IID decentralized data setting.\\nHowever, under the non-IID data setting, we newly identified that the hyperparametric methods could rather give negative/diminishing effects on performance of the federated learning algorithm; we believe that these findings can be highly impactful to the upcoming works or industrial implementations.\"}",
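For readers skimming this thread: PD-Ls and PD-VL are the two divergence views named above. A minimal NumPy sketch of how such metrics could be computed is given below; the exact formulas are reconstructed from this rebuttal and Reviewer #2's summary (average cosine distance between weight vectors), so treat them as an assumption rather than the paper's precise definitions.

```python
import numpy as np

def flatten(weights):
    """Concatenate a list of per-layer weight arrays into one vector."""
    return np.concatenate([w.ravel() for w in weights])

def cosine_distance(u, v):
    return 1.0 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def pd_ls(local_weights):
    """PD-Ls: average pairwise distance between the learners' local updates."""
    vecs = [flatten(w) for w in local_weights]
    dists = [cosine_distance(vecs[i], vecs[j])
             for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return float(np.mean(dists))

def pd_vl(local_weights, reference_weights):
    """PD-VL: average distance of each local update from a reference model
    trained on (an approximation of) the population distribution."""
    ref = flatten(reference_weights)
    return float(np.mean([cosine_distance(flatten(w), ref) for w in local_weights]))
```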
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThe paper presents an empirical study of causes of parameter divergence in federated learning. Federated learning is the setting where parameter updates (e.g. gradients) are computed separately on possibly non-IID subsamples of the data and then aggregated by averaging. The paper examines the effects of choice of optimizer, network width and depth, batch normalization, weight decay, and data augmentation on the amount of parameter divergence. Divergence is defined as the average cosine distance between pairs of locally-updated weights, or between locally updated weights and weights trained with IID data. The paper generally concludes that regularization methods like BN and weight decay have an adverse effect in the federated setting, the deepening the network has an adverse effect while widening it might be beneficial, and that adaptive optimizers like Adam can perform poorly if their internal statistics are not aggregated.\\n\\nI recommend that the paper be rejected. The main shortcoming of the paper is the lack of rigororous statistical analysis to support its conclusions. The paper contains a lot of raw data, but the discussion mainly highlights trends that the authors seem to have observed in the results, without quantifying the relative sizes of effects, how consistent they are across experimental conditions, etc. The writing is also quite unclear, to the point that I often didn't understand exactly what argument was being made.\\n\\nDetails / Questions:\\nThe main problem is the lack of quantitative analysis of the trends the paper identifies. For example, regarding \\\"Effects of Batch Normalization\\\", there seem to be two claims made:\\n1. Batch normalization makes things worse (somehow) in the federated setting\\n2. Batch re-normalization still makes things worse, but not as much\\nHow are these effects quantified? How large are they? Do they hold across all datasets, architectures, and optimizers considered? Ideally there would be a table summarizing each experimental manipulation, its effect on performance, whether that effect is significant, etc. Of course this requires some care because the paper is doing an exploratory analysis and there are many hypotheses to test; a good reference is [1].\\n\\nThe paper also relies heavily on parameter divergence as a measure of performance in federated learning, but I see no evidence presented that parameter divergence is predictive of test accuracy (which is presumably what we actually care about). Intuitively I can see how it might be related, but since divergence is basically being used as a proxy for accuracy, it is vital to show convincingly that the two are related. What do we gain by analyzing parameter divergence rather than simply comparing test accuracy?\\n\\nRegarding the \\\"steep fall phenomenon\\\": The paper seems to present this as an indicator that a manipulation performs poorly in the federated setting. But, isn't it a good thing if parameter divergence goes down? Why does specifically a sudden, sharp decrease in divergence indicate a problem?\\n\\nFinally, some improvements might be made to the experiment setup. For one, the case of completely-disjoint label sets in different local learners seems extreme to me. Wouldn't at least partial overlap be more common in practice? 
(This is not my area so I don't know). Experimenting with different degrees of overlap would be useful. As for network architectures, it would be valuable to look at a greater variety of standard architecture styles (e.g. ResNet, Inception, etc). I realize there are some experiments with ResNet, but the focus is mainly on the single-path VGG-like architecture. I do realize this is a lot of experiments to do.\", \"minor_points\": [\"In the setting described as \\\"IID\\\" in Table 1 is not, the subsampled for each learner are not IID subsamples of the full dataset because they are class-balanced (if I'm understanding correctly)\"], \"references\": \"[1] Dem\\u0161ar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7(Jan), 1-30.\"}",
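For context on the setting this review describes, here is a minimal sketch of one Federated Averaging round in the style of McMahan et al. (2017); it is a generic illustration, not the paper's code, and `local_update` is a hypothetical stand-in for whatever local SGD procedure a learner runs on its (possibly non-IID) shard. Weights are assumed to be a single flattened NumPy array for simplicity.

```python
import numpy as np

def fedavg_round(global_w, client_data, local_update):
    """One round of Federated Averaging: each client starts from the global
    weights, runs local training, and the server averages the resulting
    weights, weighted by local dataset size (relevant to the unbalanced
    non-IID settings discussed in this thread)."""
    local_ws = [local_update(global_w.copy(), data) for data in client_data]
    sizes = np.array([len(data) for data in client_data], dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, local_ws))
```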
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper experimentally studies the reasons for the slow convergence of the Federated Averaging algorithm when the data are non-iid distributed between workers in the multiclass-classification case. Paper performs extensive experimental study and observes that the main reasons for failure are connected to (i) the parameter divergence during the local steps, (ii) steep-fall phenomena when parameters on different nodes are getting close fast, and to the (iii) high training loss.\\n\\nMy score is weak reject. The paper provides extensive but unclear experimental results. Improving presentation would significantly improve the paper. For example, why in experimental and theoretical study different parameter divergence metrics were used, etc (see below), why different networks use different optimizers. \\nMoreover, provided experimental comparison might be unfair. The learning rate is constant throughout all of the experiments, depending only on the optimizer, but not on the neural network architecture. This can affect the final results.\", \"concerns_and_questions_that_should_be_addressed\": \"1. The initial learning rates were not tuned properly. It is set to be the same for different neural network topologies, which might significantly affect the results. What did the choice of initial learning rates is based on? \\n\\n2. Why the parameter divergence metric in Definition 1 is not the same as in the theoretical study (Appendix B)? What is the intuition behind Definition 1?\\n\\n3. Why the divergence of parameters is considered only at the last layer? It seems to hide many important interactions in the other layers. \\n\\n4. Some important experimental details --- should be added:\\n - At which moment the parameter divergence is computed in the plots? Is it computed at the end of the local iterations right before synchronization? \\n - How the training loss was computed in the plots? before or after synchronization? on the local only or the global data?\\n - Which batch size was used? \\n - Improve the figure caption to detail the experimental setup. (e.g. in fig 3. the network architecture was mentioned only for one of the figures, include which optimized was used, etc)\\n\\n5. In experiments on Fig. 2. and Fig.3 (middle) what is the accuracy for IID baseline? Is the observed phenomena connected to the poor network architecture or to the non-iid data? \\n\\n6. In table 5 of the appendix, why experiments use Adam optimizer, but not Momentum SGD as in the main paper to compare the performance of ResNet14 and ResNet20?\\n\\n7. Better re-prase the definition of the steep fall phenomena, now it is not very clear: in the IID setting parameter divergence values are also sometimes reducing sharply; in the network width study parameters divergence doesn\\u2019t experience sudden drop. Also, how does this phenomena (and parameter divergence too) connects to the training loss? \\n\\n8. Why for different experiments different baseline models are used? (NetA, NetB, NetC)\", \"other_minor_comments\": [\"Appendix B, first equation on page 13. (d_q)^t -> (d_q)^t_k; The size of gradient \\\\nabla_w [E ...] is different from the size of (d_q)_k. 
They cannot be added together.\", \"page 7, last sentence of the first paragraph: what is the accuracy achieved with Batch Renormalization? Why the reason for accuracy gap is \\u201csignificant parameter divergence\\u201d? on fig. 3 \\u201cparameter divergence\\u201d is smaller than for the baseline.\", \"Why the name of the section on page 7 is \\u201cexcessively high training loss of local updates\\u201d if later it is stated that it is actually smaller than for the IID case?\", \"Defenition 1, line 4: \\u201cthe then\\u201d -> \\u201cthe\\u201d\", \"section 3: \\u201cA pleasant level of parameter divergence can help to improve generalization\\u201d -> where was it shown?\", \"section 4.2, paragraph 2: what is meant by \\u201chyperparametric methods\\u201d?\", \"section 4.2, paragraph 3: \\u201cquantitative increase in a layer level\\u201d -> not clear what does it mean.\", \"page 4, effect of optimizers: what do you refer to as \\u201call model parameters\\u201d?\", \"page 5, last paragraph: Hinton et al... -> (Hinton et al\\u2026). Use \\\\citet(\\\\citep) instead of \\\\cite.\", \"why Dropout yields bigger parameter divergence if on Fig 2, right it actually helps?\", \"Last line of the page 5. Where was this observed?\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors empirically investigate parameter divergence of local updates in federated learning with non-IID data. The authors study the effects of optimizers, network depth/width, and regularization techniques, and provide some observations. In overall, I think this paper study an important problem in federated learning.\\n\\nHowever, there are some weakness in this paper:\\n\\n1. The paper is nearly pure empirical. There is no theoretical analysis supporting the observations proposed in Section 4.1, which weaken the contribution of this paper.\\n\\n2. This paper only raises some issues in federated learning with non-IID data, and discusses the potential causes. No suggestions or potential solutions is proposed in this paper, which weaken the contribution of this paper.\\n\\n3. Since this is nearly a pure empirical paper, I hope the authors can make the experiments thorough. However, there are some experiments I expect to see but not yet included in this paper:\\n\\n 3.1. The authors only studies Nesterov momentum in this paper. However, in practice, it is more common to use Polyak momentum. I hope the authors can also study FL SGD with Polyak momentum in this paper.\\n\\n 3.2. In this paper, the authors assume that different workers has the same number of local data samples (in Definition 1). However, due to the heterogeneous setting, it is very likely that different workers have different numbers of local data samples, which could be another source of divergence. Furthermore, different numbers of local data samples also results in different numbers of local steps, which may also cause divergence.\\n\\n 3.3. [1] proposes a regularization mechanism (FedProx) to deal with the heterogeneity. Instead of studying weight decay, it is more reasonable to study the regularization technique proposed by [1].\\n\\n\\n4. There are some missing details (maybe they are already in the paper but I didn't find them):\\n\\n 4.1. What is the definition of Adam-A and Adam-WB? And, what are the differences between Adam-A, Adam-WB, and vanilla Adam? (and also, what is the \\\"A\\\" in NMom-A?)\\n\\n 4.2. When using Adam in federated learning, how are the variables synchronized? Note that for Adam, there are 3 sets of variables: model parameters, 1st moment, and 2nd moment. Due to the local updates, all the 3 sets of variables are not synchronized. When the authors use Adam in FL, did they only synchronize/average the model parameter and ignore the 1st and 2nd moments, or did they synchronize all the 3 sets of variables?\\n\\n\\n----------------\\nReference\\n\\n[1] Li, Tian et al. \\u201cFederated Optimization for Heterogeneous Networks.\\u201d (2018).\"}",
"{\"comment\": \"We found some typos in the paper:\\n\\nIn \\u201cInordinate magnitude of parameter divergence\\u201d of Section 4.2,\\n\\n(1) Regarding the second sentence from the behind of the second paragraph, we correct this sentence to \\u201cSince the NetA-Deeper and NetA-Deepest have twice and three times as many model parameters as NetA-Baseline, it can be expected enough that the deeper models yield bigger parameter divergence in the whole model; but our results also show its qualitative increase in a layer level.\\u201d\\n\\n(2) Regarding the last sentence of the third paragraph, we correct this sentence to \\u201cWe additionally observe for the non-IID cases that even with the weight decay factor of 0.0005, the parameter divergence values are similar to those with the smaller factors at very early rounds in which the norms of the weights are relatively very small.\\u201d\\n\\n(3) Regarding the first sentence in the fourth paragraph, we correct this sentence to \\u201cIn addition, it is observed from the right plot of the figure that Dropout (Hinton et al., 2012; Srivatava et al. 2014) also yields bigger parameter divergence under the non-IID data setting.\\u201d\\n\\n(4) Regarding the last sentence in the fourth paragraph, we correct this sentence to \\u201cThe corresponding test accuracy was seen to be a diminishing return with the momentum SGD optimizer (i.e., using Dropout we can achieve +2.85% under IID, but only +1.69% is obtained under non-IID(2), compared to when it is not applied); however, it was observed that the generalization effect of the Dropout is still valid in test accuracy for the pure SGD and the Adam (refer to also Table 10 in the appendix).\\u201d\\n\\nIn Appendix B,\\n\\n(5) Regarding the last sentence, we correct this sentence to \\u201cThen, similar with Equation (1), we can have (the equation).\\u201d\\n\\nWe apologize for the inconvenience.\", \"title\": \"Some typos\"}"
]
} |
rJguRyBYvr | Improved Detection of Adversarial Attacks via Penetration Distortion Maximization | [
"Shai Rozenberg",
"Gal Elidan",
"Ran El-Yaniv"
] | This paper is concerned with the defense of deep models against adversarial attacks. We develop an adversarial detection method, which is inspired by the certificate defense approach, and captures the idea of separating class clusters in the embedding space so as to increase the margin. The resulting defense is intuitive, effective, scalable and can be integrated into any given neural classification model. Our method demonstrates state-of-the-art detection performance under all threat models. | [
"Adversarial Examples",
"Adversarial Attacks",
"Adversarial Defense",
"White-Box threat models"
] | Reject | https://openreview.net/pdf?id=rJguRyBYvr | https://openreview.net/forum?id=rJguRyBYvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ZDsWSAVMkz",
"HkxTPm3_jS",
"SJxRCM3OsS",
"HklKrf3uoH",
"HJlIS4s1qB",
"SkxDVJmVKH",
"S1ez9A-QKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738667,
1573598052949,
1573597909738,
1573597761442,
1571955774081,
1571200814731,
1571131018156
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2029/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2029/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2029/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2029/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2029/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2029/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"A defense against of adversarial attacks is presented, which builds mostly on combining known methods in a novel way. While the novelty is somewhat limited, this would be fine if the results were unequivocally good and other parts of the problematic. However, reviewers were not entirely convinced by the results, and had a number of minor complaints with various parts of the paper.\\n\\nIn sum, this paper is not currently at a stage where it can be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"Thanks for your thoughtful comments. Please consider our response.\\n\\n1. \\\"The proposed technique requires retraining.\\\"\\n\\nOur method intends to be attack-agnostic and is thus compared only to such methods.\\nWe agree that it would be most interesting to remain attack-agnostic without retraining. \\nThe two papers you mentioned ([1] and [2]) clearly utilize adversarial optimization (i.e., using adversarial examples for optimization) and are thus not attack-agnostic. Therefore, this is not a fair comparison. To verify that [1] and [2] utilize adversarial optimization,see section 5.2 in [1] and section B2 in supplementary material in [2]\\n\\n2. \\\"There are already well-known margin-based loss functions, such as triplet loss [4], center loss [5], large-Margin softmax loss [6], and many others, which are not mentioned at all.\\\"\\n\\nThanks for the references. We plan to experiment with some of them in future work. We will mention all these margin increasing loss functions.\\n\\n3. In terms of retraining-based detection, higher AUCs have been reported in [3] for a neural fingerprinting method.\\n\\n\\nWe agree that the Neural Fingerprinting paper is very interesting and presents phenomenal results. However, the threat model of that paper is completely different than in our setting. Specifically, in their gray-box threat model, the adversary has no information about the fingerprint instances whatsoever, which amounts to several thousands secret parameters. In our gray-box threat model the adversary is solely unaware of the use of the KDE-based defense mechanism. Thus, no apples-to-apples comparison can be made here.\\n\\nIn the white-box threat model, we measure the *distortion* required to fool our model using the CW-wb attack, while the fingerprint paper they implemented an adaptive version of FGSM, BIM and SPSA and measured *robustness* to a given set of hyper-parameters. Here again, the comparison between the two papers isn't apples-to-apples.\\n\\nWe note that our threat models follow the ones defined in [1][2]\\n[1]Feinman, et,al. Detecting adversarial samples from artifacts\\n[2]Pang et,al. Towards robust detection of adversarial examples\\n\\n4. \\\"Incorrect references:\\nFixed\\n\\n5. \\\"RCE is also a baseline?\\\"\\nYes. To the best of our knowledge the RCE NIPS-2018 paper still presents SOTA results for adversarial detection.\\n\\n6. \\\"Some of the norms are not properly defined\\\"\\n\\nWhile we could use any Lp norm, all our results are achieved with the L2 norm for the embedding and the Frobenius for the Jacobian. Fixed.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Thanks for your thoughtful comments. Please consider our response.\\n\\n1. \\\"relies on a lot of general intuitions and unproven claims about neural networks.\\\" \\\"hard to verify given just Figure 1.\\\"\\n\\nThere was indeed a problem with the color scheme - now fixed (in the new version). \\nWe followed your advice and conducted an analysis showing that these claims directly hold on the embedding space (and not relying on t-sne). The conclusion is qualitatively the same. When considering L2 distance in the embedding space 70% of the attacks on instance x targets one of the two closest classes to x. This will be added to the next version.\\n\\n2. Variance reduction: \\\"Would this still work for a dataset with a large number of classes (e.g., ImageNet)?\\\"\\n\\nWe have checked the variance reduction on Cifar-100 (100 classes), and found that it still works.\\nSpecifically, when examining the embedding clustering quality using the Davies-Bouldin index (DBI) we get an improvement of of 25% \\nThese preliminary results will be included after the rebuttal.\\n\\n3. Evaluation section\\n\\n(a) \\\"It would still be good to provide additional explanations (white-box model)\\\"\\n\\nWe followed the procedure for the KDE spoofing attack in which one sets the hyper-parameters such that all generated adversarial examples are able to fool the targeted model. The required such hyper-parameters are specified in Appendix C.\\n\\n(b) \\\"Optimize this objective using gradient-free attacks\\\"\\n\\nWe included the SPSA gradient-free attack on Cifar-10. The results indicate better performance using our method (significantly better in the case of resiliency). For a perturbation (epsilon) of 0.05, PDM achieved a robustness (adversary fail rate) of 0.4 compared to 0.13 achieved by an RCE trained model and 0.08 achieved by a CE trained model.\\n\\n(c) \\\"I also suggest trying it out on rotation-translation attacks\\\"\\n\\nDone. And results are consistent. Our method is still much better. \\nSpecifically, PDM achieved an AUC score of 0.931 compared to 0.914 achieved by an RCE trained model and 0.89 achieved by a CE trained model.\\n\\n4. \\\"Missing citations.\\\"\\nThanks, added. The new version now includes them all.\"}",
"{\"title\": \"Reply to Review #1\", \"comment\": \"Thanks for your thoughtful comments. Please consider our response.\\n\\n1. We agree that the novelty of our method is in the combination of several techniques that work well in unison. We note that the combination itself is intuitive and, more importantly, it leads to significant and consistent improvements.\\n\\n2. While it would be interesting to explore other metrics,\\nthe use of cosine as a proxy is common many applications. Our choice followed the need to use a differentiable and bounded metric. \\n\\nIn regards to variance reduction, our intention (which will be clarified in the next version) is to consider the class-wise *average* variance, which is less sensitive to outliers. When considering the average the proposed method works very well.\\n\\n3. Cluster distance definition is indeed wrong - now fixed (in the new revision).\\n\\n4. BIM deficiency: We agree it is a deficiency. We expect it however to disappear when using a different Jacobian smoothing method. On the positive side, we have made a significant step toward identifying the source of this problem (see preliminary explanation in Section 4.2),\\nwhich we plan to resolve.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Update after author response:\\nI would like to thank the authors for the thoughtful response, and for addressing some of the concerns raised by the reviewers. The draft appears improved but my concerns about the novelty and interpretability of the work still stand, leading me to keep my assessment unchanged.\\n---------------------------\\n\\nIn this paper, the authors propose a general defense method against adversarial attacks by maximizing an approximate bound on the magnitude of distortion needed to force a misclassification. The authors note that this maximization can be achieved by increasing the margin between class clusters and by reducing the norm of the Jacobian of intermediate layers. Subsequently, they either directly adopt or introduce simple modifications to existing techniques to affect these two factors, showing the robustness of the combined method to several adversarial attacks on MNIST and CIFAR-10 datasets. \\n\\nAs neural networks get deployed for increasingly critical applications, the issue of defense against adversarial attacks becomes progressively relevant. The paper does a good job of motivating a relatively simple approach to the problem based on an approximate bound, and pulls in from different existing methods to build a robust system. The strong points of the paper:\\n1. The paper is clearly written, and the approach is sensible. \\n2. Fairly thorough empirical investigation under different threat models.\\n3. The proposed method performs consistently above the baselines for different experiments.\", \"here_are_some_of_my_concerns\": \"1. The work is somewhat incremental and the novelty mostly lies in pulling a few different methods together that seem to work well in unison.\\n2. The two methods used for increasing the margin don\\u2019t actually optimize that objective directly. The Siamese Loss uses cosine distance as proxy and the variance reduction doesn\\u2019t guarantee increase in margin which is sensitive to outliers. Any improvement achieved thus appears to be an ill-understood side-effect. \\n3. The definition of cluster distance (page 3) looks erroneous. \\n4. The authors note that the proposal doesn\\u2019t work very well for a specific kind of attack (BIM) but don\\u2019t have clear recommendations for improvement. The tentative explanation of why this happens is also somewhat loose. \\n\\nIn summary, I think the paper addresses an interesting problem even though the development is arguably incremental. However, since the unified approach is simple yet novel, and the results fairly promising, I am somewhat inclined to accept this paper.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n========\\nThis paper proposes a defense against adversarial examples that detects perturbed inputs using kernel density estimation. The paper uses a combination of known (and often known to be broken) techniques, and does not provide a fully convincing evaluation.\\nI lean towards rejection of this paper.\\n\\nDetailed comments\\n=================\\nThe idea of increasing robustness by maximizing inter-class margins and minimizing intra-class variance is fairly natural, but the author's discussion of their approach (mainly in sections 1 and 2) is very hand-wavy and relies on a lot of general intuitions and unproven claims about neural networks.\\n\\nFor example, in the introduction, the authors claim:\\n\\n\\\"A trained deep classification model tends to organize instances into clusters in the embedding space, according to class labels. Classes with clusters in close proximity to one another, provide excellent opportunities for attackers to fool the model. This geometry explains the tendency of untargeted attacks to alter the label of a given image to a class adjacent in the embedding space as demonstrated in Figure 1a.\\\"\\n\\nFirst, a t-SNE representation is just a 2D projection of high-dimensional data that is useful for visualization purposes, and one should be careful when extrapolating insights about the actual data from it. For example, distances in the 2D projection do not necessarily correspond directly to distances in the embedding space. \\nThe claim that untargeted attacks lead to a \\\"nearby\\\" cluster are hard to verify given just Figure 1. First, the colors of the labels between 1a and 1b do not seem to match (e.g., Dog is bright green in 1b but this color does not appear in 1a). If the other colors match, then this would seem to suggest that trucks (purple) often get altered to ships (orange). Yet, the two clusters are quite far apart in 1a. It seems hard to say something qualitative here. An actual experiment comparing distances in the embedding space and the tendency of untargeted attacks to move from one class to another would be helpful.\\nThe color scheme in Figure 1b is also unclear. A color bar would help here at the very least.\\n\\nThese observations are then used to justify increasing cluster distance while minimizing cluster variance, but it would be nice to see a more formal argument relating these concepts to the embedding distance.\\n\\nThe technique proposed in Section 3.2. to reduce variance loss estimates each class' variance on each batch. Would this still work for a dataset with a large number of classes (e.g., ImageNet)? For such a dataset, each class will be present less than once in expectation in each batch, which seems problematic.\\n\\nThe plots in Figure 2 don't give much of a sense of how the combination of the different proposed techniques is better than any individual technique. The evaluation compares PDM to RCE, but from Figure 2 one could guess that variance reduction alone (2c) performs very similarly to PDM (2e). An ablation study showing the contribution of each of the individual techniques would be helpful.\\n\\nThe evaluation section could be improved significantly. FGSM, JSMA, and to some extent BIM, are not recommended attacks for evaluating robustness. 
The gray-box and black-box threat model evaluations are also not the most interesting here. Instead, and following the recommendations of Carlini et al. (2019), the evaluation should:\\n\\n- Propose an adaptive attack objective, tailored for the proposed defense in a white-box setting. The authors do this to some extent, by re-using the attack objective from Carlini & Wagner 2017, which targets KDE. It would still be good to provide additional explanations about how the hyperparameters for this attack were set.\\n- Optimize this objective using both gradient-based and gradient-free attacks\\n- As the proposed defense is attack-agnostic, I also suggest trying it out on rotation-translation attacks, as the worst-case attack can always be found by brute-force search\\n\\nOther\\n=====\\n- The citations for adversarial training in the 2nd paragraph of the intro are unusual. Standard references here are for sure the first two below, and maybe some of the other three as is relevant to your work\\n - Szegedy et al. 2013: \\\"intriguing properties of neural networks\\\"\\n - Goodfellow et al. 2014: \\\"Explaining and harnessing adversarial examples\\\"\\n - Kurakin et al. 2016: \\\"Adversarial Machine Learning at Scale\\\"\\n - Madry et al. 2017: \\\"Towards deep learning models resistant to adversarial attacks\\\"\\n - Tramer et al. 2017: \\\"Ensemble Adversarial Training\\\"\\n- The Taylor approximation in (1) does not seem to be well defined. The Jacobian of F is a matrix, so it isn't clear what evaluating that matrix at a point x means.\\n- The \\\"greater yet similar\\\" symbol (e.g., in equation (4)) should be defined formally.\"}",
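Since the defense under review detects perturbed inputs via kernel density estimation on embeddings, a minimal Feinman et al.-style detector can be sketched with scikit-learn as below; the bandwidth value and the thresholding-on-clean-data step are assumptions of this illustration, not details from the paper.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_class_kdes(train_embeddings, train_labels, bandwidth=1.0):
    """One Gaussian KDE per class over training-set embeddings."""
    return {c: KernelDensity(bandwidth=bandwidth).fit(train_embeddings[train_labels == c])
            for c in np.unique(train_labels)}

def kde_score(kdes, embedding, predicted_class):
    """Log-density of an input's embedding under its predicted class.
    Low scores (relative to a threshold set on clean data) flag
    candidate adversarial examples."""
    return kdes[predicted_class].score_samples(embedding.reshape(1, -1))[0]
```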
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"After rebuttal: my rating remains the same.\\nI have read other reviewers' comments and the response. Overall, the contribution of retraining and detection with previously explored kernel density is limited. \\n\\n=================\", \"summary\": \"This paper proposes new regularization techniques to train DNNs, which after training, make the crafted adversarial examples more detectable. The general idea is to minimize the inter-class variance and maximize the intra-class distance, at some feature layer. This involves regularization terms: 1) SiameseLoss, an existing idea of contrastive learning known can increase inter-class margin; 2) reduce variance loss (RVL), a variance term on deep features, and 3) reverse cross entropy (RCE), a previously proposed term for detection purpose. The motivation behind seems intuitive and the empirical results demonstrate moderate improve in detection AUC, compared to one existing technique (e.g RCE).\", \"my_concerns\": \"1. The proposed technique requires retraining the networks to get a few percents of detection improvement. This is a disadvantage compared to standard detection approaches such as [1] and [2] which do not need to retain the network. I am surprised that these standard detection methods were not even mentioned at all. Retraining with fixed loss becomes problematic when the networks have to be trained using their own loss functions due to application-specific reasons. Moreover, the detection performance reported in this paper is not better than the one reported in [2] (ResNet, CIFAR-10, 95.84%) which do not need retraining.\\n\\n2. There are already well-known margin-based loss functions, such as triplet loss [4], center loss [5], large-Margin softmax loss [6], and many others, which are not mentioned at all.\\n\\n3. In terms of retraining-based detection, higher AUCs have been reported in [3] for a neural fingerprinting method.\\n\\n4. Incorrect references to existing works. The second sentence in Intro paragraph 2: Metzen, et al, .... these are not adversarial training. Xu, et al. (feature squeezing) is not a randomization technique.\\n\\n5. The \\\"baseline\\\" method reported in Table 2, is confusing. RCE is also a baseline? You mean conventional cross entropy (CE) training?\\n\\n6. Some of the norms are not properly defined, which can be confusing in adversarial research. For example, from Equation (1) to (4). The \\\"Frobenius norm used here\\\" statement in Equation (3), don't know this F norm comes from.\\n\\n\\n[1] Characterizing adversarial subspaces using local intrinsic dimensionality. ICLR, 2018\\n[2] A simple unified framework for detecting out-of-distribution samples and adversarial attacks. NeurIPS, 2018\\n[3] Detecting Adversarial Examples via Neural Fingerprinting. arXiv preprint arXiv:1803.03870, 2018\\n[4] Facenet: A unified embedding for face recognition and clustering. CVPR, 2015.\\n[5] A Discriminative Feature Learning Approach for Deep Face Recognition. ECCV, 2016.\\n[6] Large-Margin Softmax Loss for Convolutional Neural Networks. ICML 2016.\"}"
]
} |
S1gwC1StwS | Barcodes as summary of objective functions' topology | [
"Serguei Barannikov",
"Alexander Korotin",
"Dmitry Oganesyan",
"Daniil Emtsev",
"Evgeny Burnaev"
] | We apply canonical forms of gradient complexes (barcodes) to explore neural networks loss surfaces. We present an algorithm for calculations of the objective function's barcodes of minima. Our experiments confirm two principal observations: (1) the barcodes of minima are located in a small lower part of the range of values of objective function and (2) increase of the neural network's depth brings down the minima's barcodes. This has natural implications for the neural network learning and the ability to generalize. | [
"Barcodes",
"canonical form invariants",
"loss surface",
"gradient complexes"
] | Reject | https://openreview.net/pdf?id=S1gwC1StwS | https://openreview.net/forum?id=S1gwC1StwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"onVKCmjRJm",
"HkxncmdbsS",
"SJxwR-dbiS",
"HJekGJOZjS",
"SkxeCjmJqS",
"BkeqaMZJ5S",
"BklZP2GiFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738638,
1573122963672,
1573122510803,
1573121798685,
1571924936346,
1571914434164,
1571658841021
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2028/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2028/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2028/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2028/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2028/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2028/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The main concern raised by the reviewers is that the paper is difficult to read and potentially unclear. Therefore, the area chair read the paper, and also found it fairly dense and challenging to read. While there may be important discoveries in the paper, the paper in its current form makes it too difficult to read. Since four reviewers (including the AC) struggled to understand the paper, we believe the presentation of the paper should be improved. In particular, the claims of the paper should be better put into context.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for the feedback.\\nIt seems that there is some misunderstanding here concerning the definition of the minima barcodes. The barcode is not quite associated with the intuition behind the notion of the \\u00abdepth\\u00bb of local minima. We associate with each minimum not the lowest index one saddle (which can often lie on a path to a higher minimum) but the minimal value 1-saddle among the highest points on paths to different minima with smaller value. We are adjusting the overall exposition on this and on other specific points that you mentioned.\", \"some_general_remarks\": \"Our aim was not just to demonstrate the computation of barcodes of objective functions, but to attract the attention of ML community to this tool. This notion allows to place many different concepts and experiments into a bigger overall picture which improves our understanding and gives useful insights.\", \"addressing_your_questions_regarding_the_scalability_of_our_method_we_would_like_to_mention_the_following\": \"1)All existing topological data analysis packages have scalability issues. Our algorithm permits to raise both the dimension of function\\u2019s input, from 4 to 15, and the number of points, from 10^6 to 10^8;\\n2)In the next version of this paper we are adding more experiments with computation of barcodes for loss functions in these dimensions which shows usefulness of this approach in other ML problems;\\n3)We currently prepare a sequel paper to this paper where we show the possibility of computation of the barcodes for large-scale modern networks. But even from practical point of view it is important first to understand the behavior of barcodes in simple examples where all hyper-parameters optimization schemes can be easily turned off.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thanks for your feedback.\\nThe barcode is not the \\\"list of pairs of local minima and their connected saddles\\\". Please note that we consider specific 1-to-1 correspondence between the set of minima and the set of 1-saddles of \\\"+\\\" type. There can be thousands of local minima and millions of saddle points on paths between fixed pair of minima, even in small dimensions. The barcode of minima extracts the most essential part from this combinatorially very complex information in the form of simple 1-to-1 correspondence, for each minimum generically there is unique 1-saddle. The full barcode is the similar decomposition concerning critical points of arbitrary index. To the best of our knowledge this representation of topology of objective functions via canonical 1-to-1 correspondence between minima and saddle points was not studied before in machine learning literature. \\n\\nWe are adding a clarification on this and on other points that you mentioned.\", \"some_general_remarks\": \"Our aim was not just to demonstrate the computation of barcodes of objective functions, but to attract the attention of ML community to this tool. This notion allows to place many different concepts and experiments into a bigger overall picture which improves our understanding and gives useful insights.\", \"addressing_your_questions_regarding_the_scalability_of_our_method_we_would_like_to_mention_the_following\": \"1)All existing topological data analysis packages have scalability issues. Our algorithm permits to raise both the dimension of function\\u2019s input, from 4 to 15, and the number of points, from 10^6 to 10^8;\\n2)In the next version of this paper we are adding more experiments with computation of barcodes for loss functions in these dimensions (e.g. related with hyper parameter optimization) which shows usefulness of this approach in other ML problems;\\n3)We currently prepare a sequel paper to this paper where we show the possibility of computation of the barcodes for large-scale modern networks. But even from practical point of view it is important to understand first the behavior of barcodes in simplest examples where all hyper-parameters optimization schemes can be easily turned off.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for your comments.\\nAs explained in section 3 we actually do not use any grid. The algorithm for computing barcodes of arbitrary function that we developed works with randomly chosen, or specifically chosen, point cloud in the function\\u2019s input. It does not require a grid, thus, it expands the calculations of the barcodes of functions beyond the dimensions of the input accessible before. To the best of our knowledge such algorithm was not described in literature. If it was, we would be grateful for a reference. \\n\\n----------------------------------\", \"here_are_answers_to_your_more_specific_minor_questions\": \"> [...]What do you mean by gradient flow? [...]\\nThe gradient flow is the standard notion, which if needed can be easily looked up in the cited literature. It is the flow generated by the gradient vector field, the standard vector field used in modern optimization methods. \\n\\n> [...] What do you mean by \\\"TDA package\\\"?[...]\\nThe reference to the paper \\\"Introduction to the R package TDA\\\" is right next to the mentioning of this package.\\n\\n> [...]Right before Theorem 2.3., what does the notation F_sC_* mean? This needs to be introduced somewhere[...] \\nFrom the text of the paper right before the Theorem 2.3: \\\"...an increasing sequence of subcomplexes (R\\u2212filtration) FsC\\u2217 \\u2282 FrC\\u2217\\u2282...\\\" \\nso as stated in the paper, FsC\\u2217 \\u2282 FrC\\u2217\\u2282... is indeed an increasing sequence of subcomplexes.\\n---------------------------------------\", \"some_general_remarks\": \"Our aim was not just to demonstrate the computation of barcodes of objective functions, but to attract the attention of ML community to this tool. This notion allows to place many different concepts and experiments into a bigger overall picture which improves our understanding and gives useful insights.\", \"addressing_your_questions_regarding_the_scalability_of_our_method_we_would_like_to_mention_the_following\": \"1)All existing topological data analysis packages have scalability issues. Our algorithm permits to raise both the dimension of function\\u2019s input, from 4 to 15, and the number of points, from 10^6 to 10^8;\\n2)In the next version of this paper we are adding more experiments with computation of barcodes for loss functions in these dimensions which shows usefulness of this approach in other ML problems;\\n3)We currently prepare a sequel paper to this paper where we show the possibility of computation of the barcodes for large-scale modern networks. But even from practical point of view it is important to understand first the behavior of barcodes in simplest examples where all hyper-parameters optimization schemes can be easily turned off.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces the notion of barcodes as a topological invariant of loss surfaces that encodes the \\\"depth\\\" of local minima by associating to each minimum the lowest index-one saddle. An algorithm is presented for the computation of barcodes, and some small-scale experiments are conducted. For very small neural networks, the barcodes are found to live at small loss values, and the authors argue that this suggests it may be hard to get stuck in a suboptimal local minimum.\\n\\nI believe the concept of barcodes will be new to most members of the ICLR community (at least it was to me), and I appreciate the authors' effort to convey the ideas through multiple definitions in Section 2. I wasn't able to fully appreciate the importance of Definition 3, and Definitions 1 and 2 were tough to digest owing to imprecise language, but I think I got the main point. I was also unable to fully comprehend the definitions of \\\"birth\\\" and \\\"death\\\" in this context. I'd strongly encourage the authors to improve the readability of this section so that non-experts can follow the story.\\n\\nIt seems like the main contribution is a new algorithm for computing barcodes of minima. I am unfamiliar with prior work in this direction, and I was also unable from the paper to infer what the main improvements were relative to the existing algorithms. I'd encourage the authors to state their explicit algorithmic improvements, and to demonstrate empirically that the new algorithm outperforms the prior ones in the expected ways.\\n\\nThe main experiments are on extremely tiny neural networks, presumably owing to computational restrictions. The authors state that \\\"it is possible to apply it to large-scale modern neural networks\\\", but it's not clear to me how that would work or what additional algorithmic improvements (if any) would need to be made in order to do so. I don't think that the results on tiny neural networks have much relevance to practice, so I think the empirical data presented in this paper will have very limited impact. If there were results for practical models, it would be a different story. So I'd encourage the authors to devote additional effort to scaling up the method for use on practical neural network architectures.\\n\\nOverall, I think there may be some really nice ideas in this paper that could help shape our understanding of neural network loss surfaces, but the current paper does not explore those ideas fully and does not convey them in a sufficiently clear manner. I hope to see an improved version of this paper at a future conference, but I cannot recommend acceptance of this version to ICLR.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper aims to study the topology of loss surfaces of neural networks using tools from algebraic topology. From what I understood, the idea is to effectively (1) take a grid over the parameters of a function (say a parameters of a neural net), (2) evaluate the function at those points, (3) compute sub-levelset persistent homology and (4) study the resulting barcode (for 0/1-dim features) (i.e., the mentioned \\\"canonical form\\\" invariants). Some experiments are presented on extremely simple toy data.\\n\\nOverall, the paper is very hard to read, as different concepts and terminology appear all over the place without a precise definition (see comments below). Given the problems in the writing of the paper, my assessment is that this idea boils down to computing persistent homology of the sub-levelset filtration of the loss surface sampled at fixed parameter realizations. I do not think that this will be feasible to do, even for small-scale real-world neural networks, simply due to the difficulty of finding a suitable grid, let alone the vast number of function evaluations involved.\\n\\nThe paper is also unclear in many parts. A selection is listed below:\\n\\n(1) What do you mean by gradient flow? One can define a gradient flow in a linear space X and for a function F: X->R, e.g., as a smooth curve R->X, such that x'(t) = -\\\\nabla F(x(t)); is that what is meant? \\n\\n(2) What do you mean by \\\"TDA package\\\"? There are many TDA packages these days (maybe the CRAN TDA package?)\\n\\n(3) \\\"It was tested in dimensions up to 16 ...\\\" What is meant by dimension here? The dimensionality of the parameter space?\\n\\n(4) The author's talk about the \\\"minima's barcode\\\" - I have no idea what is meant by that either; the barcode is the result of sub-levelset persistent homology of a function -> it's not associated to a minima.\\n\\n(5) Is Theorem 2.3. not just a restatement of a theorem from Barannikov '94? At least the proof in the appendix seems to be .\\n\\n(6) Right before Theorem 2.3., what does the notation F_sC_* mean? This needs to be introduced somewhere.\\n\\nFrom my perspective, the whole story revolves around how to compute persistence barcodes from the sub-levelset filtration of the loss surface, obtained from function values taken on a grid over the parameters. The paper devotes quite some time to the introduction of these concepts, but not in a very clear or understandable manner. The experiments are effectively done on toy data, which is fine, but the paper stops at that point. I do not buy the argument that \\\"it is possible to apply it [the method] to large-scale modern neural networks\\\". Without a clear strategy to extend this, or at least some preliminary \\\"larger\\\"-scale results, the paper does not meet the ICLR threshold. The more theoretical part is too convoluted and, from my perspective, just a restatement of earlier results.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work is focused on topological characterization of target surfaces of optimization objectives (i.e. loss functions) by computing so called barcodes, which are lists of pairs of local minima and their connected saddle points. The authors claim that the barcodes constitute a representation of target objectives that is invariant under homeomorphisms of input to the objectives. The authors present an algorithm for computing the barcodes from graph-based representation of a surface, and present barcodes computed on toy examples in numerical analysis.\\n\\nIn my opinion, the main contribution of the work i.e. creation of barcodes is based on a rather trivial idea. The concept of characterizing optimization objectives through pairs of local minima and one-index saddle points is straightforward given that one can (thoroughly if not exhaustively) compute them in a computationally feasible manner; this is however hardy the case in any realistic scenario. I therefore struggle to see how the idea can be practically significant. Maybe the authors can put more emphasis on the theoretical aspect of their work, which is about the invariance nature of barcodes. They can for instance demonstrate how one can exploit the invariance property of barcodes for parameter optimization. \\n\\nThe authors can consider application of their work to hyper-parameter optimization, which is usually low-dimensional and one can also compare with other approaches such as Gaussian processes or other Bayesian methodologies. \\n\\nIn numerical experiments, for the toy task solved using neural network I don't find it very surprising that the barcodes descend lower as the capacity of the network is increased. Can the authors further clarify why it is a significant finding for them?\"}"
]
} |
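Both reviews of this record reduce the paper to one computation: the 0-dimensional sub-levelset persistence barcode, which pairs each non-global local minimum with the saddle value at which its basin merges into an older one. As a concrete reference point — not the authors' code; the function name and toy data below are illustrative — here is a minimal NumPy sketch for a function sampled on a line graph, using union-find and the elder rule:

```python
import numpy as np

def sublevel_barcode_1d(f):
    """0-dim barcode of the sub-levelset filtration of f sampled on a
    line graph: each finite bar pairs a local minimum (birth) with the
    saddle value at which its component merges into an older one."""
    n = len(f)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth, alive, bars = {}, set(), []
    for v in np.argsort(f):           # vertices enter by increasing value
        birth[v] = f[v]
        alive.add(v)
        for u in (v - 1, v + 1):      # neighbours on the line graph
            if 0 <= u < n and u in alive:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                # elder rule: the younger component dies here
                young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
                death = max(f[u], f[v])
                if birth[young] < death:          # skip zero-length bars
                    bars.append((birth[young], death))
                parent[young] = old
    root = find(int(np.argmin(f)))
    bars.append((birth[root], np.inf))            # global min never dies
    return sorted(bars)

# toy "loss curve": two minima (0.5 and 0.2) separated by a saddle at 1.0
print(sublevel_barcode_1d(np.array([2.0, 0.5, 1.0, 0.2, 3.0])))
# -> [(0.2, inf), (0.5, 1.0)]
```

On a grid over network parameters the same idea applies with the two line-graph neighbours replaced by the grid neighbourhood; the number of grid points needed is exactly the feasibility concern Review #1 raises.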
SylL0krYPS | Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control | [
"Tsui-Wei Weng",
"Krishnamurthy (Dj) Dvijotham*",
"Jonathan Uesato*",
"Kai Xiao*",
"Sven Gowal*",
"Robert Stanforth*",
"Pushmeet Kohli"
] | Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study the problem of adversarial attacks on continuous-control agents in deep RL and propose the first two-step algorithm based on learned model dynamics. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free attack baselines in degrading agent performance as well as driving agents to unsafe states. | [
"deep learning",
"reinforcement learning",
"robustness",
"adversarial examples"
] | Accept (Poster) | https://openreview.net/pdf?id=SylL0krYPS | https://openreview.net/forum?id=SylL0krYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"t9FO16glvu",
"Hkxbh_c3sB",
"BJga0IY3jB",
"SygTDK73oH",
"Hke-i_X3jr",
"rkgXl_Qnir",
"HJllfUXhoS",
"HkgbGBQhir",
"Bye037XhoB",
"SkgbcLA1cS",
"S1eTSCxTtr",
"ByljlkVFFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738609,
1573853353312,
1573848788530,
1573824868706,
1573824665436,
1573824490534,
1573824008069,
1573823753003,
1573823414133,
1571968649124,
1571782213088,
1571532531068
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2026/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2026/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2026/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2026/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2026/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2026/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2026/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2026/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2026/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2026/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2026/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper considers adversarial attacks in continuous action model-based deep reinforcement learning. An optimisation-based approach is presented, and evaluated on Mujoco tasks.\\n\\nThere were two main concerns from the reviewers. The first was that the approach requires strong assumptions, but in the rebuttal some relaxations were demonstrated (e.g., not attacking every step). Additionally, there were issues raised with the choice of baselines, but in the discussion the reviewers did not agree on any other reasonable baselines to use.\\n\\nThis is a novel and interesting contribution nonetheless, which could open the field to much additional discussion, and so should be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reviewer response\", \"comment\": \"Thanks for the response and clarifications, especially for the detailed response to R3. I have increased my score to a weak accept as I agree that this is a relevant and important topic to study and that the experiments shown in this paper are novel in some way -- however I am still feeling borderline about the paper as it seems like it is set up in a way that causes the baselines to trivially fail giving the existing body of adversarial attack literature.\"}",
"{\"title\": \"Additional experiment to Reviewer 3 #1\", \"comment\": \"#1 black-box attacks \\n\\nFollowing your suggestion, we managed to perform an additional experiment on the black-box transfer attack in walker domain with task stand with planning length T = 5. The experiment is conducted in 9 different runs and we report the avg total reward and avg total loss below.\\n\\nWithout any attacks, Agent 1 has total reward of 995 and the Agent 2 has total reward of 994. Agent 1 and 2 are both d4pg agents with the same architecture but different network parameters.\\n\\n* If we perform white box attack on the Agent 1 in 9 different runs, then the mean of total loss is 172 with std 59, and the mean of total reward is 197 with std 28. \\n\\n* If we perform black box attack on the Agent 2, using the perturbations generated from Agent 1 (since under the black-box assumption, we don't have the parameters of targeted Agent 2), the mean of total loss is 1550 with std 47 and the mean of total reward is 985 with std 20. \\n\\nIt can be seen that the transfer attack is not as effective as our white box attack on Agent 1. A possible reason could be that the two Agents may be very different despite both of them perform well in the normal setting without attacks. Hence, we believe the black-box setting would certainly be an interesting future direction that requires a more comprehensive study and investigation.\"}",
"{\"title\": \"Reply to Reviewer 3, part 3\", \"comment\": \"(continued response to #4 Baseline implementation)\\n\\n2. Yes, the reviewer's interpretation is correct -- we assume the baseline adversary has an \\\"unfair advantage\\\" since they have access to the true reward (and then take the best attack result among 1000 trials), whereas our techniques do not have access to this information. Without this advantage, the baseline adversaries (rand-B, rand-U) may be weaker if they use their learned model to find the best attack sequence. In any case, Table 1 and 2, as well as the above additional experiments (#2 less frequency attack, #3 learned model) all demonstrate that our proposed attack can successfully uncover vulnerabilities of deep RL agents while the baselines cannot. \\n\\nFollowing your suggestion, we have added the above details to the appendix.\\n\\n\\n#5 Sample complexity comparison\\n\\nThanks for your comment! Yes, we agree the applications/setting are different, and our point was to highlight a qualitative difference in the sample complexity regimes (ours ~1000 vs Gleave et. al ~100K or Uesato et. al >1M), even though these clearly aren't head-to-head comparisons. This is in-line with the theoretical perspective that we might expect model-based approaches to be more sample-efficient than their model-free counterparts. Following your suggestion, we have added a footnote to clarify that this is a qualitative comparison, and we have also removed the claim at the end of the conclusion.\\n\\n\\n#6 Score of learned policy without attacks and other learning algorithms\\n\\nWe use default total timesteps = 1000, and the maximum total reward is 1000. We report the total reward of the d4pg agents used in this paper below. The agents are well-trained and have total reward close to 1000, which outperforms agents trained by other learning algorithms on the same tasks (e.g. ddpg, A3C in sec 6, Tassa et. al 2018; ppo in sec 5, Abdolmaleki et. al 2018), and thus the agents in this paper can be regarded as state-of-the-art RL agents for these continuous control domain tasks. The attack results in Table 1 and 2 in our manuscript are hence suggested to be representative. \\n\\nDomain \\tTask Total reward\\nwalker\\t stand 994\\nwalker \\twalk \\t987\\nhumanoid stand \\t972\\nhumanoid walk \\t967\\ncartpole \\tbalance 1000\\ncartpole \\tswingup 883\\nfish \\tupright 962\", \"reference\": \"- Tassa et. al 2018, Deepmind control suite.\\n- Abdolmaleki et. al 2018, Maximum A Posteriori Policy Optimisation.\\n\\n\\n#7 Figure 3 clarification\\n\\nThanks for your suggestion, we have moved Fig 3 to the appendix. The y-axis is the total head height loss (originally described in the title of figure as well as in the caption \\\"y-axis is the total loss of the corresponding initialization\\\"), and the x-axis is the k_th run. As discussed in Sec 4.2, the meaning of Fig 3 is to show how the accuracy of the learned models affects our proposed technique: \\n\\n(1) we first learned 3 models with 3 different number of samples: {5e5, 1e6, 5e6} and we found that with more training samples (e.g. 5e6, equivalently 5000 episodes), we are able to learn a more accurate model than the one with 5e5 training samples. \\n\\n(2) we plot the attack results of total loss for our technique with 3 learned models (denoted as PGD, num_train) as well as the baselines (randU, randB, Flip) on 10 different runs (initializations). 
\\n\\nWe show with the more accurate learned model (5e6 training samples), we are able to achieve a stronger attack (the total losses are at the order of 50-200 over 10 different runs) than the less accurate learned model (e.g. 5e5 training samples). However, even with a less accurate learned model, the total losses are on the order of 400-700, which already outperforms the best baselines by a margin of 1.3-2 times. This result in Fig 3 also answers reviewer's comments #3 that indeed, to achieve effective attack, a very accurate model isn't necessarily needed in our proposed method. Of course, if the learned model is more accurate, then we are able to degrade agent's performance even more. \\n\\n\\nWe appreciate your constructive feedback and hope our additional experimental results and clarification can convince you about the contributions of this paper.\"}",
"{\"title\": \"Reply to Reviewer 3, part 2\", \"comment\": \"(continued response of #3 systematically test what happens as the model becomes less accurate over time) \\n\\n2. domain: Walker, task: walk (observation perturbation)\\nFor Walker walk task, on the other hand, it is much more complicated, and our learned models are less accurate (test error on the order of 1e-1). For 10 steps, the prediction error of our learned model compared to the true model is already more than 100%, and hence using a small T for planning would be more reasonable. We report the results for T = {1,2,5,10,15,20} over 10 runs below and compare the total loss and total reward with the baseline results. \\n\\nOur results show that using T = 1 indeed gives the best attack results (decreases the loss by 3.2X and decreases the reward by 3.6X compared to the best baseline (randB)) and the attack becomes less powerful as T increases. Nevertheless, even with T = 10, our proposed technique still outperforms the best baseline (randB) by 1.4X both in the total loss and total reward.\\n\\n(a) Total loss (smaller means stronger attack)\\n \\tmean std med min max\\nOurs, T=1 \\t 468 79 489 286 567\\nOurs, T=2 \\t 604 31 611 535 643\\nOurs, T=5 \\t 761 65 771 617 837\\nOurs, T=10 \\t 881 68 886 753 975\\nOurs, T=15 \\t 874 93 891 723 1002\\nOurs, T=20 \\t 937 62 950 804 993\\nrandU \\t 1517 22 1522 1461 1542\\nrandB \\t 1231 31 1234 1189 1272 \\nflip \\t 1601 18 1604 1562 1619 \\n\\n(b) Total reward (smaller means stronger attack)\\n \\tmean std med min max\\nOurs, T=1 \\t 222 45 227 135 300\\nOurs, T=2 \\t 353 51 362 253 441\\nOurs, T=5 \\t 483 60 496 348 540\\nOurs, T=10 \\t 568 48 579 469 623\\nOurs, T=15 \\t 583 58 604 483 647\\nOurs, T=20 \\t 634 41 638 559 687\\nrandU \\t 941 23 945 885 965\\nrandB \\t 796 21 796 766 824 \\nflip \\t 981 9 984 961 991 \\n\\n3. domain: Walker, task: stand (observation perturbation)\\nFor Walker stand, the learned model is slightly more accurate than the walker.walk model (we use the test MSE error for evaluation, the test MSE for walker.walk is 0.447 while the test MSE for walker.stand is 0.089), and we also study the effect of T on our proposed method over 10 runs. \\n\\nInterestingly, our results show that with the more accurate walker.stand model (compared to the walker.walk model), T = 10 gives the best avg total loss (84) and best avg total reward (153), which are 13.4X and 4.9X smaller than the best baseline, randB (avg total loss: 1126, avg total reward: 744). Again, note that even with T = 1, the worst choice among all T = {1,2,5,10,15,20}, the result is still 3.5X and 2.9X better than the best baselines, demonstrating the effectiveness of our proposed approach. \\n\\n(a) Total loss (smaller means stronger attack)\\n \\tmean std med min max\\nOurs, T=1 \\t 322 84 319 202 453\\nOurs, T=2 \\t 279 55 264 223 391\\nOurs, T=5 \\t 163 53 154 93 246\\nOurs, T=10 \\t 84 46 67 42 165\\nOurs, T=15 \\t 101 40 82 57 157\\nOurs, T=20 \\t 117 41 98 68 193\\nrandU \\t 1462 70 1454 1341 1561\\nrandB \\t 1126 86 1130 973 1244 \\nflip \\t 1458 24 1451 1428 1501 \\n\\n(b) Total reward (smaller means stronger attack)\\n \\tmean std med min max\\nOurs, T=1 \\t 257 67 265 163 366\\nOurs, T=2 \\t 246 40 232 200 322\\nOurs, T=5 \\t 193 27 188 154 238\\nOurs, T=10 \\t 153 24 142 132 194\\nOurs, T=15 \\t 164 23 152 143 201\\nOurs, T=20 \\t 170 21 161 149 207\\nrandU \\t 938 41 932 866 999\\nrandB \\t 744 48 744 664 809 \\nflip \\t 993 8 997 979 999 \\n\\n\\n#4 Baseline implementation\\n1. 
For the baselines (rand-U and rand-B), the adversary generates 1000 trajectories with random noise directly and we report the best loss/reward at the end of each episode. The detailed steps are listed below:\", \"step_1\": \"the perturbations are generated from a uniform distribution or a bernoulli distribution within the range [-eps, eps] for each trajectory, and we record the total reward and total loss for each trajectory from the true environment (the MuJoCo simulator).\", \"step_2\": \"take the best (lowest) total reward/loss among 1000 trajectories and report in Table 1 and 2 in our original manuscript.\\n\\n(to be continued in part 3)\"}",
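The two baseline steps above translate directly into code. A sketch (the `episode_reward` callable, which rolls a fixed perturbation sequence out in the true simulator and returns the episode's total reward, is an assumed interface, not the authors' implementation):

```python
import numpy as np

def random_attack_baseline(episode_reward, obs_shape, horizon, eps,
                           dist="uniform", n_trials=1000, seed=0):
    """rand-U / rand-B as described: sample n_trials random perturbation
    sequences in [-eps, eps], evaluate each in the true environment, and
    keep the best (lowest) total reward among them."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(n_trials):
        if dist == "uniform":                              # rand-U
            deltas = rng.uniform(-eps, eps, (horizon,) + obs_shape)
        else:                                              # rand-B
            deltas = eps * rng.choice([-1.0, 1.0], (horizon,) + obs_shape)
        best = min(best, episode_reward(deltas))
    return best
```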
"{\"title\": \"Reply to Reviewer 3, part 1\", \"comment\": \"Thanks for your detailed response and constructive suggestions to help us improve the original manuscript! We have incorporated all your comments accordingly into the revised manuscript and we hope our additional experimental results and clarification/explanations can convince you about the contributions of this paper.\\n\\n#1 black-box attacks \\nThanks for your insight! Yes, our proposed technique is a white-box attack, which requires the information of pre-trained agents. The setting of black-box attacks in Huang et. al is interesting but is not the main focus of this paper. We have added a remark and included the black-box attack as an interesting direction in our future work section in the appendix. \\n\\n#2 attacks with less frequency\\nFollowing your suggestion, we have performed additional experiments on the setting where attackers are less powerful -- they can only attack every 2 timestep instead of every transition. We conduct experiments for 10 different runs with different initial states in the walker domain with task stand and report the following statistics of 10 runs (mean, standard deviation, median, min and max). \\n\\nThe results show that our proposed attack is indeed much stronger than the baselines even when the attacker\\u2019s power is limited (e.g. can only attack every 2 timesteps): \\n1. Compared to the best results among three baselines, our attack gives 1.53X smaller avg total loss (ours: 934 v.s. best baseline: 1431)\\n2. The mean reward of all the baselines is close to perfect reward, while our attacks can achieve 1.43X smaller avg total reward compared to the best avg reward from baseline (ours: 648 v.s. best baseline: 924). \\n\\n(a) Total loss (smaller means stronger attack)\\n mean std med min max\\nOurs 934 152 886 769 1187\\nrandU 1511 35 1502 1468 1558\\nrandB 1431 77 1430 1282 1541 \\nflip 1532 15 1537 1496 1546 \\n\\n(b) Total reward (smaller means stronger attack)\\n mean std med min max\\nOurs 648 95 622 559 799\\nrandU 970 20 964 947 999\\nrandB 924 41 923 840 981 \\nflip 996 5 999 984 1000 \\n\\n\\n#3 systematically test what happens as the model becomes less accurate over time\\n\\nThanks for your insights. Yes, we have some discussion on the effect of model accuracy in the Sec 4.2 (Evaluating on the efficiency of attack) and Fig 3 in our original manuscript. We show that even with a less accurate learned model (with only 5e5 training samples, equivalently 500 episodes), our proposed attack can already successfully degrade the agent performance by a factor of 1.3-2X compared to the best baseline results. This suggests that an accurate model isn't necessarily needed in our proposed method to achieve effective attacks -- note that with a more accurate learned model (with 5e6 training samples), our proposed attack can further degrade the agent performance by 2X. Please also see our reply #7 Figure 3 clarification. \\n\\nIn addition, to investigate model effect over time, we have performed additional experiments on the planning/unroll length T for 3 example domains and tasks, each with 10 different initializations. The results are summarized below.\\n\\n1. domain: Cartpole, task: balance (action perturbation)\\nFor the Cartpole balance task, our learned models are very accurate (test MSE error on the order of 1e-6). We observed that the prediction error of our learned model compared to the true model (the MuJoCo simulator) is around 10% for 100 steps. 
Hence, we can choose the planning steps T to be very large (e.g. 20-100) and our experiments show that the result of T = 100 is slightly better:\\n\\n(a) Total loss (smaller means stronger attack)\\n \\t mean std med min max\\nOurs, T=20 \\t2173 51 2189 2087 2239\\nOurs, T=100 \\t1951 113 1924 1851 2192\\nrandU \\t 4000 0 4000 4000 4000\\nrandB \\t 3999 0 3999 3999 3999 \\nflip \\t 3046 1005 3074 2060 3999\\n\\n\\n(to be continued in part 2)\"}",
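The "~10% error after 100 steps" style of measurement quoted above can be made precise with an open-loop comparison; a sketch (the two one-step transition functions are assumed interfaces, not the authors' code):

```python
import numpy as np

def open_loop_error(true_step, model_step, s0, actions):
    """Relative open-loop prediction error after each unroll step: the
    true simulator and the learned model both start from s0 and replay
    the same action sequence, so errs[t] measures how far the model has
    drifted after t+1 steps.  `true_step` / `model_step` are assumed
    one-step transition functions s_{t+1} = f(s_t, a_t)."""
    s_true, s_model, errs = s0, s0, []
    for a in actions:
        s_true = true_step(s_true, a)
        s_model = model_step(s_model, a)
        errs.append(np.linalg.norm(s_model - s_true) /
                    (np.linalg.norm(s_true) + 1e-8))
    return errs   # e.g. errs[99] ~ 0.1 would match "~10% after 100 steps"
```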
"{\"title\": \"Reply to Reviewer 2, part 2\", \"comment\": \"(continued response to #2 ablation study on T)\\n\\n2. domain: Walker, task: walk (observation perturbation)\\n\\n(a) Total loss (smaller means stronger attack)\\n \\tmean std med min max\\nOurs, T=1 \\t 468 79 489 286 567\\nOurs, T=2 \\t 604 31 611 535 643\\nOurs, T=5 \\t 761 65 771 617 837\\nOurs, T=10 \\t 881 68 886 753 975\\nOurs, T=15 \\t 874 93 891 723 1002\\nOurs, T=20 \\t 937 62 950 804 993\\nrandU \\t 1517 22 1522 1461 1542\\nrandB \\t 1231 31 1234 1189 1272 \\nflip \\t 1601 18 1604 1562 1619 \\n\\n(b) Total reward (smaller means stronger attack)\\n \\tmean std med min max\\nOurs, T=1 \\t 222 45 227 135 300\\nOurs, T=2 \\t 353 51 362 253 441\\nOurs, T=5 \\t 483 60 496 348 540\\nOurs, T=10 \\t 568 48 579 469 623\\nOurs, T=15 \\t 583 58 604 483 647\\nOurs, T=20 \\t 634 41 638 559 687\\nrandU \\t 941 23 945 885 965\\nrandB \\t 796 21 796 766 824 \\nflip \\t 981 9 984 961 991 \\n\\n3. domain: Cartpole, task: balance (action perturbation)\\n\\n(a) Total loss (smaller means stronger attack)\\n \\t mean std med min max\\nOurs, T=20 \\t2173 51 2189 2087 2239\\nOurs, T=100 \\t1951 113 1924 1851 2192\\nrandU \\t 4000 0 4000 4000 4000\\nrandB \\t 3999 0 3999 3999 3999 \\nflip \\t 3046 1005 3074 2060 3999 \\n\\n\\n#3 learning reward functions\\nThanks for your insight! Yes, it is indeed an interesting point, which we also discussed in Sec 4.2 of original manuscript. While we believe learning a surrogate of reward function can help to lower the true reward during PGD attack, it is beyond the scope of this paper. We have added a paragraph of future works in appendix and include this approach as an interesting future work.\"}",
"{\"title\": \"Reply to Reviewer 2, part 1\", \"comment\": \"Thanks for your positive feedback!\\n\\n#1 adversarial training on RL agents\\nYes, we believe this is definitely a very interesting research direction. From our point of view, we think there are three important challenges that need to be addressed to study adversarial training of RL agents along with our proposed attacks: \\n\\n1. The adversary and model need to be jointly updated. How do we balance these two updates, and make sure the adversary is well-trained at each point in training?\\n2. How do we avoid cycles in the training process due to the agent overfitting to the current adversary? \\n3. How do we ensure the adversary doesn't overly prevent exploration / balance unperturbed vs. robust performance?\\n\\nWhile we think adversarial training may help to train a more robust agent in general (similar to its image classification counterpart [Madry et. al 2018]), we believe it would be better to study this question systematically and rigorously in a separate paper due to the above challenges and rebuttal time constraints. We hope to focus this paper on the robustness evaluation of deep RL agent because our goal is to uncover model vulnerability in the field of deep RL -- to bring researchers awareness of the potential safety issues of state-of-the-art RL agents -- and we hope our results can better motivate adversarial training as an important next step to help to train more robust deep RL agents. We have updated our manuscript to discuss the importance of this direction and identify some of the challenges and nuances in the future work section in the appendix.\", \"reference\": \"- Madry et. al, Towards Deep Learning Models Resistant to Adversarial Attacks, ICLR 2018\\n\\n#2 ablation study on T\\nFollowing your suggestion, we perform ablation studies on T (the planning length) of our proposed model-based attack in three examples (walker.stand, walker.walk, cartpole.balance) and conduct experiments for 10 different runs with different initial states. The following statistics of 10 runs (mean, standard deviation, median, min and max) are reported. \\n\\nThe main takeaway from these experiments is that when the model is accurate, we can use larger T in our proposed attack algorithm; while when the model is less accurate, shorter unrolls (smaller T) are more effective (as in the Walker.walk example). However, even under the most unfavorable hyperparameters, our proposed attack still outperforms all the baselines by a large margin. For example, using the least accurate model with T=20 in Walker.stand example, decreases the model\\u2019s avg reward to 170 compared to 744 for the best baseline. In general, we suggest choosing a moderate T that can leverage the learned model while remaining robust to errors in the learned model.\", \"our_results\": \"1. 
domain: Walker, task: stand (observation perturbation), test MSE: \\n(a) Total loss (smaller means stronger attack)\\n \\tmean std med min max\\nOurs, T=1 \\t 322 84 319 202 453\\nOurs, T=2 \\t 279 55 264 223 391\\nOurs, T=5 \\t 163 53 154 93 246\\nOurs, T=10 \\t 84 46 67 42 165\\nOurs, T=15 \\t 101 40 82 57 157\\nOurs, T=20 \\t 117 41 98 68 193\\nrandU \\t 1462 70 1454 1341 1561\\nrandB \\t 1126 86 1130 973 1244 \\nflip \\t 1458 24 1451 1428 1501 \\n\\n(b) Total reward (smaller means stronger attack)\\n \\tmean std med min max\\nOurs, T=1 \\t 257 67 265 163 366\\nOurs, T=2 \\t 246 40 232 200 322\\nOurs, T=5 \\t 193 27 188 154 238\\nOurs, T=10 \\t 153 24 142 132 194\\nOurs, T=15 \\t 164 23 152 143 201\\nOurs, T=20 \\t 170 21 161 149 207\\nrandU \\t 938 41 932 866 999\\nrandB \\t 744 48 744 664 809 \\nflip \\t 993 8 997 979 999 \\n\\n(to be continued in part 2)\"}",
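For context, the attack being ablated over T in these exchanges is, in essence, projected gradient descent through a T-step unroll of the learned dynamics. A minimal PyTorch sketch of that idea — assuming differentiable `dynamics(s, a)` and `policy(obs)` modules and a target state, not the authors' exact implementation:

```python
import torch

def pgd_obs_attack(dynamics, policy, s0, s_target, T, eps,
                   steps=20, lr=0.01):
    """Sketch of a model-based observation attack: unroll the learned
    dynamics for T steps under the fixed policy, and run PGD on the
    per-step observation perturbations so the predicted trajectory is
    driven toward s_target.  Interfaces are assumptions."""
    delta = torch.zeros(T, *s0.shape, requires_grad=True)
    for _ in range(steps):
        s, loss = s0, 0.0
        for t in range(T):
            a = policy(s + delta[t])          # attacked observation
            s = dynamics(s, a)                # differentiable learned model
            loss = loss + torch.norm(s - s_target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # signed gradient step
            delta.clamp_(-eps, eps)           # project into the eps-ball
            delta.grad.zero_()
    return delta.detach()
```

A larger T lets the attack exploit longer-horizon model predictions, but also compounds model error — which is exactly the trade-off the ablation tables above quantify.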
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"Thanks for your comments!\\n\\n#1 Other attacks are possible \\nIndeed, adversarial attacks have been shown to be possible for neural network models in various supervised learning tasks, and hence it may not be surprising that adversarial attacks exist for the deep RL agents. \\n\\nHowever, the vulnerability of RL agents can not be easily discovered by existing baselines which are model-free and build upon random searches and heuristics -- this is also verified by our extensive experiments on various domains (e.g. walker, humanoid, cartpole, and fish), where the agents still achieve close to their original best rewards even with baseline attacks at every time step (see Table 1 and 2). \\n\\nHence, it is important and necessary to have a systematic methodology to design *non-trivial* adversarial attacks, which can *efficiently* and *effectively* discover the vulnerabilities of deep RL agents -- this is indeed the motivation of this work. This paper takes a first step toward this direction by proposing the first sample-efficient model-based adversarial attack, which can successfully degrade agent performance by up to 4 times in terms of total reward and up to 4.6 times in terms of distance to unsafe states (smaller means stronger attacks) compared to the model-free baselines. \\n\\nTherefore, we believe (1) our proposal of non-trivial model-based adversarial attacks and (2) our systematic study on the efficiency and effectiveness of our method compared with model-free attack baselines are two important contributions to the field of deep reinforcement learning. We hope our discovery of the vulnerability of deep RL agent can also bring more safety awareness to researchers in this field when they design algorithms to train deep RL agents. \\n\\n#2 Our additional contributions\\nIn the rebuttal, we have conducted additional experiments to further demonstrate the effectiveness of our proposed method and show that our techniques outperform all the baselines by a significant margin:\\n(1) In a weaker adversary setting where the adversary cannot attack at every time step, we show that our attack can still successfully degrade agent's performance (total reward and loss) by 1.4-1.5X while the baseline attacks cannot. Note that the agents are almost not affected by the baseline attacks because the agents still achieve almost perfect total reward. Details please see our reply #2 to Reviewer#3.\\n\\n(2) We conduct systematic study in three domains and tasks on the effect of planning/unroll length in our proposed technique. Our results suggest that our technique are still effective and outperforms all the baselines by a large margin even when the learned model dynamics is not very accurate. In particular, we have discussed the trade-off between unroll length T and the model accuracy in three examples. Details please see our reply #2 to Reviewer #2 and reply #3, #7 to Reviewer #3. \\n\\nWe hope that the above explanation and clarification have convinced you of the importance and contributions of this paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents an adversarial attack for perturbing\\nthe actions or observations of an agent acting near-optimally\\nin an MDP so that the policy performs poorly.\\nI think understanding the sensitivity of a policy to\\nslight perturbations in the actions it takes or the\\nobservations that it receives is important for having\\nrobust learned policies and controllers.\\nThis paper presents an empirical step in the direction\\nof showing that such attacks are possible, but in the context\\nof the other adversarial attacks that are possible, this is\\nnot surprising alone and would be much stronger with\\nother contributions.\\nI think an exciting direction of new work could be to\\ncontinue formalizing these vulnerabilities and\\nlooking at ways of adding robustness across many\\nother domains.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"*Synopsis*:\\n This paper looks at a new framework for adversarial attacks on deep reinforcement learning agents under continuous action spaces. They propose a model based approach which adds noise to either the observation or actions of the agent to push the agent to predefined target states. They then report results against several model-free/unlearned baselines on MuJoCo tasks using a policy learned through D4PG.\", \"main_contributions\": \"- Adversarial attacks for Deep RL in continuous action spaces.\\n\\n *Review*\\n The paper is well written, and has some interesting discussion/insight into attacking deep RL agents in continuous actions spaces. I think the authors are headed in the right direction, but compared to prior work in adversarial attacks for deep RL agents (i.e. the Huang and Lin) I have a few concerns that I feel the authors need to better explain/motivate in their paper. I am recommending this paper be rejected based on the following concerns. I am willing to raise my score if some of these are addressed by the authors in subsequent revisions\\n\\n 1. This algorithm requires the pre-trained policy to plan attacks (which may be a high bar for such an adversarial attack). It would be a nice addition to include similar results with \\\"black-box\\\" adversarial attacks, as mentioned in the Huang. \\n\\n 2. Another issue, addressed in the Lin paper, is this attack seems to require perturbation on every time step in a proposed trajectory. As mentioned by Lin, this is probably unrealistic and would cause the attacker to be detected. It would be another nice contribution to include variants that don't require perturbations on each transition.\\n\\n 3. Another unfortunate requirement is a learned model (or a way to simulate trajectories). From the Model Based RL literature, we know learning such a model is quite difficult and often unrealistic given our current approaches. While this is problematic, I think the paper could systematically test this looking at what happens as the model becomes less accurate over time. This could provide some nice results showing an accurate model isn't necessarily needed and anneal concerns over having to learn such a model.\\n\\n 4. It is unclear if the baselines measured against are meaningful in this setting, and I'm also a bit unclear how they are generated/implemented. Specifically, the random trajectories require you to return the generated trajectory with the smallest loss/reward. It is unclear how the adversary knows this information. Is it known through a model or some other simulation? Also the flip baseline could use a bit more explanation. I think these details can be safely placed in the appendix, but should appear somewhere in the final version.\\n\\n 5. I'm not sure the comparison to sample efficiency to the Gleave or Uesato papers are meaningful. For Gleave, the threat model explored is much different where they do not have access to the agent's observation or action streams and instead learn policies to affect the other agent in game scenarios. This is very different. 
Also, the Uesato is not adversarially attacking the agent, but attempting to find failure cases for the agent, which I again feel is very different from what you are trying to accomplish. I would remove this discussion and the claim at the end of the conclusion.\", \"other_suggestions\": \"S1. It would be helpful to include the score of the learned policy without any attacks, to see how well the baselines are performing (this will help readers understand if these are reasonable/meaningful baselines).\\n\\n S2. I'm unclear what figure three is adding to the paper, and am actually uncertain what the y-axis means. I don't think this is a wise use of the 9th page, and this plot could probably be relegated to the appendix.\\n \\n S3. As in prior work, it would be useful to see how well this line of attack works for multiple learning algorithms. Some potential candidates could be: PPO, TRPO, SAC, etc...\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper proposed a new adversarial attack method based on model-based RL. Unlike existing adversarial attack methods on deep RL, the authors first approximate the dynamics models and then generate the adversarial samples by minimizing the total distance of each state to the pre-defined target state (i.e. planning). Using Cartpole, Fish, Walker, and Humanoid, the authors showed that the proposed method can pool the agents more effectively.\", \"detailed_comments\": \"The proposed idea (i.e. designing an adversarial attack based on model-based RL) is interesting but it would be better if the authors can provide evaluations such as adversarial training and ablation studies for the proposed method (see the suggestion & question). So, I'd like to recommend \\\"weak accept\\\"\\n\\nSuggestion & question:\\n\\nCould the authors apply adversarial training based on the proposed methods? I wonder whether RL agents can be robust after adversarial training. \\n\\nInstead of utilizing a pre-defined target state $s_{target}$, we can also approximate a reward function and generate adversarial samples by minimizing the total rewards. It would be interesting if the authors can consider this case. \\n\\nCould the authors report an ablation study on the effects of T?\"}"
]
} |
SkxHRySFvr | LEARNING TO IMPUTE: A GENERAL FRAMEWORK FOR SEMI-SUPERVISED LEARNING | [
"Wei-Hong Li",
"Chuan-Sheng Foo",
"Hakan Bilen"
] | Recent semi-supervised learning methods have shown to achieve comparable results to their supervised counterparts while using only a small portion of labels in image classification tasks thanks to their regularization strategies. In this paper, we take a more direct approach for semi-supervised learning and propose learning to impute the labels of unlabeled samples such that a network achieves better generalization when it is trained on these labels. We pose the problem in a learning-to-learn formulation which can easily be incorporated to the state-of-the-art semi-supervised techniques and boost their performance especially when the labels are limited. We demonstrate that our method is applicable to both classification and regression problems including image classification and facial landmark detection tasks. | [
"Semi-supervised Learning",
"Meta-Learning",
"Learning to label"
] | Reject | https://openreview.net/pdf?id=SkxHRySFvr | https://openreview.net/forum?id=SkxHRySFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PfgC4fBSnS",
"S1giyndnsr",
"BJxafid3jr",
"SylTUcdhjB",
"HyxT__vnir",
"Syeqoi3d9r",
"BkxfphJd5H",
"BkgLDSmstB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738580,
1573845987420,
1573845781012,
1573845589235,
1573841012841,
1572551586210,
1572498617882,
1571661150099
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2024/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2024/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2024/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2024/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2024/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2024/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2024/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"There is insufficient support to recommend accepting this paper. The reviewers unanimously criticize the quality of the exposition, noting that many key elements in the main development and experimental set up are not clear. The significance of the contribution could be made stronger with some form of theoretical analysis. The current paper lacks depth and insufficient justification for the proposed approach. The submitted comments should be able to help the authors improve the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the feedback and respond to the individual points below.\", \"q1\": \"why the strategy is effective should be further analyzed.\", \"re\": \"We follow the experimental setting (training and validation set) that is proposed by (Oliver et al. 2018). As stated above, the validation data in this work is used for early stopping and hyper-parameter tuning only. The training and meta-validation sets are effectively the same in our experiments. The errors for the fully supervised baselines that are trained on the whole training set are 4.17%, 25.44%, 7.83% for CIFAR-10, CIFAR-100 and AFLW respectively. These numbers set the upper bound on performance for the semi-supervised methods.\", \"q2\": \"How the validation data can improve the generalization ability of the model should be given with theoretical analysis. Whether the size of the validation data has a great influence?\", \"q3\": \"Some experimental settings are not clear: how many unlabeled data, how many samples should be used in the validation data to evaluate the model with pseudo labeled samples?\", \"q4\": \"How to divide the training data and the validation data? Whether the validation data need much more that the training data? How about the results only with all the labeled samples, which can further improve the confidence of the proposed method.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for the feedback and respond to the individual points below.\", \"q1\": \"\\u2026 check the final performance on meta-validation set \\u2026 does not seem a right way to measure performance of the model as meta-validation set is already used in training. The set of labeled points should be partitioned into train and meta-validation set.\", \"re\": \"We have moved the related work section to Section 2 to address this.\", \"q2\": \"The derivation of the updates given the added term to the loss. In option 1, the authors mention they use Eqn. 8 to update z, while Eqn 8. has the reverse information.\", \"q3a\": \"In option 2, z = \\\\sigmoid(\\\\Phi_\\\\theta) and reducing the loss on z, l( \\\\sigmoid(\\\\Phi_\\\\theta) , \\\\Phi_\\\\theta), does not look very meaningful. trying to get \\\\Phi_\\\\theta close to its sigmoid means getting it close to zero. but we do not know what is the label for unlabeled data, so why getting the label close to zero?\", \"q3b\": \"Also the authors mention that second order derivatives will come to play without any explanation. I suggest spending more effort on explaining the problem formulation as that's the core of the paper.\", \"q4\": \"As mentioned above the problem formulation is not clean and there are unjustified choice there. Moreover, the experiment results are mostly declared without any justification (for example, the proposed method does not always lead to improvement and not all cases are explained. The authors only note that the method works well in low data regime).\", \"q5\": \"In the first experiment PL is compared to two cases of the proposed algorithm whereas in other experiments PL is compared to combining PL with versions of the proposed method. Is there a reason for this?\", \"q6\": \"The models used as baseline are only explained briefly in the last page of the paper, while being used multiple time in the experiment section. This is not good writing practice.\"}",
"{\"title\": \"Response to Reviewer 5\", \"comment\": \"We thank the reviewer for the feedback and respond to the individual points below.\\n\\nQ1. The derivation from Eq.(3) to (4) is confusing. ... the second term of Eq.(3) (with unlabelled data) will always be zero \\u2026 no incentive to deviate from the pseudo-label z.\", \"re\": \"Corrected.\", \"q4a\": \"What are the sizes of the meta-validation sets in the experiments?\", \"q4b\": \"Error bars in the tables and Fig.2?\", \"q4c\": \"The MM results in Table 2 are noticeably worse than the original results.\", \"q4d\": \"option 2 is consistently better than option 1, which is not true for the MM baseline.\", \"q4e\": \"22500 training steps seems arbitrary.\", \"q5\": \"Typos.\"}",
"{\"title\": \"Response to area chair and all reviewers\", \"comment\": \"We thank all reviewers for their valuable feedback.\", \"the_main_concerns_were_clarification_of\": \"1) how the meta-validation set was constructed (Reviewer#3, Reviewer#4, Reviewer#5)\\n2) the model formulation, in particular, how the prediction for pseudo-labels are designed and why they are different from the network\\u2019s output (Reviewer#4 and Reviewer#5). \\n\\nWe address these concerns below, and have also incorporated the requested clarifications and additional experiments into the manuscript. In summary we made the following changes:\\n1) we updated the manuscript to clarify the prediction for pseudo-labels and more clearly explain the model design choices\\n2) details of the meta-validation set are clearly indicated in Sec. 3 and at the beginning of the experiment section\\n3) we explicitly described the gradient updates the formulation with the second derivative in Equation 6 and 7 in Sec. 3 and detailed how we use the second derivative (meta gradient) for updating model\\n4) we investigate the effect of the meta-validation mini-batch size in Appendix A.2. \\n\\nWe note that reviewers did not raise any concerns about the originality and soundness of the proposed method. It is also pointed out by the reviewers that, our proposed semi-supervised method is applicable to both classification and regression problems and achieves improvements over respective state-of-the-art methods. We also would like to reiterate the contributions of our paper. We propose a new learning-to-learn method for semi-supervised learning. Our proposed meta learning method involves learning an update rule to label unlabeled training samples such that training our model using these predicted labels of unlabeled samples to improve its performance not only itself but also on a meta-validation set. In addition, our method is highly generic and can be easily incorporated to the state-of-the-art methods and boost their performance, in particular in fewer labels regime. Beyond this, we demonstrate that our method is applicable to both classification and regression problems including image classification and facial landmark detection tasks and achieves significant performance gains.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"This paper uses a meta-learning approach to solve semi-supervised learning. The main idea is to simulate an SGD step on the loss of the meta-validation data and see how the model will perform if the pseudo-labels of unlabelled data are perturbed. Experiments on classification and regression problems show that the proposed method can improve over existing methods. The idea itself is intriguing but the derivation and some design choice are not very well-explained.\\n\\n(1) The derivation from Eq.(3) to (4) is confusing. Note that in Eq.(3), the prediction \\\\Phi_\\\\theta also depends on \\\\theta in addition to the pseudo-label z. When taking a step of SGD, the second term of Eq.(3) (with unlabelled data) will always be zero if both arguments of the loss (\\\\Phi_\\\\theta(x) and z_\\\\theta(x)) change simultaneously. Eq.(4) somehow only considers the gradient of unsupervised loss, then the gradient would be zero because there is no incentive to deviate from the pseudo-label z. The pseudo-code does not help much. The update from \\\\hat{\\\\theta}^{t} to \\\\hat{\\\\theta}^{t+1} has the same issue: there is no incentive for \\\\hat{\\\\theta}^{t} to deviate because z is exactly produced by it.\\n\\n(2) For classification problems, it is natural to use cross-entropy loss for the probability vector z. Are there any specific reasons for using Gumbel-softmax? In addition, using L2 loss for probability vectors (as mentioned in Appendix A) is known to be problematic as it may create exponentially many local minima (Auer et al, 1996).\\n\\n(3) The recent work of Li et al. (2019) also considers iteratively improving pseudo-labels with meta-updates so it should be discussed and compared.\\n\\n(4) Experiments\\n- What are the sizes of the meta-validation sets in the experiments?\\n- Error bars in the tables and Fig.2?\\n- The MM results in Table 2 are noticeably worse than the original results. For example, with 250 labeled data, MM achieved 11.08% in CIFAR-10 as reported in the original paper. (And 4000 labeled data can achieve 4.95%)\\n- It is said that option 2 is consistently better than option 1, which is not true for the MM baseline.\\n- 22500 training steps for Experiment 4 seems arbitrary. What are the candidates for the hyper-parameters?\", \"typos\": [\"In the first paragraph of Sec.2, one of the x and one of the y should be bold.\", \"Above Eq.(4), x^{U\\\\in U} should be x^i \\\\in U\", \"The transpose in Eq.(7) is not necessary\", \"It is said on page 6 that Fig.2 reports classification loss but the task is a regression problem.\", \"Ref\", \"Auer, P., Herbster, M. and Warmuth, M.K., 1996. Exponentially many local minima for single neurons. In Advances in neural information processing systems (pp. 316-322).\", \"Li, X., Sun, Q., Liu, Y., Zheng, S., Chua, T.S. and Schiele, B., 2019. Learning to Self-Train for Semi-Supervised Few-Shot Classification. In Advances in neural information processing systems.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper looks into problem of semi-supervised learning and in order to be mindful of generalization on the unlabeled data, they add a term to the loss function which includes loss on imputed labels.\\nI have 3 main concerns with the paper \\n1. The authors mention that the meta-validation set is a random subset of train set. and they check the final performance on meta-validation set. This does not seem a right way to measure performance of the model as meta-validation set is already used in training. The set of labeled points should be partitioned into train and meta-validation set.\\n\\n2. The derivation of the updates given the added term to the loss. In option 1, the authors mention they use Eqn. 8 to update z, while Eqn 8. has the reverse information. \\n\\n3.In option 2, z = \\\\sigmoid(\\\\Phi_\\\\theta) and reducing the loss on z, l( \\\\sigmoid(\\\\Phi_\\\\theta) , \\\\Phi_\\\\theta), does not look very meaningful. trying to get \\\\Phi_\\\\theta close to its sigmoid means getting it close to zero. but we do not know what is the label for unlabeled data, so why getting the label close to zero?\\nAlso the authors mention that second order derivatives will come to play without any explanation. I suggest spending more effort on explaining the problem formulation as that's the core of the paper.\", \"more_comments\": [\"As mentioned above the problem formulation is not clean and there are unjustified choice there. Moreover, the experiment results are mostly declared without any justification (for example, the proposed method does not always lead to improvement and not all cases are explained. The authors only note that the method works well in low data regime).\", \"In the first experiment PL is compared to two cases of the proposed algorithm whereas in other experiments PL is compared to combining PL with versions of the proposed method. Is there a reason for this?\", \"The models used as baseline are only explained briefly in the last page of the paper, while being used multiple time in the experiment section. This is not good writing practice.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a semi-supervised approach to impute the labels of unlabeled samples such that a network achieves better generalization when it is trained on these labels. The proposed strategy can be easily used to improve the state-of-the-art semi-supervised methods. It mainly uses a validation data set to evaluate the updating rules of the unlabeled samples with pseudo-labels. The proposed method is applicable to both classification and regression problems including image classification and facial landmark detection tasks, which has shown in the experiments. But the following should be improved in the following aspects:\\n[1] In the proposed method, the model parameters are updated both on the unlabeled samples and validation data set. The experimental results show that such a strategy is effective to improve the performance of the state-of-the-art method. But why the strategy is effective should be further analyzed.\\n[2] How the validation data can improve the generalization ability of the model should be given with theoretical analysis. Whether the size of the validation data has a great influence?\\n[3] Some experimental settings are not clear. In the experiments, how many unlabeled data is labeled with pseudo-labels. For different size of the unlabeled data, how many samples should be used in the validation data to evaluate the model with pseudo labeled samples.\\n[4] How to divide the training data and the validation data? Whether the validation data need much more that the training data? How about the results only with all the labeled samples, which can further improve the confidence of the proposed method.\"}"
]
} |
Bklr0kBKvB | Geometry-aware Generation of Adversarial and Cooperative Point Clouds | [
"Yuxin Wen",
"Jiehong Lin",
"Ke Chen",
"Kui Jia"
] | Recent studies show that machine learning models are vulnerable to adversarial examples. In 2D image domain, these examples are obtained by adding imperceptible noises to natural images. This paper studies adversarial generation of point clouds by learning to deform those approximating object surfaces of certain categories. As 2D manifolds embedded in the 3D Euclidean space, object surfaces enjoy the general properties of smoothness and fairness. We thus argue that in order to achieve imperceptible surface shape deformations, adversarial point clouds should have the same properties with similar degrees of smoothness/fairness to the benign ones, while being close to the benign ones as well when measured under certain distance metrics of point clouds. To this end, we propose a novel loss function to account for imperceptible, geometry-aware deformations of point clouds, and use the proposed loss in an adversarial objective to attack representative models of point set classifiers. Experiments show that our proposed method achieves stronger attacks than existing methods, without introduction of noticeable outliers and surface irregularities. In this work, we also investigate an opposite direction that learns to deform point clouds of object surfaces in the same geometry-aware, but cooperative manner. Cooperatively generated point clouds are more favored by machine learning models in terms of improved classification confidence or accuracy. We present experiments verifying that our proposed objective succeeds in learning cooperative shape deformations. | [
"Adversarial attack",
"Point cloud classification"
] | Reject | https://openreview.net/pdf?id=Bklr0kBKvB | https://openreview.net/forum?id=Bklr0kBKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jqgpju4XN",
"rklgh93cjH",
"ByeqWqh5sS",
"Byx3zY39sH",
"H1e7frzf9H",
"ryle0T519r",
"SJxzT10hYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738549,
1573730984398,
1573730818293,
1573730580508,
1572115722550,
1571954119540,
1571770297928
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2023/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2023/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2023/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2023/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2023/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2023/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper offers an improved attack on 3-D point clouds. Unfortunately the clarity of the contribution is unclear and on balance insufficient for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Official Blind Review #3\", \"comment\": \"Thank you for your constructive comments. We have improved the paper based on these comments. Our responses to individual comments are as follows.\\n\\nQ1. What\\u2019s the significance of \\u201ccooperative part\\u201d in the paper?\", \"reply\": \"Many thanks for the suggestion. We performed an ablation study on different alpha and beta and attach the quality results to the Appendix A.3 of our revised paper. As shown in the figure, smaller alpha values (less attention to Hausdorff Distance) lead to more obvious outliers; and larger alpha values (more attention to Hausdorff Distance) lead to the deformation of the point cloud, due to the overwhelming focus on the outliers while ignoring the whole shape. On the other hand, smaller beta values (less attention to consistency of local curvatures) lead to high frequency outliers near the surface; and again, larger beta values (more attention to consistency of local curvatures) lead to the deformation of the whole point cloud. We do not perform the ablation study on the hyper-parameter of lambda (the balance parameters of adversarial term and imperceptibility term) because we have adopted the strategy of binary-search on it in all of our experiments. During the binary-search, lambda is self-adjusted adaptively: if the adversarial attack is successful, lambda will become larger, and vice verse. And hence, it\\u2019s a dynamically adjusted parameter already.\"}",
"{\"title\": \"Re: Official Blind Review #2\", \"comment\": \"Thank you for your constructive comments. We have improved the paper based on these comments. Our responses to individual comments are as follows.\\n\\nQ1. Is Chamfer distance / Hausdorff distance essentially the L_2 / L_infinity norm?\", \"reply\": \"Thanks for this interesting suggestion. During this rebuttal period, we conducted a user study on Amazon Mechanical Turk (AMT) in order to verify the imperceptible quality of our adversarial examples. We have included these results of user study in Appendix A.5 in the revised paper.\", \"c1\": \"Xiang, Chong, Charles R. Qi, and Bo Li. \\\"Generating 3d adversarial point clouds.\\\"\\u00a0Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\"}",
"{\"title\": \"Re: Official Blind Review #1\", \"comment\": \"Thank you for your constructive comments. We have improved the paper based on these comments. Our responses to individual comments are as follows.\\n\\nQ1. One issue that could be easily addressed is that Table 2 is mentioned on page 5, but doesn't appear until the end of page 7. It is also mentioned before Table 1. So I would recommend changing it to Table 1 and introducing it before it is mentioned.\", \"reply\": \"Thanks for the suggestion. In the revised paper, we introduce the point cloud classifiers in Appendix A.6 and the defense method in Appendix A.7.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper describes a new targeted adversarial attack against 3D point cloud object classifiers that is robust to several countermeasures. The attack finds a point of the target class that is close to the original point cloud in terms of a more complicated metric that combines the Hausdorff distance, the Chamfer distance, and a curvature distance measure. The proposed attack is 100% successful against several different state of the art classifiers on a dataset of 1024-point clouds sampled from 25 instances of CAD models of each of 10 common objects without any countermeasures. When the Random Removal countermeasure is used, the attack is still successful almost 50% of the time even when 256 points are removed as compared to two other attacks that are only ~17% successful. When the SOR countermeasure is used, the attack is 60% successful when 64 points are removed as compared to <1% for the comparison attacks. The attack can also be used in reverse for data augmentation in training and can cut error rates almost in half.\\n\\nOverall, this seems like a large improvement over current approaches. The paper does a good job of explaining its approach and motivation and does an excellent job of situating its contributions within the existing literature. The experiments show that the improvements in success are large over competing attacks, that they are robust to current countermeasures. They also show that when used \\\"cooperatively\\\" they can improve performance substantially.\\n\\nOne issue that could be easily addressed is that Table 2 is mentioned on page 5, but doesn't appear until the end of page 7. It is also mentioned before Table 1. So I would recommend changing it to Table 1 and introducing it before it is mentioned.\\n\\nIt would also be nice to have some basic descriptions of the point cloud classifiers and the SOR countermeasure. Space for this could be regained from some of the figures, which are informative, but over-represented.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper looks at the task of (adversarially or cooperatively) perturbing a point cloud in context of a classification task. It follows the prevailing paradigm of changing the original input in the direction of a high positive/negative gradient while staying \\u2018close\\u2019 to the original input, and the key contribution here is to define a different notion of \\u2018closeness\\u2019.\\n\\nWhile previous work (Xiang et. al., at least for \\u2018point addition attack\\u2019) used a combination of chamfer and Hausdorff distance as the notion of closeness, this paper additionally includes change in curvature (which is an intuitive term to include) when computing the adversarial/cooperative updates to the point cloud. The obtained results do visually look less perturbed compared to the previous approach, and the obtained adversarial shapes are more robust against two defenses studied.\\n\\nConcerns/Questions:\\n\\n1) If a point cloud P\\u2019 is only a (small) perturbation of a point cloud P, then the Chamfer distance / Hausdorff distance is essentially the L_2 / L_infinity norm of their difference. While the use of curvature terms is different, I feel the claims of importance of using \\u2018distance metric of point clouds\\u2019 is not very well justified (as I\\u2019d expect essentially same results if these two terms were replaced by L2 and L_infinity norms instead). I think the use of these terms was more necessary in the work of Xiang et. al., as they allowed point addition, so the \\u2018norm of difference of point clouds\\u2019 is not well defined.\\n\\n2) This paper uses a different (more aggressive) adversarial term (in Eqn. 8) compared to Xiang et. al., so it is not surprising that the results in Table 1 indicate more robustness to defenses.\\n\\n3) In addition to the above comments about specifics, I feel this work\\u2019s contribution over prior work is not significant. While Xiang et. al. did use L2 norm for their perturbation case, they did investigate the Chamfer/Hausdorff distances for another scenario, and therefore the main contribution here is an additional loss term.\\n\\n4) This is perhaps a hard concern to address, but simply showing some qualitative results to highlight that the changes are \\u2018imperceptible\\u2019 is not sufficient. Ideally, one should report a curve on \\u2018change perceptibility\\u2019 vs \\u2018attack success rate\\u2019 (though this would require some notion of perceptibility that was not used in optimization). Alternately, one could compare methods via A/B testing on mechanical turk, asking \\u2018Which are these two shapes are closer to the original one?\\u2019, and ablate for a certain level of confidence on the wrong class, which approach led to less changes. The current results simply show some examples, but provide no empirical way of judging which approach actually leads to more imperceptible changes.\\n\\nOverall, though the results are perceptually encouraging, I have slight concerns the empirical results reported. 
However, the primary issue is that the contribution regarding the additional term, while intuitive, is not a significant one in its own right.\\n\\nWhile the rating here only allows me to give a \\u20183\\u2019 as a weak reject, I am perhaps a bit more towards borderline (though leaning towards reject) than that indicates.\"}",
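The reviewer's point 1) can be checked numerically. Below is a minimal sketch (illustrative only, not code from any of the papers) of one common symmetric variant of the Chamfer and Hausdorff distances; for a small per-point perturbation delta, each perturbed point stays nearest to its own original, so the two set metrics collapse to the mean squared L2 norm and the max L2 norm of delta, respectively:

```python
import numpy as np

rng = np.random.default_rng(0)

def chamfer(P, Q):
    # One common (symmetric, squared) Chamfer variant: mean squared
    # distance from each point to its nearest neighbor in the other set.
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return (d.min(axis=1) ** 2).mean() + (d.min(axis=0) ** 2).mean()

def hausdorff(P, Q):
    # Symmetric Hausdorff distance: worst-case nearest-neighbor distance.
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

P = rng.random((1024, 3))                      # original point cloud
delta = 1e-3 * rng.standard_normal((1024, 3))  # small per-point perturbation
Q = P + delta

# Each perturbed point remains nearest to its own original (with high
# probability at this scale), so the set metrics reduce to plain norms
# of the per-point difference:
print(np.isclose(hausdorff(P, Q), np.linalg.norm(delta, axis=1).max()))            # True
print(np.isclose(chamfer(P, Q), 2 * (np.linalg.norm(delta, axis=1) ** 2).mean()))  # True
```

As the reviewer notes, the equivalence breaks down once points are added or removed, since the "norm of the difference" is then no longer defined while the set distances still are.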
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes a novel loss function to account for imperceptible, geometry-aware deformations of point clouds. The loss is used in two cases: generating adversarial point clouds to attack representative models of point set classifiers, and generating cooperative point clouds to improve classification confidence or accuracy. The combined geometry-aware objective is well-introduced, which mainly contains Chamfer distance term, Hausdorff distance term and local curvatures consistency term. The authors apply the geometry-aware objective to generate adversarial point clouds by adopting the framework of C&W attack. For generating cooperative point clouds, the authors introduced a training procedure to reduce the overfitting of the deformed point clouds. Most of the experiments are well-conducted, and demonstrate the effectiveness of the proposed loss function.\", \"Overall, the paper is a well-written and could be an interesting contribution. However, I would like it better if it does not contain the part about the cooperative point clouds, which are not very motivated and the experiment settings are not very convincing. The algorithm in page 5 inherently used testing labels (suppose some testing data is in the training split of one of those cross-validation folds, training for the P_i' for the rest of the training data). I don't know why experiment results generated by this method would be of any meaning. I would be OK to accept this paper if the cooperative point cloud part is dropped.\", \"In page 5, the authors introduce a procedure to train the model. The first step would require take the \\u201cconsideration of class balance\\u201d. It would be better to give some details on how the class balance is considered.\", \"In ablation study, the authors give some visualization results in Figure 1. However, it would be more interesting if the authors could give some quantitative results.\", \"It would be interesting to show some ablation study on how to choose the \\u03b1, \\u03b2 and \\u03bb .\"]}"
]
} |
HJxVC1SYwr | Crafting Data-free Universal Adversaries with Dilate Loss | [
"Deepak Babu Sam",
"ABINAYA K",
"Sudharsan K A",
"Venkatesh Babu RADHAKRISHNAN"
] | We introduce a method to create Universal Adversarial Perturbations (UAP) for a given CNN in a data-free manner. Data-free approaches suit scenarios where the original training data is unavailable for crafting adversaries. We show that the adversary generation with full training data can be approximated by a formulation without data. This is realized through a sequential optimization of the adversarial perturbation with the proposed dilate loss. Dilate loss basically maximizes the Euclidean norm of the output before nonlinearity at any layer. By doing so, the perturbation constrains the ReLU activation function at every layer to act roughly linearly for data points and thus eliminates the dependency on data for crafting UAPs. Extensive experiments demonstrate that our method not only has theoretical support, but also achieves a higher fooling rate than the existing data-free work. Furthermore, we demonstrate improvement in limited-data cases as well. | [
"dilate loss",
"universal adversaries",
"data",
"layer",
"universal adversarial perturbations",
"uap",
"cnn",
"manner",
"approaches suite scenarios",
"original training data"
] | Reject | https://openreview.net/pdf?id=HJxVC1SYwr | https://openreview.net/forum?id=HJxVC1SYwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"skWJaSbatc",
"rygchoCqiS",
"r1lsVjRcjB",
"Syxaq9RqjS",
"HJx99dCqiS",
"r1xcIWR35r",
"ByeI1QS99B",
"BJxB-51Rtr",
"H1l5NeGTFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738514,
1573739441773,
1573739315486,
1573739156762,
1573738642094,
1572819282199,
1572651741611,
1571842557082,
1571786801720
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2022/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2022/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2022/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2022/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2022/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2022/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2022/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2022/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper focuses on finding universal adversarial perturbations, that is, a single noise pattern that can be applied to any input to fool the network in many cases. Further more, it focuses on the data-free setting, where such a perturbation is found without having access to data (images) from the distribution that train- and test data comes from.\\n\\nThe reviewers were very conflicted about this paper. Among others, the strong experimental results and the clarity of writing and analysis were praised. However, there was also criticism of the amount of novelty compared to GDUAP, on the strong assumptions needed (potentially limiting the applicability), and on some weakness in the theoretical analysis. \\n\\nIn the end, the paper seems in current form not convincing enough for me to recommend acceptance for ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the Reviewer for the valuable feedback.\\n\\n1. Section 3.2 (the top of page 4) clarification: Additive means that $\\\\sigma_{R}(W_{1}X+W_{1}p1^{T})=\\\\sigma_{R}(W_{1}X)+\\\\sigma_{R}(W_{1}p1^{T})$. Please see the response to Reviewer 4 Q1.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the Reviewer for the valuable feedback.\\n\\n1. Novelty: Though our method has similar flavor of GDUAP, as mentioned by Reviewer 1, our approach has significant improvements over GDUAP: (1) our work formulates the proposed method from theoretical intuitions (2) provides better understanding than purely empirical method in GDUAP, which could pave way for further studies in the field (3) achieves significantly high fooling rate than GDUAP. \\n\\n2. Theoretical analysis and Equation (5): As pointed by the reviewer, we arrive at the proposed method by intuitions derived from a set of reformulations and approximations. We believe such a scheme also could be considered an analysis as it explains why the method works. Regarding equation (5), please see the response to Reviewer 4 Q1.\\n\\n3. Results on other tasks: We implement the proposed algorithm on FCN-8s-VGG model (please refer to GDUAP paper) for segmentation task as suggested by the Reviewer. Our method is able to bring down the mIoU after attack better than GDUAP, indicating superior adversarial performance.\\n------------------------\\nMethod | mIoU\\n------------------------\\nOriginal | 65.49\\nGDUAP | 42.78\\nOurs | 36.58\\n------------------------\\n\\n4. Adversarial trained model: We are able to test our algorithm on adversarial trained Inception V3 model (from \\\"Tramer et al. Ensemble Adversarial Training: Attacks and Defenses [ICLR 2018]\\\", available at https://github.com/tensorflow/models/tree/master/research/adv_imagenet_models) and achieve high fooling rate reported as below:\\n------------------------\\nMethod | Fooling Rate\\n------------------------\\nGDUAP | 33.33%\\nOurs | 59.14%\\n------------------------\\nThis clearly demonstrates the effectiveness of the proposed sequential dilation algorithm.\\n\\n5. Typo: We will correct the typo.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the Reviewer for the valuable feedback.\\n\\n1. Distorton rate: L-infinity norm is the criteria widely used in the community and all comparisons are based on this. Nevertheless, we checked and found that GDUAP has similar saturation/distortion as our method. In Table 1, random accuracies are below 10% and this might be due to noise affecting some class discriminative regions of the images, especially for those images falling near the decision boundaries.\\n\\n2. Practicality of data-free approach: In many practical scenarios, the model might be made available, but the training data cannot be released due to privacy/confidentiality reasons (e.g., medical records, proprietary data). Data-free approaches suite such cases and in fact Table 3 shows good black-box performance (better than GDUAP) for our method, increasing its practical utility.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the Reviewer for the valuable feedback.\\n\\n1. Reg. Equation (5): To derive equation (5), we do not require $W_{1}p$ to be in the same orthant as column vectors of $W_{1}X$. In fact, such a $p$ almost never exists. Hence, our approach is to find a $p$ that can minimize the error due to the additive approximation of ReLU ($\\\\sigma_{R}(W_{1}X+W_{1}p1^{T})\\\\approx\\\\sigma_{R}(W_{1}X)+\\\\sigma_{R}(W_{1}p1^{T})$). We linearly relax this criteria and search for a $p$ that can bring $W_{1}p$ as close to all the column vectors of $W_{1}X$. This is realized through optimization (5) by maximizing the inner product of $W_{1}p$ with all the column vectors of $W_{1}X$.\\n\\n2. Assumption for Lemma 1: Since our method is data-free, there must be an assumption tying data to the learned weights of the network. We assume that the singular vectors of the weights must have captured the discriminatory modes of data samples while training. This means that the first singular vector carries the most important features common to most of the data points than the other singular vectors, which translates to the assumption for Lemma 1. This assumption seems to be required for explaining the high fooling rate our method obtains.\\n\\n3. Other nonlinearities in the network: Currently, our theoretical explanation is limited to ReLU nonlinearity, which itself seems sufficient to reason out the high fooling performance. The effect of other kinds of nonlinearities needs to be studied in future works.\\n\\n4. Design of Algorithm 1: Algorithm 1 is designed from problem (10), where it is implemented as a set of sequential optimizations. We have an ablation in Table 2 with the header 'Ours without accumulation', where we optimize without the 'dilate' loss exactly like as the Reviewer mentioned. The results clearly evidence the performance boost with the 'dilate' loss.\\n\\n5. Results in less data cases: For VGG19 and Inception v1, with more data there is a slight dip in fooling rate. This seems to be something specific to the networks, but for majority cases the proposed approach beats GDUAP.\\n\\n6. Results in Table 4 and 5: For Table 4 experiments, a validation set is used to select the best perturbation. But in Table 5, in order to compare with Singular Fool method, we do not employ a validation set as mentioned in the text. The slight difference in fooling rate is due to this fact.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\nThis paper proposed a method to generate universal adversarial perturbations without training data. This task is timely and practical. The proposed method maximizes the norm of the output before nonlinearity at any layer to craft the universal perturbation. A sequential dilation algorithm is designed to calculate UAPs. The experiments show that the proposed method outperforms GDUAP.\\n\\nMy major concern is that there is not much novelty in the proposed method compared with GDUAP. The dilate loss function (4) is similar to the objective function (3) in the GDUAP paper. This paper provides a theoretical explanation of the dilate loss function and an improvement on the non-linearity function, which, however, is not convincing. Equation 10 is derived based on many strong assumptions. See the comments below.\", \"pros\": [\"The theoretical analysis is clear.\", \"The proposed method performs better than GDUAP in the data-free and black-box setting.\", \"The writing is good. The paper is easy to follow.\"], \"cons\": [\"The theoretical analysis is based on many strong assumptions/criteria. For example:\", \"o\\tTo derive equation (5), W1X and W1p must be in the same orthant. It is unclear how to satisfy the criteria In the algorithm.\", \"o\\tIn Lemma 1, problem (5) approximates problem (6) only if x has a very large projection on the first singular vector of W. However, x and W are fixed and independent of p. This assumption largely depends on the dataset and the weights of the model.\", \"o\\tIt would be better if the authors show that in what cases these assumptions can be satisfied.\", \"Other factors such as batch normalization and max pooling used in Inception v3, may also affect the linearity of the model. It would be better if the authors provide theoretical analysis or an ablation study on these factors.\", \"What\\u2019s the design principle behind Algorithm 1? Why can this algorithm solve the sub-optimal problem? The weights of different layers are not closely related. In the initialization part, why can we start learning p from the result of the previous layer? Would it be possible that the performance is improved due to the algorithm instead of the dilate loss?\", \"The proposed method performs worse than GDUAP does in some less data settings.\", \"The results in Table 4 and 5 are inconsistent. These two experiments use the same dataset (Imagenet) and the same number of images (D=64).\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a data free method for generating universal adversarial examples. Their method finds an input that maximizes the output of each layer by maximizing the dilation loss. They gave a well motivated derivation going from the data matrix, the data mean and to data free. The experiments results seems solid as the numbers show that their method is much better in many cases.\", \"i_have_2_main_issues\": [\"The fooling rate experiments does not seem to control for how much distortion there really is. How do you make sure that different methods have similar level of distortion and not just similar l_\\\\inf. Given that the authors says most of their method saturates all values, it is not clear that the baselines and competition really has a similar level of distortion. The fooling rate for random seems rather high. Why is random noise not mostly ignored by the model?\", \"while the method is data free. It needs complete access to the model and relies on properties of ReLu. I am not sure how realistic this setting is, and how this compares to methods that has black box access to the model. While it is interesting, the paper did not establish that universal adversarial perturbation is well-motivated and why data free is more important that model free or targeted perturbations. An attacker probably always see the input and probably wants to make it misclassified into a particular class, instead of just making the model wrong.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed a white-box (known network architecture, known network weight) data free (without need to access the data) adversarial attacking method. The main idea is to find a perturbation that maximizes the activations at different layers jointly. But the optimization is done sequentially, treating each layer\\u2019s activation (before ReLU) as a linear transformation output.\\n\\nThe method is compared with existing methods (only one existing approach for the problem, GDUAP by Mopuri et al. 2018) in terms of the fool rate. It shows significant improvement. Ablation study is carried out to compare with baselines like perturbation maximizing only first layer activation, only last layer activation, etc. Also on some other settings (black-box testing, less data) the proposed method outperforms GDUAP. \\n\\nThe problem of data-free white-box attack is very interesting and does make sense. The proposed method achieve significant improvement over the previous one (GDUAP). I do have the following concerns though.\\n\\n1), the novelty of the proposed idea seems relatively limited. The proposed idea seeks perturbation maximizing activations over all layers. It incur perturbation before ReLU. But overall, the flavor of the idea is not significantly different from GDUAP, despite the significant performance boost. \\n\\n2), it was mentioned that compare with GDUAP, this paper has more theoretical analysis. But this is not very convincing to me. There are many steps of approximation/relaxation from the original problem (Equation (1)) to the final formula (Equation (10)). Many assumptions are made over the steps. It is OK to use these steps to derive a heuristic. But these steps can hardly be called \\\"theoretical analysis\\\".\\n\\nI am particularly uncomfortable with Equation (5), which is the basis of the main idea. It assumes that all data in $W_1X$ are in the same orthant as $W_1p$. But this is unrealistic as different data in X will for sure incur different activation patterns. Did I misunderstand anything?\\n\\n3) I do like the experimental results. It looks impressive. But the baselines are really limited (granted, there are not many existing approaches). There is only one task (image classification). How about other tasks like segmentation etc shown in Mopuri et al. 2018? Also it would be nice to also show the results of other UAP methods, as it gives us a better sense of the gap between with and without data.\\n\\n4) I wonder how will the attack affect some model which has been trained with some defense mechanism, e.g., adversarial training.\", \"typo\": \"Equation (5), RHS missing a max\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper is well written and easy to follow. In this paper, a new data-free method is proposed to create universal adversarial perturbation without using data. There are some similarities with GDUAP though, authors also make some crucial improvements. They perform Euclidean norm maximization before the non-linearity of each layer, which not only has theoretical backing but also brings better performance in practice. Meanwhile, they optimize the perturbations in each layer in a sequential manner instead of joint optimization to avoid chances of reaching local minima solutions.\\n\\nThe authors provide a detailed theoretical analysis and systematic experimental results to demonstrate their arguments, which is convincing. What\\u2019s more, the proposed method achieves state-of-the-art data-free fooling rates on the large-scale dataset, which strongly demonstrates the effectiveness of their method.\\n\\nIn section 3.2, (the top of page 4) \\u201cwhich becomes additive if column vectors in W1X are in the same orthant as W1p. We relax this criteria and favour the case of making the vectors as close as possible by\\u201d\\nCould the authors provide more discussions about it?\"}"
]
} |
Bkx4AJSFvB | Efficient Bi-Directional Verification of ReLU Networks via Quadratic Programming | [
"Aleksei Kuvshinov",
"Stephan Guennemann"
] | Neural networks are known to be sensitive to adversarial perturbations. To investigate this undesired behavior we consider the problem of computing the distance to the decision boundary (DtDB) from a given sample for a deep NN classifier. In this work we present an iterative procedure where in each step we solve a convex quadratic programming (QP) task. Solving the single initial QP already results in a lower bound on the DtDB and can be used as a robustness certificate of the classifier around a given sample. In contrast to currently known approaches our method also provides upper bounds used as a measure of quality for the certificate. We show that our approach provides better or competitive results in comparison with a wide range of existing techniques. | [
"efficient",
"verification",
"relu networks",
"quadratic",
"dtdb",
"sample",
"neural networks",
"sensitive",
"adversarial perturbations",
"undesired behavior"
] | Reject | https://openreview.net/pdf?id=Bkx4AJSFvB | https://openreview.net/forum?id=Bkx4AJSFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"hdTI6noAe",
"rkeJ4oFnjB",
"r1xNzFdnoB",
"rJedSuu2jH",
"Bkegk_OnoH",
"H1geDDdniH",
"BklGRAFe9B",
"HyxxRpXCYB",
"SyeGBpm3tH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738484,
1573849895179,
1573845260462,
1573845056109,
1573844952055,
1573844824359,
1572015818200,
1571859911592,
1571728697591
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2021/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2021/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2021/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2021/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2021/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2021/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2021/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2021/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This article is concerned with sensitivity to adversarial perturbations. It studies the computation of the distance to the decision boundary from a given sample in order to obtain robustness certificates, and presents an iterative procedure to this end. This is a very relevant line of investigation. The reviewers found that the approach is different from previous ones (even if related quadratic constraints had been formulated in previous works). However, they expressed concerns with the presentation, missing details or intuition for the upper bounds, and the small size of the networks that are tested. The reviewers also mentioned that the paper could be clearer about the strengths and weaknesses of the proposed algorithm. The responses clarified a number of points from the initial reviews. However, some reviewers found that important aspects were still not addressed satisfactorily, specifically in relation to the justification of the approach to obtain upper bounds (although they acknowledge that the strategy seems at least empirically validated), and reiterated concerns about the scalability of the approach. Overall, this article ranks good, but not good enough.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revised Version and Reply - Part 2\", \"comment\": \"2. Network size: We considered deeper networks with 50 nodes per hidden layer and their number varying from 1 to 5. The resulting relaxation for deeper nets (number hidden layers > 2) was shown to be looser than for small networks resulting in worse *lower* bounds compared to CROWN. The upper bounds computed by QPRel were consistently better across different model choices. See Table 1 for the updated results.\\n\\nA probable reason for this decrease in performance of the lower bound is the choice of $\\\\lambda$ that we perform along one line. For many layers the space of $\\\\lambda\\\\in\\\\mathbb{R}^L$ becomes higher dimensional so that a more sophisticated search in this space for a good $\\\\lambda$ is required. Note that according to Theorem 1 in the setting of only 1 hidden layer we are solving provably the tightest convex QP relaxation possible, which is not guaranteed for deeper architectures.\\n\\n\\n3. Relative vs. absolute improvements: We added the average (absolute) difference between the lower bounds to the tables. Furthermore, we considered normally and robustly trained networks for each architecture.\\n\\nWe decided to use the relative differences additionally to the absolute ones because it is hard to judge whether an absolute improvement of $\\\\Delta\\\\epsilon$ of the lower bound is significant or not without knowing the true DtDB or the ratio between the two bounds. Our empirical investigations clearly show (see Fig. 3a in the initial submission) that samples might have very different DtDB, some are close to the decision boundary and some are far away from it. On the other side we consider an improvement of 0.01% always as non-significant and an improvement of 100% as significant independently of the actual DtDB.\\n\\n\\n4. Stronger attacks: We conducted additional experiments with FGSM replaced by 200-steps PGD (starting from the anchor point, no random sampling). Experiments show that QPRel-UB outperforms them in $l_2$ setting on all architectures allowing for more samples to be verified as non-robust. Results are included into the updated version of the paper (see Table 1).\\n\\n\\n5. Speed-up: We have adjusted the statements in the updated version of the paper.\\n\\n\\n6. Related work: We have included a discussion of the work of Jordan et al. (2019) in Section 4. Thank you for pointing it out!\"}",
"{\"title\": \"Revised Version and Reply - Part 1\", \"comment\": \"Thank you for your review. Indeed, one of the main advantages of our approach is that it does not require a certain radius as input but aims to find the largest one - this makes our approach also very different to other approaches based on LP or SDP relaxations.\\n\\n\\n1. Upper bounds: Please see Section 4 for an updated and extended discussion of our method to compute upper bounds on DtDB, which can be summarized as follows:\", \"idea\": \"In each step we verify a certain neighborhood around the current anchor point $x^0$ and then expand the verified region further. For that we choose the next anchor point as follows.\\n(i) Choice of direction: we go from $x^0$ towards the solution of QPRel since, if the QP relaxation is tight, its solution should be close to an optimal solution of DtDB which is the closest adversarial point to $x^0$.\\n(ii) Choice of step size: we know that there are no adversarial points within the ball of the verified radius $d$ around the anchor, so every step size smaller than $d$ would be unnecessary small. On the other hand, if we proceed with a new anchor point that is strictly farther away than $d$, we might miss an adversarial point lying close to the boundary of the $d$-ball around $x^0$. Therefore, we choose the next anchor point to be on the boundary of the currently verified region, so that every $\\\\epsilon$-ball that we manage to verify around the new point would add to the overall robust set.\", \"termination\": \"The algorithm terminates as soon as the propagation gap $c(x,\\\\lambda)$ becomes small enough or the anchor point gets misclassified. Note that $c(x, \\\\lambda)=0$ means that the solution $x$ provides the optimal objective function value of the DtDB problem (see Lemma 1) and, thus, an adversarial example. However, the termination condition $c\\\\le c_{\\\\text{tol}}$ from Algorithm 1, line 3 cannot ensure that the optimal point $x_{\\\\text{qp}}^0$ from the last iteration belongs to a different class. Therefore, if we stop with $c\\\\le c_{\\\\text{tol}}$ and the second termination condition is not satisfied, we take an additional step on the boundary of the ball of radius $d_{\\\\text{qp}}(1+\\\\delta)$, where $d_{\\\\text{qp}}$ is the verified radius and $\\\\delta$ is a tiny offset. Additionally, we check in each step whether the next anchor point is already misclassified before the condition $c\\\\le c_{\\\\text{tol}}$ is reached (this can happen in a multi-class setting; and is indeed observed frequently). This means that the sequence of anchor points converges towards the boundary and then, if no adversarial point was found yet, makes a step across the boundary using a positive, small $\\\\delta$.\", \"convergence\": \"We also included a result for the convergence rate of Algorithm 1.\"}",
"{\"title\": \"Revised Version and Reply\", \"comment\": \"Thank you for your review. We have updated and extended the discussion of our method to compute upper bounds on DtDB (see Section 4 in the paper and summary below).\\n\\n** Upper bounds **\", \"idea\": \"In each step we verify a certain neighborhood around the current anchor point $x^0$ and then expand the verified region further. For that we choose the next anchor point as follows.\\n(i) Choice of direction: we go from $x^0$ towards the solution of QPRel since, if the QP relaxation is tight, its solution should be close to an optimal solution of DtDB which is the closest adversarial point to $x^0$.\\n(ii) Choice of step size: we know that there are no adversarial points within the ball of the verified radius $d$ around the anchor, so every step size smaller than $d$ would be unnecessary small. On the other hand, if we proceed with a new anchor point that is strictly farther away than $d$, we might miss an adversarial point lying close to the boundary of the $d$-ball around $x^0$. Therefore, we choose the next anchor point to be on the boundary of the currently verified region, so that every $\\\\epsilon$-ball that we manage to verify around the new point would add to the overall robust set.\", \"termination\": \"The algorithm terminates as soon as the propagation gap $c(x,\\\\lambda)$ becomes small enough or the anchor point gets misclassified. Note that $c(x, \\\\lambda)=0$ means that the solution $x$ provides the optimal objective function value of the DtDB problem (see Lemma 1) and, thus, an adversarial example. However, the termination condition $c\\\\le c_{\\\\text{tol}}$ from Algorithm 1, line 3 cannot ensure that the optimal point $x_{\\\\text{qp}}^0$ from the last iteration belongs to a different class. Therefore, if we stop with $c\\\\le c_{\\\\text{tol}}$ and the second termination condition is not satisfied, we take an additional step on the boundary of the ball of radius $d_{\\\\text{qp}}(1+\\\\delta)$, where $d_{\\\\text{qp}}$ is the verified radius and $\\\\delta$ is a tiny offset.\\n\\nAdditionally, we check in each step whether the next anchor point is already misclassified before the condition $c\\\\le c_{\\\\text{tol}}$ is reached (this can happen in a multi-class setting; and is indeed observed frequently). We empirically verified that all points obtained this way are indeed true adversarials. This means that the sequence of anchor points converges towards the boundary and then, if no adversarial point was found yet, makes a step across the boundary using a positive, small $\\\\delta$.\", \"convergence\": \"We also included a result for the convergence rate of Algorithm 1 under an assumption that the propagation gap $c(x,\\\\lambda)$ is not too large with respect to the optimal objective function value for the QPRel problem solved in each step.\\n\\n** Minor comments **\\n\\n1. We have rewritten Algorithm 1 in the updated version of the paper to fix typos and make it easier to read.\\n\\nNote that we use the upper case index on $x$ throughout the paper to indicate the layer, so $x^0$ is the sample in the input layer and $x^L=f(x^0)$ are values of the last layer, when $x^0$ is propagated through the net. The intermediate activations are denoted by $x^l$. We use this notation also in Algorithm 1 and we actually omit the index identifying the number of the current iteration. 
So also here $x^0$ denotes the part of the full $x$ corresponding to the input layer and we return at the end $x_{adv}^0 \\\\leftarrow x^0$ as an adversarial point (alternatively we could write $x_{adv} \\\\leftarrow x$).\\n\\n2. L, N and R denote the network architecture. L is the number of hidden layers, N is the number of hidden neurons per hidden layer (same for each hidden layer) and R encodes whether the network was trained robustly (if R is present) or normally (if there is no R).\"}",
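The anchor-expansion loop described in the reply above can be summarized in pseudocode. This is a sketch of the described procedure, not the authors' implementation; `solve_qprel` (assumed to return a certified radius d, the input-layer part x_qp of the QP minimizer, and the propagation gap c) and `classify` are hypothetical placeholders.

```python
import numpy as np

def bidirectional_bounds(solve_qprel, classify, x0, c_tol=1e-3, delta=1e-3, max_iter=100):
    label = classify(x0)
    anchor = x0.copy()
    lower = None
    for _ in range(max_iter):
        d, x_qp, c = solve_qprel(anchor)        # verify a d-ball around the anchor
        if lower is None:
            lower = d                           # certificate around the original sample
        step = d if c > c_tol else d * (1.0 + delta)  # tiny final step across the boundary
        direction = x_qp - anchor               # head toward the QP minimizer
        anchor = anchor + step * direction / np.linalg.norm(direction)
        if classify(anchor) != label:           # misclassified anchor -> upper bound on DtDB
            return lower, np.linalg.norm(anchor - x0)
    return lower, None                          # no adversarial found within the budget
```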
"{\"title\": \"Revised Version and Reply - Part 2\", \"comment\": \"2. STRONGER ATTACKS:\\nWe conducted additional experiments with FGSM replaced by 200-steps PGD (starting from the anchor point, no random sampling).\\nExperiments show that QPRel-UB still outperforms them in $l_2$ setting on all architectures allowing for more samples to be verified as non-robust.\\nResults are included in the updated version of the paper.\\n\\n3. LARGER EPSILON + 4. NETWORK ARCHITECTURE:\\nWe have rerun the experiments on deeper networks with 50 nodes per hidden layer and their number varying from 1 to 5 each trained normally and robustly with the proposed $\\\\epsilon=1.58$. Experiments have shown that the qualitative difference in results from QPRel and CROWN does not depend on the training procedure as much as on the depth.\\n\\nOne sees that for the smaller networks QPRel is always able to verify non-trivial bounds, while the performance on larger networks w.r.t. the lower bound becomes worse than CROWN's. The upper bound computed by QPRel outperforms the competitors in all settings.\\nSee the results in Table 1 in the updated version of the paper.\\n\\nA probable reason for the decrease in performance of the lower bound is the choice of $\\\\lambda$ that we perform along one line.\\nFor many layers the space of $\\\\lambda\\\\in\\\\mathbb{R}^L$ becomes higher dimensional so that a more sophisticated search in this space for a good $\\\\lambda$ is required. Note that according to Theorem 1 in the setting of only 1 hidden layer we are solving provably the tightest convex QP relaxation possible, which is not guaranteed for deeper architectures.\\n\\n5. CONTRIBUTIONS OF PAPER:\\nWe have adjusted the phrasing accordingly. \\n\\n** Further improvements and potential directions **\\n\\n1. We agree that incorporating additional knowledge e.g. in form of activation bounds might improve the performance especially since it would bound the propagation gap $c$. However, to obtain the bounds in the intermediate layers one has to provide the algorithm with some bounds on the perturbations of the input. One of the main advantages of QPRel is that it does *not* rely on any bounds in the input layer.\\n\\n2. Thank you for pointing that out! This is definitely an interesting direction for future work.\"}",
"{\"title\": \"Revised Version and Reply - Part 1\", \"comment\": \"Thank you for your review. We address your points below.\\n\\n** Issues and Questions **\\n\\n1. SUMMARY ON \\\"BI-DIRECTIONAL VERIFICATION\\\": We agree that our statement \\\"the first bi-directional robustness verification technique\\\" without an appropriate discussion is unclear. We have added the points discussed below to the updated version of the paper (see Sections 1 and 4).\", \"discussion\": \"It is true that an arbitrary misclassified example provides an upper bound on the distance to the decision boundary (DtDB) and can be used together with a verification method (like CROWN, ConvAdv or SDPRel) to bound the DtDB from both directions. We used exactly this framework to compare with the results obtained from QPRel-UB in the experimental section.\\n\\nWhile QPRel uses its solution of a *verification task* $x_{qp}^0$ as an indicator of the direction towards the decision boundary, most of the other attacks (including FGSM, iterative PDG, Carlini-Wagner and the interval attack by Wang et al. (2019)) are gradient-based methods that perform steps towards a solution of an optimization problem constructed to, e.g., maximize the training loss with respect to the label of the anchor point. Even when a bound propagation technique is employed to find an adversarial, there is *not a robustness verification method* being applied.\", \"in_short\": \"In contrast to attacks which are inspired by a misclassification task, our methodology emerges from verification. Verify as much as possible until the decision boundary is reached. This idea is exactly the reason why we call QPRel a bi-directional *verification* technique and not verification plus an attack.\\n\\nBesides this difference in the methodology we have shown the quality of our obtained upper bounds to outperform a 200-steps PGD attack (see Table 1).\\n\\nWe summarize the relation between QPRel (-LB and -UB) and the verification+attack (V+A) framework (e.g. CROWN+PGD or CROWN+interval attack) as follows.\\n\\na) The adversarial-free verified region from QPRel is larger than the initially verified neighborhood around the anchor point, since we iteratively proceed solving a verification task around new anchor points leading to the decision boundary (see the hatched region in Figure 2). V+A returns also an initial verified region, but a subsequent attack can provide only a single point/a sequence of single points that are non-adversarial.\\n\\nb) For QPRel the most closely related attack is Carlini-Wagner as it also works with a relaxation of DtDB, but instead of solving it exactly a gradient-based approach is applied to find a feasible (i.e. misclassified) point that is possibly close to the anchor.\\n\\nc) As other attacks usually apply an optimization solver (e.g. L-BFGS, Adam or just PGD steps) on non-convex problems there are no theoretical guaranties on their success or convergence rate. In our setting we can solve the initial convex QPRel exactly (allowing us to get valid lower bounds in the first place) and provide a proof for a certain convergence rate of QPRel-UB given that the relaxation is tight enough.\\n\\nd) QPRel does not need to know the loss used to train the classifier, only the final weights are required. Therefore it is applicable in settings where we get the model only after it was trained. Common attacks (but not all, an exception is e.g. 
Carlini-Wagner) would have to use a substitute for the loss function and rely on its similarity to the true one in this semi-white box setting. In general, our proposed methodology would allow an attack in every setting where a verification is possible and yields an indication of the direction towards the decision boundary.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed to use a convex QP relaxed formulation to solve the neural network verification problem, and demonstrated its effectiveness on a few small networks (1-2 hidden layers) on MNIST and Fashion-MNIST datasets.\", \"there_are_several_benefits_the_proposed_methods\": \"they are technically tighter relaxations of ReLU neurons and empirically the authors show they perform well in L2 norm (but not L infinity norm, unfortunately); solving this formulation does not require to know pre-activation bounds of hidden neurons; also the convexity of the QP problem needs to be determined only once for a model, rather than once per example. Although the QP relaxation of ReLU neuron is not new and has been used in Raghunathan et al., 2018b, they solve the problem as a SDP rather than convex QP. SDP is tighter than the convex QP formulation used in this paper, however is much slower.\", \"issues_and_questions\": \"1. The concept of \\\"bi-direction verification\\\" is not new, since finding an upper bound is basically finding adversarial examples. Many previous papers have been using PGD based attacks to obtain the upper bound. Convex relaxation based verification methods like CROWN can also be used for generating adversarial examples, and it is called \\\"Interval attack\\\", which is demonstrated in [1][2]. Claiming this is the first \\\"bi-direction robustness verification technique\\\" is not accurate.\\n\\n2. The use of FGSM as an upper bound is inappropriate, as FGSM is known to be a very weak attack. Replacing it with a multi-step PGD attack is necessary. Using a stronger attack will also close the gap between upper and lower bound. Also, compare the upper bound found by PGD with QPRel-LB and update Figure 3(a). If a stronger attack like PGD is used, I think for larger norms CROWN+PGD in Figure 3(c) should be able to verify almost all examples.\\n\\n3. The models used in Table 1 is trained using a L2 perturbation of epsilon=0.1. This epsilon value is too small for L2 norm. In page 22 (last page of appendix in arxiv version) of [3], you can find the they conduct L2 robustness training but at a much larger epsilon value (eps=1.58). Sine the authors did not use these standard epsilon setting, my concern is that does the proposed method works at larger L2 epsilon?\\n\\n4. Some experiments on larger and deeper networks are necessary; especially, it is interesting to see how CROWN and the proposed method scale to deeper networks. The presented experiments only include networks with 1 and 2 hidden layers, which is insufficient. A new experiment with number of hidden neurons per layer kept (say 50) and increase the depth from 2 to 10 will be very helpful.\\n\\n5. The main claim of the paper in Introduction needs to be made clearer, especially the primary strength of the proposed algorithm is in L2 norm, and it does not seem to outperform CROWN in L infinity norm setting.\", \"further_improvements_and_potential_directions\": \"1. In the proposed method, the authors relaxed ReLU neurons using quadratic programming. This relaxation does not require to computing bounds for the neuron activation values. 
However, I think it is possible to include neuron activation upper and lower bounds as constraints of the QP problem (adding them as constraints like l <= x <= u in Eq. QPRel). This will make the bounds tighter. The per-neuron lower and upper bounds can be obtained using CROWN efficiently, so there is no too much computation cost.\\n\\n2. Improving the scalability of QP relaxation is another challenge. CROWN can be implemented efficiently on GPUs [4]. For QP relaxations, this can possibly be done by transforming QP solving into a computation graph that can be executed efficiently on GPUs (this is a potential future work directions and I do not expect the authors to address them during the discussion period).\\n\\nOverall I am positive with this paper, however before accepting it I think the authors should at least make their claims clearer (the relaxation performs well mainly in L2 norm, and the concept of \\\"bi-directional verification\\\" is also not entirely new), replacing FGSM by a 200-step PGD and compare the upper bound found by PGD with QPRel, and test the proposed algorithm in models trained with a larger epsilon (eps=1.58 to align with previous works, if possible) and deeper models.\\n\\n[1] Wang, S., Chen, Y., Abdou, A., & Jana, S. (2019). Enhancing Gradient-based Attacks with Symbolic Intervals. arXiv preprint arXiv:1906.02282.\\n[2] Wang, S., Chen, Y., Abdou, A., & Jana, S. (1811). MixTrain: Scalable Training of Verifiably Robust Neural Networks.\\n[3] Wong, E., Schmidt, F., Metzen, J. H., & Kolter, J. Z. (2018). Scaling provable adversarial defenses. In Advances in Neural Information Processing Systems (pp. 8400-8409).\\n[4] https://github.com/huanzhang12/RecurJac-and-CROWN\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method to compute the distance to the decision boundary for a given network, where the network is composed of linear layers followed by a RELU activation. The authors provide a lower bound and an upper bound for the distance of a sample from the decision boundary. The lower bound is obtained as the solution to a quadratic program, which in turn is obtained by relaxing the original optimization problem. The relaxation is obtained by decomposing the RELU condition to a set of 3 constraints (eqn 2). The authors also provide conditions under which the quadratic program stays convex.\\n\\nThe paper is clearly written. The method is useful to verify robustness of neural networks. The experiments show the improvement of the proposed method over existing certificates.\\n\\nWhile the lower bound is theoretically justified, I did not see any guarantees for the upper bound. I am not referring to a convergence proof here, but simply a guarantee that the value returned by Algorithm 1 is indeed an upper bound. Algorithm 1 does not verify whether the point returned belongs to a different class. It would also be helpful to provide intuition for the iterative procedure to compute the upper bound.\", \"minor_comments\": \"1) Line 9 is Algorithm 1: x^qp0 should be x^qp (no 0)\\n2) What is L/N in Table 1?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"* Summary:\\nThis paper introduces an encoding of the bounds on Neural Networks based on a (non-convex) quadratic program. The method covers both L_inf and L_2 perturbations, and the optimization problem posed are solved using Gurobi.\\n\\nThe authors formulate the ReLU as a quadratic constraint , and then relax the problem by taking it's Lagrangian dual. This results in the standard properties from taking a lagrangian dual: any choice of dual variables will produce a lower bound.\\nFor an appropriate choice of dual variables, the lagrangian problem is convex and the authors give a way of choosing dual variables that guarantees this convexity such that the bound can be computed.\\n\\nA method is proposed to find upper bound on the verified region (which can also be understood as just finding an incorrectly classified sample, that should ideally be the closest to the original point), based on iterating the solving of the QP.\", \"comments\": [\"Encoding of the ReLU as a quadratic constraint as done in (2) is not novel, as it was done before by Dvijotham et al. (UAI 2019) or Raghunathan et al. (NeurIPS 2018). The Lagrangian relaxation that is then done is to the best of my knowledge different than any one introduced before.\", \"The problem solved is also different to most of the literature: rather than verifying robustness for a certain radius, this attempts to find the maximum radius being robust, which provide more information.\", \"I'm confused at the upper bound finding method. The solution of the QP will not necessarily respect the constraints of forward propagation of the network so if you just consider the variables corresponding to the input, the resulting output may not necessarily a violation. Also, I don't understand the motivation for why repeatedly solving the QP will lead to a good violation on the decision boundary. I know that the paper says that \\\"analytic investigation of this algorithm including a convergence proof remains future work.\\\" but at the moment there is not even an intuition for why it might be a good idea. By the second iteration, there is no notion of the reference point around which safety is computed so it's not sure how the closest violation would be found.\", \"The network tested are extremely small, even by formal verification of neural network standards, which makes it hard to appreciate the impact of the method and makes me question the applicability of the method. Is it because QP are more complex to solve than LPs?\", \"It is also a little bit problematic to give results as ratio of improvements over the lower bound on radius, when most of the network used are non robust, given that those networks have extremely low verified radius, so the relative difference will look inflated.\", \"The reporting of the verification ratio as a function of the perturbation radius is an interesting measure that I think is very benificial to making the point but I think it should be better explained as it took me a long time to get the point. The experiment section in general is quite confuse and hard to parse, having to jump around quite a lot to get what the author meant.\", \"FGSM is a very very weak baseline for the use that is employed here. 
By construction, it doesn't look for the smallest violation, is not iterative, and produce perturbations at the limit of the attacked budget.\", \"The paper takes the opportunity to say that their method is 2000x faster than SDP based method but not that they are 10x to 100x slower than CROWN (outside from the appendix). It's better to report results clearly than only trying to show the good points of the algorithm. The bounds obtained are tighter than those resulting from Crown so it might be a worthwile tradeoff to make.\", \"Probably worth discussing / comparing to:\"], \"provable_certificates_for_adversarial_examples\": [\"Fitting a Ball in the Union of Polytopes, Jordan et al. (ICML 2019)\", \"Typos and minor comments:\", \"Adjust the label for Figure 2\", \"\\\"Guarantied\\\" on page 4\", \"top of page 8 \\\"VerRation\\\"\"]}"
]
} |
HklE01BYDB | Improving Sample Efficiency in Model-Free Reinforcement Learning from Images | [
"Denis Yarats",
"Amy Zhang",
"Ilya Kostrikov",
"Brandon Amos",
"Joelle Pineau",
"Rob Fergus"
] | Training an agent to solve control tasks directly from high-dimensional images with model-free reinforcement learning (RL) has proven difficult. The agent needs to learn a latent representation together with a control policy to perform the task. Fitting a high-capacity encoder using a scarce reward signal is not only extremely sample inefficient, but also prone to suboptimal convergence. Two ways to improve sample efficiency are to learn a good feature representation and use off-policy algorithms. We dissect various approaches of learning good latent features, and conclude that the image reconstruction loss is the essential ingredient that enables efficient and stable representation learning in image-based RL. Following these findings, we devise an off-policy actor-critic algorithm with an auxiliary decoder that trains end-to-end and matches state-of-the-art performance across both model-free and model-based algorithms on many challenging control tasks. We release our code to encourage future research on image-based RL. | [
"reinforcement learning",
"model-free",
"off-policy",
"image-based reinforcement learning",
"continuous control"
] | Reject | https://openreview.net/pdf?id=HklE01BYDB | https://openreview.net/forum?id=HklE01BYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"XM93tEDQra",
"S1lOShrmsB",
"SJlvWnSmjS",
"H1l3eXJ8tB",
"HkeFDVFtur",
"r1ly8pPKuH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738455,
1573243967519,
1573243902835,
1571316468174,
1570505825047,
1570499910666
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2020/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2020/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2020/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2020/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2020/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper investigates how sample efficiency of image based model-free RL can be improved by including an image reconstruction loss as an auxiliary task and applies it to soft actor-critic. The method is demonstrated to yield a substantial improvement compared to SAC learned directly from pixels, and comparable performance to other prior works, such as SLAC and PlaNet, but with a simpler learning setup. The reviewers generally appreciate the clarity of presentation and good experimental evaluation. However, all reviewers raise concerns regarding limited novelty, as auxiliary losses for RL have been studied before, and the contribution is mainly in the design choices of the implementation. In this view, and given that the results are on a par with SOTA, the contribution of this paper seems too incremental for publishing in this venue, and I\\u2019m recommending rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review response -- thanks for the feedback! [Part 2 of 2]\", \"comment\": \"Minor comments:\", \"r1\": \"Missing reference.\\nWe will update our paper with the missing reference, thank you for pointing this out.\", \"r2\": \"SLAC is not being model-based.\\nWe concur with the reviewer that SLAC does not use transitions sampled from the model for training and perhaps should be labeled as a model-free method. Our primary motivation was to showcase that SLAC trains a complicated latent model for dynamics that enjoys a lot of auxiliary supervision. In contrast, our method achieve competitive performance while being significantly simpler. We will update the wording in the paper to make the classification clearer.\", \"r3\": \"Representation power.\\nWe would like to point out that the encoder attributes for 90% of weights of our agent. The Q-function and policy networks are just small 3 layers MLPs on top of the encoder. Thus, we believe that the encoder should encapsulate some meaningful representations of internal states and our experiment is an adequate way to measure the amount of captured information.\"}",
"{\"title\": \"Review response -- thanks for the feedback! [Part 1 of 2]\", \"comment\": \"We thank the reviewers for their comments. In particular, we were gratified by R2\\u2019s positive observations: \\u201cThe approach is fairly simple and appears to be effective for a suite of challenging tasks\\u201d; \\u201cthis work could have a significant impact on the community\\u201d and \\u201cThe experiments are also thorough and well thought out\\u201d. We also appreciate that the reviewers find our paper to be well written and easy to follow.\\n \\n\\nR1, R3: Lack of novelty. \\nWe respectfully disagree with R1 and R3\\u2019s concerns. As we discuss in the paper, auxiliary objectives, including reconstruction loss, have certainly been used before in RL. However, the empirical performance of previous approaches is dramatically worse than our approach in Mujoco settings. This discrepancy is noted by R2, who correctly recognizes this as \\u201ca case where details matter\\u201d.\\n \\nWe are thus concerned that R1 and R3 may not fully appreciate this aspect of our paper. This concern is bolstered by R1\\u2019s view that our approach is that same as that of [Shelhamer et al.\\u201917], when in fact this work vividly illustrates how differing \\u201cdetails\\u201d lead to very different experimental outcomes. \\n\\nLike our paper, [Shelhamer et al.\\u201917] explored an auxiliary reconstruction loss (amongst others). But their training setup differs from ours in a variety of ways which turn out to be crucial to performance. In section 4.2, the authors note that \\u201cReconstruction by VAE is mostly harmful\\u201d and that \\u201cThe VAE even diverges for several environments\\u201d when trained stage-wise (section 4.3). Possibly because of this, the VAE is absent from their end-to-end training experiments (section 4.5). Thus due to an incorrect training protocol the authors (erroneously) conclude that an input reconstruction auxiliary loss is not effective. By contrast, our paper devises an *effective* training protocol for an input reconstruction auxiliary loss and shows that it is key to obtaining SOTA-comparable performance.\", \"r3\": \"Stability of training is due to SAC.\\nWe disagree. Augmenting SAC with a VAE (very similar conceptually to the RAE used in our approach) makes it highly unstable and sensitive to the choice of \\\\beta (see https://drive.google.com/open?id=1qYeiPXYl0iEmJYImZxDNhgjtNrDWtlfL ). Thus seems implausible that SAC is the main source of stability in our method. Furthermore, [Shelhamer et al.\\u201917] combine VAE\\u2019s with several RL algorithms and also find that it makes them unstable. \\n\\n\\nR1, R3: Importance of simplicity.\\nWe respectfully feel that this aspect has been overlooked by R1 and R3. Our method matches the performance of current SOTA methods, while being far simpler (both from an implementation and conceptual standpoint). For many reasons, this should make our approach preferable, not least of which is reproducibility. Indeed, our approach is straightforward to reimplement, unlike many RL algorithms. We also support our submission with a compact and easy to understand PyTorch implementation.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary\\n\\nThis paper proposes an approach to make the model-free state-of-the-art soft actor-critic (SAC) algorithm for proprioceptive state spaces sample-efficient in higher-dimensional visual state spaces. To this end, an encoder-decoder structure to minimize image reconstruction loss is added to SAC's learning objectives. Importantly, the encoder is shared between the encoder-decoder architecture, the critic and the policy. Furthermore, Q-critic updates backpropagate through the encoder such that encoder weights need to trade off image reconstruction and critic learning. The approach is evaluated on six tasks from the DeepMind control suite and compared against proprioceptive SAC, pixel-based SAC, D4PG as well as to the model-based baselines PlaNet and SLAC. The proposed method seems to achieve results competitive with the model-based baselines and significantly improves over raw pixel-based SAC. Further ablation studies are presented to investigate the information capacity of the learned latent representation and generalization to unseen tasks.\\n\\nQuality\\n\\nSince this is a paper with a strong practical focus, the quality needs to be judged based on the experiments. The quality of those are good in terms of the number of environments, baselines, benchmarks and seeds. I also liked the ablation studies to investigate latent representations and generalization to new tasks.\\n\\nClarity\\n\\nThe paper is very clearly written and easy to follow.\\n\\nOriginality\\n\\nUnfortunately, the originality is very low. Combining reinforcement learning with auxiliary objectives is not novel and has been studied in the Atari domain (discrete actions) as noted by the authors, see Jaderberg et al., ICLR, 2017 and Shelhamer et al., arXiv, 2017. The conceptual idea of using a reconstruction loss for images as auxiliary objective is not novel either and has been presented in earlier work already, see Shelhamer et al. The idea of sharing parameters between RL and auxiliary components is also not novel, see Jaderberg et al. One citation that is conceptually very similar to the authors' work is missing: 'Felix Leibfried and Peter Vrancx, Model-based regularization for deep reinforcement learning with transcoder networks. In NeurIPS Deep Reinforcement Learning Workshop, 2018'. The former work combines Q-value learning with auxiliary losses for learning an environment model end to end (with a reconstruction loss for the next state) in the domain of Atari.\\n\\nSignificance\\n\\nThe significance is minor to low. The fact that the authors investigate auxiliary losses in continuous-action domains has minor significance. But all in all, the paper might be better suited for a workshop rather than the main track of ICLR.\\n\\nMinor Details\\n\\nOn page 3, first equation (not numbered), there is an average over s_{t+1} missing because of the reward definition used by the authors?\\n\\nUpdate\\n\\nI read the other reviews and the authors' response. I still feel that the novelty of the work is very limited and the authors' response to lacking novelty does not convince me. However, in light of the strong experimental analysis, I feel in hindsight that a score of 1 from my side was too harsh. 
I therefore increase my score to 3, but I do still believe that the paper is better suited as a workshop contribution.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work presents a simple method for model-free RL from image observations. The key component of the method is the addition of an autoencoder that is trained jointly with the policy and value function, in contrast to previous methods which separate feature learning from policy learning. Another important modification is the use of a deterministic regularized autoencoder instead of a stochastic variational autoencoder. The method is evaluated a variety of control tasks, and shows strong performance when compared to a number of state-of-the-art model-based and model-free methods for RL with image observations.\\n\\nThe paper is well written and provides a very clear description of the method. The approach is fairly simple and appears to be effective for a suite of challenging tasks. RL from images remains a very challenging problem, and the approach outlined in this work could have a significant impact on the community. The experiments are also thorough and well thought out, and the release of the source code is much appreciated. While the overall novelty is a bit limited, this could be a case where details matter, and insights provided by this work can be valuable for the community. For these reasons, I would like to recommend acceptance.\\n\\nThere is mention of SLAC as a model-based algorithm. This is not entirely accurate. SLAC does learn a dynamics model as means of acquiring a latent state-representation, but this model is not used to train the policy or for planning at runtime. The policy in SLAC is trained in a model-free manner.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper aims to tackle the problem of improving sample efficiency of model-free, off-policy reinforcement learning in an image-based environment. They do so by taking SAC and adding a deterministic autoencoder, trained end-to-end with the actor and critic, with the actor and critic trained on top of the learned latent space z. They call this SAC-AE. Experiments in the DeepMind control suite demonstrate that the result models train much faster than SAC directly on the pixels, in some cases reaching close to the performance of SAC on raw state. Ablation studies demonstrate their approach is most stable with deterministic autoencoders proposed by (Ghosh et al, 2019), rather than the beta-VAE autoencoder proposed in (Nair et al, 2018), end-to-end learning of the autoencoder gives improved performance, and the encoder transfers to some similar tasks.\\n\\nI thought the paper was written well, and its experiments were done quite carefully, but it was lacking on the novelty front. At a high level, the paper has many similarities with the UNREAL paper (Jaderberg et al, 2017), which is acknowledged in the related work. This paper says it differs from UNREAL because they use an off-policy algorithm, and that UNREAL's auxiliary tasks are based off real-world inductive priors.\\n\\nI don't see the off-policy distinction as very relevant, because in the end, both UNREAL and SAC-AE are actor-critic algorithms (using A3C and SAC respectively). The way that SAC is used in the paper always collects data in a near on-policy manner, and UNREAL includes experience replay from a replay buffer, which introduces some off-policy nature to UNREAL as well. Therefore this doesn't feel like a strong argument.\\n\\nFurthermore, although some of the auxiliary tasks in UNREAL are based off human intuition for what makes sense in those environments, they also include task-agnostic auxiliary tasks: reward prediction and pixel-level control. These do not depend on real-world inductive priors, and are shown to improve performance.\\n\\nOverall, this doesn't feel like a strong enough contribution for ICLR.\", \"more_specific_comments\": [\"Section 6.1 examines the representation power of the encoder by reconstructing proprioceptive state from the encoder. I am not sure the comparison between SAC+AE and SAC is particularly meaningful here. The predictors are learned on top of the encoder output, and in SAC+AE we would expect task information to be encoded in the learned z. But in baseline SAC, there is no reason to expect this to be true - task information is more likely to be distributed across the entire network architecture. The case for SAC+AE seems much stronger from the reward curves, rather than these plots.\", \"The paper argues that their approach is stable and sample-efficient, but when looking at the reward curves, it looked about as stable as SAC. Figure 3 (where they do not train the VAE end-to-end in the red curve) has a similar story. This makes me believe that any claims of added stability are more thanks to SAC, rather than proposed methods.\"], \"edit\": \"I would like to clarify that the rating system only provides a 3 for Weak Reject and 6 for Weak Accept. 
On a 1-10 scale I would rate this as a 5, I feel it is closer to Weak Accept than Weak Reject.\", \"edit_2\": \"I've read the other author's comments. I'm not particularly convinced by the case for novelty, but I didn't realize that UNREAL's replay buffer was only 2k transitions instead of 1 million transitions. On reflection, I believe the main contribution here is showing that deterministic autoencoders are more reliable than stochastic ones for the RL setting, and this isn't the biggest contribution, but it's enough to make me update to weak accept.\"}"
]
} |
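The record above debates the SAC+AE architecture without showing one, so a minimal sketch may help: a convolutional encoder shared with the critic, plus a deterministic decoder trained on a reconstruction loss with an L2 penalty on the latent (the RAE-style objective the rebuttal mentions). All layer sizes, module names, and the `latent_coef` value here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelEncoder(nn.Module):
    """Conv encoder shared between the critic and the autoencoder."""
    def __init__(self, in_ch=3, feature_dim=50):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2), nn.ReLU(),  # 84x84 -> 41x41
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),     # 41x41 -> 39x39
        )
        self.fc = nn.Linear(32 * 39 * 39, feature_dim)
        self.ln = nn.LayerNorm(feature_dim)

    def forward(self, obs):
        h = self.convs(obs).flatten(1)
        return torch.tanh(self.ln(self.fc(h)))

class PixelDecoder(nn.Module):
    """Mirror of the encoder, used only for the auxiliary reconstruction."""
    def __init__(self, out_ch=3, feature_dim=50):
        super().__init__()
        self.fc = nn.Linear(feature_dim, 32 * 39 * 39)
        self.deconvs = nn.Sequential(
            nn.ConvTranspose2d(32, 32, 3, stride=1), nn.ReLU(),             # 39 -> 41
            nn.ConvTranspose2d(32, out_ch, 3, stride=2, output_padding=1),  # 41 -> 84
        )

    def forward(self, z):
        h = F.relu(self.fc(z)).view(-1, 32, 39, 39)
        return self.deconvs(h)

def autoencoder_loss(encoder, decoder, obs, latent_coef=1e-6):
    """Deterministic (RAE-style) objective: reconstruction + latent L2 penalty."""
    z = encoder(obs)
    return F.mse_loss(decoder(z), obs) + latent_coef * (z ** 2).sum(1).mean()

# Joint training step on a batch of image observations; in the full agent this
# alternates with SAC updates, with the actor reading encoder(obs).detach() so
# that only the critic and the autoencoder shape the shared features.
enc, dec = PixelEncoder(), PixelDecoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
obs = torch.rand(8, 3, 84, 84)  # stand-in for a replay-buffer batch
loss = autoencoder_loss(enc, dec, obs)
opt.zero_grad(); loss.backward(); opt.step()
```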
rJe7CkrFvS | Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search | [
"Jakob J. Hollenstein",
"Erwan Renaudo",
"Justus Piater"
] | Most Deep Reinforcement Learning methods perform local search and are therefore prone to getting stuck on non-optimal solutions. Furthermore, in simulation-based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome the issues of local search and to exploit access to simulation models, we propose using kino-dynamic planning methods as part of a model-based reinforcement learning method and learning in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits better state-space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms, and that starting from the planner data and performing additional training results in policies as good as or better than those of vanilla D-RL methods, while also creating data that is better suited for re-use in modified tasks. | [
"reinforcement learning",
"kinodynamic planning",
"policy search"
] | Reject | https://openreview.net/pdf?id=rJe7CkrFvS | https://openreview.net/forum?id=rJe7CkrFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"hTvbEPHVDk",
"rklbNiVniB",
"S1ez8cE2jB",
"HJeltOV2jS",
"Bkefy-y0Fr",
"ByxPXHFptS",
"SygxmJz8YH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738423,
1573829417277,
1573829194110,
1573828727728,
1571840218313,
1571816734793,
1571327768132
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2019/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2019/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2019/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2019/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2019/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2019/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper is about exploration in deep reinforcement learning. The reviewers agree that this is an interesting and important topic, but the authors provide only a slim analysis and theoretical support for the proposed methods. Furthermore, the authors are encouraged to evaluate the proposed method on more than a single benchmark problem.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Comment\", \"comment\": \"\\u2022 Thank you for your review!\\n\\u2022 @1.) Eventually, probably the most interesting final metric is task success and therefore achieved return \\u2013 but that strongly depends on the task.\\n\\nWithout assumptions on how the reward is structured (with respect to the state space), it is not possible to exclude portions of the state space and without excluding regions of the state space it is not possible to explore more intelligently. \\n\\nIn dynamical systems, the distance between two state space vectors cannot be measured by simply taking the euclidean distance \\u2013 this is due to the dynamics (under-actuation, obstacles). A simple example is a pendulum swing-up, the mountain car example or a robot in a maze. While the target location (in the state space) might be close in terms of euclidean distance, it may not be possible to reach that point (not enough torque and force, or a blocking wall). As such it is hard to define guiding assumptions for the exploration. As such uniform exploration of the state space appears to be a crude but reasonable approach.\\n\\n\\u2022 @1.) Curse of dimensionality: The method will suffer from the curse of dimensionality, however, this is also true for other methods - probably ways to deal with this problem are a) to reduce the high dimensional problem to a lower dimensional one, or b) to use heuristics and solve it only approximately.\\n\\nOne benefit of RRT and the local steering method is that even in high dimensional spaces the tree will span the state space coarsely if the dynamics allow that.\\n\\n\\u2022 @1.) RRT/RRT*: The difference between RRT and RRT* is that RRT* finds optimal paths from the initial point to the target points in the state space, while the paths found by RRT are not optimal (i.e. not the shortest paths) - however, RRT* requires additional environment steps to perform this optimization - whereas we mostly want to use RRT to find ways to reach large areas of the state space and optimize around the most promising regions.\\nThis is also visible in Figure 5, where training is done from 50k RRT steps (full exploration) and then slowly replaced by samples from SAC - thereby fading from pure exploration to exploitation. Although unfortunately we did not highlight this aspect well.\\n\\n\\u2022 @2.) While gradient-based algorithms suffer from local optimality, it does not feature prominently in D-RL research. And given the large amount of excitement around D-RL and the impressive success (e.g. the OpenAI work using the Shadow Dexterous Hand, although this was achieved with great effort - thirteen thousand years of experience) - we felt it beneficial to show that this is a problem that actually happens in practise and therefore is relevant.\\n\\u2022 We will include more experiments in a future extension of this paper.\"}",
"{\"title\": \"Comments on Questions\", \"comment\": \"\\u2022 Thank you for your review!\\n\\u2022 We eventually want to apply our method on robotics tasks and therefore we focus on continous state/continuous action spaces.\\n\\u2022 The paper by Zhan et al. \\u201919 (\\u201cTaking the scenic route: Automatic exploration for video games\\u201d) shows an interesting idea to extend the use of RRT even to domains like the Atari games: they use features of a neural network as a low-dimensional continuous embedding of an (Atari/similar) game state (i.e. the image). They use a simplified version of RRT that samples a target point, restores the closest state stored in the tree and tries to reach that target\\npoint. However usually RRT uses a local steering method to reach that target point \\u2013 while Zhan et al. use a (random) action sequence irrespective of the target point.\\nSince we want to target to robotics tasks, where action and feature spaces are inherently continuous and discretization becomes infeasible, we need to be able to deal with such action spaces. Moreover and related to the next comment, random steering will often not be beneficial: either it cannot be done at all, or it is too inefficient.\\n\\u2022 MCTS: Since we want to eventually apply our method on robotic tasks, we are focusing on continuous domains, we focus on planning methods that are able to deal with such domains and therefore sampling based planners. AlphaGo is an impressive showcase of an extended version of MCTS - however MCTS needs extensions to be applicable in continuous domains, such as\\nfor example (but not limited to) HOOT (Mansley et al.,\\u201cSample-Based Planning for Continuous Action Markov Decision Processes\\u201d), to be applicable to continuous action domains.\", \"a_second_aspect_is_that_mcts_typically_does_not_use_the_information_often_available_in_these_continuous_domains\": \"the locally linearised dynamics - which the local steering method of RRT exploits.\\nWe therefore chose RRT as a reasonably effective, yet reasonably simple to implement planning method - although we do not foresee any reason why other planning methods would not work as long as they are applicable to kino-dynamic domains, and produce environment interactions.\\n\\u2022 We will add more tasks in the next extension.\"}",
"{\"title\": \"Comment\", \"comment\": \"\\u2022 Thanks for your review!\\n\\u2022 Our proposed method uses a planning method (in this implementation RRT) in the learning and data collection phase - which is then used to learn a policy. During execution the policy\\nis used thereby eliminating the planner time from policy execution.\\n\\u2022 The time taken by the planner during the offline data collection phase is not evaluated in our paper yet - we will add that in a future extension.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper is mostly easy to read and I enjoyed reading it. The authors address an important issue of exploration in reinforcement learning and the used of a model-based planner is certainly a promising direction. However, I do have a number of concerns.\\n\\n1. On Q1. I think the key question here is this -- should state-space coverage be the only measure for effective exploration? The classical dilemma of explore-or-exploit in reinforcement learning is relevant here. From Figure 3, it seems that RRT tends to explore uniformly rather than \\\"intelligently\\\". For problems where there is absolutely no information guiding the exploration process this might be desirable, but then the search complexity will suffer from the curse of dimensionality and there is no evidence in this work that this is a good strategy. Perhaps switching from RRT to RRT* helps but the authors chose not to do it.\\n\\n2. On Q2. Perhaps I missed something here but other than special cases (e.g. convex problems) almost all gradient-based algorithms suffer from local optimality. I am not sure Q2 is a good question to ask here.\\n\\n3. On Q3. It seems that SAC from scratch is the best-performing approach here. This particular setting is hardly convincing in motivating the re-use of examples across tasks.\\n\\nThe above concerns, plus the fact that only one particularly simple task is being investigated here, prevent me from recommending acceptance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper aims to improve exploration in DRL through the use of planning. This is claimed to increase state space coverage in exploration and yield better final policies than methods not augmented with planner derived data.\\n\\nThe current landscape of DRL research is very broad, but RRT can only directly be applied in certain continuous domains with continuous action spaces. With learned embedding functions, RRT can be applied more broadly (see \\\"Taking the Scenic Route: Automatic Exploration for Videogames\\\" Zhan 2019). The leap from RRT-like motion planning to the general topic of \\\"planning\\\" for policy search is not well motivated explained with respect to the literature. Uses of Monte Carlo Tree Search (as in AlphaGo) seem obviously related here.\\n\\nThis reviewer moves to reject the paper primarily on the grounds of overinterpreting experimental results from a single, extremely simple example RL task. In a domain so small, we can't tease out the role of exploration, we aren't engaging with the \\\"deep\\\" of DRL, and we are only considering one specific kind of planning. The implicit claims of general improvement to exploration and improved downstream policies are not supported by the experimental results. At the same time, no theoretical argument is attempted that would make up for the very narrow nature of the experiments.\", \"questions_for_the_authors\": [\"If HalfCheetah is used to motivate the work, and it is so easily available in the open source offerings from OpenAI, why isn't one (or many more) tasks of *at least* this complexity considered? MountainCar is one of the gym environments with a 2D phasespace compatible with the kinds of plots used in this paper.\", \"Could the authors taxonomize the landscape of planning and provide a specific argument for focusing on RRT? (RRT is a fun algorithm, but how will you draw the attention of other researchers who are currently focused on Atari games?)\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper suggested that conventional deep-reinforcement learning (D-RL) methods struggle to find global optima in toy problem when two local optima exist. The authors proposed to tackle this problem using planning method (Rapidly Exploring Random Tree, RRT) to expand the search area. Since the collected data are not correlated with reward, it is more likely to find the global optima in toy problem with two local optima . As to the planning time problem, they proposed to synthesize the planning results into a policy.\\n\\nThe experiments proved that the proposed method performs better in the aforementioned toy problem, and has advantage in adapting dynamic environment. However, the authors failed to provide sufficient analyis and theoretical support for the proposed method, plus it did not address the weakness of the RRT method-the problem of planning time.\"}"
]
} |
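The exchanges above reference RRT-based exploration for data collection without spelling it out; the following is a hedged toy sketch of how such a planner can gather off-policy training data. The simulator interface `env_step`, the random-action steering fallback, and the double-integrator toy dynamics are all assumptions for illustration — a real kinodynamic RRT would use a proper local steering method, as the authors note in their responses.

```python
import numpy as np

def rrt_explore(env_step, sample_state, sample_action, x0,
                n_nodes=1000, n_candidates=10):
    """Grow an RRT-like tree and return the visited transitions as replay data."""
    nodes = [np.asarray(x0, dtype=float)]
    transitions = []  # (state, action, next_state) tuples for off-policy D-RL
    for _ in range(n_nodes):
        target = sample_state()  # uniform target in the state space
        x_near = min(nodes, key=lambda x: np.linalg.norm(x - target))
        # Crude steering: try a few random actions and keep the one that ends
        # closest to the sampled target (a stand-in for a local steering method).
        best_d, best_a, best_x = np.inf, None, None
        for _ in range(n_candidates):
            a = sample_action()
            x_new = env_step(x_near, a)
            d = np.linalg.norm(x_new - target)
            if d < best_d:
                best_d, best_a, best_x = d, a, x_new
        nodes.append(best_x)
        transitions.append((x_near, best_a, best_x))
    return nodes, transitions

# Toy double-integrator dynamics (position, velocity) as a stand-in simulator:
dt = 0.05
step = lambda x, a: np.array([x[0] + dt * x[1], x[1] + dt * a])
nodes, data = rrt_explore(step,
                          sample_state=lambda: np.random.uniform(-1.0, 1.0, 2),
                          sample_action=lambda: np.random.uniform(-1.0, 1.0),
                          x0=[0.0, 0.0], n_nodes=200)
```

Because target states are sampled uniformly rather than by reward, the resulting tree spans the reachable state space coarsely — the state-space-coverage property the authors argue for against purely local D-RL exploration.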
SJlM0JSFDr | A Theoretical Analysis of Deep Q-Learning | [
"Zhuoran Yang",
"Yuchen Xie",
"Zhaoran Wang"
] | Despite the great empirical success of deep reinforcement learning, its theoretical foundation is less well understood. In this work, we make the first attempt to theoretically understand the deep Q-network (DQN) algorithm (Mnih et al., 2015) from both algorithmic and statistical perspectives. In specific, we focus on a slight simplification of DQN that fully captures its key features. Under mild assumptions, we establish the algorithmic and statistical rates of convergence for the action-value functions of the iterative policy sequence obtained by DQN. In particular, the statistical error characterizes the bias and variance that arise from approximating the action-value function using deep neural network, while the algorithmic error converges to zero at a geometric rate. As a byproduct, our analysis provides justifications for the techniques of experience replay and target network, which are crucial to the empirical success of DQN. Furthermore, as a simple extension of DQN, we propose the Minimax-DQN algorithm for zero-sum Markov game with two players, which is deferred to the appendix due to space limitations. | [
"reinforcement learning",
"deep Q network",
"minimax-Q learning",
"zero-sum Markov Game"
] | Reject | https://openreview.net/pdf?id=SJlM0JSFDr | https://openreview.net/forum?id=SJlM0JSFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"zzNZLRApya",
"BJe7ITasir",
"Hklsij6joH",
"Skx3BiTjiH",
"Bylbko6ooH",
"SJej2cpsor",
"r1xwdcaiiS",
"SJlXXDdAcr",
"SJeRYDrnYr",
"SyeZTEeEFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738393,
1573801291404,
1573800867070,
1573800771831,
1573800664604,
1573800627181,
1573800559030,
1572927259091,
1571735429705,
1571189945076
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2017/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2017/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2017/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2017/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2017/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2017/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2017/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2017/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2017/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors offer theoretical guarantees for a simplified version of the deep Q-learning algorithm. However, the majority of the reviewers agree that the simplifying assumptions are so many that the results do not capture major important aspects of deep Q-Learning (e.g. understanding good exploration strategies, understanding why deep nets are better approximators and not using neural net classes that are so large that can capture all non-parametric functions). For justifying the paper to be called a theoretical analysis of deep Q-Learning some of these aspects need to be addressed, or the motivation/title of the paper needs to be re-defined.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"We appreciate the valuable reviews by the reviewers and have updated a revised version\", \"comment\": \"We appreciate the valuable reviews by the reviewers and the efforts the reviewers have dedicated. it seems that both Reviewers 2 and 4 are concerned with the motivation of our FQI algorithm. In the revised version, we clearly state in the abstract that the algorithm we consider is FQI with deep neural networks, which is a simplification of DQN that captures the features of experience replay and the target network. In the introduction and section 3, we explain in detail why such a simplification is reasonable.\"}",
"{\"title\": \"Response to Reviewer 2 regarding detailed comments\", \"comment\": \"Due to space limits, we address the detailed comments in a separate reply as follows.\\n\\n1. (Sparse ReLU). We agree that finding the ERM solution in each iteration of FQI is computationally challenging. Here our focus is on the theoretical properties. We adopt sparse NNs because they are a family of universal function approximators, and we consider general MDPs with a weak assumption on the smooth of the transition.\\n\\nTo alleviate the computational problem, we can instead focus on the family of overparametrized neural networks. However, it remains open that the representation power of this class of NNs. In other words, it might incur a large bias when applying the Bellman operator $\\\\mathcal{T}$ to this function class, i.e., $\\\\inf_{f\\\\in \\\\mathcal{F}} \\\\sup_{g \\\\in \\\\mathcal{F} } \\\\| f - Tg \\\\|$ is large when $\\\\mathcal{F}$ is the family of overparametrized NN. Nevertheless, we have also characterized the statistical error of FQI under this setting in Appendix B.\\n\\n2. We thank the reviewer's suggestion on the paper presentation. We write Section 3 in details to explain that reducing DQN to the version of FQI considered in our work is reasonable. The main message is that although DQN has the tricks of experience replay with a large memory size and target network that is fixed for a long time, the ideal version of this DQN reduces to our FQI, which motivates our analysis in Section 4.\\n\\nWe will revise this section to highlight our motivation and also try to make the presentation neat.\\n\\n3. We did not claim that the estimator in Appendix B solves the ERM in (3.4). Instead, we acknowledge that this problem is computationally intractable. In Appendix B, we would like to provide another neural FQI algorithm which can be computed. We also analyze the statistical error of this setting in Appendix B. \\n\\n4. As mentioned previously, the reason that we called it deep Q-learning is because our algorithm is a reasonable simplification of the DQN algorithm. Our algorithm is fitted Q-iteration with deep neural networks. Note that sampling from a fixed distribution is a standard assumption in FQI ([Munos and Szapesvari, 2008]). We have revised the abstract to make it clear that we focus on FQI.\\n\\n5. The proof of Theorem 4.4 consists of three parts: 1) error propagation, which studies how the regression error in each iteration accumulates as the FQI algorithm proceeds, 2) the regression error in each iteration, and 3) balance the error terms to get the final statistical error. \\n\\nThe general error propagation in our work essentially follows from the results in FVI ([Munos and Szapesvari, 2008]). However, the regression error analysis in the second part involves the particular structure of the Q-network. We need to analysis control the bias and variance separately. Moreover, in the last step, we explicit characterize the bias term of applying Bellman operator to the class of Q-networks $\\\\mathcal{F}$, $\\\\inf_{f\\\\in \\\\mathcal{F}} \\\\sup_{g \\\\in \\\\mathcal{F} } \\\\| f - Tg \\\\|$. Thus, the last two steps of the analysis are specific to the deep neural networks and are not covered in [Munos and Szapesvari, 2008].\\n\\n6. In our FQI algorithm, the target network is just the Q-network in the last iteration. That is, we fixed the previous Q-network as the target network and learn a new Q-network via regression using DNN. 
Then the new Q-network is used to replace the target network.\", \"references\": \"[Lange et al, 2012] Batch Reinforcement Learning. Sascha Lange, Thomas Gabel, and Martin Riedmiller, 2012. \\\\textit{https://link.springer.com/chapter/10.1007/978-3-642-27645-3_2}\\n\\n[Chen and Jiang, 2019] Information-Theoretic Considerations in Batch Reinforcement Learning. Jinglin Chen and Nan Jiang, 2019. \\\\textit{https://arxiv.org/abs/1905.00360}\\n\\n[Barron and Klusowski, 2018] Approximation and Estimation for High-Dimensional Deep\\nLearning Networks. Andrew R. Barron and Jason M. Klusowski, 2018. \\\\textit{https://arxiv.org/abs/1809.03090}\\n\\n[Munos and Szapesvari, 2008] Finite-time bounds for fitted value iteration. Remi Munos and Csaba Szepesvari. Journal of Machine Learning Research. 2008\"}",
"{\"title\": \"Response to Reviewer 2 regarding general comments\", \"comment\": \"We appreciate the valuable comments from the reviewer. We first address the concern on the assumption of i.i.d. sampling from a fixed behavioural policy and then address each detailed comments separately.\\n\\n\\nSampling i.i.d. data from a behavioural policy:\\n\\nAs also pointed out by Reviewer 4, the challenges of DQN involves exploration and generalization. In this work, we avoid the exploration problem by assuming sampling i .i.d. data from the behavioural policy and concentrability coefficients are bounded. We did not tackle exploration as provably efficient RL algorithms under the general function approximation setting remains an open problem. To study this problem, a standard metric is called ``regret'', and normally we need to modify the algorithm by constructing some ``optimistic value functions''. We believe that this is beyond the scope of this work. We only want to understand the vanilla version of DQN, which uses the $\\\\epsilon$-greedy approach for exploration. \\n \\n Here, our assumption of i.i.d. sampling from a behavioural policy is motivated by the common practice of having an extremely large memory buffer and sample i.i.d. data from the replay memory. Since the size of reply memory is huge, the sampling distribution of data changes very slowly as we update the replay memory. This empirical trick essentially aims to create i.i.d. data from sampling distribution, and is captured by our simplification of sampling i.i.d data from a fixed distribution.\\n \\nBesides, in DQN training, the target network is usually fixed for a long time with only the Q network updated by gradient descent. Then the target network is updated using the weights of the Q-network. This is essentially solving a regression problem with a fixed target network and use the Q network as the regressor. Then the learned Q network is used to update the target network. Thus we recover the FQI algorithm.\\n\\nTherefore, with a slight simplification of the tricks of experience replay and target network, we arrive at the FQI algorithm studied in our work. This motivates the study of our algorithm. \\n\\nMoreover, we tend not to agree with the reviewer on that \\\"FQI is not a traditional reinforcement learning algorithm\\\". In fact, FQI belongs to the family of batch reinforcement learning methods, which has lots of existing work. Batch RL is motivated by the fact that we would like to solve the reinforcement learning problem purely from historical data without the help of a simulator. Such a type of RL problems arises in applications such as recommendation system and prescription medicine, where it is challenging to run a trial-and-error algorithm or build a simulator. Please kindly find [Lange et al, 2012] for a survey and [Chen and Jiang, 2019] for recent understandings of the challenges of this problem.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for appreciating this work and for pointing out the typos. We have addressed the issues raised by the other reviewers and revised our work accordingly.\"}",
"{\"title\": \"Response Continued (references)\", \"comment\": \"Due to space limits, we list the references as follows.\", \"references\": \"[Chen and Jiang, 2019] Information-Theoretic Considerations in Batch Reinforcement Learning. Jinglin Chen and Nan Jiang, 2019. \\n\\n[Barron and Klusowski, 2018] Approximation and Estimation for High-Dimensional Deep Learning Networks. Andrew R. Barron and Jason M. Klusowski, 2018.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We appreciate the valuable comments from the reviewer. We first address the general concerns of the reviewer on the assumptions we made and then address each detailed comments separately.\", \"assumptions_regarding_exploration_and_generalization\": \"Our work aims to understand the empirical success of deep Q-learning from a theoretical perspective. To fully understand DQN, one indeed needs to tackle the challenges of exploration and generalization simultaneously. However, in this work, we pursue a more modest goal by only focusing on the generalization. Particularly, we view DQN as a sequence of fitted Q-iterations with neural networks. We aim to understand how the error incurred in each iteration of FQI affects the final generalization error (statistical error) of the DQN algorithm. \\n\\nMoreover, it is known that DQN adopts two tricks that have not been well understood -- i) experience replay and ii) target network. In practice, the memory size of experience replay is extremely large, and the target network is usually fixed for a large number of parameter updates of the Q network. As we show in Section 3, this motivates us to study the statistical error by looking into the problem of FQI.\", \"address_the_detailed_comments\": \"1. The assumption on i.i.d. data and concentrability coefficients. \\n \\n As pointed out by the reviewer, we avoided the exploration problem by assuming sampling i .i.d. data from the behavioural policy and concentrability coefficients are bounded. We did not tackle exploration as provably efficient RL algorithms under the general function approximation setting remains an open problem. To study this problem, a standard metric is called ``regret'', and normally we need to modify the algorithm by constructing some ``optimistic value functions''. We believe that this is beyond the scope of this work. We only want to understand the vanilla version of DQN, which uses the $\\\\epsilon$-greedy approach for exploration. \\n \\n Here, our assumption of i.i.d. sampling from a behavioural policy is motivated by the common practice of having an extremely large memory buffer and sample i.i.d. data from the replay memory. Since the size of reply memory is huge, the sampling distribution of data changes very slowly as we update the replay memory. This empirical trick essentially aims to create i.i.d. data from sampling distribution, and is captured by our simplification of sampling i.i.d data from a fixed distribution.\\n \\n In addition, we admit that the `` bounded concentrability coefficients'' is a technical assumption, which is used to capture the distributional shift caused by having different policies. As shown in [Chen and Jiang, 2019], concentrability is a necessary assumption for theoretical analysis. Moreover, they show that, even when the function class is closed under the Bellman operator, the reinforcement learning problem is computationally hard when the concentrability assumption is missing. Thus, we adopt this assumption for the aim of theoretical analysis. This assumption holds true when the transition kernel of the MDP has some nice properties. For hard exploration problems such that this assumption fails to hold, the hardness result in [Chen and Jiang, 2019] show that it is also not hopeful to solve efficiently using DQN.\\n \\n 2. 
Holder smooth assumption, nonparametric rate, and the usage of neural network.\\n \\n As the reviewer has pointed out, we do not assume that the neural network function class is closed under the Bellman operator. Instead, we show that the target function is Holder smooth when the transition kernel satisfies certain smoothness conditions. Then we show that the neural network class yields a nonparametric rate of convergence. \\n \\n It is true that the nonparametric rate can also be obtained by RKHS regression. Thus, this result does not exhibit the superiority of using neural networks. However, we adopt the deep neural network as the parametrization of the Q function of RL because our goal is to understand DQN. From that perspective, we show that, DQN roughly works as good as FQI with other nonparametric regressors. Note that this type of theoretical guarantees of DQN is not known before. \\n \\n Moreover, even in supervised learning, it seems that, without extra problem structures, the theoretical guarantees of deep learning is at most as good as kernels. Here we also assume Holder smooth for generality. Suppose we are willing to assume more problem structures, we can obtain a much faster $1/ \\\\sqrt{n}$ rate. For example, suppose the Bellman operator is closed for the family of DNNs, by extending the analysis in [Barron and Klusowski, 2018], we obtain a $\\\\sqrt{L^3 \\\\log d / n}$ rate, where $L$ is the number of layers and $d$ is the input dimension.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper addresses an important and challenging problem, and the results appear technical sound. It is also fairly thorough and well-written. However, my overall feelings toward the result are mixed, because I believe the assumptions the authors make are so strong that they essentially remove most of the interesting problem structure, and consequently the results follow by straightforward application of known techniques.\", \"two_major_challenges_in_understanding_dqn_are_as_follows\": \"1) Exploration: Why does the algorithm successfully explore and solve MDPs with large state spaces?\\n2) Generalization: How do the overparameterized neural networks used for value function approximation help with generalization (or exploration)?\\n\\nThe issue of exploration is assumed away by the authors, as they work in the batch/offline RL setting where examples (s,a,r,s') are i.i.d., and the assume that the so-called \\\"concentrability coefficient\\\", which measures mismatch between the data distribution and the data induced by the optimal policy, is bounded. This assumption is standard in the analysis of fitted Q-iteration for off-policy RL (eg, Munos and Szepesvari '08), but it implies that the algorithm does not need to solve a challenging exploration problem, since the data-gathering policy has good coverage. Unfortunately, the authors do not justify why this assumption should hold for DQN.\\n\\nThe standard analysis for off-policy fitted Q-iteration does not simply require that the concentratability coefficient is bounded, but also requires another strong assumption, which is that the function class is closed/complete under bellman updates. In general, this is a difficult property to verify, and it is well-known that fitted Q-iteration can cycle and fail to converge when it does not hold. This leads to the issue of generalization: The way the authors get around the issue of closedness/completeness is to work in the fully nonparametric regime: They take the class of neural nets under consideration to be large enough to approximate any Holder smooth function, then show that under mild assumptions on the dynamics this class of Holder smooth functions is closed under bellman updates. This is a good trick, but it has an unfortunate consequence, which is that by blowing up the class of neural networks, the generalization bound one can prove is quite weak. Ultimately, the generalization bound the authors give follows the standard rate for Holder-smooth functions in nonparametric statistics, which is exponential in dimension whenever the function class is $p$th order smooth for constant $p$. For example, when the class of functions is lipschitz the rate is $n^{-1/(2+d)}$, where $n$ is the number of examples. Since we are paying the fully nonparametric rate for generalization here, this begs the question of why neural nets were even used to begin with, which is not addressed.\\n\\nTo conclude, this is a certainly a challenging problem, but I don't think the paper is transparent about the limitations of the techniques (as described above) and I believe the title of the paper, \\\"A theoretical analysis of deep Q-learning\\\", is too strong given the shortcomings of the results.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors provide a theoretical analysis of deep Q-learning based on the neural fitted Q-iteration (FQI) algorithm [1]. Their analysis justifies the techniques of experience replay and target network, both of which are critical to the empirical success of DQN. Moreover, the authors establish the algorithmic and statistical errors of the neural FQI algorithm.\\nThen, the authors propose the Minimax-DQN algorithm for the zero-sum Markov game with two players. They further establish the algorithmic and statistical convergence rates of the sequence of action-value functions obtained by the Minimax-DQN algorithm.\\n\\n[1] Martin Riedmiller. Neural fitted Q iteration\\u2013first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317\\u2013328. Springer, 2005.\\n\\nThe strengths of this paper are as follows.\\n1. This paper is theoretically sound. The authors establish the convergence rates with detailed proofs step by step. \\n2. It is the first theoretical analysis that provides the errors of the neural FQI algorithm with a ReLU network. This analysis provides a rigorous approach to understand deep q-learning algorithms.\\n3. The authors propose an extension of DQN for the zero-sum Markov game with two players. They further analyze the convergence rates of the sequence of action-value functions obtained by the proposed algorithm.\", \"minor_comments\": \"1. Page 2: In Notation, \\\"$\\\\|f\\\\|_{2,v}$\\\" may be \\\"$\\\\|f\\\\|_{v,2}$\\\"\\u3002\\n2. Page 3: In the 2th line of Section 2.2, \\\"$\\\\{d_j\\\\}_{i=0}^{L+1}$\\\" may be \\\"$\\\\{d_i\\\\}_{i=0}^{L+1}$\\\".\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper analyze the off-policy policy improvement algorithm with a very limited sparse ReLU function class. Although some of the results are interesting, I have lots of concern on the motivation of this work. I believe this paper is technically correct, but the authors focus on a very simplified case: they just use samples generated from a fixed sampling policy, which gets rid of the analysis of exploration and sample complexity that is a main focus of the reinforcement learning community. From my point of view, this may not be some analysis of reinforcement learning, at least not for Deep Q-learning, but more likely to be some learning theory of off-policy FQI. The main theorem investigates the statistical error and convergence rate of this problem, which can be of individual interest. But overall, I think the problem the authors want to solve is not a traditional reinforcement learning algorithm, and it is not appropriate to introduce the result as the theoretical analysis of Deep Q-Learning.\", \"detailed_comments\": \"1. I think the assumption of Sparse ReLU network is too strong and generally not held in practice. Also, the optimization of such kind of network is painful, as the ell_0 constraint makes the optimization problem NP-hard. In other words, I think the authors only handle a very specific case under very ideal condition like assuming an oracles that can return the optimal network each turn.\\n2. The equivalence between FQI and target network is well-known and may not occupy so many places in Sec 3. Also, the results in Appendix B can be simply derived follows the recent development of neural network optimization. As this may be not the main contribution of this paper, I think it is better to omit these parts to make the paper more neat. \\n3. Moreover, in appendix B, the authors assumed the function class as two-layer ReLU network, which is different from the assumption in the main text and cannot justify the global convergence of (3.4).\\n4. It is somewhat strange of assume a sampling distribution, as when we say Q-learning, we want to balance the exploration and exploitation given current estimation Q. Even in Deep Q-Learning, the data are sampled with \\\\epsilon-greedy policy w.r.t the current Q network. This kinds of problems are more like off-policy policy improvement. I think call it the analysis of Deep Q-Learning is somehow not accurate and over-claimed. Maybe better called off-policy policy improvement with deep neural networks.\\n5. Theorem 4.4 is an interesting result as it shows that the error of the proposed algorithm can be decomposed into the a statistical error which depends on the smoothness of the operator Tf and an algorithm error that depend on the number of iterations. I am wondering what's the main technical differences between this work and [1], as I find the main difference is [1] don't give K-dependent algorithm error, instead assuming K have a order of log 1/epsilon to ensure algorithm error is smaller than \\\\epsilon. I feel it's not so hard to derive a bound that combines statistical error and algorithm error for [1]. 
Also, FVI in [1] is not in spirit totally different from FQI in this paper given that [1] use the maximum operator over action when do FVI, not take expectation over the target policy. I hope the authors can clarify in their paper.\\n6. The authors don\\u2019t mention much of the target network in the main theorem. I know the generalization to the update with target network is not so hard, but as the authors mentioned so much time in the main text, shall it be better to include the result with target network?\\n\\nStill, in my opinion, the main theorem has its own value. However, it is not proper to claim as a theoretical analysis of Deep Q-Learning. Also, I feel the function class is too restricted and the optimization issue in the proposed algorithms cannot be simply solved, and the main analysis is similar to [1] with little generalization to Holder smoothness. Thus, I tend to reject this paper.\\n\\n[1] Munos, R\\u00e9mi, and Csaba Szepesv\\u00e1ri. \\\"Finite-time bounds for fitted value iteration.\\\" Journal of Machine Learning Research 9.May (2008): 815-857.\"}"
]
} |
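Since the record above keeps referring to the fitted Q-iteration (FQI) view of DQN without displaying it, a schematic restatement in standard FQI notation may help. The symbols ($\mathcal{F}$ for the sparse ReLU class, $\phi_{\mu,\sigma}$ for a concentrability-dependent factor, $R_{\max}$ for the reward bound, $\varepsilon_{\mathrm{stat}}$ for the per-iteration regression error) are assumed placeholders for the quantities the rebuttal discusses, not a verbatim quote of the paper's Theorem 4.4.

```latex
% One neural FQI iteration: regress onto the Bellman target built from Q_k,
% i.e., the previous Q-network plays the role of the (frozen) target network.
\begin{equation*}
  Q_{k+1} \in \operatorname*{argmin}_{f \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^{n}
  \Bigl( r_i + \gamma \max_{a' \in \mathcal{A}} Q_k(s_i', a') - f(s_i, a_i) \Bigr)^{2}.
\end{equation*}
% Schematic error decomposition after K iterations: a statistical term that
% decays at a nonparametric rate in the sample size n, plus an algorithmic
% term that vanishes geometrically in K.
\begin{equation*}
  \bigl\| Q^{*} - Q^{\pi_K} \bigr\|_{1,\mu}
  \;\lesssim\;
  \underbrace{\phi_{\mu,\sigma}\, \varepsilon_{\mathrm{stat}}(n,\mathcal{F})}_{\text{statistical error}}
  \;+\;
  \underbrace{\gamma^{K} R_{\max}}_{\text{algorithmic error}}.
\end{equation*}
```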
SkgGCkrKvH | Decentralized Deep Learning with Arbitrary Communication Compression | [
"Anastasia Koloskova*",
"Tao Lin*",
"Sebastian U Stich",
"Martin Jaggi"
] | Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD achieves linear speedup in the number of workers for arbitrary high compression ratios on general non-convex functions, and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices, connected by a peer-to-peer network and (ii) in a datacenter. | [
"deep learning",
"arbitrary communication compression",
"training",
"deep learning models",
"key element",
"data privacy",
"learning",
"networks",
"efficient",
"large compute clusters"
] | Accept (Poster) | https://openreview.net/pdf?id=SkgGCkrKvH | https://openreview.net/forum?id=SkgGCkrKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"n2_XdJWxe5",
"SkxoY4D2oH",
"HygWCrGcjr",
"BJgWiHfqjB",
"B1l17Sz5sS",
"SyeX0EMciB",
"Hkx924tpKr",
"BJgEjGYpKH",
"rkeaNDmTKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738361,
1573840003382,
1573688777040,
1573688729409,
1573688598522,
1573688523515,
1571816626100,
1571816092252,
1571792692614
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2016/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2016/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2016/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2016/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2016/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2016/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2016/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2016/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors present an algorithm CHOCO-SGD to make use of communication compression in a decentralized setting. This is an interesting problem, and the paper is well-motivated and well-written. On the theoretical side, the authors prove the convergence rate of the algorithm on non-convex smooth functions, which shows a nearly linear speedup. The experimental results on several benchmark datasets validate the algorithm achieves better performance than baselines. These can be made more convincing by comparing with more baselines (including DeepSqueeze and other centralized algorithms with a compression scheme), and on larger datasets. The authors should also clarify results on consensus.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision 2\", \"comment\": \"We thank the reviewers again for useful comments.\\n\\nWe did our best to provide new results for the additional experiments.\"}",
"{\"title\": \"General Comments on Revision 1\", \"comment\": \"We would like to thank the reviewers for their useful comments. We have fixed all typos and included a few additional numerical experiments that where requested (some experiments are still running/scheduled and we plan to update our draft again on Friday with additional results).\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your valuable comments. We have addressed the main concerns that you indicated as the reason for your rating \\u20183: weak reject\\u2019 and we believe that we could sufficiently improve in all these aspects. Especially, we clarify below that our evaluation of the test performance is correct.\\n\\n[1. Comparison to DeepSqueeze]\\nThank you for pointing us to this highly related parallel work. In the revision we added DeepSqueeze to our comparison in Table 1 (all compression schemes on Cifar 10). The results for sign compression show that DeepSqueeze performs slightly worse than DCD and CHOCO-SGD. We will try to provide more of the missing values in the Table until the revision deadline (or latest for the final version).\\n\\nFor the experiments we independently tuned the hyperparameters of DeepSqueeze, with the same grid search as for the other schemes (the grid in our search is dynamically extend, to make sure that the chosen values do not lie on the boundary and are indeed optimal), allowing for a fair comparison. \\n\\n[2. Consensus]\\nWe added evaluation of the consensus distance to the paper. (See Fig. 9).\\n\\n[3. Reporting of the performance]\", \"please_let_us_clarify\": \"We mention on page 5 that \\u201cWe evaluate the top-1 test accuracy on every node separately over the whole dataset and report the average performance over all nodes.\\u201d In formulas, this means that we report mean(g(x_i)), where g() measure the test accuracy (on the full test set) and x_i denotes the model on node i (this is the expected performance of a uniform random sampled model). We agree with you that if nodes would only evaluate performance with respect to a local part of the test set, then the results would not be convincing, but this is not what we report.\\n\\nComputing the averaged model \\\\bar{x} = mean(x_i) requires one more full communication round (and might not be possible in the peer-to-peer decentralized setting). \\n\\nFor completeness, we added the test performance g(\\\\bar{x}) of the averaged model to the appendix (i.e. Figure 9 for the social network graph experiment with Resnet20 on Cifar10 dataset; we will run LSTM on WikiText-2 to obtain the corresponding plots as well).\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your positive assessment of our work. We have moved the pseudo code of the proposed momentum version of CHOCO-SGD to the main text as you suggested. Thus, we would like to ask you if you could reconsider your score to align it with your very positive comments (\\u2018I believe this paper is ready for publication.\\u2019)\\n\\n1. We decided to keep the pseudo code of CHOCO-SGD with general averaging (Algorithm 3 in the revision) in the appendix, to avoid to introduce additional notation in the main text and to keep the presentation of the main results as clean as possible.\\n\\n2. We agree that it would be nice to have theoretical guarantees for the momentum version of Choco-SGD, however, this was not a focus in this paper. We are not aware of a result in the literature that can (theoretically) prove a strict advantage of SGD with momentum over vanilla SGD. We think that answers in that much simpler setting should be derived first, before attempting the proof of the decentralized momentum scheme.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your positive assessment of our work.\\n\\n1.&2. Thank you for spotting this. We addressed these comments in the revision.\\n\\n[Experiments on Imagenet]\\nFrom Table 1 we can deduce that ECD has difficulties to converge even on the smaller Resnet-20 architecture. Similarly, DCD does consistently perform worse that CHOCO-SGD thus we believe that we will see similar differences on the large scale Imagenet training.\\n\\nTo allow for better comparison, we will add one of your suggested centralized baselines to these plots.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors present an algorithm CHOCO-SGD to make use of communication compression in a decentralized setting. This is an interesting problem and the results are promising. Firstly they prove the convergence rate of the algorithm on non-convex smooth functions, which shows a nearly linear speedup.\\n\\nSecond, on the practical part, there have 3 main results:\\n\\t1. They compare CHOCO-SGD under various compression schemes with the baseline. The results show the algorithm generally outperforms the baseline.\\n\\t2. They implement it over a realistic peer-to-peer social network and show a great communication performance under such a network with limited bandwidth.\\n\\t3. In a datacenter setting, they compare the algorithm with all-reduce, which is a centralized communication method. The results show a strong training reduction for CHOCO-SGD.\\n\\nAlso, the paper is mostly nicely written.\\n\\nHowever, there have several issues:\\n\\n\\t1. In the introduction, they introduce their experiments with the order from \\\"datacenter experiment\\\" to \\\"peer-to-peer experiment\\\", which is different from the actual presenting order.\\n\\t2. In the description of Algorithm 1, the representation of initial values should be x{(-1/2)}_{i} instead of x{(0)}_{i} since line 2 using the term x^{t-1/2}_{i} with the range of t from 0 to T-1.\\n\\t3. About \\\"datacenter setting\\\" experiment, it seems not an apple to apple comparison between CHOCO-SGD and all-reduce method since CHOCO-SGD stands for the decentralized algorithm with compression and all-reduce stands for a centralized algorithm without compression. It's better to compare with at least one centralized algorithm with a compression scheme (like QSGD[1], signSGD[2], DGC[3]).\\n\\t4. Although they compare with the baseline (DCD and ECD) on Cifar-10 dataset, it's worth to compare with them on the ImageNet since the result may be different under large-scale training.\\n\\nOverall, this could be a great paper if fixing the issues above.\\n\\n\\n[1] D. Alistarh, D. Grubic, J. Z. Li, R. Tomioka, and M. Vojnovic. QSGD: Communication-ef\\ufb01cient SGD via gradient quantization and encoding. In Proc. Advances in Neural Information Processing Systems (NIPS), 2017.\\n\\n[2] Bernstein J, Zhao J, Azizzadenesheli K, Anandkumar A. signSGD with majority vote is communication efficient and fault tolerant. arXiv. 2018 Oct 11.\\n\\n[3] Lin Y, Han S, Mao H, Wang Y, Dally WJ. Deep gradient compression: Reducing the communication bandwidth for distributed training. arXiv preprint arXiv:1712.01887. 2017 Dec 5.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies non-convex decentralized optimization with arbitrary communication compression. It is well motivated and well written. The authors consider CHOCO-SGD for non-convex decentralized optimization and establish the convergence result based on the compression ratio. This result does not rely on specific quantized method and main term in the upper bound matches with the centralized baseline. The authors also show CHOCO-SGD with momentum is effectiveness in practical. The experimental results on several benchmark datasets validate the algorithm achieves better performance than baselines\\n\\nBoth of the theoretical and empirical results are convincing. I believe this paper is ready for publication.\", \"minor_comments\": \"1. It is prefer to present Algorithm 2 and 3 in the main text, since they are mentioned by the statement of Theorem 4.1 and used in experiments respectively.\\n\\n2. Can you provide some theoretical guarantee of CHOCO-SGD with momentum?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the convergence of CHOCO-SGD for nonconvex objectives and shows its linear speedup while the original paper of CHOCO-SGD only provides analysis for convex objectives. The momemtum version of CHOCO-SGD is also provided although no theoretical analysis is presented.\\n\\nExtensive empirical results are presented in this paper and the two use cases highlight some potential usage of the algorithm. However, there some concerns which could be addressed.\\n\\nFirst, the authors only provide analysis on CHOCO-SGD but the comparison with baselines are based on their momemtum versions. Moreover, some highly relevant baseline like DeepSqueeze are not cited and compared. Thus, the advantage of vanilla CHOCO-SGD over other alternatives is not convincing. \\n\\nSecond, the cores of decentralized optimization include minimization of objective and consensus of the solution. However, no evaluation of the consensus is presented and this leads to the following point.\\n\\nThird, it seems the authors report the average performance over all nodes using their individual model. If this is the case, the reported perfromance and comparison are not convincing. Without consensus, different nodes can have individual minimizer. In this case, the obtained average loss can be even smaller than the optimal loss. Under current measurement, if we run SGD on each worker individually without any communication, we will still get pretty good performance but this does not achieve the goal of decentralized optimization. Further clarification on this is needed.\\n\\nOverall, I think the technical contribution of this paper is unclear and the evaluation is not convincing.\"}"
]
} |
S1e-0kBYPB | Can I Trust the Explainer? Verifying Post-Hoc Explanatory Methods | [
"Oana-Maria Camburu*",
"Eleonora Giunchiglia*",
"Jakob Foerster",
"Thomas Lukasiewicz",
"Phil Blunsom"
] | For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks. In this work, we identify two issues of current explanatory methods. First, we show that two prevalent perspectives on explanations—feature-additivity and feature-selection—lead to fundamentally different instance-wise explanations. In the literature, explainers from different perspectives are currently being directly compared, despite their distinct explanation goals. The second issue is that current post-hoc explainers have only been thoroughly validated on simple models, such as linear regression, and, when applied to real-world neural networks, explainers are commonly evaluated under the assumption that the learned models behave reasonably. However, neural networks often rely on unreasonable correlations, even when producing correct decisions. We introduce a verification framework for explanatory methods under the feature-selection perspective. Our framework is based on a non-trivial neural network architecture trained on a real-world task, and for which we are able to provide guarantees on its inner workings. We validate the efficacy of our evaluation by showing the failure modes of current explainers. We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed. | [
"explainability",
"neural networks"
] | Reject | https://openreview.net/pdf?id=S1e-0kBYPB | https://openreview.net/forum?id=S1e-0kBYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fpHaXPDNA",
"HJerWDThiS",
"B1xuk-mfoB",
"HJlsZl7fiS",
"r1gcz1QziS",
"r1gL0Rzzir",
"Byg_L_yEqH",
"rJxy1Mc0YH",
"Bkgnv8LCtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738332,
1573865212910,
1573167328447,
1573167107482,
1573166865954,
1573166797746,
1572235344193,
1571885526753,
1571870307927
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2015/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2015/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2015/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2015/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2015/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2015/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2015/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2015/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a framework for generating evaluation tests for feature-based explainers. The framework provides guarantees on the behaviors of each trained model in that non-selected tokens are irrelevant for each prediction, and for each instance in the pruned dataset, one subset of clearly relevant tokens is selected.\\n\\nAfter reading the paper, I think there are a few issues with the current version of the paper: \\n\\n(1) the writing can be significantly improved: the motivation is unclear, which makes it difficult for readers to fully appreciate the work. It seems that each part of the paper is written by different persons, so the transition between different parts seems abrupt and the consistency of the texts is poor. For example, the framework is targeted at NLP applications, but in the introduction the texts are more focused on general purpose explainers. The transition from the RCNN approach to the proposed framework is not well thought-out, which makes the readers confused about what exactly is the proposed framework and what is the novelty.\\n\\n(2) the claimed properties of the proposed framework are rather straightforward derivations. The technical novelty is not as high as claimed in the paper.\\n\\n(3) The experiment results are not fully convincing. \\n\\nAll the reviewers have read the authors' feedback and responded. It is agreed that the current version of the paper is not ready for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"REPLY\", \"comment\": \"The original rating of \\\"Weak Reject\\\" still holds as the authors failed to provide proper justification for the raise concerns and support their claims through additional experiments.\\n\\n\\\"We do not introduce an explanation generation framework, as explainers do. \\\" - The proposed evaluation requires the explainer of the NLP model to agree with the RCNN in-terms of relevant or irrelevant words, to be considered a good explainer. The RCNN model which is defining the relevant and irrelevant tokens for a prediction task is in fact stating that we can explain the decision of an NLP model in terms of relevant and irrelevant tokens. Hence, the proposed RCNN can also be considered as an explainer. The evaluation task is demonstrating if the other explainers are providing explanations consistent with this new explainer based on RCNN.\\n\\n\\n\\\"The RCNN is not meant to explain any other models except itself.\\\" - Unclear\\n\\n\\\"Regarding the request for more experiments:\\\" - The authors don't provide enough justification to \\\"why they didn't perform more experiments?\\\"\\n\\n\\\" Hence, with our current instantiations, any domain-agnostic explainer can be evaluated\\\" - The experiment to validate this claim are missing.\\n\\n\\\"The novelty of our paper consists in the fact that, to our knowledge, it is the first to (1) shed light over a fundamental difference\\\" - This is not a technical novelty. This is an exploratory analysis based observation\\n\\n\\\"and (2) propose a methodology for evaluating explainers that ...and without human intervention (unlike evaluation type 4).\\\" - In Section 5 Qualitative Analysis, the authors are also doing human evaluation like other methods in evaluation type-4 of their related works. Also, doing human evaluation is a strong way to justify an explainer. Though expensive, whenever possible it should be done and is in no way a limitation of current evaluation metrics.\\n\\nYour model needs labelled data for training RCNN. This adds a constraint on the usability and scalability of your proposed evaluation method. Since RCNN is also black-box, one will required another explainer to explain the RCNN. \\n\\nIn the worst-case scenario, if RCNN is trained with data such that it considers all relevant words as irrelevant, the evaluation made by RCNN will be incorrect. Hence, \\\" Success depends on the ability of the RCNN to extract correct subsets of tokens.\\\"\"}",
"{\"title\": \"RE Official Blind Review #1\", \"comment\": \"REVIEWER: Lacks technical novelty.\", \"answer\": \"The novelty of our paper consists in the fact that, to our knowledge, it is the first to (1) shed light over a fundamental difference in the goals of two major types of explanations that are currently being directly compared despite their distinct goals, and (2) propose a methodology for evaluating explainers that does not make speculations on the behaviour of the model (unlike evaluation type 3, see our related work - Section 2), which is based on a real-world scenario (complex neural architectures and real-world datasets) (unlike evaluation types 1 and 2), and without human intervention (unlike evaluation type 4). Consequently, we are also the first to test current state-of-the-art explainers in a setting having all the above features, pointing out some critical explainers' deficiencies.\", \"r\": \"Colormap is not readable.\", \"a\": \"We will update it to make it more readable.\"}",
"{\"title\": \"RE Official Blind Review #3\", \"comment\": \"A1. Our general answer should help in clarifying this. The RCNN itself doesn\\u2019t detect 3 types of tokens, but only 2: selected and non-selected. It is our procedure that provides the pruned datasets for which (1) the non-selected are guaranteed to be irrelevant, and (2) we further identify clearly relevant features among the selected ones.\\nWe train the RCNN only once (per aspect) on the original datasets. The pruning procedure is applied after the RCNN was trained in order to obtain an associated pruned dataset (for each trained RCNN) on which we provide the above 2 guarantees. If one wants to train another RCNN (on half of the original data, or on any other subset, or even if one simply changes the seed), then one has to do the pruning procedure again to obtain a new pruned dataset associated to the newly trained model. This new pair of (trained model, pruned dataset) would constitute another instance of evaluation test for the explainers, with potentially different results. But this would not invalidate the results that we obtained on the 3 trained models we evaluated on.\\n\\n\\nA2. Usually, \\u201cblack-box\\u201d would refer to the model to be explained by the explainers, while the method \\u201cto be verified\\u201d, in this work, would be the explainer. So, it is not clear what \\u201cblack-box to be verified\\u201d refers to. In case this refers to the model to be explained, then the RCNN is not used to explain any other model than itself, as we highlighted in our general answer. If \\u201cblack-box to be verified\\u201d refers to the explainer to be verified, then this is precisely our goal, to penalize the explainer based on the difference between the features that it considered relevant and the ground-truth relevant ones. \\n\\nA3. The stdev reported in Table 1 for the avg_misrnk metric is simply showing the amount of variability in the error that the explainers make on this metric. The fact that the explainers are making a highly variable amount of errors shows a downside of these explanatory methods which is emphasized by our framework, rather than a downside in our framework. Since we are not introducing an explainer but an evaluation, the high variability is not a concern for our framework.\"}",
"{\"title\": \"RE Official Blind Review #2\", \"comment\": \"\", \"reviewer\": \"Evaluating explanations generated for an opaque model with another opaque model (RCNN) is cyclical.\", \"answer\": \"Our general answer should help clarify this. The RCNN is not meant to explain other opaque models, it only explains itself, hence there is no cycle. We only evaluate explainers on the trained RCNNs with their associated pruned datasets for which we provided the guarantees mentioned in the general answer. The RCNN has a degree of transparency that we exploit: it itself, selects the features that it will further exclusively use in the final prediction. Our 2 pruning procedures ensure that, on the instances of the pruned datasets, the RCNN\\u2019s selection faithfully represents the model\\u2019s inner-working: the non-selected tokens are indeed irrelevant and some of the selected tokens are clearly relevant.\", \"r\": \"Referenced human-level explanation paper\", \"a\": \"Thank you for mentioning it, we added it accordingly in the related work.\"}",
"{\"title\": \"General Answer\", \"comment\": \"We thank the reviewers for their insightful comments. It seems that most of the raised concerns are misunderstandings that can be resolved with the following clarification.\\n\\nWe do not introduce an explanation generation framework, as explainers do. Instead, we introduce a methodology for generating evaluation tests for those explainers. Our tests consist of pairs of (trained model, pruned evaluation dataset) with 2 guarantees on the behaviour of each trained model over the instances in its associated pruned dataset:\\nthe non-selected tokens are irrelevant for each prediction, \\nfor each instance in the pruned dataset, we identify one subset of clearly relevant tokens.\\nBased on these guarantees, we evaluate explainers only on these pairs of (trained model, pruned evaluation dataset). The models are trained only once on the whole original dataset, while each pruned dataset is dependant on its associated trained model and is used only for evaluating the explainers. The RCNN is the architecture of our trained models. The RCNN is not meant to explain any other models except itself.\", \"regarding_the_request_for_more_experiments\": \"First, our methodology is domain-agnostic, so we open the path for the community to instantiate it in any area and generate many more evaluation tests. We gave 3 instantiations on an NLP task and our experiments proved that well-known explainers can make critical errors. For example, they can even tell us that the most important feature is one that was totally irrelevant, which is particularly problematic in safety-critical applications. Secondly, most of the explainers in the literature are also domain-agnostic. Hence, with our current instantiations, any domain-agnostic explainer can be evaluated by applying it to the 3 pairs of (trained model, pruned dataset) that we will release.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Overview/Contribution:\\n====================\\nThe authors present a explanation generation framework that help validate post-hoc explanations when the explanations are generated based on feature selection. They claim to demonstrate their method by showing failure modes of exiting explanation generation methods.\\n\\nOverall, the paper is not ready to be accepted to the conference and I describe my rational with the following strengths and weaknesses.\", \"strength\": [\"========\", \"Explanations make models more transparent and easy to understand for end users of the decision made by complex models such as deep neural networks [1]. In that respect, having a verification mechanism for post-hoc explanations is interesting and useful.\", \"The paper is easy to read and follow.\"], \"weakness\": [\"===========\", \"evaluating explanations generated for an opaque model with another opaque model (RCNN) is cyclical.\", \"Just like many literature in this nascent space, interpretation (which is measuring the contribution of features or subsets of features towards predicted output) is confused as explanation. Human level explanations don\\u2019t necessarily depend on the direct interaction or contribution of model derived features. Rather they describe \\u2018why\\u2019 the model come up with the decision produced.\", \"Explanation generation is gaining traction in the deep learning community especially for critical applications such as healthcare and security. However, the authors claim that post-hoc explanations currently are only evaluated for only simple non-neural model. That is misleading given the recent attention toward generating explanations for various deep learning models.\", \"As a generalized pos-hoc explanation generators verification framework, the experiments are seriously lacking and are not well designed to illicit broad applicability.\", \"1) Bekele, E., Lawson, W. E., Horne, Z., & Khemlani, S. (2018). Implementing a Robust Explanatory Bias in a Person Re-identification Network. In\\u00a0Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops\\u00a0(pp. 2165-2172).\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"-------------------- AFTER\\nThe original rating of \\\"Weak Reject\\\" still holds as the authors failed to provide proper justification for the raise concerns and support their claims through additional experiments.\\n\\n\\\"We do not introduce an explanation generation framework, as explainers do. \\\" - The proposed evaluation requires the explainer of the NLP model to agree with the RCNN in-terms of relevant or irrelevant words, to be considered a good explainer. The RCNN model which is defining the relevant and irrelevant tokens for a prediction task is in fact stating that we can explain the decision of an NLP model in terms of relevant and irrelevant tokens. Hence, the proposed RCNN can also be considered as an explainer. The evaluation task is demonstrating if the other explainers are providing explanations consistent with this new explainer based on RCNN.\\n\\n\\n\\\"The RCNN is not meant to explain any other models except itself.\\\" - Unclear\\n\\n\\\"Regarding the request for more experiments:\\\" - The authors don't provide enough justification to \\\"why they didn't perform more experiments?\\\"\\n\\n\\\" Hence, with our current instantiations, any domain-agnostic explainer can be evaluated\\\" - The experiment to validate this claim are missing.\\n\\n\\\"The novelty of our paper consists in the fact that, to our knowledge, it is the first to (1) shed light over a fundamental difference\\\" - This is not a technical novelty. This is an exploratory analysis based observation\\n\\n\\\"and (2) propose a methodology for evaluating explainers that ...and without human intervention (unlike evaluation type 4).\\\" - In Section 5 Qualitative Analysis, the authors are also doing human evaluation like other methods in evaluation type-4 of their related works. Also, doing human evaluation is a strong way to justify an explainer. Though expensive, whenever possible it should be done and is in no way a limitation of current evaluation metrics.\\n\\nYour model needs labelled data for training RCNN. This adds a constraint on the usability and scalability of your proposed evaluation method. Since RCNN is also black-box, one will required another explainer to explain the RCNN. \\n\\nIn the worst-case scenario, if RCNN is trained with data such that it considers all relevant words as irrelevant, the evaluation made by RCNN will be incorrect. Hence, \\\" Success depends on the ability of the RCNN to extract correct subsets of tokens.\\\"\\n\\n\\n\\n------------------- BEFORE\\nThe paper proposed a verification framework to evaluate the performance of different explanatory methods in interpreting a given target model. Specifically, the authors evaluated three explanatory methods namely, LIME, SHAP and L2X for a target model trained to perform sentiment analysis on text data. Authors assume for each input text, there is a subset of tokens that are most relevant and that are completely irrelevant to the final prediction task. The proposed framework uses a recurrent convolutional neural network (RCNN) to find these subsets. The performance of an explainer is evaluated in terms of overlap between the RCNN most relevant tokens and the most relevant tokens provided by the explainer as an explanation. 
\\n\\nMajor\\n\\u2022\\tThe paper lack technical novelty.\\n\\u2022\\tThe proposed architecture uses a RCNN to find the most relevant subset of tokens. Firstly, RCNN is also a black box that provides no intuition behind its selection decision. Secondly, in the absence of the ground truth labels for true relevance and irrelevance of a token in input sentence, this explainer method can also suffer from \\u201cassuming a reasonable behavior\\u201d assumption. The method assumes that the RCNN is performing reasonably in identifying relevant subsets.\\n\\u2022\\tThe success of the method depends on the ability of the RCNN to extract correct subsets of tokens. The data used for training the RCNN, might have some underlying bias. In that case, the evaluation is not accurate.\\n\\u2022\\tIn related work, for \\u201cInterpretable target models\\u201d the authors mentioned LIME as an example of explainer functions that explains target models that are \\u201cvery simple models may not be representative for the large and intricate neural networks used in practice\\u201d. LIME locally explains the decision of a complex function for a given data point using simpler models like linear regression. But LIME itself can be used for generating explanation for prediction of complex neural network like Inception Net. \\n\\u2022\\tThe example used to explain the difference between feature additive and feature selection-based explainer methods, is confusing. Its not clear how in health diagnostics, one will prefer feature-selection perspective. Although the most relevant features used for the instance are important to understand the decision, but in clinical settings sometimes low rank features can also be useful to understand the target model.\\n\\u2022\\tFor text, the relevant features are the individual tokens of the input sentence. Similarly, for images relevance can be important regions of the image. The authors did not have any experiments on images or tabular data.\\n\\u2022\\tIn the experiment section, the comparison is made with only 3 explainer models and for just one task. The experiments are inadequate.\\n\\u2022\\tIn Figure 4, the colormap is not readable.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary -\\n\\nThe paper proposes a verification method for instance wise feature explanations. The verification framework uses an RCNN to identify two types of tokens a) the tokens that are not predictive of outcome b) the subset of clearly relevant tokens for prediction. The data used for RCNN is a pruned version of the data used to train the black-box. The pruning eliminates data points to ensure that the tokens not selected by RCNN have no contribution to the outcome and that the model does not exhibit suffer from learning \\\"handshakes\\\". A handshake is defined as the set of tokens that may be spuriously missed because their information is encoded in another relevant token. This proof to identify such data points is shown and the RNN is therefore expected to be able to reliably identify 3 kinds of tokens a) Those that have zero contribution to the outcome. b) Those that definitely have some contribution to the outcome and c) those that could be relevant or noisy. Three instance-wise feature selection methods are compared. Results are provided on 3 metrics. a) % instances for which the most important tokes provided by the explainer is among the non-selected tokens, b) % of instances for which at least one non-selected token is ranked higher than a relevant token, and c) Average number of non-selected tokens ranked higher than any obviously relevant tokens.\", \"clarifications_and_concerns\": \"1. For the dataset considered here, I would like to see the distribution of the irrelevant, clearly relevant and unsure if they are relevant tokens as detected by the RCNN. How does this change if I further prune the dataset after ensuring that handshake and other issues have been eliminated. The main concern I have is the idea of verifying other explanations using a neural network itself. I can train the RCNN neural network with half the data (and satisfy the properties the authors mention) and my evaluation would change significantly. From the appendix I see that most of the tokens could be in the set $SDR_x$. \\n\\n2. What if the set of tokens don't overlap between the RCNN and the black-box to be verified. That said, I think the assumptions of the framework should be much more explicitly mentioned.\\n\\n3. The std deviations in the experiments are very high. Can the authors justify this and how it is still okay to use this framework for evaluating feature importance based explanations.\", \"minor\": \"1. You have cited the \\\"Anchors\\\" paper twice?\\n2. Page 3 - typo - \\\"....explainer should provide different explanations for the trained model on real data than when the data...\\\"\\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nUpdate -\\n\\nI have read the authors response. \\nIf the pruned dataset is created using an RCNN, then it is not clear if the RCNN is used to just explain itself or all other methods as well. Like I said, if we just train the model on a slightly different distribution of labels, or half the data randomly sampled irrespective of labels, the explanations will change because the pruned dataset i.e. the ground truth may significantly change. 
I am still not convinced how this makes for a good verification framework to asses other explainers.\\n\\nIt is also unclear how generalizable this verification process is to other domains. I will therefore not be updating my score.\"}"
]
} |
rkxZCJrtwS | D3PG: Deep Differentiable Deterministic Policy Gradients | [
"Tao Du",
"Yunfei Li",
"Jie Xu",
"Andrew Spielberg",
"Kui Wu",
"Daniela Rus",
"Wojciech Matusik"
] | Over the last decade, two competing control strategies have emerged for solving complex control tasks with high efficacy. Model-based control algorithms, such as model-predictive control (MPC) and trajectory optimization, peer into the gradients of underlying system dynamics in order to solve control tasks with high sample efficiency. However, like all gradient-based numerical optimization methods, model-based control methods are sensitive to initializations and are prone to becoming trapped in local minima. Deep reinforcement learning (DRL), on the other hand, can somewhat alleviate these issues by exploring the solution space through sampling — at the expense of computational cost. In this paper, we present a hybrid method that combines the best aspects of gradient-based methods and DRL. We base our algorithm on the deep deterministic policy gradients (DDPG) algorithm and propose a simple modification that uses true gradients from a differentiable physical simulator to increase the convergence rate of both the actor and the critic. We demonstrate our algorithm on seven 2D robot control tasks, with the most complex one being a differentiable half cheetah with hard contact constraints. Empirical results show that our method boosts the performance of DDPG without sacrificing its robustness to local minima. | [
"differentiable simulator",
"model-based control",
"policy gradients"
] | Reject | https://openreview.net/pdf?id=rkxZCJrtwS | https://openreview.net/forum?id=rkxZCJrtwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"c0Bxya9na8",
"rkxuCmq3iH",
"SJgwyeiooH",
"rklgox5jiS",
"B1eIgYQjsr",
"B1eFQ9Lcor",
"rJgVDqggoB",
"rJxqEfvvcS",
"BJxm08CJcr",
"SJek1Z91cS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738300,
1573852111632,
1573789663162,
1573785752447,
1573759214479,
1573706272560,
1573026396406,
1572463153569,
1571968714551,
1571950807071
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2014/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2014/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2014/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2014/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2014/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2014/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2014/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2014/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2014/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a hybrid RL algorithm that uses model based gradients from a differentiable simulator to accelerate learning of a model-free policy. While the method seems sound, the reviewers raised concerns about the experimental evaluation, particularly lack of comparisons to prior works, and that the experiments do not show a clear improvement over the base algorithms that do not make use of the differentiable dynamics. I recommend rejecting this paper, since it is not obvious from the results that the increased complexity of the method can be justified by a better performance, particularly since the method requires access to a simulator, which is not available for real world experiments where sample complexity matters more.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the response and additional experiments!\", \"comment\": \"Thanks for the clarifications and additional experiments using this with SAC as well, which is exactly what I suggested in my original review. I've also read through the other reviews and responses in this thread. If there were an option to update my score to \\\"neutral\\\" I would increase it to this, as the gradient information this paper adds is interesting and relevant to the community, but the empirical results in the current form are still difficult to distinguish from the baselines.\"}",
"{\"title\": \"Paper update\", \"comment\": \"We thank all reviewers again for their feedback. We have uploaded a new manuscript based on the review and we summarized our updates below:\\n\\n1. We added clarification to the notations in Equation (2) and updated the computational graph (Figure 1) to better explain the computation of our gradients.\\n2. We reported experimental results in the Appendix (Section A.1) about using cosine similarity instead of L2 norm in all examples. These experiments showed that our method is not sensitive to the choice between these two norms, and both norms are viable options for our examples.\\n3. We implemented our proposed method in another actor-critic method (SAC) and reported the experimental results in five of our examples. We use these experiments to demonstrate that it is possible to apply our method to other actor-critic methods besides DDPG. \\n\\nWe hope these updates can help articulate the benefits of incorporating gradient information in RL training whenever a differentiable simulator is available. Please feel free to leave more comments and thank you again for your review!\"}",
"{\"title\": \"Experimenting with SAC and interpreting results from the motivating example\", \"comment\": \"Thank you for your constructive review!\\n\\n== New actor-critic methods ==\\nWe agree that TD3 and SAC are good candidates to try besides DDPG. We have implemented a variant of SAC and reported the results in five examples in Section A.2 of the updated manuscript. Our experiments showed that the proposed method helped improve the performance of the original SAC in three examples and obtained similar performance in the other two.\\n\\n== Q function approximation ==\\nThe Q network does not fit the ground-truth closely because 1) The RL algorithm only explored and used a very small part of the whole domain of ($s$, $a$) for fitting. Specifically, since samples were extracted from perturbing $\\\\pi$, most of them clustered around the curve $(s, \\\\pi(s))$; 2) Due to the design of this problem, regions far away from the initial ($s=-0.5$) and final ($s=0$) positions of the mass point are rarely visited during training and not needed in the final solution. Due to these two reasons, the Q network attempted to fit the ground-truth Q well only in the banded region between $s=-0.5$ and $s=0$, and it can be observed that adding weighted loss on gradient differences helped the Q network converge to the ground-truth in this banded area faster.\\n\\n== Slower convergence when both weights are available ==\\nDue to the empirical nature of our method, we are not able to justify this phenomenon on a theoretical basis. We suspect it might be related to the fact that the Q function in this example has different sensitivity to its two inputs $s$ and $a$. In particular, if we slice the ground-truth Q surface at a given $s$, the resulting $Q-a$ curve is very flat, so more gradient information about $\\\\partial Q/\\\\partial a$ might be unnecessary and not helpful for fitting it well.\"}",
"{\"title\": \"Using a gradient version of TD($\\\\lambda$)\", \"comment\": \"We agree that using the gradients from TD($\\\\lambda$) in Equation (2) is an interesting direction to explore. However, unrolling more steps to estimate $\\\\hat{Q}_i$ requires more on-policy samples: for example, unrolling one more step in line 9 of algorithm 1 would require access to $s_{i+2}$ computed by simulating the robot from $(s_{i+1}, \\\\pi\\u2019(s_{i+1}))$. These new samples are not directly available from the off-policy replay buffer in DDPG and have to be regenerated on the fly, which hurts the sampling efficiency of the algorithm.\\n\\nWe did think about applying the same technique to on-policy RL algorithms and have implemented the TD($\\\\lambda$) version of equation (2) in PPO. Our preliminary results showed that it did not improve the performance of PPO even after hyperparameter tuning. We suspect the reason is that Equation (2) assumes the policy is deterministic in nature while PPO uses stochastic policies. Still, we think it is possible that a proper combination of TD($\\\\lambda$) gradients and RL baseline algorithms could lead to an improvement in performance.\"}",
"{\"title\": \"Experimental results on cosine similarity and thoughts on using gradients for exploration\", \"comment\": \"Thank you for your constructive feedback!\\n\\nThank you for sharing Fig. 1 in \\\"Gradient Estimators for Implicit Models\\\". We agree that such an example would highlight the shortcomings of estimating only the Q function and the benefit of training with simulation gradients whenever they are available. We will consider including this argument in a stronger motivating example and adding a similar figure in the manuscript.\\n\\nWe agree comparing other norms is an interesting idea and norms like L1 and cosine would both be interesting to try. For now, we have tested the cosine norm on all of our examples. For some of our examples (the Acrobot and MountainCar), we found that the cosine norm dominates the L2 norm. For the CartPoleSwingUp, the L2 norm still dominates. For the remaining problems, both norms work approximately equally well. We will include these quantitative results in the revised manuscript. We stress that in all cases, both regularization variants achieve performance similar to or better than pure DDPG. It is difficult to give a precise theoretical reason as to why one norm outperforms the other for certain problems, however, we can gladly report extensive empirical findings in a final version of the manuscript.\\n\\nSimulator gradients are unfortunately difficult to use to directly improve exploration since they always point in a greedy direction. In the classic exploration/exploitation tradeoff, the gradient provides exploitation. It is possible that one could devise an algorithm that may improve exploration by sampling updates which deviate from the deterministic gradient (e.g. a gradient-based variant of https://arxiv.org/pdf/1706.01905.pdf). However, such an algorithm could be tried with or without gradient fitting. It is possible that gradient-fitting would improve the efficacy of such a technique, but this is all introducing a new, potentially complex algorithm, worthy of its own manuscript and study.\"}",
"{\"title\": \"Clarifications on the correctness of Equation (2)\", \"comment\": \"Thank you for your constructive review!\\n\\nWe really appreciate your comments and are happy to discuss them during this rebuttal period. But for now, we just want to make a quick clarification on Equation (2) and justify our gradient computation:\\n\\nYou are correct that we base our computation on the Bellman equation. To be precise, we use $\\\\hat{Q}(s_i,a_i)=r(s_i,a_i,s_{i+1})+\\\\gamma Q'(s_{i+1},\\\\pi'(s_{i+1}))$. This is line 9 in Algorithm 1 in our paper, and it is also consistent with line 12 in Algorithm 1 in the original DDPG paper.\\n\\nFor brevity, in Equation (2) we use the neural network names to refer to its output value. So $\\\\pi'$ in Equation (2) stands for $\\\\pi'(s_{i+1})$ and $\\\\nabla_{\\\\pi'}Q'$ in Equation (2) stands for:\\n$$\\n\\\\nabla_aQ'(s,a)|_{s=s_{i+1},a=\\\\pi'(s_{i+1})}\\n$$\\nYou can also check the correctness of Equation (2) by comparing it to the computation graph in Figure 1, where the upper right $\\\\mu$ stands for $\\\\pi'(s_{i+1})$. In Figure 1, $\\\\nabla_{\\\\pi'}Q'$ corresponds to the gradient back-propagated along the arrow $\\\\mu\\\\rightarrow Q'$ (\\\"if we change $\\\\mu$, how much will $Q'$ change?\\\"). Similarly, the term $\\\\nabla_{s_{i+1}}\\\\pi'$ after $\\\\nabla_{\\\\pi'}Q'$ in Equation (2) corresponds to the arrow $s_{i+1}\\\\rightarrow\\\\mu$ (\\\"if we change $s_{i+1}$, how much will $\\\\mu$ change?\\\").\\n\\nPutting them together, the product $\\\\nabla_{\\\\pi'}Q'\\\\cdot \\\\nabla_{s_{i+1}}\\\\pi'$ in Equation (2) back-propagates the gradient along the path $s_{i+1}\\\\rightarrow\\\\mu\\\\rightarrow Q'$ in Figure 1. Similarly, the term $\\\\nabla_{s_{i+1}}Q'$ in Equation (2) corresponds to the arrow $s_{i+1}\\\\rightarrow Q'$ in Figure 1, and the sum $(\\\\nabla_{s_{i+1}}Q'+\\\\nabla_{\\\\pi'}Q'\\\\cdot \\\\nabla_{s_{i+1}}\\\\pi')$ computes the total derivative of $Q'$ with respect to $s_{i+1}$. The other terms in Equation (2) can be verified in the same way.\\n\\nWe hope this explanation can clear your concern with the correctness of Equation (2). We will revise the manuscript to clarify the notations.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"==Summary==\\n\\nDDPG is a popular RL method for continuous control problems. It is more widely applicable than traditional model-based approaches like MPC, since it doesn't require differentiable models of the dynamics. However, in many environments, dynamics are differentiable. This paper proposes a method for extending DDPG to exploit simulator gradients. In particular, the Bellman error objective (which is defined in terms of critic values) used for training the critic is augmented with additional terms defined in terms of gradients of the critic. This leads to faster convergence in practice on a range of benchmarks.\\n\\n==Overall Assessment==\\n\\nI recommend acceptance. The paper's contribution is well-motivated, works reasonably well, and is relatively easy to implement.\\n\\n==Comments==\\n\\nIt would be good to add an argument explaining to readers that accurately estimating Q using Q_\\\\phi does not mean that the gradients of Q_\\\\phi will be good approximations of the true gradients of Q. I found Fig 1 of arxiv.org/pdf/1705.07107.pdf informative.\\n\\nCan you justify the choice of euclidean norm in line 10? In terms of the critic helping teach the actor, the direction of the gradient may be more important than the norm. What if you used cosine sim?\\n\\nYou argue that DRL is better than MPC because DRL explores better. Could you use the simulator gradients somehow to improve exploration?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper shows how the derivatives from a differentiable\\nenvironment can be used to improve the convergence rate of\\nthe actor and critic in DDPG.\\nThis is useful information to use as most physics simulators\\nhave derivative information available that would be useful\\nto leverage when training models.\\nThe empirical results show that their method of adding\\nthis information (D3PG) slightly improves DDPG's\\nperformance in the tasks they consider.\\nAs the contribution of this work is empirical is nature,\\nI think a very promising future direction fo work is to\\nadd derivative information to and evaluate similar\\nvariants of some of the newer actor-critic methods\\nsuch as TD3 and SAC.\", \"i_have_two_minor_questions\": \"1) Figure 2(a) shows the convenrgence of regularizing states,\\n actions, and both states and actions and the text\\n describing the figure states that this is\\n \\\"expected to boost the convergence of Q.\\\"\\n However the figure shows that regularizing both states and\\n actions results in a slower convergence than doing\\n them separately. Why is this?\\n2) How should I interpret the visualization of the\\n learned Q surface in Figure 2(f) in comparison to\\n the true Q function in Figure 2(g)?\\n It does not look like a good approximation.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies optimal control problems where a physical simulator of the system is available, which outputs the gradient of the dynamics. Using the gradients proposed by the model, the authors propose to add two additional terms in the loss function for critic training in DDPG, where these to terms corresponding to the prediction error of $\\\\nabla_{a} Q(s,a)$ and $\\\\nabla_b Q(s,a)$, respectively. However, my main concern is that the form of gradient given in equation (2) might contains an error.\\n\\n1. Equation (2). Note that in DDPG, the action is given by a deterministic policy. Thus, we have $a_t = \\\\pi(s_t)$ for all $t\\\\geq 0$. For critic estimation, it seems you are basing on the Bellman equation \\n$ Q(s,a) = r(s,a) + Q(s', \\\\pi(s'))$, where $s'$ is the next state following $(s,a)$. Then, it seems that Equation (2) is obtained by taking gradient with respect to $(s,a)$. However, I cannot understand what $\\\\nabla_{\\\\pi} Q$ stands for. If it is $\\\\nabla_a Q(s_{i+1}, a_{i+1}) \\\\cdot \\\\nabla_s \\\\pi(s_{i+1}) $, then that makes sense. \\n\\n2. Based on the experiments, it seems that the proposed method does not always outperform MPC or DDPG, even in a small-scale control problem Mountaincar. Moreover, it seems that the performance is similar to that of the DDPG. \\n\\n3. Here the model-based gradient in equation (2) is defined by only unroll one-step forward by going from $s_i, a_i$ to $s_{i+1}$. It would be interesting to see how the number of unroll steps affect the algorithm, which is a gradient version of TD($\\\\lambda$).\\n\\n4. Missing reference: Differential Temporal Difference Learning https://arxiv.org/abs/1812.11137\"}"
]
} |
r1xZAkrFPr | Deep Ensembles: A Loss Landscape Perspective | [
"Stanislav Fort",
"Clara Huiyi Hu",
"Balaji Lakshminarayanan"
] | Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, even though they often deviate significantly in the weight space. We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions. Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods. | [
"loss landscape",
"deep ensemble",
"subspace",
"tunnel",
"low loss",
"connector",
"weight averaging",
"dropout",
"gaussian",
"connectivity",
"diversity",
"function space"
] | Reject | https://openreview.net/pdf?id=r1xZAkrFPr | https://openreview.net/forum?id=r1xZAkrFPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"qa9zi55xK",
"SkxGPxKsir",
"ryxpOFb5ir",
"HJe6GkuvoS",
"rJxUsCwvjS",
"rJg4TaPDjr",
"BJxk8KaptB",
"H1l0JeopFB",
"rkl8KjQ6YH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738263,
1573781593734,
1573685620526,
1573515029012,
1573514910503,
1573514683669,
1571834182675,
1571823589774,
1571793790239
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2013/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2013/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2013/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2013/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2013/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2013/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2013/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2013/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Paper https://arxiv.org/abs/1802.10026 (Garipov et. al, NeurIPS 2018) shows that one can find curves between two independently trained solutions along which the loss is relatively constant. The authors of this ICLR submission claim as a key contribution that they show the weights along the path correspond to different models that make different predictions (\\\"Note that prior work on loss landscapes has focused on mode-connectivity and low-loss tunnels, but has not explicitly focused on how diverse the functions from different modes are, beyond an initial exploration in Fort & Jastrzebski (2019)\\\"). Much of the disagreement between two of the reviewers and the authors is whether this point had already been shown in 1802.10026.\\n\\nIt is in fact very clear that 1802.10026 shows that different points on the curve correspond to diverse functions. Figure 2 (right) of this paper shows the test error of an _ensemble_ of predictions made by the network for the parameters at one end of the curve, and the network described by \\\\phi_\\\\theta(t) at some point t along the curve: since the error goes down and changes significantly as t varies, the functions corresponding to different parameter settings along these curves must be diverse. This functional diversity is also made explicit multiple times in 1802.10026, which clearly says that this result shows that the curves contain meaningfully different representations.\\n\\nIn response to R3, the authors incorrectly claim that \\\"Figure 2 in Garipov et al. only plots loss and accuracy, and does not measure function space similarity, between different initializations, or along the tunnel at all. Just by looking at accuracy and loss values, there is no way to infer how similar the predictions of the two functions are.\\\" But Figure 2 (right) is actually showing the test error of an average of predictions of networks with parameters at different points along the curve, how it changes as one moves along the curve, and the improved accuracy of the ensemble over using one of the endpoints. If the functions associated with different parameters along the curve were the same, averaging their predictions would not help performance. \\n\\nMoreover, Figure 6 (bottom left, dashed lines) in the appendix of 1802.10026 shows the improvement in performance in ensembling points along the curve over ensembling independently trained networks. Section A6 (Appendix) also describes ensembling along the curve in some detail, with several quantitative results. There is no sense in ensembling models along the curve if they were the same model.\\n\\nThese results unequivocally demonstrate that the points on the curve have functional diversity, and this connection is made explicit multiple times in 1802.10026 with the claim of meaningfully different representations: \\u201cThis result also demonstrates that these curves do not exist only due to degenerate parametrizations of the network (such as rescaling on either side of a ReLU); instead, points along the curve correspond to meaningfully different representations of the data that can be ensembled for improved performance.\\u201d Additionally, other published work has built on this observation, such as 1907.07504 (UAI 2019), which performs Bayesian model averaging over the mode connecting subspace, relying on diversity of functions in this space; that work also visualizes the different functions arising in this space. 
\\n\\nIt is incorrect to attribute these findings to Fort & Jastrzebski (2019) or the current submission. It is a positive contribution to build on prior work, but what is prior work and what is new should be accurately characterized, and currently is not, even after the discussion phase where multiple reviewers raised the same concern. Reviewers appreciated the broader investigation of diversity and its effect on ensembling, and the more detailed study regarding connecting curves. In addition to the concerns about inaccurate claims regarding prior work and novelty (which included aspects of the mode connectivity work but also other works), several reviewers also felt that the time-accuracy trade-offs of deep ensembles relative to standard approaches were not clearly presented, and comparisons were lacking. It would be simple and informative to do an experiment showing a runtime-accuracy trade-off curve for deep ensembles alongside FGE, various Bayesian deep learning methods, and MC-dropout. It is also possible, for example, to use parallel MCMC chains to explore multiple quite different modes, as deep ensembles do, but within Bayesian deep learning. For the paper to be accepted, it would need significant revisions, correcting the accuracy of claims, and providing such experiments.\", \"title\": \"Paper Decision\"}",
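To make the disagreement over Figure 2 (right) concrete, here is a minimal sketch of the two quantities the decision refers to: prediction disagreement between two points in weight space, and the error of a prediction-averaged ensemble. The array names are hypothetical illustrations, not code from either paper.

```python
import numpy as np

def disagreement_fraction(probs_a, probs_b):
    """Fraction of test points on which two networks predict different labels.

    probs_a, probs_b: arrays of shape (num_examples, num_classes) holding
    the softmax outputs of the networks at two points in weight space.
    """
    return np.mean(probs_a.argmax(axis=1) != probs_b.argmax(axis=1))

def ensemble_error(probs_list, labels):
    """Test error of the prediction-averaged ensemble.

    If every member computed the same function, this would equal each
    member's individual error, so an ensemble that beats its endpoints
    implies functional diversity along the curve.
    """
    avg_probs = np.mean(probs_list, axis=0)
    return np.mean(avg_probs.argmax(axis=1) != labels)
```

Under this reading, a drop in `ensemble_error` as points along the connector are added to `probs_list` is itself evidence that the connector traverses different functions.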
"{\"title\": \"Response\", \"comment\": \"Thank you for your response.\\n\\nReading your recent comments (\\u201cThe authors indeed conducted much broader investigation than was done in previous works and these results are clearly written. Nevertheless, the investigated phenomena are not quite new\\u201c) it seems like you agree that our paper conducts a much broader investigation and is a valuable contribution, but your main concerns are around novelty and discussion of practical application. \\n\\nWe will address your specific comments below. \\n\\n------\", \"discussion_of_practical_applications\": \"We have added a brief discussion in our earlier comment. We will definitely add a discussion in the final version of the paper (we\\u2019re already close to the 8-page limit now).\\n\\n-------\\n\\n\\u201cFrom my point of view, these two points that I mentioned are the concise description of the main contribution of the paper. The investigation of the subspace sampling is also a significant part of the paper, but I would say that findings regarding subspace sampling somehow intersect with two items that I described.\\u201d\\n\\nWe disagree with your characterization of our contribution. We believe that the main component of our paper is our comprehensive analysis of deep ensembles vs Bayesian neural nets from loss landscape perspective. \\n\\nAs we said before, to the best of our knowledge, we are the first to comprehensively investigate deep ensembles vs Bayesian neural nets from loss landscape perspective. We carefully investigated the role of random initialization in deep ensembles, measured diversity of functions and tested the complementary effects of ensembling and subspace methods on accuracy as well as calibration under data shift.\\n\\n-------\\n\\n\\u201cI believe this conclusion can be derived based on findings from [1] (table 1), where one can interpret FGE as subspace sampling method.\\u201d\\n\\nThanks for clarifying and adding a specific reference. \\n\\nTable 1 of [1] reports accuracy (error rate) to compare the effect of random init vs FGE. \\n\\nHere are some factual differences between Table 1 of [1] and our work:\\n- It does not discuss the diversity of solutions in prediction space.\\n- It does not present a combination of ensembles with subsampling-based Bayesian neural networks (low-rank Gaussian, diagonal Gaussian, dropout).\\n- It also does not deal with accuracy on corrupted data, and neither does it measure calibration under shift. \\n- In addition to results on CIFAR-10 and CIFAR-100, we also present results on ImageNet.\\n\\nWe like the work of [1] and we already cite [1] and related papers. We\\u2019d be happy to include a discussion about Table 1 of [1]. That said, we think it is non-trivial to derive all of our conclusions above from just the error rates reported in that table. \\n\\n-------\\n\\n\\u201cOne of the main components of SWA is a cyclic learning rate schedule or constant learning rate schedule with larger learning rate than the learning rate that was used at the end of the training \\u2026 It is indeed not directly stated that predictions of the models corresponding to the same trajectory are similar, but it sounds like a fairly obvious conclusion based on it.\\u201d\\n\\nWe provide direct evidence for the similarity, see Figures 2 and 4. 
Whether it is \\\"obvious\\\" or not is a subjective question, but to us it certainly was not and that's why we investigated it.\\n\\nWe are not the authors of the SWA paper, so we don\\u2019t really know why the SWA authors chose this particular cyclic learning rate schedule. \\n\\nTo the best of our knowledge, the text in the SWA paper does not make a direct connection between the parameters of the cyclic learning rate, and the question of why deep ensembles work better than Bayesian neural nets. \\n\\n-------\\n\\n\\u201cNevertheless, the investigated phenomena are not quite new and this is an incremental paper.\\u201d\\n\\nAs we said before, we are not aware of any other work that would comprehensively study ensembles and subspace sampling methods for Bayesian neural nets and their predictions diversity from the loss landscape point of view. In particular, we discuss the specific tradeoff between accuracy and diversity of solutions that shows the clear separation between the individual optima and the subspace samples.\\n\\nWe believe the focus of our work is sufficiently different from [1], and we believe that these papers provide complementary perspectives. \\n\\nIt might be easy in hindsight to connect the dots between [1] and our work, especially if one adds retrospective explanations (and assumptions) not directly present in [1], e.g. R3\\u2019s comments above on: \\n- interpreting FGE as illustrative of all subspace sampling methods for Bayesian neural networks (and assuming error on i.i.d test set reflects all performance metrics even under data shift)\\n- connecting choice of cyclical learning rate hyperparameters used in SWA paper, to diversity vs accuracy plots in prediction space. \\nGiven the amount of additional explanations and assumptions needed beyond just the existing text in [1] to derive our conclusions, we do not think our work is \\u201cincremental\\u201d.\"}",
"{\"title\": \"Response\", \"comment\": \"I would like to thank the authors for their feedback.\\n\\u201cThese comments seem focused on particular subsections (Section 3.3 and Section 3.1) and significantly under-estimates the total contributions of our paper. \\u201c \\nFrom my point of view, these two points that I mentioned are the concise description of the main contribution of the paper. \\nThe investigation of the subspace sampling is also a significant part of the paper, but I would say that findings regarding subspace sampling somehow intersect with two items that I described.\\n\\n\\u201cThe paper also demonstrates the complementary benefits of using subspace sampling/weight averaging in combination with deep ensembles and shows that relative benefits of deep ensembles are higher.\\u201c \\nI believe this conclusion can be derived based on findings from [1] (table 1), where one can interpret FGE as subspace sampling method. In this paper, FGE was combined with ensembling of models trained from different initializations. The increase of budget (e.g. using several initializations) leads to bigger improvement than a simple application of FGE. Nevertheless, combining these two approaches leads to better results, which shows that these methods can be combined. \\n\\nI would like to highlight the following one more time. The authors indeed conducted much broader investigation than was done in previous works and these results are clearly written. Nevertheless, the investigated phenomena are not quite new and this is an incremental paper. The provided conclusions are aligned with previous experiments but the authors did not provide any new insights into applications of their findings.\\nI believe if the authors add the practical application of their findings, it will significantly increase the novelty of the paper.\\n\\n\\nI would like to increase my score, but I still believe that it is a borderline paper.\\n\\n\\u201cWe are not sure what exactly you mean. Could you clarify your claim? We showed that functions along a trajectory (or subspace thereof) are similar whereas ensembling over random initializations leads to much more diversity; see sections 3.2 for diversity vs accuracy plots and Section 4 where we measure the relative effects of ensembles and subspace sampling methods. These results indicate that random initialization provides more diversity than subspace sampling methods.\\u201d\\n\\nOne of the main components of SWA is a cyclic learning rate schedule or constant learning rate schedule with larger learning rate than the learning rate that was used at the end of the training. If the simple averaging would be applied to the points of the training trajectory, in general it will not give a significant boost in performance. If the weights corresponding to the last epochs were taken, one would not see improvement in accuracy because the predictions have barely changed. If points were taken from the middle of the training process, one would not have seen improvement on top of the best point because the models would be much weaker. It is indeed not directly stated that predictions of the models corresponding to the same trajectory are similar, but it sounds like a fairly obvious conclusion based on it.\\n\\n[1] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. InNeurIPS, 2018\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your detailed review and positive comments. We hope that you will champion our paper.\\n\\nWe address some of the points you\\u2019ve made in your response. \\n\\nBased on your feedback, we added experiments with ResNet on CIFAR-100 which strengthened our claims and verified that other function space disagreement metrics result in the same effects.\\n\\n(1] We added ResNet CIFAR-100 experiments in Appendix C and they support our conclusions:\\n\\nWe have conducted a wider range of experiments to further strengthen the validity of our claims. In particular, to make sure that the separation of the independently optimized optima and functions sampled by subspace sampling methods in the diversity-accuracy plane remain true even for more challenging datasets, we added experiments on CIFAR-100 with a ResNet. The results support our previous conclusions based on CIFAR-10 and other smaller datasets, and seem to be even stronger..\\n\\n(2] Different notions of function space disagreement = diversity metrics.\\n\\nWe experimented with different distance measures between predicted probability distributions between models and settled on the fraction of predictions that are different as we thought it would be the most intuitive for the reader. We verified that the same separation between the region of independently initialized and optimized optima and the subspace sampled solutions holds for the KL-divergence, and L_n distances between the distributions (we looked at different ns, including the usual n=1 and n=2). Our conclusions, therefore, seem to be independent of the distance measure.\\n\\n(3] The effect of data augmentation.\\n\\nWe have conducted experiments with both data augmentation (ResNet20v1 on CIFAR-10 and CIFAR-100) and without data augmentation (all other experiments) and the results have the same character. We added data augmentation so that our classifier accuracy is comparable with previously published results using this architecture.\\n\\n\\u201cSome comments on the figures:\\u201d Thank you for the suggestions, we will look into these.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your review.\\n\\n\\u201cThe contribution of the paper is the following two findings: 1. Despite the fact that local minima are connected in the loss landscape the functions corresponding to the points on the curve are significantly distinct. 2. The points along the training trajectory correspond to similar functions. \\u201c \\n\\nThese comments seem focused on particular subsections (Section 3.3 and Section 3.1) and significantly under-estimates the total contributions of our paper. \\n\\nTo the best of our knowledge, we are the first to comprehensively investigate deep ensembles vs Bayesian neural nets from loss landscape perspective. We carefully investigated the role of random initialization in deep ensembles, tested the complementary effects of ensembling and subspace methods, and measured diversity of functions. Aside from earlier results on CIFAR-10 and ImageNet, we have also added new experiments on CIFAR-100 (see Figure S3 in Appendix C) which are consistent with our earlier results.\\n\\nPlease see also the summary of contributions from other reviewers. \\n\\nR1 said \\u201cThis paper analyzes ensembling methods in deep learning from the perspective of the loss landscapes. The authors empirically show that popular methods for learning Bayesian neural networks produce samples with limited diversity in the function space compared to modes of the loss found using different random initializations ... The paper also demonstrates the complementary benefits of using subspace sampling/weight averaging in combination with deep ensembles and shows that relative benefits of deep ensembles are higher. \\u201c\\n\\nR2 said \\u201cThis paper is trying to answer the question why ensembles of deep neural networks trained with random initialization work so well in practice in improving accuracy ... Overall, the paper is very well written and provides interesting insights into the multi-modal structure of deep neural network loss landscapes.\\u201d\\n\\n--------------\\n\\n\\u201cI would recommend the authors to add more rigorous description of how they constructed these plots to increase clarity of the paper. Can the authors please also clarify how they derived formulas for the expected fractional difference for f^* and f functions in the section 3.2? \\u201c\\n\\nWe have added the derivation of the two limiting functions in the appendix. The upper limit corresponds to the best case for ensembling, where the two functions are uncorrelated. The lower limit corresponds to the worst case, where the predictions are obtained by perturbing the outputs of the reference function by different amounts of noise, therefore retaining a large amount of correlation between their predictions. We provide the detailed derivation in the appendix our updated version.\\n\\n--------------\\n\\n\\u201cThe first conclusion can be mostly derived from Figure 2 right of Garipov et al.\\u201d \\n\\nWe do not agree that this conclusion can be reached from that figure as you are suggesting. Figure 2 in Garipov et al. only plots loss and accuracy, and does not measure function space similarity, between different initializations, or along the tunnel at all. Just by looking at accuracy and loss values, there is no way to infer how similar the predictions of the two functions are.\\n\\n--------------\\n\\n\\u201cThe second conclusion is also not quite new and there were several approaches to overcome it e.g. SWA [2].\\u201d\\n\\nWe are not sure what exactly you mean. 
Could you clarify your claim?\\nWe showed that functions along a trajectory (or subspace thereof) are similar whereas ensembling over random initializations leads to much more diversity; see sections 3.2 for diversity vs accuracy plots and Section 4 where we measure the relative effects of ensembles and subspace sampling methods. These results indicate that random initialization provides more diversity than subspace sampling methods. \\n\\n--------------\\n\\n\\u201cAnother drawback is lack of practical implications. It is known that ensembling based on dropout is worse than independent networks, but the main advantage of this and similar approaches is memory efficiency.\\u201d\\n\\nWe\\u2019re happy to add a discussion about different regimes (training time constraints, serving time constraints, memory constraints, etc), but it is beyond the scope of this paper to discuss every possible setting in detail. Some of these solutions are well-known in the literature, cf. the discussion in (Lakshminarayanan et al. 2017) or the take-home messages in (Ovadia et al. 2019): for instance, distillation is a popular solution when serving time is the primary constraint. Implicit ensembles (e.g. Monte-Carlo dropout) are popular when memory is the main constraint. The best method would obviously depend on the specific constraints (as you also point out).\\n\\nThe goal of this work is to understand the general question of why ensembles work well and we provide an explanation from the perspective of loss landscapes. In future work, we plan to take these insights to develop better algorithms for specific settings.\"}",
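For readers trying to reconstruct the two limiting curves mentioned in the response above before the revised appendix is available, here is one plausible form of the uncorrelated upper limit, under the simplifying assumptions that both classifiers have accuracy $a$ on a $C$-class problem, that errors are spread uniformly over the $C-1$ wrong labels, and that the two models' predictions are independent; the authors' exact derivation may differ in details.

```latex
% Two independent classifiers agree when both are correct, or when both
% make the same (uniformly random) error, so the expected fractional
% disagreement in the uncorrelated (best-case-for-ensembling) limit is
\[
  \mathbb{E}[d_{\mathrm{uncorr}}]
  = 1 - \Bigl( a^2 + \frac{(1-a)^2}{C-1} \Bigr).
\]
% A model obtained by perturbing a reference function's outputs with small
% noise stays highly correlated with it, giving a much lower disagreement
% at the same accuracy -- the lower limiting curve.
```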
"{\"title\": \"Response\", \"comment\": \"Thank you for your review and the positive comments about our work.\\nWe would like to address the points you brought up.\\n\\n\\u201cPractical aspects of ensembling versus different methods of subspace sampling\\u201d: \\n\\nThe goal of this work is to understand the general question of why ensembles work well and we provide an explanation from the perspective of loss landscapes. In future work, we plan to take these insights to develop better algorithms for specific settings.\\nWe\\u2019re happy to add a discussion about different regimes (training time constraints, serving time constraints, memory constraints, etc), but it is beyond the scope of this paper to discuss every possible setting in detail. Some of these solutions are well-known in the literature, cf. the discussion in (Lakshminarayanan et al. 2017) or the take-home messages in (Ovadia et al. 2019): for instance, distillation is a popular solution when serving time is the primary constraint. Implicit ensembles (e.g. Monte-Carlo dropout) are popular when memory is the main constraint. The best method would obviously depend on the specific constraints (as you also point out).\\n\\n----------------\\n\\n\\u201cIt remains unclear to me what new insights does the analysis of the low-loss connectors provide?\\u201d\\n\\nWe added Section 3.3 in response to feedback on an earlier version of this paper. A couple of folks thought that our results contradicted the results from earlier papers on \\u201cmode connectivity\\u201d. We believe this confusion comes down to how folks interpret the word \\u201cconnectivity\\u201d. The original papers by (Garipov et al. 2018) and (Draxler et al. 2018) used \\u201cconnectivity\\u201d to imply continuous map between two functions (the notion you mentioned), but others (not the original authors) seem to have interpreted connectivity as similarity of functions.\\nWe mainly wanted to convey that identical loss values do not imply identical functions. That is, loss similarity, which measures if L(f_{theta_1}) and L(f_{theta_2}) are similar, does not measure prediction similarity, which measures if f_{theta_1} and f_{theta_2} are similar.\\nWhile it has been shown that two independently initialized and optimized-to optima can in fact be connected on a low-loss path in the weight space by (Garipov et al. 2018) and (Draxler et al. 2018), the papers do not explicitly discuss how similar the models along such a path are in their predictions, which can be taken as a proxy for their similarity in the space of functions. \\nGiven that multiple folks raised this point about \\u201cconnectivity\\u201d, we thought it might be useful to explicitly add a discussion about the distinction between loss similarity and prediction similarity in subsection 3.3. \\n\\n----------------\\n\\n\\u201cI would encourage authors to reformulate the statements on the connectivity in the function space\\u201d:\\n\\nWe can rephrase \\u201closs connectivity\\u201d and \\u201cfunction space connectivity\\u201d to \\u201closs similarity\\u201d and \\u201cpredictions similarity\\u201d, would that address your concerns? \\n\\n----------------\\n\\n\\u201cHowever, the novelty and the significance of the other contributions are limited (see comments below). Therefore, I consider the paper to be below the acceptance threshold.\\u201d \\n\\nTo the best of our knowledge, we are the first to comprehensively investigate deep ensembles vs Bayesian neural nets from loss landscape perspective. 
We carefully investigated the role of random initialization in deep ensembles, tested the complementary effects of ensembling and subspace methods, and measured diversity of functions. Aside from earlier results on CIFAR-10 and ImageNet, we have also added new experiments on CIFAR-100 (see Figure S3 in Appendix C) which are consistent with our earlier results.\", \"i_think_your_own_summary_highlights_a_lot_of_our_contributions\": \"\\u201cThis paper analyzes ensembling methods in deep learning from the perspective of the loss landscapes. The authors empirically show that popular methods for learning Bayesian neural networks produce samples with limited diversity in the function space compared to modes of the loss found using different random initializations ... The paper also demonstrates the complementary benefits of using subspace sampling/weight averaging in combination with deep ensembles and shows that relative benefits of deep ensembles are higher. \\u201c\\n\\nWe believe these results are both novel and significant, and would be interesting to the ICLR community.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper analyzes ensembling methods in deep learning from the perspective of the loss landscapes. The authors empirically show that popular methods for learning Bayesian neural networks produce samples with limited diversity in the function space compared to modes of the loss found using different random initializations. The paper also considers the low-loss paths connecting independent local optima in the weight-space. The analysis shows that while the values of the loss and accuracy are nearly constant along the paths, the models corresponding to different points on a path define different functions with diverse predictions. The paper also demonstrates the complementary benefits of using subspace sampling/weight averaging in combination with deep ensembles and shows that relative benefits of deep ensembles are higher.\\n\\nThe paper is well-written. The experiments are described well and the results are presented clearly in highly-detailed and visually-appealing figures. There are occasional statements which are not formulated rigorously enough (see comments below).\\n\\nThe paper presents a thorough experimental study of different ensemble types, their performance, and function space diversity of individual members of an ensemble. In my view, the strongest contribution of the paper is the analysis of the diversity of the predictions for different sampling procedures in comparison to deep ensembles. However, the novelty and the significance of the other contributions are limited (see comments below). Therefore, I consider the paper to be below the acceptance threshold.\", \"comments_and_questions_to_authors\": \"1) The practical aspects of different ensembling techniques are not discussed in the paper. While it is known that deep ensembles generally demonstrate stronger performance [1], there is a trade-off between the ensemble performance and training time/memory consumption. The considered alternative ensembling procedures can be favorable in specialized settings (e.g. limited training time and/or memory).\\n\\n2) It remains unclear to me what new insights does the analysis of the low-loss connectors provide? It is expected (and in fact can be shown analytically) that if the two modes define different functions then intermediate points on a continuous path define functions which are different from those defined by the end-points of the path. This result was also analyzed before from the perspective of the performance of ensembles formed by the intermediate points on the connecting paths (see Fig. 2 right in [2]).\\nMoreover, I would encourage authors to reformulate the statements on the connectivity in the function space such as: \\n-- \\u201cWe demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions.\\u201d (Abstract)\\n-- \\u201cthe connectivity in the loss landscape does not imply connectivity in the space of functions\\u201d (Discussion)\\nIn my opinion, these claims are somewhat misleading. What does it mean that the modes are disconnected in the function space? Neural networks define continuous functions (w.r.t to both the inputs and the weights), and a connector is continuous path in the weight space which continuously connects the modes in the function space (i.e. 
a path defines a homotopy between two functions). It is true that two modes correspond to two different functions. However, it is unclear in which sense these functions can be considered to be disconnected. \\n\\n[1] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017.\\n\\n[2] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. In NeurIPS, 2018.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The contribution of the paper is the following two findings: 1. Despite the fact that local minima are connected in the loss landscape the functions corresponding to the points on the curve are significantly distinct. 2. The points along the training trajectory correspond to similar functions.\\n\\nOriginality and novelty. Both findings do not seem quite new. The first conclusion can be mostly derived from Figure 2 right [1]. Moreover, the difference between functions on the curve in terms of predictions is the main motivation of Fast Geometric Ensembling. The second conclusion is also not quite new and there were several approaches to overcome it e.g. SWA [2]. I appreciate that the authors did a much broader investigation of this phenomena than it was done in previous works. Another drawback is lack of practical implications. It is known that ensembling based on dropout is worse than independent networks, but the main advantage of this and similar approaches is memory efficiency. \\n\\nThe clarity. The paper is well written, contains all necessary references and is easy to follow. The provided experimental results and supporting plots are also clear and contain the necessary description. The only part that I found a bit confusing is radial plots. I would recommend the authors to add more rigorous description of how they constructed these plots to increase clarity of the paper. Can the authors please also clarify how they derived formulas for the expected fractional difference for f^* and f functions in the section 3.2? \\n\\nOverall, it is an interesting paper, but the findings are not quite new.\\n[1] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. InNeurIPS, 2018\\n[2] Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Av-eraging weights leads to wider optima and better generalization.arXiv preprint arXiv:1803.05407,2018\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper is trying to answer the question why ensembles of deep neural networks trained with random initialization work so well in practice in improving accuracy. Their proposed hypothesis is that networks trained from different initializations, although all converge to a low-loss/high accuracy optimum, explore different modes in function space and therefore provide more diversity. To experimentally support their hypothesis, first they show that functions along a single training trajectory are similar, however trajectories starting from different initializations may significantly differ. The difference in function space is based on the fraction of points on which the two functions disagree in terms of their prediction. Second, they use different subspace sampling methods around a single optimum and demonstrate that they are significantly less diverse (low disagreement between predictions) than sampling from independent optima through diversity vs accuracy plots. Moreover, they comment on the recent observation that local optima are connected by low-loss tunnels. They experimentally show that even though low-loss/high accuracy path exists between local optima, these tunnels do not correspond to similar solutions in function space, further supporting the multi-mode hypothesis. The authors compare the relative benefit of subspace sampling, weight averaging and ensembling on accuracy and interpret their findings in terms of the hypothesis.\\n\\nOverall, the paper is very well written and provides interesting insights into the multi-modal structure of deep neural network loss landscapes. Even though the hypothesis of the paper is not entirely new and has been touched upon in Fort & Jastrzebski (2019), this paper contributes to the field by providing thorough experimental support and clear exposition of the idea. Therefore, I would accept this paper if the authors provided additional experimental results on a different dataset.\\n\\nThe paper mentions that the trends are consistent across all datasets the authors have explored. However, they only provide results on CIFAR-10 (and a limited set of experiments on ImageNet). Since the contribution of the paper heavily relies on providing experimental verification, it would be important to include at least the diversity vs. accuracy plot for the other datasets they have explored to demonstrate that this phenomenon is not specific to CIFAR-10. \\n\\nAdditionally, I would like to add a couple of comments on the paper that are not part of my decision, but could potentially improve the paper. \\n-The diversity score introduced in the paper is simple and intuitive, however it would be interesting to see whether the results hold across different notions of function space disagreement. \\n\\n-It is mentioned in the paper that data augmentation has been used for training the ResNet20 architecture. Would the results change significantly without data augmentation, as it adds another source of randomness to the training procedure.\\n\\n-Some comments on the figures: in Figure 3 it is very difficult to discern any difference between different shades of red (disagreement values), and in this form the plots are not too informative. Maybe rescaling or a different way of presentation would help. 
Interpreting Figure 7/a is a bit difficult; probably a 3D plot would be useful to explain the different line sections.\"}"
]
} |
B1xxAJHFwS | A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation | [
"Pan Xu",
"Quanquan Gu"
] | Q-learning with neural network function approximation (neural Q-learning for short) is among the most prevalent deep reinforcement learning algorithms. Despite its empirical success, the non-asymptotic convergence rate of neural Q-learning remains virtually unknown. In this paper, we present a finite-time analysis of a neural Q-learning algorithm, where the data are generated from a Markov decision process and the action-value function is approximated by a deep ReLU neural network. We prove that neural Q-learning finds the optimal policy with $O(1/T)$ convergence rate if the neural function approximator is sufficiently overparameterized, where $T$ is the number of iterations. To our best knowledge, our result is the first finite-time analysis of neural Q-learning under non-i.i.d. data assumption. | [
"Reinforcement Learning",
"Neural Networks"
] | Reject | https://openreview.net/pdf?id=B1xxAJHFwS | https://openreview.net/forum?id=B1xxAJHFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9LKC9LU2w5",
"SyeDLwWqjH",
"HJgIU4UPoS",
"rklbQE8vjS",
"HygAFXIwjB",
"rylHBQLvsH",
"BylHEpDWcH",
"BylDqq5RKS",
"HJeNiNNRYS",
"HJxVNuXatB",
"rkgFLhm3YB",
"B1l0WBG3tB",
"BJgdOt4ttr",
"Hkx_AGivYH",
"rkxIbYcwtS",
"H1g49268FS",
"rJgmqLVQtH",
"BklyMs56ur",
"BJxg-5cadS",
"H1e34GDFdH",
"r1g3c5IYuB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_review",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1576798738232,
1573685070952,
1573508174457,
1573508121493,
1573507973548,
1573507901222,
1572072749459,
1571887758719,
1571861660081,
1571792939851,
1571728464929,
1571722501958,
1571535215794,
1571431120190,
1571428606255,
1571376267699,
1571141259109,
1570773766649,
1570773495712,
1570497076438,
1570495123856
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2012/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"~Matt_Theodore1"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2012/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2012/AnonReviewer1"
],
[
"~Matt_Theodore1"
],
[
"ICLR.cc/2020/Conference/Paper2012/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"~Matt_Theodore1"
],
[
"~Sussard_Julard1"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"~Sussard_Julard1"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2012/Authors"
],
[
"~Sussard_Julard1"
],
[
"~Sussard_Julard1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This was an extremely difficult paper to decide, as it attracted significant commentary (and controversy) that led to non-trivial corrections in the results. One of the main criticisms is that the work is an incremental combination of existing results. A potentially bigger concern is that of correctness: the main convergence rate was changed from 1/T to 1/sqrt{T} during the rebuttal and revision process. Such a change is not trivial and essentially proves the initial submission was incorrect. In general, it is not prudent to accept a hastily revised theory paper without a proper assessment of correctness in its modified form. Therefore, I think it would be premature to accept this paper without a full review cycle that assessed the revised form. There also appear to be technical challenges from the discussion that remain unaddressed. Any resubmission will also have to highlight significance and make a stronger case for the novelty of the results.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the revision\", \"comment\": \"Thank the authors for addressing all my comments. I feel satisfied with all the revisions. Although the rate has been changed to $O(1/\\\\sqrt{T})$, I still feel this paper makes a good theoretical contribution to neural Q-learning.\"}",
"{\"title\": \"Response to all reviewers: major changes in the revision\", \"comment\": \"Thank you for reviewing our submission. We have addressed all your questions in the individual responses to each reviewer. Here we summarize the main changes made in our revision for you to have a quick overview of it.\\n\\n1. The first major revision is in Assumption 5.3. Previously, we require $\\\\Sigma_{\\\\pi}-\\\\gamma^2\\\\Sigma_{\\\\pi}^*(\\\\theta)\\\\succ \\\\alpha$ for a positive constant $\\\\alpha$. However, during the discussion in one of the open commentary, we found that this assumption is restrictive and hard to be verified in practice. We thus relaxed this condition to only assume $\\\\Sigma_{\\\\pi}-\\\\alpha\\\\gamma^2\\\\Sigma_{\\\\pi}^*(\\\\theta)\\\\succ 0$ for some positive constant $\\\\alpha$. Note that this assumption is much milder and easy to be attained since both $\\\\Sigma_{\\\\pi}$ and $\\\\Sigma_{\\\\pi}^*(\\\\theta)$ are positive definite by definition. The high level idea of the assumption remains the same, which ensures that the learning policy is not too much worse than the greedy policy. Following this milder assumption, we proved an $O(1/\\\\sqrt{T})$ convergence rate of Q-learning with multi-layer neural network function approximation under non-i.i.d. data. Consequently, we have modified the corresponding statement in the introduction, the rate in Table 1, and the result in Theorems 4.5. and 4.6.\\n\\n2. We changed the definition of $b_{\\\\max}$ in (5.6) to be $b_{\\\\max}$ in (5.6) to be $b_{\\\\max}(\\\\theta)=\\\\arg\\\\max_{b\\\\in\\\\mathcal{A}}|\\\\langle\\\\nabla_{\\\\theta} f(\\\\theta;s,b),\\\\theta\\\\rangle|$. This is consistent with Chen et al. (2019). This does not affect the result of Lemma 6.3.\"}",
"{\"title\": \"Response to official blind review #3\", \"comment\": \"Thank you for your constructive comments. We address your questions as follows.\", \"q1\": \"\\\"The novelty is a bit unclear other than the non-iid assumption. We note that modern Q-learning tends to use batching so doesn't require much of an iid assumption anyways, but this allows for more robust proofs in TD settings with non-iid training.\\\"\", \"a1\": \"Existing work on neural Q-learning (Cai et al, 2019) requires to resample a new pair of data $(s, a, s')$ at every iteration from the initial data distribution. This is not efficient in practice since one step along the trajectory may not give a good prediction of the policy. In contrast, our paper study the case where data $(s_t,a_t,s_{t+1})$ is drawn from a consecutive trajectory generated by the learning policy. Apart from the non-i.i.d. data generation, another contribution of our paper is to study the convergence of Q-learning with multi-layer neural network approximation. The extension from the two layer case in Cai et al. (2019) to our multi-layer case is not easy since the linearization error can not be calculated directly.\", \"q2\": \"\\\"The paper was a bit dense and hard to follow, we suggest reducing p.8 to have more discussion with references to proofs in the Appendix as in Chen2019.\\\"\", \"a2\": \"Thank you for the suggestion. We have added additional discussions and more details of the proof in Section 6 to make the proof easier to follow. In order to make the proof coherent, we did not divide the proof into several parts and move some parts of the proof to the appendix. Please let us know if you have any further suggestion.\", \"q3\": \"\\\"As the authors admit in open commentary, there is a mistake to be fixed which needs to be reviewed before acceptance. I think there is value to this work, however, would require seeing the change to assess a revision.\\\"\", \"a3\": \"We have fixed the problem of the indicator function used in the proof of Lemma 6.3. In particular, we chose to modify the definition of $b_{\\\\max}$ in (5.6) to be $b_{\\\\max}(\\\\theta)=\\\\arg\\\\max_{b\\\\in\\\\mathcal{A}}|\\\\langle\\\\nabla_{\\\\theta} f(\\\\theta;s,b),\\\\theta\\\\rangle|$ which is similar to the definition used in Chen et al. (2019) (Note that their paper is for linear function approximation and thus $b_{\\\\max}(\\\\theta)=\\\\arg\\\\max_{b\\\\in\\\\mathcal{A}}|\\\\phi(s,b)^{\\\\top}\\\\theta|$). This does not change the result of Lemma 6.3. See page 5 and pages 17-18 of the revision for the details.\"}",
"{\"title\": \"Response to official blind review #2\", \"comment\": \"Thank you for your insightful comments. We would like to clarify that our paper is not a direct combination of existing results. We highlight that our main contributions are to provide (1) the first finite-time analysis of Q-learning with multi-layer neural network function approximation, and (2) the first finite-time analysis of neural Q-learning with non-i.i.d. data assumption. We agree with the reviewer's comment that our analysis is built on previous work of Bhandari et al. (2018) and Cai et al. (2019). However, the analysis for deep Q-learning with non-i.i.d. data is by no means trivial, as is shown in our following responses to your comments on each technical lemma.\\n\\nOverall, our proof of Theorem 5.4 was decomposed into three parts (the bounds of terms $I_1$, $I_2$ and $I_3$ in equation (6.5) of our paper), which are bounded using Lemmas 6.1, 6.2 and 6.3 respectively. \\n\\nIn Lemma 6.1, we upper bound the difference between $\\\\mathbf{g}_t$ and $\\\\mathbf{m}_t$, where $\\\\mathbf{g}_t$ is defined based on the Bellman residual error and the multi-layer neural network function, and $\\\\mathbf{m}_t$ is defined on the same Bellman residual error but with a linearized function at the initial point $\\\\theta_0$. This lemma requires a careful calculation of the bound on the temporal difference $\\\\Delta_t(s_t,a_t,s_{t+1};\\\\theta_t)$ and the linearization error, which is not presented in previous work.\\n\\nIn Lemma 6.2, we characterize the bias of the stochastic gradient $\\\\mathbf{m}_t(\\\\cdot)$ and its idealized version $\\\\overline{\\\\mathbf{m}}(\\\\cdot)$ whose definition does not depend on the Markov data trajectory. As we mentioned at the beginning of the proof of Lemma 6.2, our proof was indeed adapted from that in Bhandari et al. (2018). However, there are a few differences between their proof and ours. First, $\\\\mathbf{m}_t(\\\\cdot)$ and $\\\\overline{\\\\mathbf{m}}(\\\\cdot)$ in our paper are defined based on a neural network function and its gradient, and thus the Lipschitz condition and the gradient norm bound are not trivial to derive. Second, the proof in Bhandari et al. (2018) is for TD learning, which does not directly apply to neural Q-learning.\\n\\nIn Lemma 6.3, we used a slightly different assumption (Assumption 5.3 in the revision) from that of Cai et al. (2018). It is worth noting that our Assumption 5.3 follows the same idea of Melo et al. (2008), Zou et al. (2019) and Chen et al. (2019), which can be interpreted as the advantage of the greedy policy over the learning policy. In contrast, Assumption 6.1 in Cai et al. (2019) directly imposes the condition on the difference between action value functions at two different policies. Therefore, our proof is based on bounding the eigenvalue of the difference between two covariance matrices (i.e., $\\\\hat\\\\Sigma_{\\\\pi}$ and $\\\\hat\\\\Sigma_{\\\\pi}^*(\\\\theta)$), which is different from that of Cai et al. (2019).\"}",
"{\"title\": \"Response to official blind review #1\", \"comment\": \"Thank you for your helpful comments. We address them point by point as follows.\", \"q1\": \"\\\"The projection step relies on a parameter $\\\\omega$ which is unknown in practice. It would be of practical interests to seek other proof techniques to avoid such projection step. ...\\\"\", \"a1\": \"The unknown constant $C$ is often treated as a hyperparameter and can be tuned using grid search in practice. We agree that it would be interesting to explore the possibility of removing the projection step as what Srikant and Ying (2019) and Chen et al. (2019) did in the linear approximation setting. We have discussed their methods in the related work section. However, adapting their proof techniques would completely change our algorithm and our current analysis framework. So we will investigate the projection-free version of our algorithm in the future work.\", \"q2\": \"\\\"Assumption 5.3 is problematic for the considered neural Q-learning setting. The matrix $\\\\hat{\\\\Sigma}_{\\\\pi}$ is of a very large dimension in the order of $O(m^2) * O(m^2)$ where the width of the neural network $m$ is assumed to diverge in Theorem 5.4 for the over-parameterization purpose. ...\\\"\", \"a2\": \"In our previous submission, we did not require $m$ to go to infinity and thus the matrix $\\\\hat\\\\Sigma_{\\\\pi}$ is well defined and positive definite. However, the minimum eigenvalue of $\\\\hat\\\\Sigma_{\\\\pi}$ could be very small and hence $\\\\alpha$ (the minimum eigenvalue of $\\\\hat\\\\Sigma_{\\\\pi}-\\\\gamma^2\\\\hat\\\\Sigma_{\\\\pi}^*(\\\\theta)$) would be a very small quantity which can slow down the convergence rate. Based on this observation, we agree that the previous assumption is too restrictive and we have removed the assumption on the minimum eigenvalue in the revision. In particular, we relax the previous assumption to a much milder one where we only require the difference between the two matrices ($\\\\hat\\\\Sigma_{\\\\pi}$ and $\\\\hat\\\\Sigma_{\\\\pi}^*(\\\\theta)$) to be positive definite (see Assumption 5.3 in the revision). Under this milder assumption, we proved that neural Q-learning converges with an $O(1/\\\\sqrt{T})$ rate. This result matches the convergence rate of neural Q-learning in Cai et al. (2019) where only a two-layer neural network approximator is used and the data are assumed to be i.i.d. generated.\", \"q3\": \"\\\"The error rate in Theorem 5.4 is an increasing function of the layer $L$ in DNN, which is counterintuitive. A typically practical observation is that a larger $L$ is better.\\\"\", \"a3\": \"The dependence on $L$ in the error rate in Theorem 5.4 comes from Lemma 6.1 which characterizes the approximation error between the linearized gradient $\\\\mathbf{m}_t$ and the gradient term $\\\\mathbf{g}_t$. The dependency of $L$ can be removed by choosing a smaller $\\\\omega=C_0m^{-1/2}L^{-9/4}$. Please see the updated Theorem 5.4 in the revision.\"}",
"{\"title\": \"Re: Re: The matrix $\\\\hat \\\\Sigma_\\\\pi$ has diverging trace\", \"comment\": \"Thanks for your reply. The reason I say $m$ is allowed to go to infinity is to say that $m$ is much larger than $n$. For example, in the NTK paper, they typically assume that $m = \\\\Omega(n^8)$.\\n\\nHowever, the contradiction in your paper is just there. I agree with you that $\\\\Sigma_{\\\\pi}$ is a population matrix. Do you agree that its trace is $\\\\Omega(p) = \\\\Omega(m^2)$? That is $\\\\| \\\\hat\\\\Sigma_{\\\\pi} \\\\|_{\\\\textrm{fro}} = \\\\Omega(m^2)$. However, as shown in your Lemma B.1, for all $x$, you have shown $\\\\| \\\\nabla_{\\\\theta} f (\\\\theta; x) \\\\|_2 \\\\leq C\\\\cdot \\\\sqrt{m}$ for some constant $C$. This means that $$\\\\hat \\\\Sigma_{\\\\pi} = 1/ m \\\\cdot \\\\mathbb{E}_{x \\\\sim \\\\pi} [ \\\\nabla_{\\\\theta} f (\\\\theta; x)\\\\nabla_{\\\\theta} f (\\\\theta; x)^\\\\top ] $$\\n has Frobenious norm bounded by $C^2$, as $$\\\\| \\\\nabla_{\\\\theta} f (\\\\theta; x)\\\\nabla_{\\\\theta} f (\\\\theta; x)^\\\\top \\\\|_{\\\\textrm{fro} } = \\\\| \\\\nabla_{\\\\theta} f (\\\\theta; x) \\\\|_2 ^2 \\\\leq C^2 \\\\cdot m.$$\\n By your Assumption 5.3 and Lemma B.1, \\n$$\\n\\\\Omega(m^2) = \\\\textrm{trace} (\\\\hat \\\\Sigma_{\\\\pi}) \\\\leq C^2. \\n$$\\nThis must be wrong!\"}",
"{\"title\": \"Re: The matrix $\\\\hat \\\\Sigma_{\\\\pi}$ has diverging trace\", \"comment\": \"You still have a misunderstanding of our assumption and analysis. In our paper, we do not need to estimate the population matrix $\\\\hat\\\\Sigma_{\\\\pi}$ based on $n$ data points. Assumption 5.3 is imposed on $\\\\hat\\\\Sigma_{\\\\pi}$ which by definition already takes the expectation over the data distribution (according to our previous response, this means infinite data points). This assumption is used in the proof of Lemma 6.3. According to our Theorems 5.4 and 5.6, we do not require $m$ to go to infinity, which is the same as some recent papers on NTK such as [Allen-Zhu et al., 2019, Du et al., 2019, Zou et al., 2019].\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper introduces a finite time analysis of Q-learning with neural network function approximators across multiple layers and no iid assumption.\", \"[Pros]\", \"Provides a novel way to analyze Q learning with nn function approximators that can be applied to other algorithms (notably in my mind, TD in actor critic where iid assumptions are often violated).\", \"[Cons]\", \"The novelty is a bit unclear other than the non-iid assumption. We note that modern Q-learning tends to use batching so doesn't require much of an iid assumption anyways, but this allows for more robust proofs in TD settings with non-iid training.\", \"The paper was a bit dense and hard to follow, we suggest reducing p.8 to have more discussion with references to proofs in the Appendix as in Chen2019.\", \"As the authors admit in open commentary, there is a mistake to be fixed which needs to be reviewed before acceptance. I think there is value to this work, however, would require seeing the change to assess a revision.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper provides a finite-time analysis of a neural Q-learning algorithm, where the data are generated from a Markov decision process and the action-value function is approximated by a deep ReLU neural network. When the neural function is sufficiently over-parameterized, the O(1/T) convergence rate is attained.\", \"pros\": \"This paper makes theoretical contribution to the understanding of neural Q-learning. This is an important but difficult task. The recent finite-time analysis on Q-learning either assumes a linear function approximation or an i.i.d. setting in the neural Q-learning. This paper makes a first attempt to study the neural Q-learning with Markovian noise. Overall, this paper is very easy to follow.\", \"cons\": \"In spite of its theoretical contributions, this paper has a few major issues.\\n\\n1. The projection step relies on a parameter $\\\\omega$ which is unknown in practice. In theorem, $\\\\omega = C m^{-1/2}$ for some unknown constant $C$. It would be of practical interests to seek other proof techniques to avoid such projection step. For instance, Srikant and Yang (2019) and Chen et al. (2019) removed this projection step in the finite-time analysis of Q-learning with a linear function approximation. \\n\\n2. Assumption 5.3 is problematic for the considered neural Q-learning setting. The matrix $\\\\hat{\\\\Sigma}_{\\\\pi}$ is of a very large dimension in the order of $O(m^2) * O(m^2)$ where the width of the neural network $m$ is assumed to diverge in the Theorem 5.4 for the over-parameterization purpose. Given the diverging dimension scenario, it is problematic to ensure Assumption 5.3. Moreover, it is unclear how to verify this condition in practice. In the literature, Melo et al. (2008) and Zou et al. (2019b) assumed a similar condition, which is OK because in the Q-learning with linear function approximation, this matrix reduces to the covariance matrix of the feature vector. \\n\\n3. The error rate in Theorem 5.4 is an increasing function of the layer $L$ in DNN, which is counterintuitive. A typically practical observation is that a larger $L$ is better.\"}",
"{\"title\": \"The matrix $\\\\hat \\\\Sigma_{\\\\pi}$ has diverging trace\", \"comment\": \"In your work, as $\\\\hat \\\\Sigma_{\\\\pi}$ is a $p\\\\times p$ matrix and its eigenvalues are lower bounded, its trace will be $\\\\Omega(p) = \\\\Omega(m^2) $. That means, as $m$ goes to infinity, the trace of this matrix will blow up. This seems very strange as estimating this matrix would be impossible using $n$ data points.\\nHow does your analysis cope with the fact that $\\\\hat \\\\Sigma_{\\\\pi}$ is divergent as $m$ goes to infinity?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"[Summary]\\nThis paper studies the convergence of Q-Learning when a wide multi-layer network in the Neural Tangent Kernel (NTK) regime is used as the function approximator. Concretely, it shows that Q-learning converges with rate O(1/T) with data sampled from a single trajectory (non-i.i.d.) of an infinite-horizon MDP + a certain (stationary) exploration policy. \\n\\n[Pros]\\nThe results in this paper improve upon recent work on the same topic. It is able to handle multi-layer neural nets as opposed to two-layer, prove a faster rate O(1/T), and handle non-iid data (as opposed to iid data where (s,a,r,s\\u2019) are sampled freshly from the beginning at each step.)\\n\\nThe paper is generally well-written. The results and proof sketches are well presented and easy to follow. The proof seems correct to me from my check, including the indicator issue pointed out in the comments which I think can be easily fixed (by explicitly writing out the indicator and thus the Cauchy-Schwarz will still apply.)\\n\\n[Cons]\\nThe result in this paper seems more or less like a direct combination of existing techniques, and thus may be limited in bringing in new techniques / messages. Key technical bottlenecks that are assumed out in prior work are still assumed out in this paper with potentially different forms but essentially the same thing.\\n\\nMore concretely, the proof of the main theorem (Thm 5.4) seems to rely critically on Lemmas 6.2 and 6.3, both of which are rather straightforward adaptations of prior work:\\n\\nLemma 6.2 (concentration of stochastic gradients on linearized problem): Seems to me like almost the same as [Bhandari et al. 2018], expect that now the network is an affine function---rather than a linear function---of \\\\theta, where the additional constant term f(\\\\theta_0; s, a) depends on (s, a). \\n\\nLemma 6.3 (good landscape of linearized problem): Comparing with prior work (Theorem 6.3, Cai et al. 2019), this Lemma works by directly assuming out the property of the arg-max operator in Assumption 5.3, which has a slightly different form from, but is essentially the same thing as (Assumption 6.1, Cai et al. 2019). \\n\\nTo be fair, the paper has to deal with the linearization error of a multi-layer net, which is dealt with in Lemma 6.1 and should be valued. But still I tend to think the above adaptations are rather straightforward and technically not quite novel.\\n\\n[Potential improvements]\\nI would like to hear more from the authors about the technical novelty in this paper, specifically how Lemma 6.1 - 6.3 compare with prior work. I would be willing to improve my evaluation if this can be addressed.\"}",
"{\"comment\": \"We would like to clarify that you had misunderstood the definitions of $\\\\mathbf{G}$ in [Jacot et al, Du et al] and $\\\\widehat\\\\Sigma_{\\\\pi}$ in our paper, which are totally different matrices. It should be noted that $\\\\widehat\\\\Sigma_{\\\\pi}$ is NOT the Gram matrix defined in [Jacot et al., 2018] or [Du et al., 2019]. We explain the definition of $\\\\widehat\\\\Sigma_{\\\\pi}$ as follows.\\n\\nAccording to (4.2) in our paper, we have that $\\\\mathbf{\\\\theta}\\\\in\\\\mathbb{R}^{m^2(L-1)+m(d+1)}$ is the concatenation of vectorized weight matrices, which means the gradient $\\\\nabla_{\\\\mathbf{\\\\theta}}\\\\widehat f(\\\\mathbf{\\\\theta};s,a)$ is also a $p=m^2(L-1)+m(d+1)$ dimensional vector. The matrix $\\\\widehat\\\\Sigma_{\\\\pi}$ used in Assumption 5.3 is defined as \\n$$\\n\\\\widehat\\\\Sigma_{\\\\pi}=1/m\\\\mathbb{E}_{\\\\pi}\\\\big[\\\\nabla_{\\\\mathbf{\\\\theta}}\\\\widehat f(\\\\mathbf{\\\\theta};s,a)\\\\nabla_{\\\\mathbf{\\\\theta}}\\\\widehat f(\\\\mathbf{\\\\theta};s,a)^{\\\\top}\\\\big],\\n$$\\nwhich is a $p\\\\times p$ matrix, and the expectation $\\\\mathbb{E}[\\\\cdot]$ is taken over the data distribution in the feature space of the state-action pair $(s,a)$. The expectation $\\\\mathbb{E}[\\\\cdot]$ over data distribution means $\\\\widehat\\\\Sigma_{\\\\pi}$ is defined based on an infinite number of data points. $\\\\widehat\\\\Sigma_{\\\\pi}$ has nothing to do with the overparameterization. It can be positive definite under certain conditions on the learning policy and the transition probability kernel, which is a standard assumption in the literature [Melo et al., 2008;Bhandari et al., 2018; Zou et al., 2019]. \\n\\nFor the Gram matrix $\\\\mathbf{G}$, we first point out that your definition is incorrect. According to [Jacot et al., 2018] and [Du et al., 2019], for an $L$-layer deep neural network, the Gram matrix (at the initial point $\\\\mathbf{\\\\theta}_0$) is denoted as $\\\\mathbf{G}^{(L)}(0)\\\\in\\\\mathbb{R}^{n\\\\times n}$, which is defined based on $n$ data points $\\\\{(s_i,a_i)\\\\}_{i=1}^{n}$. Specifically, for any $j,k=1,\\\\ldots,n$, the $(j,k)$-th entry of the Gram matrix in their paper is defined as \\n$$\\nG_{jk}^{(L)}(0)=\\\\nabla_{\\\\mathbf{\\\\theta}} \\\\widehat f(\\\\mathbf{\\\\theta}_0;s_{j},a_{j})^{\\\\top}\\\\nabla_{\\\\mathbf{\\\\theta}} \\\\widehat f(\\\\mathbf{\\\\theta}_0;s_{k},a_{k}).\\n$$\\nTherefore, the definition of the Gram matrix depends on a fixed number of data points with size $n$.\\n\\nTo summarize, the Gram matrix is a $n\\\\times n$ matrix defined based on $n$ data points. \\nIt is clear that the matrix $\\\\widehat\\\\Sigma_{\\\\pi}\\\\in\\\\mathbb{R}^{p\\\\times p}$ in our paper is not the Gram matrix and is independent of $n$.\", \"title\": \"Re: Assumption 5.3 seems unattainable. The hessian matrix in a overparametrized NN cannot have lower bounded smallest eigenvalue.\"}",
"{\"comment\": \"Assumption 5.3 assumes that $\\\\hat \\\\Sigma_{\\\\pi}$ defined in equation 5.5 has eigenvalues lower bounded by $\\\\alpha$, which seems a critical condition which leads to the $O(1/T)$ convergence rate.\\n\\nHowever, this assumption cannot be true. As shown in (5.5) and assumption 5.3, we have \\n$$\\n\\\\hat{\\\\mathbf{\\\\Sigma}}_{\\\\pi}=1 / m \\\\mathbb{E}_{\\\\pi}\\\\left[\\\\nabla_{{\\\\theta}} \\\\widehat{f}({\\\\theta} ; s, a) \\\\nabla_{{\\\\theta}} \\\\widehat{f}({\\\\theta} ; s, a)^{\\\\top}\\\\right] \\\\geq \\\\alpha \\\\cdot I_{m},\\n$$\\nwhere $I_m$ is the identity matrix in $R^m$, $\\\\alpha$ is a fixed constant, and $m$ is the total number of parameters. Thus, the trace of $\\\\hat{\\\\mathbf{\\\\Sigma}}_{\\\\pi}$ is $\\\\Omega(m)$. \\n\\nHowever, as shown in [Jacot et al] or [Du et al], the Gram matrix defined as \\n$$\\n\\\\mathbf{G} = 1 / m \\\\mathbb{E}_{\\\\pi}\\\\left[\\\\nabla_{{\\\\theta}}^{\\\\top} \\\\widehat{f}({\\\\theta} ; s, a) \\\\nabla_{{\\\\theta}} \\\\widehat{f}({\\\\theta} ; s, a)\\\\right] , \\n$$\\nhas trace bounded by $O(n)$. \\n\\nWith some simple linear algebra we have \\n$$\\n\\\\Omega(m) = \\\\textrm{trace} (\\\\hat{\\\\mathbf{\\\\Sigma}}_{\\\\pi} ) = \\\\textrm{trace}(\\\\mathbf{G} ) = O(n),\\n$$\\nwhich contradicts with the fact that $m $ is much larger than $n$, as assumed in the overparametrization setting.\", \"title\": \"Assumption 5.3 seems unattainable. The hessian matrix in a overparametrized NN cannot have lower bounded smallest eigenvalue.\"}",
"{\"comment\": \"Thanks for your reply. But I don't think the issue can be easily fixed by changing max into a sum. Otherwise, [Chen2019] would have done so instead of constructing a new assumption. I would admit that the issue can be successfully solved following the proof in [Chen2019] though.\", \"title\": \"Re: As stated in our last response, this minor issue is easily fixable.\"}",
"{\"comment\": \"As we stated in the last response to you, the minor issue you mentioned is easy to fix, and we will update the revision when the author response phase starts. To give you a general idea, there are at least two ways to fix it. One way is to follow our current proof, which is similar to the proof of Theorem 1 in [Melo2007], and change the $\\\\max$ operator in the penultimate equation of our proof for Lemma 6.3 to a summation operator, which leads to a $\\\\sqrt{2}$ factor on the right hand side of this inequality. This will introduce a factor of 2 in front of $\\\\gamma^2$ in Assumption 5.3. Another way of fixing it is to impose a slightly different assumption on the learning policy like in [Chen2019]. In particular, if we change the definition of $b_{\\\\max}(\\\\theta)$ used in equation (5.6) to be $b_{\\\\max}(\\\\theta)=\\\\arg\\\\max_{b\\\\in\\\\mathcal{A}}|\\\\langle\\\\nabla_{\\\\theta} \\\\hat f(\\\\theta;s,b),\\\\theta\\\\rangle|$, then the same result holds under our current Assumption 5.3. Neither of these fixes will affect the conclusion of our paper. We will elaborate this in detail in our revision.\", \"title\": \"As stated in our last response, this minor issue is easily fixable.\"}",
"{\"comment\": \"Thanks for your reply. In terms of the correctness of the proof, I am still unconvinced. The reason is that currently your proof separates two cases depending on $\\\\langle \\\\nabla_{\\\\theta} f_{\\\\theta} (\\\\theta_0; s,a), \\\\theta - \\\\theta_0 \\\\rangle $ is positive or negative. Note that this is a random variable. Thus, your proof seems essentially the same as having an indicator. More specifically, if this random variable is nonnegative, we write $\\\\mathrm{1}_{E} = \\\\{\\\\langle \\\\nabla_{\\\\theta} f_{\\\\theta} (\\\\theta_0; s,a), \\\\theta - \\\\theta_0 \\\\rangle \\\\geq 0 \\\\}$, then the equation below (B.5) is the same as saying\\n\\t$$\\n\\t\\\\mathbb{E}_{\\\\pi}\\\\left[\\\\left(\\\\widehat{f}\\\\left({\\\\theta} ; s^{\\\\prime}, b_{\\\\max }\\\\right)-\\\\widehat{f}\\\\left({\\\\theta}^{*} ; s^{\\\\prime}, b_{\\\\max }^{*}\\\\right)\\\\right)\\\\left\\\\langle\\\\nabla_{{\\\\theta}} f\\\\left({\\\\theta}_{0} ; s, a\\\\right), {\\\\theta}-{\\\\theta}^{*}\\\\right\\\\rangle \\\\cdot \\\\mathrm{1}_{E} \\\\right] \\\\leq \\\\mathbb{E}_{\\\\pi}\\\\left[\\\\left({\\\\theta}-{\\\\theta}^{*}\\\\right)^{\\\\top} \\\\nabla_{{\\\\theta}} f\\\\left({\\\\theta}_{0} ; s^{\\\\prime}, b_{\\\\max }\\\\right) \\\\nabla_{{\\\\theta}} f\\\\left({\\\\theta}_{0} ; s, a\\\\right)^{\\\\top}\\\\left({\\\\theta}-{\\\\theta}^{*}\\\\right) \\\\cdot \\\\mathrm{1}_{E}\\\\right].\\n$$\\nThen, in the Cauchy-Schwarz inequlity afterwards, you need to handle this indicator function directly because you take expectations when using Cauchy-Schwarz. \\n\\nIn a word, you cannot separate the random variable into different cases and compute expectations of its functions on each difference cases separately!\", \"title\": \"Re: Re: Potentially contains error in the analysis. Remarks on the contribution.\"}",
"{\"comment\": \"1. The additional reference: We are happy to cite and comment the recent arXiv paper you mentioned.\\n\\n2. Contribution: we have emphasized our contributions in the abstract and introduction of our paper. As we displayed in Table 1 in our paper, our work is different from and superior to existing papers in multiple ways. To summarize, this is the first work that studies the convergence of Q-learning with deep neural network function approximation. Compared with existing work [Cai2019] which studies Q-learning with two-layer neural network function approximation with i.i.d. noise assumptions, we study the Markovian noise of deep Q-learning. Our convergence rate is also sharper than that of existing work for two-layer neural networks even though our setting is much more challenging. \\n\\n3. Correctness: we clarify the \\u201ccorrectness\\u201d you suspected as follows:\\n\\n(1) The indicator function: while our proof technique is similar to that of [Melo2007], we did not explicitly use the indicator function in our proof as [Melo2007]. Similar results can also be found in Theorem 5.1 and Lemma 5.3 in the reference [Chen2019] (https://arxiv.org/pdf/1905.11425.pdf) pointed out by you, under a slightly different assumption. We will discuss this in our revision during the author response phase. \\n\\n(2) The \\u201cerror\\u201d in [Melo2007]: the arXiv paper [Chen2019] (https://arxiv.org/pdf/1905.11425.pdf) pointed out by you has removed the comment \\u201cHowever, we could not verify the correctness of the proof of the main theorem (Theorem 1) in [Melo2007] even after personal communication with the corresponding author\\u201d in their second version. We don\\u2019t find any unfixable error in [Melo2007].\", \"title\": \"Re: Potentially contains error in the analysis. Remarks on the contribution.\"}",
"{\"comment\": \"Thank you for pointing out the relevant papers on neural tangent kernels. We will cite them in the revision (during the author response phase when we can update the submission file).\", \"title\": \"Re: Missing related work.\"}",
"{\"comment\": \"1. This work combines the ideas of the following four papers: (1): [Cao2019] for handing deep overparameterized NN using neural tangent kernel, (2) [Bhandari2018] for the finite-time analysis of temporal difference learning with non-iid data; (3) [Cai2019] for temporal-difference learning with 2-layer overparametrized NN, and (4) [Melo2007] for linear Q-learning.\\n\\nThe general analysis framework follows from [Bhandari2018], while the technical assumption for Q-learning (Assumption 5.3) is borrowed from [Melo2007], which is also used in [Zou2019]. Specifically, the main theorems, Theorems 5.4 and 5.6, have similar counterparts in [Cai2019], albeit for 2-layer NN. Compared with that work, this work has 3 main differences/improvements: (1) handling deep nets and (2) non i.i.d. data and (3) use a different assumption for Q learning, namely Assumption 5.3. Theorem 5.4 is depends on Lemmas 6.1, 6.2, 6.3, and the proof strategy for this theorem is similar to [Bhandari2018] and [Cai2019]. Lemma 6.1 is adapted from [Cao2019], Lemma 6.2 is borrowed from [Bhandari2018] to handle non-iid data, and Lemma 6.3 is adapted from [Zou2019]. Thus, it seems that this work is a combination of existing results. \\n\\n\\nMore importantly, the proof of Lemma 6.3, which is based on Assumption 5.3, might not be correct. Specifically, on page 16, after getting (B.5), the authors separate two cases depending on whether $< \\\\nabla _{\\\\theta} f(\\\\theta_0, s, a) , \\\\theta - \\\\theta ^*> $ is positive or not. In each case, the authors establish an upper bound using Cauchy-Schwarz. This is exactly the same as the proof of Theorem 1 in [Melo2007] (http://icml2008.cs.helsinki.fi/papers/652.pdf). However, due to the indicator functions, you cannot directly combine these two cases. The indicator functions should be taken into consideration. Moreover, in recent work on linear Q-learning, [Chen 2019] (which is not cited), the authors state that the proof in [Melo2007] is NOT CORRECT! Thus, by directly following [Melo2007]'s proof, the proof of Lemma 6.4 is not correct. \\n\\n[Chen2019] gives the following remarks (https://arxiv.org/pdf/1905.11425v1.pdf):\\n``\\\"One such condition was proposed in [Melo2007] to restrict the sampling policy to be close enough to the optimal policy. However, we could not verify the correctness of the proof of the main theorem (Theorem 1) in [Melo2007] even after personal communication with the corresponding author\\\"\\n\\nThus, it would be great if the authors could check the proof and resolve the technical issue inherit from [Melo2007].\", \"cao2019\": \"Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks\", \"bhandari2018\": \"A Finite-Time Analysis of Temporal Difference Learning With Linear Function Approximation\\nCai2019; Neural Temporal-Difference Learning Converges to Global Optima\", \"melo2007\": \"An Analysis of Reinforcement Learning with Function Approximation\", \"zou2019\": \"Finite-sample analysis for sarsa with linear function approximation\", \"chen2019\": \"Performance of Q-learning with Linear Function Approximation: Stability and Finite-Time Analysis\", \"title\": \"Potentially contains error in the anlysis. Remarks on the contribution\"}",
"{\"comment\": \"This paper uses linear approximation of neural networks, which borrows the analysis using Neural tangent kernels (NTK). There are a lot of papers on NTK recently and are neglected by the author, including the paper introduced NTK. Please cite at least the following papers:\\n\\n1. Jacot et al. Neural Tangent Kernel: Convergence and Generalization in Neural Networks\\n2. Allen-Zhu et. al. Learning and generalization in overparameterized neural networks, going beyond two layers\\n3. Arora et. al. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks\\n4. Cai et. al. A gram-gauss-newton method learning overparameterized deep neural networks for regression problems\\n5. Chizat et. al. On the global convergence of gradient descent for over-parameterized models using optimal transport\", \"title\": \"Missing related work\"}"
]
} |
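For reference, the two matrices at issue in the comment thread above are easy to conflate; restated side by side in the commenters' own notation (this is only a summary of the definitions quoted in the thread, not new material from the reviewed paper):

```latex
% Parameter-space covariance in Assumption 5.3 (authors' definition):
% a p x p matrix with p = m^2(L-1) + m(d+1), where the expectation is
% over the state-action distribution induced by the policy pi.
\widehat{\Sigma}_{\pi}
  = \frac{1}{m}\,\mathbb{E}_{\pi}\!\left[
      \nabla_{\theta}\widehat{f}(\theta; s, a)\,
      \nabla_{\theta}\widehat{f}(\theta; s, a)^{\top}
    \right] \in \mathbb{R}^{p \times p}

% NTK Gram matrix at initialization (Jacot et al., 2018; Du et al., 2019):
% an n x n matrix built from a fixed sample of n state-action pairs.
G^{(L)}_{jk}(0)
  = \nabla_{\theta}\widehat{f}(\theta_0; s_j, a_j)^{\top}
    \nabla_{\theta}\widehat{f}(\theta_0; s_k, a_k),
  \qquad j, k = 1, \dots, n
```

The two objects have different shapes (p x p versus n x n) and different normalizations, which is the crux of the disagreement in the thread above.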
r1geR1BKPr | MULTI-STAGE INFLUENCE FUNCTION | [
"Hongge Chen",
"Si Si",
"Yang Li",
"Ciprian Chelba",
"Sanjiv Kumar",
"Duane Boning",
"Cho-Jui Hsieh"
] | Multi-stage training and knowledge transfer from a large-scale pretrain task to various fine-tune end tasks have revolutionized natural language processing (NLP) and computer vision (CV), with state-of-the-art performances constantly being improved. In this paper, we develop a multi-stage influence function score to track predictions from a finetune model all the way back to the pretrain data. With this score, we can identify the pretrain examples in the pretrain task that contribute most to a prediction in the fine-tune task. The proposed multi-stage influence function generalizes the original influence function for a single model in Koh et al 2017, thereby enabling influence computation through both pretrain and fine-tune models. We test our proposed method in various experiments to show its effectiveness and potential applications. | [
"influence function",
"multistage training",
"pretrained model"
] | Reject | https://openreview.net/pdf?id=r1geR1BKPr | https://openreview.net/forum?id=r1geR1BKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"rhEuoMUMAx",
"rJeh0c5hjB",
"ByxWItucsr",
"SkeI0v_9sS",
"SylSRUuqjH",
"rJgdpYkL9S",
"rJlosyF6tB",
"rkxd5xEcFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738202,
1573853907661,
1573714249329,
1573713869989,
1573713613465,
1572366783900,
1571815330821,
1571598480013
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2011/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2011/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2011/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2011/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2011/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2011/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2011/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper extends the idea of influence functions (aka the implicit function theorem) to multi-stage training pipelines, and also adds an L2 penalty to approximate the effect of training for a limited number of iterations.\\n\\nI think this paper is borderline. I also think that R3 had the best take and questions on this paper.\", \"pros\": [\"The main idea makes sense, and could be used to understand real training pipelines better.\", \"The experiments, while mostly small-scale, answer most of the immediate questions about this model.\"], \"cons\": [\"The paper still isn't all that polished. E.g. on page 4: \\\"Algorithm 1 shows how to compute the influence score in (11). The pseudocode for computing the influence function in (11) is shown in Algorithm 1\\\"\", \"I wish the image dataset experiments had been done with larger images and models.\", \"Ultimately, the straightforwardness of the extension and the relative niche applications mean that although the main idea is sound, the quality and the overall impact of this paper don't quite meet the bar.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of rebuttal\", \"comment\": \"We thank all the reviewers for their constructive comments! Below is a summary of our reply to the reviewers and the main changes to the paper.\\n\\n1) To answer R1 and R3\\u2019s questions on the use case of our model, we ran additional experiments to show that removing examples from the highest influence scores in pretraining data can be used to improve finetuned model\\u2019s test accuracy. \\n\\n2) To address R1 and R2\\u2019s comment on time complexity, we report the running time of our method on both large ELMo and small CIFAR models to show its scalability. \\n\\n3) For R2\\u2019s concern on r value, we explain why we think Pearson\\u2019s r value of 0.6 is meaningful to show the strong correlation given the complexity of this task. \\n\\n4) To address R3\\u2019s question on the experiment setting, we have new experiments showing that pretrained embedding does improve the model\\u2019s accuracy when finetuning examples are limited. \\n\\n5) We have answered the questions related to CG and Hessian matrix to be non-PSD from R1 and R3 in the reply, and modified Section 3.3 in the new version of the paper to make things clear.\\n\\n6) We have fixed the typos as pointed out by R1 in the revised paper to make it easier to follow.\\n\\nWe hope the above changes and replies can address the questions/concerns from the reviewers for this paper. \\n\\nThanks,\\nPaper2011 Authors\"}",
"{\"title\": \"Thank you for the encouraging comments! We have addressed your questions here and have updated the paper.\", \"comment\": \"Q1.1: Is the pretrain-finetune reasonable setting? Does pretraining actually help?\\nIn the experiment, we consider a transfer learning setting where we have lots of samples from source domain (pretraining stage), but fewer data from the target domain (finetuning stage). For this transfer learning setting, since there is not much data from the finetuning task, using the pretrained model from a similar task could help to improve the finetuned model\\u2019s accuracy. For example, if we only have 1k examples from 8 classes in CIFAR (8 classes other than bird and frog) to train a model for classifying these 8 classes from scratch, the resulting model can only achieve 49.0% test accuracy. While if we have another 10k training examples from 2 different classes (bird and frog) to pretrain a model and use its CNN layers as our embedding, then finetune the fully connected layers with this fixed embedding on the remaining 8 classes (1k examples), we can achieve 53.5% test accuracy on these 8 classes. So when our finetuning example is limited, pretraining an embedding on similar data actually helps. \\n\\nQ1.2: Is there any connection between influence score and testing accuracy. If we remove some testing data, how that will impact the testing accuracy.\\nThere is a strong connection between influence score and loss function, and loss function is related to testing accuracy. As an example, at the pretraining stage, we train a model with examples from only two classes (bird vs. frog) and use the remaining eight classes in CIFAR for finetuning. So the source data is bird vs. frog and the target data is the other 8 classes. After we removed the top 10% highest influence scores (positive values) examples from pretrain (source data), we can improve the accuracy on target data from 58.15% to 58.36%. As a reference, if we randomly remove 10% of pretraining data, the accuracy will drop to 58.08%. Note that when points with positive influence scores are added to the pretraining set, the model\\u2019s loss on test is expected to be increased. So removing them from the pretraining set will decrease the loss and improve accuracy.\\n\\nQ1.3: Section 4.2 also seems non-standard. Are the exact same bird vs. frog examples being used for both pretraining and finetuning? \\nWe agree that this is not a standard transfer learning setting since the source and target domains are the same. But this section is actually a study on the influence function scores\\u2019 relationship with the task similarity. Here we want to show that if the pretraining (source domain) and finetuning (target domain) are similar, the influence scores\\u2019 magnitude is expected to be larger. As an extreme case, we let the pretraining and finetuning tasks to be exactly the same.\", \"q2\": \"In what situations might we want to examine the influence of pretraining data, and can we design experiments that show those situations? Can we verify those claims using these multi-stage influence functions?\\nOur model can be used in various situations, for instance, we might want to investigate which pretraining data are highly associated with a test sample that is predicted wrong. After we detect these \\u2018wrong\\u2019 pretraining data, we can remove these data and retrain the model. This could potentially improve model performance. 
The examples shown in Table B in the appendix are the pretraining examples with the smallest or largest absolute influence function scores. They are identified by the influence function as the least and most useful sentences. In Fig 3, we associate the misclassified test sample with the pretraining data having the highest influence score. Also, in the reply to Q1.2, we design a new experiment to show that if we remove the 10% of pretraining examples with the highest influence scores, we can improve the model performance.\\n\\nQ3.a: The impact of $\\alpha$ in Eq (12) and how does it interact with the number of fine-tuning steps?\\nIn Eq(12), we add $\\|W-\\bar{W}\\|^2_F$ to the finetuning loss, so that 1) the finetuned model can utilize the pretrained model\\u2019s information; 2) we can build the connection between the pretraining and finetuning tasks (otherwise these two tasks are decoupled, and we could not get the influence score for pretraining data); 3) when the finetuned model converges, the embedding weights are expected to be close to the pretraining result. It is hard to tell whether Eq(12) would reduce the number of fine-tuning steps as it is a non-convex problem. Also, if $\\alpha$ goes to infinity, Eq(12) will be the same as case 1 in Section 3.2.1. \\n\\nQ3.b: What if the Hessian has negative eigenvalues?\\nSince the proof of Thm 1 relies only on a Taylor expansion which holds for any $H$, the influence function formulation does hold even if the Hessian has negative eigenvalues, as long as it's invertible. If $H$ is invertible and it has negative eigenvalues, we can still run the CG on $H^2x=Hb$ since $H^2$ is PD. What we get is $(H^2)^{-1}Hb$, which is the same as $H^{-1}b$. That is why we use $argmin\\ 0.5x^TH^2x-b^THx$ in Section 3.3.\"}",
"{\"title\": \"Thank you for your comments! We have addressed them here and in the revised paper.\", \"comment\": \"Thank you for your constructive review. We will really appreciate it if you can read our response and provide us some feedback. We will be glad to discuss with you on any further concerns.\\n\\nQ1.1: linear correlation between influence score and true loss is not strong\\nIt is almost impossible to get the exact linear correlation because the influence function is based on the first-order conditions (gradients equal to zero) of the loss function, which may not hold in practice. Therefore people usually report their correlation using Pearson\\u2019s r value, e.g., Koh&Liang ICML\\u201917. In Koh&Liang ICML\\u201917, it shows the r value is around 0.8 but their correlation is based on a single model with a single data source, but we consider a much more complex case with two models and two data sources: the relationship between pretraining data and finetuning loss function. So we expect to have a lower r value. In summary, we think 0.6 is reasonable to show a strong correlation between pretraining data\\u2019s influence score and finetuning loss difference.\\n\\nQ1.2: the practical value of calculating influence scores.\\nThe influence function is to measure how the model performs when removing or adding one training example. It can be used in various ML tasks. A simpler use case is one where we have a bad/undesirable model output and we want to trace that back to the training instances that might have caused that. In the introduction and related work sections, we provide several references for the practical use cases of influence function.\", \"q2\": \"can we do larger dataset?\\nIn the appendix, we perform our model on the Elmo model with One-billion-word (OBW) dataset. OBW dataset contains 30 million sentences and 8 million unique words. We give some quantitative results in Table B in the appendix, where we show some test examples and their corresponding largest (in magnitude) influence function score sentences in 1000 randomly selected pretraining sentences in OBW dataset. \\n\\nIt is challenging to get the Pearson\\u2019s r value on the Elmo model trained with OBW dataset to show the relationship between influence scores and true loss change. To get Pearson\\u2019s r value, we need to remove each example in the pretraining dataset at a time (one sentence in One-billion-word dataset) and then train the model (model with 13.6 million parameters for a small Elmo model) from scratch to see the loss difference before/after removing one training sample. The training for Elmo model on OBW is very time consuming--with 3GPU it takes 14 days. While to get the r value, we need to do that for at least a few hundred pretraining examples. Therefore, while influence scores can be calculated on large datasets such as Elmo, we can only show the r value in the small scale dataset. It is our future work to compute the r value for Elmo model+OBW dataset.\\n\\nWe want to emphasize that the complexity of our method is linear to the number of pretraining examples and the number of parameters. To compute the influence score and get the most influential training samples for Elmo model + OBW data in Table B, our computation is very fast--only taking 20 min to compute influence function scores for 1000 randomly selected pretraining examples. 
This also explains why the influence score is important and useful for the ML community: it is usually time-consuming to identify \u2018bad\u2019 training samples by removing one sample at a time and retraining a model from scratch to check the performance, whereas the influence score provides an analytic way to narrow down the candidate set efficiently.\", \"q3\": \"Page 7, last paragraph, \u201cwe replace all inverse Hessians in (11) with identity matrice\u201d=>why?\\nReplacing all inverse Hessians with the identity is not our method, but a baseline we compared with as an ablation study. If we replace all inverse Hessians with an identity matrix, the influence scores become the inner product between the training and test data\u2019s gradients. In Figure 4, this baseline method can only give an r value of 0.17, while for the same setting, our method gets an r value of 0.47 as shown in Fig 2b. This shows that the inverse Hessian is necessary and our method is more accurate for measuring loss change.\", \"q4\": \"In fig 3, what is the relationship between the two MNIST images, and the relationship between the two CIFAR images?\\nIn Fig 3, we pair a misclassified test image in the finetuning task with the pretraining image that has the largest positive influence score with respect to that test image. Intuitively, the identified pretraining image contributes most to the test image\u2019s error. We can indeed see that the identified examples are of low quality, which leads to negative transfer.\"}",
"{\"title\": \"Thank you for your comments! We have more explanations here.\", \"comment\": \"Q1: Why not to directly add a scaled identity matrix to problem (15) to avoid the non-PSD issue?\\nWe modified the text below Eq(15) in the paper. (15) is to get $H^{-1}b$, no matter $H$ has negative eigenvalues or not. To use CG, we can solve either $argmin\\\\ 0.5x^THx-b^Tx$ or $argmin\\\\ 0.5x^TH^2x-b^THx$. For both we need $H$ or $H^2$ to be PD. $H^2$ is always PD as long as $H$ is invertible. If $H^2$ is not ill-conditioned, we solve the second formulation directly without any further modifications. If $H^2$ is ill-conditioned, we add a damping term $\\\\lambda I$ to $H^2$ where $\\\\lambda$ is very small for numerical stability and to avoid ill-condition as explained in the paper. While if we solve $argmin\\\\ \\\\frac{1}{2}x^THx-b^Tx$ using CG and $H$ is not PD, we always need to add a $\\\\beta I$ to make $H+\\\\beta I$ PD and $\\\\beta$ should be larger than the absolute value of $H$\\u2019s smallest negative eigenvalue. When $\\\\beta$ is large, the solution of $argmin\\\\ 0.5x^T(H+\\\\beta I)x-b^Tx$ can be very different from $H^{-1}b$.\", \"q2\": \"how to connect the influence function with negative transfer examples?\\nThe pretraining examples with large positive influences scores are the ones that will increase the loss function value indicating negative transfer. Based on the influence score, we could improve the negative transfer issue. For example, at pretraining stage, we train a model with examples from 2 classes (\\u201cbird\\\" vs. \\u201cfrog\\\") and use the remaining 8 classes in CIFAR for finetuning. So the source data is \\u201cbird\\\" and \\u201cfrog\\\" and the target data is the other 8 classes. After we remove the top 10% highest influence scores examples from pretraining data, we can improve the test accuracy on target data from 58.15% to 58.36%. These 10% highest influence scores training samples are negative transfer examples. As a reference, if we randomly remove 10% of pretraining data, the accuracy will drop to 58.08%.\", \"q3\": \"time complexity issue and training time results.\\nAt the end of Section 3, we discuss the time complexity of computing the inverse Hessian vector product (IHVP). As explained in Section 3.3, we do not compute or store Hessian explicitly but only compute IHVP. All the computation related to inverse Hessian can use IHVP, which makes the computation efficient. \\n\\nIf the pretrained and finetuned model have $p_1$ and $p_2$ parameters, respectively, and we are given $m$ pretraining examples and $n$ finetuning examples. The time complexity for the 2 IHVPs are $O(m*p_1*r)$ and $O(n*p_2*r)$, where $r$ is the number of CG iterations. The total time complexity of computing a multi-stage influence score for all the training data is thus $O((n*p_2+m*p_1)*r)$--linear to number of training samples and model parameters. In practice, the influence function computation is fast. For example, on the CIFAR dataset, the time for computing influence function w.r.t all pretraining data is 230s (with roughly 200 iterations of CG for 2 IHVPs in Eq. (10) and Eq. (11) ). Also in appendix B we run an ELMo model with 13.6 million parameters. We compute the influence score for this large model and get the most influential pretraining examples in OBW data in Table B. 
Our computation is very fast--it only takes 20 minutes to compute influence function scores for 1000 randomly selected pretraining examples.\", \"q4\": \"can the influence function be used in the case when the source data is not available?\\nWhat we need for computing the influence function from the source data is the gradient. So our method can be used even when the source data itself is absent, as long as a black-box unit is given that provides the gradient of each source example to the influence score computation in Eq. (11). Note that in Eq. (11), the terms in the bracket are all gradients/Hessians of the finetuned model. They do not depend on the pretrained model or the pretraining data. \\n\\nAs a concrete example, company A pretrains a model on its own private source data and provides only the pretrained model to company B for a downstream task. If B does not think the pretrained model is good enough for its downstream task, B can compute the terms in the bracket of Eq. (11) to get a vector, which is $\\\\frac{\\\\partial f}{\\\\partial W}$. Then B can send this vector to A, without sharing its downstream task\\u2019s model or data. A can calculate the inner product of this vector with $I_{z,W}$ to compute the influence function score for its private pretraining (source) data and identify each example\\u2019s contribution to the downstream task\\u2019s error. In this scenario, the multi-stage influence function score can be obtained without any exchange of model or data. So if B does not have access to the source data, it can ask A (who has access) to compute the influence scores and debug the embedding for it, by just sending the vector $\\\\frac{\\\\partial f}{\\\\partial W}$ to A.\\n\\nQ5: It is not correct to use \\u2018pretrain\\u2019 or \\u2018finetune\\u2019 before a noun. They should be replaced with \\u2018pretrained\\u2019 and \\u2018finetuned\\u2019.\\nThanks for the suggestion. We have made the changes in the revised version.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a multi-stage influence function for transfer learning to identify the impact of source samples to the performance of the learned target model on the target domain. It considers two cases: fixed pretrained parameters and fine-tuned parameters.\\n\\nWhy not to directly add a scaled identity matrix to problem (15) to avoid the non-PSD issue?\\n\\nHow to use the proposed method to identify source samples that cause negative transfer as discussed in the introduction?\\n\\nEven using the conjugate gradient method to reduce the complexity, the total complexity is still high as the number of parameters in a deep neural network is large. It is better to report the running time to see the efficiency of the proposed method.\\n\\nIn transfer learning, there is a setting that source data are not accessible due to, for example, the purpose of the privacy protection. In this case, can influence function be used?\\n\\nFor presentation, I think it is not correct to use \\u2018pretrain\\u2019 or \\u2018finetune\\u2019 before a noun. They should be replaced with \\u2018pretrained\\u2019 and \\u2018finetuned\\u2019.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This is an analysis paper of pretraining with the tool \\u201cinfluence function\\u201d. First, the authors calculate the influence score for the models with/without pretraining, and then propose some implementation details (i.e., use CG to estimate the inversed Hessian). To calculate the influence function of a model with pretraining, the authors use an approximation f(w)+||w-w*||, where w* is pretrained.\\nThe experiments are conducted on MNIST and CIFAR. \\n\\n1.\\tThe idea of converting a pre-trained model with f(w)+||w-w*|| is interesting. But I do not think the conclusion is very promising and convincing. The authors leverage Pearson correlation to measure the similarity between \\u201ctrue loss difference\\u201d and \\u201cscore value\\u201d. However, i do not think the value $0.62$ is significant. As shown in Figure (2), intuitively, the linear correlation between these two values do not hold. Also, I am not quite sure about the practical value of calculating influence scores.\\n2.\\tThe experiments are conduct on small-scale datasets. I am not sure whether the conclusion holds for larger dataset.\\n3.\\tPage 7, last paragraph, \\u201cwe replace all inverse Hessians in (11) with identity matrice\\u201d=>why?\\n4.\\tIn figure 3, what is the relationship between the two MNIST images, and the relationship between the two CIFAR images?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors derive the influence function of models that are first pre-trained and then fine-tuned. This extends influence functions beyond the standard supervised setting that they have been primarily considered in. To do so, the authors make two methodological contributions: 1) working through the calculus for the pre-training setting and deriving a corresponding efficient algorithm, and 2) adding $L_2$ regularization to approximate the effect of fine-tuning for a limited number of gradient steps.\\n\\nI believe that these are useful technical contributions that will help to broaden the applicability of influence functions beyond the standard supervised setting. For that reason, I recommend a weak accept. I have some questions and reservations about the current paper:\\n\\n1) Does pretraining actually help in the MNIST/CIFAR settings considered? These seem to be non-standard pretraining settings. More generally, can we relate influence to some objective measure that we care about (say test accuracy), for example by showing that removing the top X% of influential pretraining data hurts test accuracy as much as predicted? Minor: section 4.2 also seems non-standard. Are the exact same bird vs. frog examples being used for both pretraining and finetuning?\\n\\n2) In what situations might we want to examine the influence of pretraining data, and can we design experiments that show those situations? For example, perhaps we're wondering if different types of sentences in the one-billion-word dataset might be more or less useful. Can we verify those claims using these multi-stage influence functions? It is otherwise difficult to assess the utility of the qualitative results (e.g., Figure 3 and Appendix C).\\n\\n3) It'd be helpful to get a better understanding of the technical contributions of this paper. Specifically, \\na. What is the impact of $\\\\alpha$ in equation 12 and how does it interact with the number of fine-tuning steps taken?\\nb. If the Hessian has negative eigenvalues, we can still take $H^{-1}b$ by solving CG with $H^2$, but what does this correspond to? Is the influence equation well defined (or the Taylor approximation justified) if $H$ is not positive definite?\"}"
]
} |
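The author responses in the record above describe computing inverse-Hessian-vector products (IHVPs) by running conjugate gradient on $H^2 x = Hb$, so that an indefinite but invertible Hessian can still be handled, with a small damping term added to $H^2$ for numerical stability. A minimal sketch of that idea, assuming SciPy's CG solver and a user-supplied Hessian-vector product; the function names and the toy check are illustrative, not the authors' code:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def ihvp(hvp, b, dim, damping=1e-8):
    """Approximate H^{-1} b by solving (H^2 + damping*I) x = H b with CG.

    hvp(v) must return the Hessian-vector product H v. Since H^2 is
    positive definite whenever H is invertible, CG applies even when H
    has negative eigenvalues; the solution is (H^2)^{-1} H b = H^{-1} b.
    """
    def h2v(v):
        return hvp(hvp(v)) + damping * v

    A = LinearOperator((dim, dim), matvec=h2v, dtype=np.float64)
    x, info = cg(A, hvp(b))
    assert info == 0, "CG did not converge"
    return x

# Toy check with an indefinite but invertible H.
H = np.diag([2.0, -1.0, 0.5])
b = np.ones(3)
x = ihvp(lambda v: H @ v, b, dim=3)
print(np.allclose(x, np.linalg.solve(H, b)))  # True (up to CG tolerance)
```

In the multi-stage setting discussed in the responses, `hvp` would be implemented with reverse-mode autodiff rather than an explicit Hessian, which is what keeps the cost linear in the number of model parameters.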
Hygy01StvH | Impact of the latent space on the ability of GANs to fit the distribution | [
"Thomas Pinetz",
"Daniel Soukup",
"Thomas Pock"
] | The goal of generative models is to model the underlying data distribution of a
sample based dataset. Our intuition is that an accurate model should in principle
also include the sample based dataset as part of its induced probability distribution.
To investigate this, we look at fully trained generative models using the Generative
Adversarial Networks (GAN) framework and analyze the resulting generator
on its ability to memorize the dataset. Further, we show that the size of the initial
latent space is paramount to allow for an accurate reconstruction of the training
data. This gives us a link to compression theory, where Autoencoders (AE) are
used to lower bound the reconstruction capabilities of our generative model. Here,
we observe similar results to the perception-distortion tradeoff (Blau & Michaeli
(2018)). Given a small latent space, the AE produces low quality and the GAN
produces high quality outputs from a perceptual viewpoint. In contrast, the distortion
error is smaller for the AE. By increasing the dimensionality of the latent
space the distortion decreases for both models, but the perceptual quality only
increases for the AE. | [
"Deep Learning",
"Generative Adversarial Networks",
"Compression",
"Perceptual Quality"
] | Reject | https://openreview.net/pdf?id=Hygy01StvH | https://openreview.net/forum?id=Hygy01StvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"NUGd66WjqT",
"SkxN3oeEqB",
"S1ecMH32YS",
"SkxSd5_otS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738171,
1572240299857,
1571763473927,
1571682925189
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2010/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2010/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2010/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers have pointed out several major deficiencies of the paper, which the authors decided not to address.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: The paper explores the influence of the dimensionality of the latent space to the quality of the learned distributions for autoencoders (AE) and GANs (more precisely Wasserstein GANs). In particular, the paper looks at the ability of the learned AE or GAN to reconstruct the training images, the visual quality of the images as the dimension of the latent space increases, and the ability to reconstruct images not in the training set (structured in some ways).\", \"evaluation\": \"While the general flavor of question the paper studies is undoubtedly interesting, I found the paper severely lacking both in terms of the quality of writing (in particular, I was at confused about the goal of various sections/experiments), as well as the significance of the results the authors observe (and how they are reported -- I found them to be oversold).\", \"regarding_the_quality_of_results\": \"* The paper primarily talks about the ability of AEs and GANs to *reconstruct* images, either in the training set, or in the test set, or in some different dataset altogether (e.g. shifted images, different image dataset). This is a problematic thing on multiple levels: first, the goal of a GAN or AE is to fit a distribution -- merely having a data point in its domain says nothing about the *probability* of that point; second, the way these \\\"spans\\\" are tested is via running a gradient descent search for a pre-image for the data point. The authors never comment or explore whether the problem may *not* be that these data points are not in the image of the GAN, but rather that the optimization procedure doesn't succeed. (And indeed, increasing the dimensionality of the latent space may act as \\\"overparametrization\\\" for this gradient descent procedure, making it more likely to succeed.) \\n\\nFinally, there are some fairly arbitrary choices in the entire experimental setup: why WGANs vs another architecture -- are the GAN results sensitive to architecture choices? why AEs and not VAEs (and with what variational posterior) -- how sensitive are the observations here to choosing the most vanilla variant of autoencoders? These are all questions that invariably linger after reading the paper.\", \"regarding_the_quality_of_writing\": \"* There are various sloppy sentences in crucial parts of the paper. I will only list a few: \\n-- \\\"Once a suitable AE for a dataset is found, the decoder part of is used as the generative model of choice.\\\" -- this seems to suggest a semi-synthetic setup where an AE is trained to use as a generator of a data set for which a GAN is fit. I never saw this setup in Section 5 -- although this would be a good way to test \\\"relative\\\" representational power of GANs and AEs. \\n-- \\\"In principle, getting from an AE to a GAN is just a rearrangement of the NNs\\\" in Section 5.1 -- I wasn't sure what this is supposed to mean, and this is a critical part of that section. 
\\n-- \\\"The AE network acts as a lower bound for the GAN algorithm, therefore validating our intuition that the AE complexity lower-bounds the GAN\\\" in Section 5.1 -- also very sloppy, and I'm not sure what it means -- I guess the authors mean the reconstruction performance of AE is a lower bound on the GAN reconstruction. Not sure what this has to do with \\\"complexity\\\". \\n\\n* Various sections are meandering, and I wasn't sure what the goal is. Just a few examples: section 3 spends a lot of time talking about known theoretical results wrt. to invertibility of random-like neural nets. It wasn't clear to me how this relates to the results in Section 5, especially since the authors never leverage/talk about these theory results again. (Instead, they study empirical invertibility via gradient-descent based procedures.) Similarly, interpolating by polynomials is talked about in (2), seemingly without any point.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The work performs a systematic empirical study of how the latent space design in a GAN impacts the generated distribution. The paper is well-written and was easy to read. Also, I find this to be an interesting and promising direction of research.\\n\\nThe convergence proof in Goodfellow (2014) assumes that the data distribution has a density, and essentially states that the JS-divergence is zero if and only if the two distributions are the same. In practice, the data distribution is discrete, while the latent distribution has a probability density function. It is not possible to transform a density into a discrete distribution by a continuous map and neural networks are always continuous by construction. In theory, as training progresses, more and more latent mass will be pushed on the discrete samples and no minimizer exists (unless the function space of generator is constrained or the real distribution is smoothed out a bit). \\n\\nSince it is not possible to assess whether the GAN training has converged due the nonconvexity of the energy and non-existence of a global optimizer, the empirically observed results might be very specific to the chosen optimization procedure, stopping criterion, dataset, hyper-parameters, initialization, network architectures, etc etc. It is a challenge to study the choice of latent space in a somewhat \\\"isolated\\\" way. These issues should be discussed in the paper and the reader should be made aware of such problems.\\n\\nAnother point, could it be, that by increasing the dimension of the latent space, one makes it easier for the nonconvex optimization in (5) to converge to \\\"unlikely but realistic looking samples\\\"? I think this is not too far-fetched, as increasing the dimension of an optimization problem often makes local optimization less likely to get stuck at local optima. Also it might not be the best idea to optimize (5) with Adam since it is not a stochastic optimization problem and there are provably convergent solvers out there for this problem class. \\n\\nSince it is possible to evaluate the likelihood of the optimized reconstructions that are nearby the data points, one could check whether this is indeed the case. While constrained not to be too unlikely, I wonder whether the likelihood increases or decreases with the dimensionality of the latent space and this would make an interesting plot. \\n\\nUnfortunately, I did not understand the connections to auto-encoders, as they might optimize a fundamentally different criterion than GANs. In particular \\\"In principle, getting from an AE to a GAN is just a rearrangement of the NNs. \\\" is unclear to me. \\n\\nAlso, what is meant by lower-bound? Is the claim that the reconstruction error in an auto encoder will be lower, than if one optimizes the latent code in a GAN to reconstruct the input? Figure 3 seems to support this hypothesis, but I don't have an intuition why this should be true and have some doubts. A mathematical proof seems out of reach. \\n\\nI have trouble to understand the \\\"intuition that the AE complexity lower-bounds the GAN complexity.\\\" Before reading this paper, my intuition was the opposite: If the generator distribution covers the real distribution, the reconstruction error for GAN is zero. 
Intuitively, it seems a much easier task to somehow cover a distribution than to minimize an average reconstruction error. \\n\\nThe connection of WGANs to the L2 reconstruction loss in the auto-encoder is very hand-wavy. It is still an open question whether WGANs actually have anything to do with the Wasserstein distance. People working in optimal transport doubt this, due to the huge amount of approximation involved. \\n\\nAt this point I'm reluctant to recommend acceptance, as the paper tries to connect things which, for me, are quite disconnected, and the evaluations of reconstruction error, etc., might depend in intricate ways on the nonconvex optimization procedures.\\n\\nMinor suggestions, typos, etc. (no influence on my rating):\\n\\n- What is the \\\"generated manifold\\\" that is talked about in the introduction, contributions and throughout the paper? To me, it is not directly clear that the support of the transformed distribution will be a manifold (especially if G is non-injective). Anyway, the manifold structure is nowhere exploited in the paper, so I suggest calling it \\\"transformed latent distribution\\\".\\n\\n- Had to pause a little bit to understand Eq. 2 (simple polynomial interpolation). It is unnecessary to show the explicit form, as I'm sure no one doubts the existence of a smooth curve interpolating a finite set of points in R^d. \\n\\n- Equations should always include punctuation marks. \\n\\n- Eq. 5: dim --> \\\\text{dim} and s.t. --> \\\\text{s. t.}\\n\\n- Fig 3b: the red curve is missing or hidden behind another curve.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Impact of the latent space on the ability of GANs to fit the distribution\\n\\nThis paper purports to study the behavior of latent variable generative models by examining how the dimensionality of the latent space affects the ability of said models to reconstruct samples from the dataset. The authors perform experiments with deterministic autoencoders and WGAN-GP on CIFAR, CelebA, and random noise, and measure MSE and FID as a function of the dimensionality of z.\", \"my_take\": \"This paper does not offer any especially intriguing insights, and many of the conclusions the authors draw are, in my opinion, not supported by their experiments. The paper is confusingly written and hard to follow\\u2014throughout my read I struggled to determine what the authors meant, and it was not clear to me what this paper is supposed to contribute. The potential impact of this paper is very low, and I argue strongly in favor of rejection.\", \"notes\": \"\", \"my_most_critical_complaint_is_the_central_experiment_set_of_the_paper\": \"measuring MSE and FID as a function of Dim-Z for two models. First of all, the authors assume that the reduction in MSE as a function of dim Z is indicative of increased memorization in the GAN models. I disagree that this is the case; since the GAN-based reconstruction is done via optimization it is unsurprising that increasing dim Z increases the reconstruction quality, as you are literally giving the model more degrees of freedom with which to fit the individual samples. This is glaringly evident in Figure 8, where increasing dim Z renders the model better able to reconstruct a sample of pure noise, which is almost certainly not in its normal output distribution, (or if it is, is in there with staggeringly low probability). The fact that the higher dim-z models are better able to reconstruct the noise supports the notion that it is merely the number of degrees of freedom that matter in these experiments, rather than what the model actually learns.\\n\\nSecond, it is important to note that FID can be easily gamed by memorization, and for an autoencoder (which has direct access to samples) with an increasingly large bottleneck it is unsurprising that increasing dim-Z tends to decrease the FID, and equally unsurprising that increasing the dim-Z for the GAN does not tend to improve results, since this does not really allow the model increased memorization capacity (not to mention the relationship between performance and dim-Z has been explored before in GAN papers).\\n\\nThird, the organization of the experimental section makes it very difficult to infer what the authors are trying to conclude from these experiments. The noise experiment is presented, but no insights or conclusions are drawn, other than (a) noting that the model has a harder time reconstructing the noise than training samples and (b) that lower dim models have a harder time reconstructing the noise, both of which are just restatements of the information presented in the figure rather than an actual insight or conclusion.\\n\\n-I\\u2019m not really sure what the experiment in section 3 is supposed to show. This experiment is poorly described and lacking details. First of all, what is the loss function used there? 
Is this the output of the discriminator or the MSE between the output and a target sample? How is z* found and what does it represent\u2014is it just a randomly sampled latent, or is it the latent that corresponds to the z-value which minimizes some MSE loss for a target sample? If it\u2019s the latter, why is this notation not introduced until section 4? If it\u2019s a latent, why are you calling it a data point? Why are there no axes and no scales on these plots? How is it clear that there is an optimization path from z0 to z*; is that supposed to be inferred from z0 having a higher value than z* or appearing to be directly uphill from z*, because it\u2019s not clear to me that that is the case in Figure 2a. In general I did not find this experiment to support the conclusions the authors draw. \\n\\n-Figure 4: It is important to note that FID can be trivially gamed by memorizing the dataset, and an autoencoder is much more well-suited to memorizing the dataset as it has direct access to samples (whereas a GAN must get them through the filter of the discriminator). Authors should test interpolation or hold-out likelihood for the autoencoder; these models are not directly comparable in this manner.\\n\\n-The presentation of this paper is, in general, all over the place. The authors should focus on writing such that each point follows the next, building progressively towards their results and insights, and making it easy for a reader to follow their train of thought.\\n\\n\u201cIn this work, we show that by reducing the problem to a compression task, we can give a lower bound on the required capacity and latent space dimensionality of the generator network for the distribution estimation task.\u201d At what point is this lower bound (either in terms of model capacity or latent space dimensionality) specified in the paper? Is Figure 3 supposed to be this lower bound, because to me it only indicates that the autoencoder tends to have a lower MSE, not that it conclusively lower bounds the memorization capacity of the GAN. Wouldn't a method like GLO, which directly optimizes for memorization, be a better lower bound for this, anyhow?\\n\\n\u201cWe rely on the assumption, that less capacity is needed to reconstruct the training set, than to reconstruct the entire distribution\u201d What does this phrase mean? Are the authors referring to the entire distribution of natural images, of which the training set is assumed to be a subset? Or do they mean the output distribution of the generator? This was not clear to me.\", \"minor\": \"-\u201c style transfer by Karras et al. (2018),\u201d, and \u201canomaly detection (Shocher 2018).\u201d StyleGAN is not a style transfer paper, and InGAN is not about anomaly detection. Please do not incorrectly summarize papers.\\n\\n-\u201cTrained GAN newtworks\u201d While amusing, this is a typo. Please thoroughly read your paper and correct all typos and grammatical mistakes, like \u201ccombiniation.\u201d\\n\\n-\u201c\u2026that an accurate reconstruction of the generator manifold is possible works using first order methods\u201d The word \u201cworks\u201d seems to be out of place here. Again, please thoroughly proofread your paper.\\n\\n-The legend in Figure 2 has a white background, making the white x corresponding to z0 invisible. Please fix this, and add appropriate axes to this plot.\\n\\n-Figures 7 and 8 may in fact have error bars, but they are not described (are they 1 std or another interval?)
or referenced, and in Figure 8 (if these are error bars) they are nearly invisible.\"}"
]
} |
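The reconstruction experiments that Reviews #1 and #2 above criticize reduce to a non-convex search for a latent pre-image of a target image (Eq. 5 of the paper, reportedly optimized with Adam). A minimal PyTorch-style sketch of that procedure as the reviews describe it; the function and parameter names are illustrative, not the authors' code:

```python
import torch

def reconstruct(G, x, dim_z, steps=1000, lr=0.1):
    """Search z* ~= argmin_z ||G(z) - x||^2 by gradient descent.

    The reviews raise two caveats about reading the resulting MSE as
    "memorization": the search may simply fail to converge, and a larger
    dim_z gives the optimizer more degrees of freedom, so a lower MSE
    does not by itself mean x is likely under the generator distribution.
    """
    z = torch.randn(1, dim_z, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    loss = torch.tensor(float("inf"))
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach(), float(loss)
```

Review #2's suggestion to evaluate the latent-space likelihood of the recovered z* would be a one-line addition here, since the Gaussian prior density of z* is available in closed form.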
Hye1RJHKwB | Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators | [
"Daniel Stoller",
"Sebastian Ewert",
"Simon Dixon"
] | Generative adversarial networks (GANs) have shown great success in applications such as image generation and inpainting.
However, they typically require large datasets, which are often not available, especially in the context of prediction tasks such as image segmentation that require labels. Therefore, methods such as the CycleGAN use more easily available unlabelled data, but do not offer a way to leverage additional labelled data for improved performance. To address this shortcoming, we show how to factorise the joint data distribution into a set of lower-dimensional distributions along with their dependencies. This allows splitting the discriminator in a GAN into multiple "sub-discriminators" that can be independently trained from incomplete observations. Their outputs can be combined to estimate the density ratio between the joint real and the generator distribution, which enables training generators as in the original GAN framework. We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available. For the Cityscapes segmentation task in particular, our method also improves accuracy by an absolute 14.9% over CycleGAN while using only 25 additional paired examples. | [
"Adversarial Learning",
"Semi-supervised Learning",
"Image generation",
"Image segmentation",
"Missing Data"
] | Accept (Poster) | https://openreview.net/pdf?id=Hye1RJHKwB | https://openreview.net/forum?id=Hye1RJHKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"l6hr25do6z",
"rklgOOV9jr",
"r1lnE_4qjS",
"HkxJkw45sS",
"Ske4xC31ir",
"SygA0eq1qr",
"rkxZuQfJcH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738141,
1573697639982,
1573697588171,
1573697239173,
1573010923734,
1571950805743,
1571918697112
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2009/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2009/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2009/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2009/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2009/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2009/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"All three reviewers appreciate the new method (FactorGAN) for training generative networks from incomplete observations. At the same time, the quality of the experimental results can still be improved. On balance, the paper will make a good poster.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for your feedback on this paper. We hope to clarify some details with the following and thus respond to your questions.\\n\\n\\\"The most serious limitation of the paper is that the technique is not compared with any other semi-supervised methods, such as the Augmented CycleGAN. Because of this it is not clear how the technique compares with SOTA, and so the significance of the paper is not clear.\\\"\\n\\nOur main contribution is a theoretical foundation for GAN training applicable to generation in a missing data scenario as well as general prediction tasks, and not only limited to image segmentation. While a comparison to SOTA methods for certain sub-tasks (such as image segmentation) could indeed be interesting, our aim was not to claim SOTA for any of these sub-tasks, but to demonstrate that our technique can be applied in a range of possible application scenarios and deliver results in line with our theory (e.g. that performance should increase with more paired samples as the p-dependency discriminator can better estimate its density ratio). \\n\\nWe agree that applying our technique to more tasks and introducing more complex network architectures to reach better performance is certainly worthwhile - given the range of application scenarios we consider and the space constraints for the paper, however, we feel that we have to point to future work in this context.\\n\\nHowever, to account for your concerns given the space constraints, we ran additional experiments using the CycleGAN on the image segmentation task (as mentioned also in the response to AnonReviewer4). We used the same network architectures and training setup as the GAN and FactorGAN (so that the standard GAN loss is used alongside spetral normalization). This ensures a fair comparison to GAN and FactorGAN. We included the results in an updated version of the paper, so please refer to the paper for more details. In short, CycleGAN is outperformed by FactorGAN in this setting, even when FactorGAN is only given 25 paired samples, and so FactorGAN is able to model the input-output dependencies more accurately.\\n\\nWe also trained the Augmented CycleGAN by minimally adapting their code [1] to our Cityscapes setting. The only changes were increasing the input resolution from 64x64 to 128x128, and adding one more layer in the discriminator networks due to the higher input resolution. However, the model did not converge, so we are unable to add these results as another baseline.\\nComparison to commonly used missing data imputation methods is also difficult due to the higher number of variables to impute (3 color channels * 128 pixels * 128 pixels per image). We attempted to run missForest [2] but it was too memory-intensive for this reason.\\n\\nWe are currently experimenting with reimplementing the Augmented CycleGAN from scratch, and will update you if we have additional results to share.\\n\\n\\\"The title is the same as an the arxiv paper title, and so the double-blind requirement is trivially violated.\\\"\", \"please_note_that_we_are_fully_compliant_with_the_iclr_2020_submission_requirements\": \"We fully anonymised both paper and code, and submission on arXiv is explicitly allowed. 
The call for papers states: \\\"However, papers that cite previous related work by the authors and papers that have appeared on non-peered reviewed websites (like arXiv) or that have been presented at workshops (i.e., venues that do not have a publication proceedings) do not violate the policy. The policy is enforced during the whole reviewing process period. Submission of the paper to archival repositories such as arXiv are allowed.\\\"\\n\\n[1] Augmented CycleGAN official codebase. https://github.com/aalmah/augmented_cyclegan\\n[2] missForest as implemented in missingPy (https://pypi.org/project/missingpy/)\"}",
"{\"title\": \"Author response\", \"comment\": \"We would like to thank you for your thoughtful review and are delighted about your positive assessment of the paper.\\n\\n\\\"For the paired MNIST experiment I found it hard to assess the qualitative results visually and am always concerned about the ad-hoc nature of Inception Distances - I find it difficult to attribute weight to them quantitatively since they are usually being used to assess things where they might suffer from a common error (e.g. they are both based on NNs).\\\"\\n\\nWe agree that the evaluation metric is not necessarily optimal. Since GAN evaluation is still an unsolved problem however, we believe providing Inception distances along with visual examples is a reasonable choice given the lack of clearly superior alternatives.\\n\\n\\\"I appreciated having error bars on some of the plots to help assess significance - would it not be possible to put error bars on all plots?\\\"\\n\\nWe included error bars wherever possible, as we agree they are quite helpful to assess significance. Unfortunately, we are not able to add them to the other plots due to the high computational requirements of training each model in each configuration (multiple days of training on a single GPU), combined with the considerable number of different configurations.\\n\\n\\\"Also, I'm not fully on board with the dependency metric in (5) but then the authors also point out the same concerns. \\\"\\n\\nWe agree that the metric is not without flaws. However, we believe that including the metric provides useful information and thus decided to keep it in the paper.\\n\\nFinally, we agree that training stability is an important aspect in our setting, since we rely on the discriminators being good estimators of the respective density ratios.\\nWhile we did not observe them in the experiments we included in the paper, we did notice that regularisation of the discriminators (here in the form of spectral normalisation) is important to ensure stability. Without such regularisation, the p-dependency discriminator can become very confident in its predictions, leading to large gradients to the generator that can prevent successful training. While we can not add further experiments easily due to the paper's space constraints, we included a short summary of this issue with a focus on how it could be resolved by extending our theoretical framework to inherently more stable GAN formulations into the conclusion section of the paper.\\n\\nAbout your note on independent marginals, it is correct that the model in this setting is more constrained than the general variant we propose. However there are some use-cases, such as independent component analysis, where an input has to be separated into components that do not exhibit dependencies between each other. This setting would be tackled in our framework by feeding the input to the generator, and viewing each output component as its own marginal, so that the q-dependency discriminator will ensure that the marginal outputs are independent.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thanks for your generally positive review and your useful feedback. We would like to respond to the questions raised in the following.\\n\\n\\\"It is not clear to me to what extent does the proposed model outperform the regular CycleGAN on a large amount of paired training samples due to architectural changes (including spectral normalization).\\\"\\n\\nWe ran additional experiments for the CycleGAN on the image segmentation task, using the same network architectures and training setup as the GAN and FactorGAN (so that the standard GAN loss is used alongside spectral normalization). We included the results in an updated version of the paper, so please refer to the paper for more details. In short, CycleGAN is outperformed by FactorGAN in this setting, even when FactorGAN is only given 25 paired samples, and so FactorGAN is able to model the input-output dependencies more accurately. Do note however that CycleGAN treats all samples as unpaired and instead relies on its cycle consistency assumption to model the input-output dependencies.\\n\\n\\\"Also, it would be nice if the comparison was carried out with a newer, possibly SotA models for unpaired image-to-image translation (MUNIT, FUNIT, BicycleGAN).\\\"\\n\\nWe agree that further scaling our proposed factorisation technique to more recently proposed models would be interesting. However, we believe that our main contribution is a theoretical foundation for both generation in the presence of missing data as well as general prediction tasks. It is not limited to image segmentation, and not based on a particular network architecture for the generator and discriminators, shown by the use of different networks in the paper. Therefore, we believe our experiments sufficiently support our main contribution, as they demonstrate the validity of the factorisation approach in different scenarios.\\n\\n\\\"Moreover, there are some simple modifications that can be made to a standard CycleGAN/Pix2pix training pipeline that would facilitate the small number of annotations (for example, see \\\"Learning image-to-image translation using paired and unpaired training samples\\\").\\\"\\n\\nWe agree that methods such as the CycleGAN can be adapted to the same problem setting. However, many of these simple adaptations (Augmented CycleGAN, the method described in the 'Learning image-to-image translation' paper) involve adding more loss terms to the objective in an ad-hoc manner which makes it difficult to characterise optimal solutions of the overall optimisation objective. It also results in more hyper-parameters required for balancing the different loss terms. Additionally, the tasks for the discriminators can overlap \\u2013 for example in the 'Learning image-to-image translation' paper, where one discriminator models the marginal generator output while another the conditional generator output. In contrast, our factorisation elegantly partitions the joint modeling task and assigns it to multiple discriminators without functional overlaps. Furthermore, as we show in the paper, we can keep the standard GAN loss where equilibrium is reached when the generator and data distribution are the same.\\nTo add to this, the paper you mentioned is not only restricted to deterministic generators, but also uses a cycle consistency loss that relies on the assumption that the mapping between the domains is deterministic and bijective. 
Since this is not the case for many problems (including the Cityscapes segmentation task), the perfect reconstruction encouraged by the cycle consistency loss is not possible. This has detrimental effects on the resulting model, as shown by the CycleGAN, which learns to embed extra information in its outputs to circumvent the information loss that would otherwise make perfect reconstruction impossible when mapping from one domain to the other. [1]\\nRegardless, we included the mentioned paper in the related work section.\\n\\n[1] \\\"CycleGAN, a Master of Steganography\\\", Casey Chu, Presentation at the Machine Deception Session, NeurIPS 2017\"}",
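For readers unfamiliar with the cycle-consistency assumption debated above: CycleGAN adds a reconstruction penalty that presumes the cross-domain mapping is close to bijective. A schematic version follows, with G: X to Y and F: Y to X as hypothetical mapping networks.

```python
import torch

def cycle_consistency_loss(G, F, x, y):
    # Penalizes F(G(x)) != x and G(F(y)) != y. When the true mapping is
    # many-to-one (e.g. photo -> segmentation map), perfect
    # reconstruction is impossible, which is the failure mode the
    # authors point to above.
    loss_x = torch.mean(torch.abs(F(G(x)) - x))
    loss_y = torch.mean(torch.abs(G(F(y)) - y))
    return loss_x + loss_y
```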
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper is tackling the problem of training generative adversarial networks with incomplete data points. The problem appears to be important for semi-supervised training of image to image translation models, where we may have a lot of observations in both domains, but a little annotated correspondences between the domains.\\n\\nThe solution proposed by the authors involves an observation that discriminator in GANs is estimating the density ratio between real and fake distributions. This ratio can then be decomposed into a product of marginal density ratios, with two additional multipliers, corresponding to density ratios between a joint real/fake distribution and a product of its marginals. The authors then use discriminators to approximate all the ratios, which allows them to facilitate semi-supervised training.\\n\\nMy decision is \\\"weak accept\\\".\\n\\nIt is not clear to me to what extent does the proposed model outperform the regular CycleGAN on a large amount of paired training samples due to architectural changes (including spectral normalization).\\n\\nAlso, it would be nice if the comparison was carried out with a newer, possibly SotA models for unpaired image-to-image translation (MUNIT, FUNIT, BicycleGAN).\\n\\nMoreover, there are some simple modifications that can be made to a standard CycleGAN/Pix2pix training pipeline that would facilitate the small number of annotations (for example, see \\\"Learning image-to-image translation using paired and unpaired training samples\\\").\\n\\nIt is hard to evaluate the comparative performance of the method without the comparisons mentioned above.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"I found this paper very easy and clear to follow - the authors present, what I believe to be an elegant, approach to training a GAN in the presence of missing data or where many marginal samples might be available but very few complete (e.g. paired) samples. The approach proceeds by identifying that the joint distributions (true and approximate) can be factored so as to yield a number of different density ratios which can then be estimated by specific discriminators; in particular, these include the appropriate marginal density ratios and then corresponding overall correction factors. As a caveat to the review I should point out that while I am familiar with GANs, they are not my main area of expertise so this should be taken into consideration - apologies if there is literature I have missed.\", \"experiments\": \"The authors provide a number of illustrative experiments that demonstrate the efficacy of the approach across a number of tasks. There are many differing GAN models but due to the nature of the problem I don't have a big issue with the majority of the comparisons being against a standard GAN since the tasks are suitably designed. For the paired MNIST experiment I found it hard to assess the qualitative results visually and am always concerned about the ad-hoc nature of Inception Distances - I find it difficult to attribute weight to them quantitatively since they are usually being used to assess things where they might suffer from a common error (e.g. they are both based on NNs). Also, I'm not fully on board with the dependency metric in (5) but then the authors also point out the same concerns. The other experiments I found more convincing.\\n\\nI appreciated having error bars on some of the plots to help assess significance - would it not be possible to put error bars on all plots?\\n\\nI found the additional extensions presented in the appendix to be interesting ideas as well and would be interested to see how the approach works with other GAN objectives as mentioned for future work.\\n\\nI am mostly very positive about this work - my main concern is really common to most GANs - all the analysis relies on the premise that the discriminators can be setup as good estimators for the density ratios. We know that this is not always the case since everything comes from samples and if the capacities of each of the discriminators are not set appropriately then I would expect problems to occur - has this been explored by the authors? It would be no detriment to the work to include failure examples where the authors purposefully make use of inappropriate architectures for some of the discriminators to check for this? For example, there will be large imbalances in the number of training samples used for the different discriminators - how does this affect stability?\", \"other_notes\": [\"Whilst I understand the point about independent marginals in 2.4 I'm not sure I see the motivation as clearly since it seems that the model is much more useful when there is dependent information but maybe there's a use-case I'm not thinking of?\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors present FactorGANs, which handle missing data scenarios by constructing conditional marginal estimates from ratios of joint and marginal distributions, estimated with GANs. FactorGANs are applied to the problem of semi-supervised (paired+unpaired) translation and demonstrate good performance.\", \"strengths\": \"-Nice formulation, which I believe is novel. Well written, good initial results.\", \"limitations\": \"-The most serious limitation of the paper is that the technique is not compared with any other semi-supervised methods, such as the Augmented CycleGAN. Because of this it is not clear how the technique compares with SOTA, and so the significance of the paper is not clear.\\n-The approach scales linearly with the number of marginals, which may limit its applicability to more general imputation tasks.\\n-The title is the same as an the arxiv paper title, and so the double-blind requirement is trivially violated.\", \"overall\": \"A nice formulation, but weak experimental investigations (no comparisons to SOTA semi-supervised translation) make the significance of the paper unclear. This makes it a borderline paper. I strongly encourage the authors to update their experiments accordingly.\", \"post_response\": \"Thank you to the authors for the detailed response and additional experimentation. I have updated my rating. It is a nice formulation, and the experimental validation of the technique has been strengthened. The additional experiments (i.e. comparing to the augmented cyclegan) that the authors are following through on will further improve the paper, making it a clear accept.\"}"
]
} |
SkeAaJrKDS | Combining Q-Learning and Search with Amortized Value Estimates | [
"Jessica B. Hamrick",
"Victor Bapst",
"Alvaro Sanchez-Gonzalez",
"Tobias Pfaff",
"Theophane Weber",
"Lars Buesing",
"Peter W. Battaglia"
] | We introduce "Search with Amortized Value Estimates" (SAVE), an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search (MCTS). In SAVE, a learned prior over state-action values is used to guide MCTS, which estimates an improved set of state-action values. The new Q-estimates are then used in combination with real experience to update the prior. This effectively amortizes the value computation performed by MCTS, resulting in a cooperative relationship between model-free learning and model-based search. SAVE can be implemented on top of any Q-learning agent with access to a model, which we demonstrate by incorporating it into agents that perform challenging physical reasoning tasks and Atari. SAVE consistently achieves higher rewards with fewer training steps, and---in contrast to typical model-based search approaches---yields strong performance with very small search budgets. By combining real experience with information computed during search, SAVE demonstrates that it is possible to improve on both the performance of model-free learning and the computational cost of planning. | [
"model-based RL",
"Q-learning",
"MCTS",
"search"
] | Accept (Poster) | https://openreview.net/pdf?id=SkeAaJrKDS | https://openreview.net/forum?id=SkeAaJrKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"tNfWVsRN3i",
"SJgkxPunoB",
"B1eSGNwnjB",
"rkxz7yRciH",
"rJgc-yRqir",
"SkxOuncqjH",
"S1eRClLYjH",
"HJx6w1CWsH",
"BygniAabiH",
"rJlddRTbiS",
"ByxdSCpWjS",
"H1xECZ5_qS",
"Hkxs3EFlcr",
"BklMPEJx5H"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738110,
1573844711510,
1573839884911,
1573736217790,
1573736193968,
1573723247972,
1573638358052,
1573146469066,
1573146276450,
1573146223857,
1573146175576,
1572540876068,
1572013235494,
1571972185905
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2008/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2008/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2008/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2008/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2008/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2008/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper2008/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2008/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2008/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2008/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2008/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2008/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2008/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes Search with Amortized Value Estimates (SAVE) that combines Q-learning and MCTS. SAVE uses the estimated Q-values obtained by MCTS at the root node to update the value network, and uses the learned value function to guide MCTS.\\n\\nThe rebuttal addressed the reviewers\\u2019 concerns, and they are now all positive about the paper. I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"Thank you for your reply. We agree it is always possible to run more experiments to further tease apart the details of our findings, and we hope that future work will build upon this paper to do so.\\n\\nOur experiments were specifically designed to test the referenced hypotheses; specifically: \\n\\nIn the comparison to PUCT, our hypothesis is that count-based priors should fail in regimes with small search budgets, large branching factors, and many bad actions, which is exactly what we find in Tightrope (which was precisely designed to have high branching factors with many bad actions). We also find these policies will spend most of their time re-expanding terminal actions, indicating that the prior has collapsed. We are happy to add this additional result to the paper.\\n\\nIn the comparison to SAVE without AL, we hypothesized that actions which the underlying Q-function thinks are good are not actually executed (because search ends up avoiding them), thus leading to a poorly approximated Q-function. Our experiments confirm this, because they show that when SAVE w/o AL does not have access to search, its performance is extremely poor: in other words, the Q-function is indeed very bad. If it would help, we could provide some additional statistics such as the proportion of time the search causes the agent to take a different action, or how much the Q-values change as a result of search.\\n\\nWe agree that our response in (c) is a post-hoc justification and that it warrants further investigation. However, we also see this as outside the scope of the present paper, which is to investigate the effect of different choices about how to use the knowledge gained from search (and not to explain why model-based methods can perform better than model-free).\"}",
"{\"title\": \"After Rebuttal\", \"comment\": \"Thanks for the response. I don't think the rebuttal address all of my questions. My score will remain the same.\\n\\nTo be more precise about the 'hypotheses' comment. I don't think a final performance score is convincing enough. For example, after hypothesizing the failure modes of the count based prior, is it possible to show that it is indeed happening in the experiments? What is the action 'A' that is recommend by Q-learning but modified by planning, and thus not have been updated? I would also argue that the response to (c) is again making hypotheses rather than evidence.\"}",
"{\"title\": \"Updated version (2/2)\", \"comment\": [\"Text and figures:\", \"R2, R4: We have clarified in the text what the \\u201cQ-Learning\\u201d agent is, and what it means for it to have a \\u201ctest budget\\u201d in Figure 3.\", \"R2, R3: We have clarified the difference between past work and our contributions, and provided further justification for SAVE\\u2019s approach.\", \"R2: We have added a comment in the main text about including the temperature parameter in the softmax function.\", \"R2: We have clarified how the MDP works in Tightrope.\", \"R2: We have clarified that SAVE and PUCT may both perform well with larger search budgets, but that our emphasis is on the regime with small search budgets.\", \"R2: We have added discussion about the performance of PUCT in the Construction tasks.\", \"R4: We have made the explanation around Eq. 5 flow more naturally and used more precise language around Eq. 6.\", \"R4: We have updated the figures to use a colorblind-friendly palette, and we have updated the error bars in the figures to have caps, to make them easier to parse.\", \"R2, R3, R4: We have updated the text based on all of the additional smaller comments as well.\"]}",
"{\"title\": \"Updated version (1/2)\", \"comment\": \"Dear reviewers,\\n\\nWe have now made the additional changes as promised, which we feel have improved the paper\\u2014thank you for the suggestions! We hope we have been able to address all of your concerns, but welcome additional feedback if you feel there is more we can do to strengthen the paper.\\n\\nWe note that while performing the additional experiments in tabular Tightrope (in response to R4), we performed a new hyperparameter scan and found that in the dense setting of Tightrope a smaller setting of the UCT constant (i.e., making the search greedier) increased the performance of the PUCT baseline. However, the overall pattern of results remains the same in the dense setting, and the quantitative results in the sparse reward setting (which is more representative of the rest of environments) stayed exactly the same: PUCT overall performs less well in cases with sparser rewards and smaller search budgets. We have updated Figure 2 with these results. We find it interesting that in all our experiments SAVE seems to be relatively robust to the setting of c, while other MCTS methods like PUCT are much more sensitive to this parameter.\", \"experiments\": [\"R2, R3, R4: We have included experiments with tabular Q-learning in Figure 2. We find that tabular SAVE outperforms tabular Q-learning in all of our Tightrope experiments. Tabular PUCT can outperform Q-learning when given a higher search budget, though tends to underperform Q-learning with lower search budgets. We have additionally added some discussion of this to the main text.\", \"R2: We have started experiments on the Covering task using several different exploration types: epsilon-greedy over estimated Q-values, categorical sampling from the softmax of estimated Q-values, categorical sampling from the normalized visit counts, and UCB. We find that using epsilon-greedy (which is what we were using previously) works the best out of these exploration strategies by a substantial margin. We speculate that this may be because it is important for the Q-function to be well approximated across all actions, so that it is useful during MCTS backups. However, UCB and categorical methods do not uniformly sample the action space, meaning that some actions are very unlikely to be ever learned from. The amortization loss does not help either, as these actions will not be explored during search either. The error in the Q-values for unexplored actions grows over time (due to catastrophic forgetting), leading to a poorly approximated Q-function that is unreliable. In contrast, epsilon-greedy consistently spends a little bit of time exploring these actions, preventing their values from becoming too inaccurate. We hypothesize this might be less of a problem if we were to use a separate state-value function for bootstrapping (as is done by AlphaZero), which we plan to explore in future work. We have added the current results of these experiments, and this discussion, to the appendix (see Figure C.3 and Section C.4). 
We will additionally update the figure with the final results once training is complete.\", \"R4: We experimented with using one-step Q-learning to learn an action-value function in the tabular PUCT agent, rather than using Monte Carlo returns to learn a state-value function, and find that these two approaches result in similar levels of performance.\", \"R4: We experimented with an action selection policy for the UCT agent which chooses an action at random from unvisited actions if all of the explored actions have an expected value of zero (which is the expected value of bad/terminal actions). We find that this indeed improves performance. While the effect is statistically significant (p=0.02) the effect size is quite small: on the dense setting with M=95% we achieve a median reward of 0.08 (using this thresholding action selection policy) versus 0.07 (selecting the max of visited actions). We have added these results and discussion to the appendix (Section B.2).\", \"R2, R4: We have included results on Covering showing that SAVE results in higher performance than Q-learning, even when controlling for the same number of environment interactions. We have both mentioned this result in the main text, and included a new Figure C.4 in the appendix.\"]}",
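A minimal sketch of the epsilon-greedy final-action selection that the authors report working best above, applied to the Q-values estimated at the root after search; `q_mcts` and `epsilon` are placeholder names, not identifiers from the paper's codebase.

```python
import numpy as np

def select_action(q_mcts, epsilon, rng=None):
    # Mostly exploit the argmax of the search-improved Q estimates, but
    # keep sampling every action uniformly a small fraction of the time
    # so that no action's learned value drifts arbitrarily far off (the
    # catastrophic-forgetting issue described above for UCB/categorical
    # exploration).
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_mcts)))
    return int(np.argmax(q_mcts))
```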
"{\"title\": \"I have looked at the rebuttals.\", \"comment\": \"I confirm that I have read the authors' responses and other reviews. The authors' responses are satisfactory for me. I see no need to update my score (which was already positive).\\n\\nIf possible, it would be nice to already see an updated version of the paper with some of the simpler updates as discussed in various reviews and responses (like clarifications in various parts of the text). Since the authors' responses leave the impression to me that they understand where our confusion is coming from in various points raised in the reviews, and have promised to address these issues, I trust that they will be able to do this successfully. So, seeing these updates on/before November 15 is not crucial to me -- it would just be nice to see already if possible.\\n\\nI understand that already including updates that necessitate the running of additional experiments may be infeasible in a short amount of time, if the experiments are still in progress.\"}",
"{\"title\": \"Thanks for your reviews. Please take a look at the rebuttal.\", \"comment\": \"Dear reviewers,\\n\\nThank you very much for your efforts in reviewing this paper.\\n\\nThe authors have provided their rebuttal. It would be great if you take a look at them, and see whether it changes your opinion in anyway. If there is still any unclear point or a serious disagreement, please bring it up. Also if you are hoping to see a specific change or clarification in the paper before you update your score, please mention it.\\n\\nThe authors have only until November 15th to reply back.\\n\\nI also encourage you to take a look at each others\\u2019 reviews. There might be a remark in other reviews that changes your opinion.\\n\\nThank you,\\nArea Chair\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you very much for your comments! We are glad to hear that you think the paper is clear and that the idea is interesting.\\n\\n1. We respectfully disagree that SAVE is ad-hoc. Our primary baselines\\u2014SAVE w/o AL and PUCT\\u2014are methods that have been previously published, and both suffer from potential issues during training which SAVE explicitly addresses. We tried to emphasize this in the text, but perhaps the justification for SAVE\\u2019s approach was not sufficiently clear. We will attempt to clarify this in our revision, and provide further justification below:\\n\\n(a) In Section 2.2, we hypothesized that approaches which use a count-based policy prior will suffer in environments with high branching factors and small search budgets. We formulated the Tightrope domain as a way to test this hypothesis, and our experiments demonstrate that our hypothesis is correct. Since value is the quantity that we actually want to maximize, it makes more sense\\u2014and results in better performance\\u2014if we regress towards values rather than regressing towards visit counts.\\n\\nSimilarly, we hypothesized in Section 2.1 that the reason prior approaches combining Q-learning and MCTS found unstable performance is due to the fact that the experience generated by MCTS is too off policy and does not allow the Q-function to learn about bad actions. Our experiments on the Construction tasks comparing against the SAVE w/o AL baseline similarly demonstrate that this hypothesis is correct: by including information about bad actions through the amortization loss, the Q-function becomes much more stable.\\n\\n(b) Regarding why the L2 loss works less well, we agree that our justification is post-hoc and requires further investigation. However, we believe the empirical performance justifies using the cross-entropy loss instead, and that a more detailed explanation of the difference is more a topic for future work.\\n\\n(c) SAVE achieves better performance than a model-free agent trained with more data because it allows the agent to try out multiple potential actions at each point in time. While the agent may estimate these actions to have similar values, search allows it to uncover imperfections in those estimates and avoid locally suboptimal actions (such as an action that causes the episode to terminate). In contrast, a model-free agent does not have the ability to try out and compare multiple actions. If it ends up taking an action that causes the episode to terminate, then it will have to restart from the beginning. Thus, SAVE allows the agent to gather experience from later in the episode than model-free agents, resulting in better performance even when controlling for the same number of environment transitions.\\n\\n3. The SAVE w/o AL agent is based on the GN-DQN-MCTS agent described by Bapst et al. (2019) (as stated in the second paragraph of Section 4.2), and is similar in spirit to other approaches which include planning in the training loop but which do not leverage the value computations performed during search (Gu et al. 2016, Azizzadenesheli et al. 2018). The relevant aspect of these papers is this common property (that they do not use the computed values from search), and thus we feel that the SAVE w/o AL baseline is a sufficient comparison. Additionally, there are many other choices made by the other papers which would make them inappropriate to directly compare to. Specifically, Gu et al. 
were concerned with continuous Q-learning and evaluated on Mujoco continuous control tasks. Azizzadenesheli et al. were concerned with learning a model using GANs and using that within MCTS; while they evaluated their approach on Atari, they did not achieve better performance than an agent trained with DDQN. In contrast, in our Atari experiments SAVE strongly outperforms our model-free baseline (R2D2).\", \"other_comments\": \"1. This is a good point; we should have included a comparison to tabular Q-learning as well. We are working on this and will post a later revision with these results.\"}",
"{\"title\": \"Response to Reviewer #2 (2/2)\", \"comment\": \"10. The best performance we could achieve with PUCT on the Construction tasks is in the form of the SAVE w/ PUCT baseline in Figure 3. However, this agent still performs Q-learning and transforms the Q-values into a policy for use by the PUCT exploration term. If we train an actual policy prior and regress towards the visit counts, we find that performance is around zero in the Covering task (this result is actually what motivated our experiments in the Tightrope domain). This is because the Construction tasks have enormous branching factors (1000s-10000s of actions per state), and thus with a small search budget PUCT is unable to learn a useful policy for the reasons described in Section 2.2. Thus, we would not expect PUCT to work in the Marble Run task either. We will add further discussion of these results in the text.\\n\\nIt is likely that PUCT would work better on Atari which has a small branching factor; however, the point of our Atari experiments was to show that SAVE can be dropped into existing Q-learning agents and achieve good performance with minimal effort. In contrast, even if we included a PUCT baseline, it would require significant work to tune the agent on Atari. We thus take this as an illustration of SAVE\\u2019s ease-of-use.\"}",
"{\"title\": \"Response to Reviewer #2 (1/2)\", \"comment\": \"Thank you for your insightful comments and suggestions for future work! We have addressed these in detail below. However, we are unsure which of your comments were most important in deciding your score. Would you be able to clarify this?\\n\\n1. Of course, we agree that the combination of Q-learning with MCTS is not new. We attempted to convey this in Sections 2.1 and 2.2, though perhaps we did not make it clear enough the difference between past work and our contributions. We will work on updating the language in the paper to be clearer on this. In particular, we emphasize that in contrast to previous approaches, SAVE is the first to simultaneously use MCTS to strengthen the Q-function, and the Q-function to strengthen MCTS. Moreover, SAVE addresses two important limitations of these previous approaches: that without a cross-entropy amortization loss, the Q-function will be poorly approximated; and that using the visit counts from search to improve the policy can be unreliable in the regime of small search budgets. We also find that the cross-entropy loss is quite crucial in our experiments to achieve good performance, compared to the L2 loss.\\n\\nThank you for pointing out Guo et al. (2014); we were missing this reference and will add it to the paper.\\n\\n2. We chose epsilon-greedy exploration because it is the standard choice of exploration for DQN agents, and it allowed for a more controlled comparison between SAVE and the model-free Q-learning. However, we agree it would be interesting to try using UCB exploration to select the final action. We will perform some experiments with this and update the paper with the results. We did try softmax exploration in the past but did not find it made a difference.\\n\\n3. Extending SAVE to continuous action spaces is an important direction, but out of scope here (though we\\u2019re exploring it now). However, we emphasize that discrete problems constitute a large proportion of domains in deep RL, ranging from games (Atari, Go, etc.) to real-world applications like combinatorial optimization. Thus, exploring ways of improving discrete search is a valuable research direction in its own right.\\n\\n4. Thank you for this suggestion! Although we have not experimented with a temperature parameter in the softmax functions we expect that better performance might be attained by tuning such a parameter. However, we leave this as an interesting direction for future work and will leave a comment about this in the paper.\\n\\n5. Thank you for pointing out this out, we will clarify this in the text. Specifically, the behavior of the transition function is identical across episodes, with the exception of the behavior at the final state in the sparse reward setting.\\n\\n6. This is a good point; we should have included a comparison to tabular Q-learning as well. We are working on this and will post a later revision with these results.\\n\\n7. In many domains, a simulator may be available but also may be very slow. In our experiments, the simulator for the Construction tasks is a good example of this: training the agents with a search budget of 50 or 100 simulations is prohibitively costly. In real world environments, many simulators are extremely costly to run (such as physical simulators for fluid dynamics). 
Thus, it is an important area of research to demonstrate how such simulators can be effectively used even when we can rely on only very few simulations (both at training and at test time).\\n\\nAs to how SAVE compares to PUCT with larger search budgets during training, we can see in Figure 2 that as the search budget increases, both methods converge reliably to the solution. If one has access to a fast simulator and can perform a significant amount of search, we see no reason not to do this. Rather, we are interested in scenarios in which a large amount of search is impractical to use and where PUCT-like methods will not perform well. We will update the text to emphasize this further.\\n\\n8. We discuss this point at the end of Section 4.3. Additionally, please see our response to Reviewer #4 (comment 7).\\n\\n9. We agree this is confusing and will update both the main text and the appendix to be clearer. Specifically, the \\u201cQ-learning\\u201d agent is an agent which uses regular Q-learning during training, and at test time additionally performs some amount of search (using the same version of MCTS as that used by SAVE). It is different from SAVE w/o AL in that the Q-learning agent may only perform search at test time, while SAVE w/o AL performs search both during training and testing.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for your positive review! We believe that SAVE is a potentially important contribution towards improving our collective intuition of hybrid search-and-learning methods and are glad that you find it provides useful insights as well. In fact, you may be interested to know that SAVE came about in part because we tried a PUCT-style approach and were surprised to find that it did not work very well. We think this is an important finding to communicate to the broader research community so\\u2014as you said\\u2014we can continue to build even stronger agents that work well across a wide range of settings and search budgets.\\n\\n1. Thank you for pointing out the imprecise language. We will add a reference to Eq. 5 in the text above and tweak the language to make it flow more naturally so that it doesn\\u2019t seem like it appears so suddenly. We will also update the language after Eq. 6 to be more precise.\\n\\n2. This is a good point; we should have included a comparison to tabular Q-learning as well. We are working on this and will post a later revision with these results.\\n\\n3. We agree that this might make more sense; the reason we train the state-value function from Monte Carlo returns is that this is most similar to the method of training described by PUCT-like methods in the literature such as Silver et al. (2017, 2018). However, we will perform some experiments using Q-learning as well. Our expectation is that this will not help very much, though, as the failure of PUCT is not due to its value function but due to the method by which it learns its policy. Moreover, this is only likely to help in the sparse reward setting, as in the dense reward setting every good action has an immediate positive reward.\\n\\n4. Yes, that interpretation is correct. For unvisited actions, we assign a Q-value of 0. In some environments, you are correct that these different ways of setting the Q-values may be better. However, in the Tightrope domain, we think that setting unexplored Q-values to zero is likely the best approach because all possible rewards are greater than or equal to zero. Once an action is found with non-zero reward the best option is to stick with it, so it would not make sense to set the values optimistically. Actions that cause the episode to terminate have a reward of zero, so it would also not make sense to set the values pessimistically as this would lead to over-exploring terminal actions. Setting the values to the average of the parent would either have the effect of setting to zero or setting optimistically (if the parent had positive reward). Thus, it seems to make the most sense to set the unexplored Q-values to zero in this domain.\\n\\n5. We select the action to execute in the environment out of those actions which were explored during the search; the specific action that is chosen is the one with the highest Q-value. It is an interesting suggestion to try selecting uniformly at random from the unvisited actions in the case where all estimated values are bad (for example, if they are all zero). We will try an experiment to this effect and update the appendix with the results.\\n\\n6. We agree this is confusing and will update both the main text and the appendix to be clearer. Specifically, the \\u201cQ-learning\\u201d agent is an agent which uses regular Q-learning during training, and at test time additionally performs some amount of search (using the same version of MCTS as that used by SAVE).\\n\\n7. 
In very simple domains like Tightrope, Q-learning may indeed be more efficient in terms of calls to the simulator (e.g., see Figure 2d). However, for moderately complex domains like the Construction tasks, we have seen similar results as with Marble Run: the SAVE agent converges to a higher level of performance that the model-free agent cannot reach even with 10x as much experience. We are working on creating some figures to illustrate this, which we will include in the appendix.\", \"minor_comments\": \"Thank you for these additional comments, we will address these in the text and are working on updating Figures 2 and 3 as per your suggestions.\"}",
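A sketch of UCT-style selection with a configurable placeholder value for unvisited actions, reflecting the initialisation choices discussed above (zero, optimistic, or pessimistic); the code is illustrative, not the paper's implementation.

```python
import numpy as np

def uct_select(q_sum, n_visits, c_uct, unvisited_value=0.0):
    # Unvisited actions receive a fixed placeholder value and a finite
    # bonus, so the search is not forced to expand every action before
    # descending; in Tightrope the authors argue 0 is the natural
    # placeholder, since all rewards are >= 0 and terminal actions
    # yield 0.
    total = max(n_visits.sum(), 1)
    safe_n = np.maximum(n_visits, 1)
    q_mean = np.where(n_visits > 0, q_sum / safe_n, unvisited_value)
    bonus = c_uct * np.sqrt(np.log(total) / safe_n)
    return int(np.argmax(q_mean + bonus))
```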
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes an approach, named SAVE, which combines model-free RL (e.g. Q-learning) with model-based search (e.g. MCTS). SAVE includes the value estimates obtained for all actions available in the root node in MCTS in the loss function that is used to train a value function. This is in contrast to closely-related approaches like Expert Iteration (as in AlphaZero etc.), which use the visit counts at the root node as a training signal, but discard the value estimates resulting from the search.\\n\\nThe paper provides intuitive explanations for two situations in which training signals based on visit counts, and discarding value estimates from search, may be expected to perform poorly in comparison to the new SAVE approach:\\n1) If a trained Q-function incorrectly recommends an action \\\"A\\\", but a search process subsequently corrects for this and deviates from \\\"A\\\", no experience for \\\"A\\\" will be generated, and the incorrect trained estimates of this action \\\"A\\\" will not be corrected.\\n2) In scenarios with extremely low search budgets and extremely high numbers of poor actions, a search algorithm may be unable to assign any of the visit count budget to high-quality actions, and then only continue recommending the poor actions that (by chance) happened to get visits assigned to them. \\n\\nThe paper empirically compares the performance of SAVE to that of Q-Learning, UCT, and PUCT (the approach used by AlphaZero), on a variety of environments. This includes some environments specifically constructed to test for the situations described above (with high numbers of poor actions and low search budgets), as well as standard environments (like some Atari games). These experiments demonstrate superior performance for SAVE, in particular in the case of extremely low search budgets.\\n\\nI would qualify SAVE as a relatively simple (which is good), incremental but convincing improvement over the state of the art -- at least in the case of situations with extremely low search budgets. I am not sure what to expect of its performance, relative to PUCT-like approaches, when the search budget is increased. For me, an important contribution of the paper is that it explicitly exposes the two situations, or \\\"failure modes\\\", of visit-count-based methods, and SAVE provides improved performance in those situations. Even if SAVE doesn't outperform PUCT with higher search budgets (I don't know if it would?), it could still provide useful intuition for future research that might lead to better performance more generally across wider ranges of search budgets.\\n\\n\\nPrimary comments / questions:\\n\\n1) Some parts of the paper need more precise language. The text above Eq. 5 discusses the loss in Eq. 5, but does not explicitly reference the equation. The equation just suddenly appears there in between two blocks of text, without any explicit mention of what it contains. After Eq. 6, the paper states that \\\"L_Q may be any variant of Q-learning, such as TD(0) or TD(lambda)\\\". L_Q is a loss function though, whereas Q-learning, TD(0) and TD(lambda) are algorithms, they're not loss functions. 
I also don't think it's correct to refer to TD(0) and TD(lambda) as \\\"variants of Q-learning\\\". Q-learning is one specific instance of an off-policy temporal difference learning algorithm, TD(lambda) is a family of on-policy temporal difference learning algorithms, and TD(0) is a specific instance of the TD(lambda) family.\\n\\n2) Why don't the experiments in Figures 2(a-c) include a tabular Q-learner? Since SAVE is, informally, a mix of MCTS and Q-learning, it would be nice to not only compare to MCTS and another MCTS+learning combo, but also standalone Q-learning.\\n\\n3) The discussion of Tabular Results in 4.1 mentions that the state-value function in PUCT was learned from Monte-Carlo returns. But I think the value function of SAVE was trained using a mix of the standard Q-learning loss and the new amortization loss proposed in the paper. Wouldn't it be more natural to then train PUCT's value function using Q-learning, rather than Monte-Carlo returns?\\n\\n4) Appendix B.2 mentions that UCT was not required to visit all actions before descending down the tree. I take it this means it's allowed to assign a second visit to a child of the root node, even if some other child does not yet have any visits? What Q-value estimate is used by nodes that have 0 visits? Some of the different schemes I'm aware of would involve setting them to 0, setting them optimistically, setting them pessimistically, or setting them to the average value of the parent. All of these result in different behaviours, and these differences can be especially important in the high-branching-factor / low-search-budget situations considered in this paper.\\n\\n5) Closely related to the previous point; how does UCT select the action it takes in the \\\"real\\\" environment after completing its search? The standard approach would be to maximise the visit count, but when the search budget is low (perhaps even lower than the branching factor), this can perform very poorly. For example, if every single visit in the search budget led to a poor outcome, it might be preferable to select an unvisited action with an optimistically-initialised Q-value.\\n\\n6) In 4.2, in the discussion of the Results of Figure 3 (a-c), it is implied that the blue lines depict performance for something that performs search on top of Q-learning? But in the figure it is solely labelled as \\\"Q-learning\\\"? So is it actually something else, or is the discussion text confusing?\\n\\n7) The discussion of Results in 4.3 mentions that, due to using search, SAVE effectively sees 10 times as many transitions as model-free approaches, and that experiments were conducted on this rather complex Marble Run domain where the model-free approaches were given 10 times as many training steps to correct for this difference. Were experiments in the simpler domains also re-run with such a correction? Would SAVE still outperform model-free approaches in the simpler domains if we corrected for the differences in experience that it gets to see?\\n\\n\\nMinor Comments (did not impact my score):\\n- Second paragraph of Introduction discusses \\\"100s or 1000s of model evaluations per action during training, and even upwards of a million simulations per action at test time\\\". Writing \\\"per action\\\" could potentially be misunderstood by readers to refer to the number of legal actions in the root state. 
Maybe something like \\\"per time step\\\" would have less potential for confusion?\\n- When I started reading the paper, I was kind of expecting it was going to involve multi-player (adversarial) domains. I think this was because some of the paper's primary motivations involve perceived shortcomings in the Expert Iteration approaches as described by Anthony et al. (2017) and Silver et al. (2018), which were all evaluated in adversarial two-player games. Maybe it would be good to signal at an early point in the paper to the reader that this paper is going to be evaluated on single-agent domains. \\n- Figure 2 uses red and green, which is a difficult combination of colours for people with one of the most common variants of colour-blindness. It might be useful to use different colours (see https://usabilla.com/blog/how-to-design-for-color-blindness/ for guidelines, or use the \\\"colorblind\\\" palette in seaborn if you use seaborn for plots).\\n- The error bars in Figure 3 are completely opaque, and overlap a lot. Using transparent, shaded regions would be easier to read.\\n- \\\"... model-free approaches because is a combinatorial ...\\\" in 4.2 does not read well.\\n- Appendix A.3 states that actions were sampled from pi = N / sum N in PUCT. It would be good to clarify whether this was only done when training, or also when evaluating.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposes Search with Amortized Value Estimates (SAVE), which combines Q-learning and Monte-Carlo Tree Search (MCTS). SAVE makes use of the estimated Q-values obtained by MCTS at the root node (Q_MCTS), rather than using only the resulting action or counts to learn a policy. It trains the amortized value network Q_theta via the linear combination of Q-learning loss and the cross-entropy loss between the softmax(Q_MCTS) and softmax(Q_theta). Then, SAVE incorporates the learned Q-function into MCTS by using it for the initial estimate for Q at each node and for the leaf node evaluation by V(s) = max_a Q_theta(s,a). Experimental results show that SAVE outperforms the baseline algorithms when the search budget is limited.\", \"The idea of training Q-network using the result of MCTS planning is not new (e.g. UCTtoRegression in Guo et al 2014), but this paper takes further steps: the learned Q-network is again used for MCTS planning as Q initialization, the cross-entropy loss is used instead of L2-loss for the amortized value training, and the total loss combines Q-learning loss and the amortization loss.\", \"In Figure 1, it says that the final action is selected by epsilon-greedy. Since SAVE performs MCTS planning, UCB exploration seems to be a more natural choice than the epsilon-greedy exploration. Why does SAVE use a simple epsilon-greedy exploration? Did it perform better than UCB exploration or softmax exploration? Also, what if we do not perform exploration at all in the final action selection, i.e. just select argmax Q(s,a)? Since exploration is performed during planning, we may not need exploration for the final action selection?\", \"Can SAVE be extended to MCTS for continuous action space? SAVE trains Q-network, rather than a policy network that can sample actions, thus it seems to be more difficult to deal with continuous action space.\", \"In Eq. (5), we may introduce a temperature parameter that trade-offs the stochasticity of the policy to further improve the performance of SAVE.\", \"In Tightrope domain (sec 4.1), it says: \\\"The MDP is exactly the same across episodes, with the same actions always having the same behavior.\\\", but it also says: \\\"In the sparse reward setting, we randomly selected one state in the chain to be the \\u201cfinal\\u201d state to form a curriculum over the length of the chain.\\\" It seems that those two sentences are contradictive.\", \"In the Tightrope experiment's tabular results (Figure 2), the performance of Q-learning is not reported. I want to see the performance of Q-learning here too.\", \"In Figure 2, the search budgets for training and testing are equal, which seems to be designed to benefit SAVE than PUCT. Why the search budget should be very small even during training? Even if the fixed and relatively large search budget (e.g. 50 or 100) is used during training and the various small search budgets are only used in the test phase, does SAVE still outperform PUCT?\", \"In Figure 2 (d), model-free Q-learning does not perform any planning, thus there will be much less interaction with the environment compared to SAVE or PUCT. Therefore, for a fair comparison, it seems that the x-axis in Figure 4-(d) should be the number of interactions with the environment (i.e. 
# queries to the simulator), rather than # Episodes. In this case, it seems that Q-Learning might be much more sample efficient than SAVE.\", \"In Figure 3, what is the meaning of the test budget for Q-Learning since Q-Learning does not have planning ability? If this denotes that Q-network trained by Q-learning loss is used for MCTS, what is the difference between Q-Learning and SAVE w/o AL?\", \"In Figures 3, 4, 5, it seems that comparisons with PUCT are missing. In order to highlight the benefits of SAVE for efficient MCTS planning, the comparison with other strong MCTS baselines (e.g. PUCT that uses learned policy prior) should be necessary. A comparison only with a model-free baseline would not be sufficient.\", \"-----\"], \"after_rebuttal\": \"Thank the authors for clarifying my questions and concerns. I feel satisfied with the rebuttal and raise my score accordingly.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes SAVE that combines Q learning with MCTS. In particular, the estimated Q values are used as a prior in the selection and backup phase of MCTS, while the Q values estimated during MCTS are later used, together with the real experience, to train the Q function. The authors made several modifications to \\u2018standard\\u2019 setting in both Q learning and MCTS. Experimental results are provided to show that SAVE outperforms generic UCT, PUCT, and Q learning.\\n\\nOverall the paper is easy to follow. The idea of the paper is interesting in the sense that it tries to leverage the computation spent during search as much as possible to help the learning. I am not an expert in the of hybrid approach, so I can not make confident judgement on the novelty of the paper.\", \"the_only_concern_i_have_is_that_the_significance_of_the_result_in_the_paper\": \"1. The proposed method, including the modifications to MCTS and Q learning (section 3.2 and 3.3), is still a bit ad-hoc. The paper has not really justified why the proposed modification is a better choice except a final experimental result. Some hypotheses are made to explain the experimental results. But the authors have not verified those hypotheses. Just to list a few here: (a). The argument made in section 2.2 about count based prior; (b). the statement of noisy Q_MCTS to support the worse performance of L2 loss in section 4.2; (c). In the last paragraph of section 4.3, why would a model free agent with more episodes results in worse performance?\\n2. The baselines used in this paper are only PUCT and a generic Q learning. What are the performances of other methods that are mentioned in section 2.1, like Gu 2016, Azizzadenesheli 2018, Bapst 2019?\", \"other_comments\": \"1. What is the performance of tabular Q-learning in Figure 2 (a-c)?\"}"
]
} |
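To make the objective described in Review #2 above concrete: SAVE's training loss is stated as a linear combination of a Q-learning loss and a cross-entropy loss between softmax(Q_MCTS) and softmax(Q_theta). Below is a minimal PyTorch sketch of that combined loss; the function name, the mixing weight `beta`, and the tensor layout are our own illustrative assumptions, not code from the SAVE paper.

```python
import torch
import torch.nn.functional as F

def save_loss(q_theta, q_mcts, actions, td_targets, beta=1.0):
    """Combined SAVE-style loss, as described in the review above.

    q_theta:    [batch, num_actions] Q-values from the amortized network.
    q_mcts:     [batch, num_actions] root-node Q estimates from MCTS.
    actions:    [batch] (long) actions actually taken in the environment.
    td_targets: [batch] bootstrapped targets r + gamma * max_a' Q(s', a').
    beta is an assumed mixing weight, not a value from the paper.
    """
    # Amortization loss: cross-entropy between the soft MCTS policy
    # softmax(Q_MCTS) and the network's soft policy softmax(Q_theta).
    p_mcts = F.softmax(q_mcts, dim=-1)
    log_p_theta = F.log_softmax(q_theta, dim=-1)
    amortization = -(p_mcts * log_p_theta).sum(dim=-1).mean()

    # Standard Q-learning regression loss on the taken actions.
    q_taken = q_theta.gather(1, actions.unsqueeze(1)).squeeze(1)
    q_learning = F.mse_loss(q_taken, td_targets)

    return q_learning + beta * amortization
```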
ryxC6kSYPr | Infinite-Horizon Differentiable Model Predictive Control | [
"Sebastian East",
"Marco Gallieri",
"Jonathan Masci",
"Jan Koutnik",
"Mark Cannon"
] | This paper proposes a differentiable linear quadratic Model Predictive Control (MPC) framework for safe imitation learning. The infinite-horizon cost is enforced using a terminal cost function obtained from the discrete-time algebraic Riccati equation (DARE), so that the learned controller can be proven to be stabilizing in closed-loop. A central contribution is the derivation of the analytical derivative of the solution of the DARE, thereby allowing the use of differentiation-based learning methods. A further contribution is the structure of the MPC optimization problem: an augmented Lagrangian method ensures that the MPC optimization is feasible throughout training whilst enforcing hard constraints on state and input, and a pre-stabilizing controller ensures that the MPC solution and derivatives are accurate at each iteration. The learning capabilities of the framework are demonstrated in a set of numerical studies. | [
"Model Predictive Control",
"Riccati Equation",
"Imitation Learning",
"Safe Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=ryxC6kSYPr | https://openreview.net/forum?id=ryxC6kSYPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"s4nFDDMG0",
"SkxpsghhoH",
"S1en1khhiB",
"H1gTi35hjr",
"HyxliPNhor",
"HJlVVT-GsH",
"r1e8h2-MsH",
"r1xCKjZfoB",
"rJeZNvcxcB",
"rkeMnzRk9H",
"H1x0gLTRKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738080,
1573859492936,
1573859044459,
1573854373262,
1573828504418,
1573162283665,
1573162157883,
1573161861986,
1572017960612,
1571967658026,
1571898870462
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2006/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2006/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2006/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2006/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2006/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2006/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2006/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2006/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2006/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2006/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper develops a linear quadratic model predictive control approach for safe imitation learning. The main contribution is an analytic solution for the derivative of the discrete-time algebraic Riccati equation (DARE). This allows an infinite horizon optimality objective to be used with differentiation-based learning methods. An additional contribution is the problem reformulation with a pre-stabilizing controller and the support of state constraints throughout the learning process. The method is tested on a damped-spring system and a vehicle platooning problem.\\n\\nThe reviewers and the author response covered several topics. The reviewers appreciated the research direction and theoretical contributions of this work. The reviewers main concern was the experimental evaluation, which was originally limited to a damped spring system. The authors added another experiment for a substantially more complex continuous control domain. In response to the reviewers, the authors also described how this work relates to non-linear control problems. The authors also clarified the ability of the proposed method to handle state-based constraints that are not handled by earlier methods. The reviewers were largely satisfied with these changes.\\n\\nThis paper should be accepted as the reviewers are satisfied that the paper has useful contributions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the quick response and clarification\", \"comment\": \"This exactly addresses my question -- I still maintain a weak accept as I think the contribution of this paper is interesting and relevant to the community, but the experimental results in the current form are of limited interest.\"}",
"{\"title\": \"Clarification\", \"comment\": \"Dear Reviewer 1. Thanks for your response. To clarify, in Lemma 1 we claim that the MPC problem is infinite horizon optimal for a large enough N given the P from the Riccati equation for a bounded set of initial conditions. In the proof, in appendix, we refer to the original result in the paper \\u201cconstrained linear quadratic regulator\\u201d, where this claim is demonstrated under the assumption that the state and input constraints are convex compact sets with the origin in their interior. Hence for a sufficient N there is no sub-optimality. This is a known result in the control literature. We are sorry if this was not made clear and hope that the reviewer could still change their mind about our score in light of the updates in our revised paper. Thanks again for the constructive contribution.\"}",
"{\"title\": \"Thank you for the response!\", \"comment\": \"I have read through the other reviews and responses in this thread and maintain my original score of a weak accept as this is an interesting new method and the pre-stabilization is useful. I apologize for responding so late in the discussion period -- but the clarification that I *meant* to ask for in my original review is that you use the DARE of the unconstrained problem to get the terminal control cost Q_N=P (in the text following Eq. (8)) that is used in the constrained problem, not as part of the stabilization. Using this still seems sub-optimal as there could be some other terminal cost that is optimal for the constrained problem that is different from the unconstrained DARE solution. I could imagine a system with very extreme bounds where there is a final cost for the constrained system is extremely different than the final cost for the unconstrained system. It would be useful to have a reference to the controls literature that discusses this disconnection.\"}",
"{\"title\": \"Revised Paper\", \"comment\": \"A revised paper has now been submitted that extends the numerical experiments to a vehicle platooning problem that includes 18 state variables and 10 control inputs, and hope that this additional contribution addresses the reviewers' concerns on the simplicity of the numerical experiments and better demonstrates the properties of the algorithm on a more significant, real-world control problem. We have also revised the paper to make the contribution relative to (Amos et al. 2018) clearer in the introduction and section 4.1 paragraph 1, and have made other minor revisions to improve the readability of the paper, correct notational mistakes, and accommodate the additional experiments within the 8 page recommended limit. We would like to thank the reviewers again for their constructive feedback, and believe that the revised paper makes a much stronger contribution as a result.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We would like to thank the reviewers for taking the time to provide comprehensive and constructive reviews.\\n\\nAn issue that has been highlighted by all three reviewers is the low complexity of the numerical experiments, so the response to all three reviewers starts with the same text that addresses this concern: \\n\\nWe concur with Reviewer 1\\u2019s observation that the investigation of simple systems is necessary as a sanity check, and this was the rationale of the numerical experiments presented here, but we also agree that it is a fair observation to highlight the lack of complexity as a limitation of the paper. In direct response to Reviewer 4\\u2019s suggestion to investigate a nonlinear system that has been successively linearised, the DARE cannot be used for non-linear systems, unless in cases where the non-linearities are quite limited. This is also discussed in Appendix E of our paper. Extensions to some interesting non-linear cases are going to be the subject of a follow-up study. In direct response to Reviewer 3\\u2019s comment \\u2018can one construct scenarios where the baseline approach (Amos et al., 2018) fails?\\u2019 \\u2013 one of the significant limitations of (Amos et al., 2018) is that state constraints are not included in their differentiable MPC formulation (see equation (10) in their paper). As a consequence, their approach very quickly fails in general for even the simple LTI case presented here when state constraints are included because the MPC optimization has become infeasible in the forwards pass, and so one of the major contributions of this paper is ensuring feasibility in the presence of state constraints. We therefore believe that the numerical demonstration with an LTI system is necessary to provide credibility for when the approach is extended to non-linear systems, but agree that a 2DOF system is insufficient. We propose to investigate a second LTI system with a larger amount of states and inputs, inspired by a real-world application. Would the reviewers be satisfied with this additional experiment?\", \"comments_specific_to_reviewer_1\": \"\\u201cthe DARE solution in (7,8) is derived to optimally control a LTI system *without* control/state bounds but is then used to control the LTI system *with* control/state bounds in (4)\\u2026\\u201d\\n\\nThe DARE solution simply stabilizes the system in the absence of constraints and thus improves the numerical conditioning of the problem of optimizing the perturbation (delta u). The optimization of this perturbation signal ensures that the predicted control sequence is optimal (in an open-loop sense) for the constrained problem. A feedback control law is obtained by repeating the optimization at each timestep using current information on the system state.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We would like to thank the reviewers for taking the time to provide comprehensive and constructive reviews.\\n\\nAn issue that has been highlighted by all three reviewers is the low complexity of the numerical experiments, so the response to all three reviewers starts with the same text that addresses this concern: \\n\\nWe concur with Reviewer 1\\u2019s observation that the investigation of simple systems is necessary as a sanity check, and this was the rationale of the numerical experiments presented here, but we also agree that it is a fair observation to highlight the lack of complexity as a limitation of the paper. In direct response to Reviewer 4\\u2019s suggestion to investigate a nonlinear system that has been successively linearised, the DARE cannot be used for non-linear systems, unless in cases where the non-linearities are quite limited. This is also discussed in Appendix E of our paper. Extensions to some interesting non-linear cases are going to be the subject of a follow-up study. In direct response to Reviewer 3\\u2019s comment \\u2018can one construct scenarios where the baseline approach (Amos et al., 2018) fails?\\u2019 \\u2013 one of the significant limitations of (Amos et al., 2018) is that state constraints are not included in their differentiable MPC formulation (see equation (10) in their paper). As a consequence, their approach very quickly fails in general for even the simple LTI case presented here when state constraints are included because the MPC optimization has become infeasible in the forwards pass, and so one of the major contributions of this paper is ensuring feasibility in the presence of state constraints. We therefore believe that the numerical demonstration with an LTI system is necessary to provide credibility for when the approach is extended to non-linear systems, but agree that a 2DOF system is insufficient. We propose to investigate a second LTI system with a larger amount of states and inputs, inspired by a real-world application. Would the reviewers be satisfied with this additional experiment?\", \"comments_specific_to_reviewer_3\": \"\\u2022\\tBy \\u2018the same class\\u2019 we mean that the expert and learner are both 2DOF LTI systems controlled with Quadratic box-constrained MPC controllers. This comment has been removed in the updated paper to improve clarity.\\n\\n\\u2022\\tWe thank the reviewer for the suggested references, which are both are of interest\\n\\n\\u2022\\tIn the first suggested paper (Y. Pan et al.) the \\u2018expert\\u2019 controller takes the form of a model predictive controller where the model is a gaussian process and the solution is provided using differential dynamic programming. The \\u2018learner\\u2019 however, is simply a deep neural network, for which it is generally impossible to determine whether the learned controller will satisfy hard constraints on the system state a-priori, or whether it will be stabilizing. The entire purpose of the work presented in this paper is to provide interpretable structure to a neural network when used for imitation learning, for which hard constraint satisfaction and stability can be guaranteed a-priori, provided that the prediction error of the MPC model is limited. Ultimately, our study, as well as the previous ones on differentiable MPC, aims towards future integration of more complex sensory (e.g. visual) information as done in Y. Pan et al. 
However, in this work we decided to focus on improving one part of the stack, i.e. on establishing conditions for having a more stable and optimal method to imitate controllers from given state-action measurements. The integration of visual data and of convolutional networks is of great interest and will be addressed in future work. \\n\\n\\u2022\\tThe second paper (R. Cheng et al.) deals with reinforcement learning, where a prior policy is introduced to reduce variance and, in the case of H-infinity robust control, to provide stability with respect to the \\u201cuncertainty\\u201d resulting in the use of an RL policy. The two policies are weighted and it is shown that this results in a regularization of the original loss. This is clearly inspired by robust control, and in the general case adding stability or robustness will lead to an inevitable level of conservatism or sub optimality with respect to the original reward or loss. In this paper the re-parameterisation does not lead to sub-optimality and does not alter the MPC problem.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"We would like to thank the reviewers for taking the time to provide comprehensive and constructive reviews.\\n\\nAn issue that has been highlighted by all three reviewers is the low complexity of the numerical experiments, so the response to all three reviewers starts with the same text that addresses this concern: \\n\\nWe concur with Reviewer 1\\u2019s observation that the investigation of simple systems is necessary as a sanity check, and this was the rationale of the numerical experiments presented here, but we also agree that it is a fair observation to highlight the lack of complexity as a limitation of the paper. In direct response to Reviewer 4\\u2019s suggestion to investigate a nonlinear system that has been successively linearised, the DARE cannot be used for non-linear systems, unless in cases where the non-linearities are quite limited. This is also discussed in Appendix E of our paper. Extensions to some interesting non-linear cases are going to be the subject of a follow-up study. In direct response to Reviewer 3\\u2019s comment \\u2018can one construct scenarios where the baseline approach (Amos et al., 2018) fails?\\u2019 \\u2013 one of the significant limitations of (Amos et al., 2018) is that state constraints are not included in their differentiable MPC formulation (see equation (10) in their paper). As a consequence, their approach very quickly fails in general for even the simple LTI case presented here when state constraints are included because the MPC optimization has become infeasible in the forwards pass, and so one of the major contributions of this paper is ensuring feasibility in the presence of state constraints. We therefore believe that the numerical demonstration with an LTI system is necessary to provide credibility for when the approach is extended to non-linear systems, but agree that a 2DOF system is insufficient. We propose to investigate a second LTI system with a larger amount of states and inputs, inspired by a real-world application. Would the reviewers be satisfied with this additional experiment?\", \"comments_specific_to_reviewer_4\": \"We thank the reviewer for the thorough reading of the paper and will include the corrections in an updated paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper continues the recent direction (e.g. Amos & Kolter 2017) of differentiating through optimal control solutions, allowing for the combination of optimal control methods and learning systems. The paper has some nice contributions and I find this research direction to be very exciting, which is why I think it merits acceptance, however I find the experiments (Section 4) could be greatly improved.\\n\\nThe main contribution of the paper are the analytical derivative of the solution to the DARE. The pre-stabilising controller reformulation is a neat trick. \\n\\nThe main issue I have with this paper is that the experiments are performed only on a toy 2D problem. Even an LTI system can be interesting! Of course it is important to start with a toy problem, but once positive results have been shown, it would be much more convincing if the paper showed some more complicated system, possibly an iteratively linearised non-linear system. My feeling (and possibly many others') is that these type on differentiable controllers can be extremely powerful, however this power is sadly not demonstrated here.\", \"errata\": \"before eq (3): dt is not a pertubation to the feedback control\\neq (4) argmin over \\\\delta u rather than \\\\delta, presumably\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper shows how to use the Discrete-time Algebraic Riccati Equation (DARE) to provide infinite horizon stability & optimality to differentiable MPC learning. The paper also shows how to use DARE to derive a pre-stabilizing (linear state-feedback) controller. The paper provides a theoretical characterization of the problem setting, which shows that prior work on differentiable MPC learning may lead to unstable controllers without the proposed augmentations using DARE.\\n\\nI'm not sure I understand the implications of imitating \\\"from an expert of the same class\\\". Can the authors elaborate?\\n\\nCan the authors compare & contrast with this paper?\", \"https\": \"//arxiv.org/abs/1709.07174\\n(I have my own views, but I'd like hear the authors' thoughts first)\\n\\nMy biggest complaint is with regards to the experiments. Unless I'm mistaken, it seems there isn't a thorough empirical study of the theoretical claims, especially as it relates to previous work. E.g., can one construct scenarios where the baseline approach (Amos et al., 2018) fails, and compare with the proposed approach?\\n\\nThe idea of pre-stabilization is interesting, and seems related to this paper: https://arxiv.org/abs/1905.05380\\n\\n\\n\\n**** After Author Response ****\\nThanks for the response, I am raising my score to weak accept.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper shows how to make infinite-horizon MPC differentiable\\nby differentiating through the terminal cost function and controller.\\nRecent work in non-convex finite-horizon continuous control [1,2,3] face\\na huge issue in selecting the controller's horizon length and\\nbetter-understanding differentiable infinite horizon\\ncontrol has potentially strong applications in these domains.\\nAs a step in this non-convex direction, this paper provides a nice\\ninvestigation in the convex LTI case.\\nThe imitation learning experiments on a small spring dynamical\\nsystem are a necessary sanity check for further work, but\\nmany other more complex systems could be empirically studied\\nand would have made this paper stronger.\", \"one_point_that_would_be_useful_to_clarify\": \"the DARE solution in (7,8) is\\nderived to optimally control a LTI system *without* control/state bounds but\\nis then used to control the LTI system *with* control/state bounds in (4).\\nDoes this lead to suboptimal solutions to the true infinite-horizon problem?\\n\\n[1] Chua, K., Calandra, R., McAllister, R., & Levine, S. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. NeurIPS 2018.\\n[2] Hafner, D., Lillicrap, T., Fischer, I., Villegas, R., Ha, D., Lee, H., & Davidson, J. Learning latent dynamics for planning from pixels. ICML 2019.\\n[3] Tingwu Wang, Xuchan Bao, Ignasi Clavera, Jerrick Hoang, Yeming Wen, Eric Langlois, Shunshi Zhang, Guodong Zhang, Pieter Abbeel, Jimmy Ba. Benchmarking Model-Based Reinforcement Learning. arXiv 2019.\"}"
]
} |
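As a side note on the MPC record above: the forward solution of the DARE, from which the paper's terminal cost and pre-stabilizing gain are built, can be computed numerically with SciPy as in the sketch below. The system matrices here are illustrative only, and this gives just the forward pass; the paper's contribution is the analytical derivative of this solution, which SciPy does not provide.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time LTI system x_{k+1} = A x_k + B u_k.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)   # state cost weight
R = np.eye(1)   # input cost weight

# P solves the DARE: P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q,
# and serves as the infinite-horizon terminal cost weight.
P = solve_discrete_are(A, B, Q, R)

# Corresponding stabilizing LQR feedback gain, u = -K x.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```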
H1epaJSYDS | Anchor & Transform: Learning Sparse Representations of Discrete Objects | [
"Paul Pu Liang",
"Manzil Zaheer",
"Yuan Wang",
"Amr Ahmed"
] | Learning continuous representations of discrete objects such as text, users, and items lies at the heart of many applications including text and user modeling. Unfortunately, traditional methods that embed all objects do not scale to large vocabulary sizes and embedding dimensions. In this paper, we propose a general method, Anchor & Transform (ANT) that learns sparse representations of discrete objects by jointly learning a small set of anchor embeddings and a sparse transformation from anchor objects to all objects. ANT is scalable, flexible, end-to-end trainable, and allows the user to easily incorporate domain knowledge about object relationships (e.g. WordNet, co-occurrence, item clusters). ANT also recovers several task-specific baselines under certain structural assumptions on the anchors and transformation matrices. On text classification and language modeling benchmarks, ANT demonstrates stronger performance with fewer parameters as compared to existing vocabulary selection and embedding compression baselines. | [
"sparse representation learning",
"discrete inputs",
"natural language processing"
] | Reject | https://openreview.net/pdf?id=H1epaJSYDS | https://openreview.net/forum?id=H1epaJSYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"o3xNC7m7aK",
"ByeF7mM3sH",
"SJxj4JM3jr",
"HygycRZhoS",
"BkxKAnZ3sH",
"S1ghoqZhiB",
"HyxrQW3foH",
"B1l9VtzZqB",
"S1eBeeATKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738051,
1573819169465,
1573818163089,
1573817991192,
1573817552685,
1573816996058,
1573204252809,
1572051249744,
1571835884994
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2004/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2004/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2004/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2004/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2004/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2004/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2004/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2004/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a method to produce embeddings of discrete objects, jointly learning a small set of anchor embeddings and a sparse transformation from anchor objects to all the others. While the paper is well written, and proposes an interesting solution, the contribution seems rather incremental (as noted by several reviewers), considering the existing literature in the area. Also, after discussions the usefulness of the method remains a bit unclear - it seems some engineering (related to sparse operations) is still required to validate the viability of the approach.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Submission Update\", \"comment\": \"We thank all the reviewers for their detailed comments and suggestions for improvements. We have incorporated your feedback and updated the paper accordingly. Here we outline the main additions to the paper:\", \"tables_2_and_3\": \"we added comparisons with 2 post-processing compression methods based on clustering and sparse hashing [1], as well as further improving this using sparse coding with k-SVD [2]. Table 2 shows results for PTB language modeling and Table 3 shows results for WikiText-103 language modeling. We outperform both methods and we hypothesize this is because post-processing compression accumulates errors in both language modeling as well as the embedding reconstruction. We observe that the performance improvement of ANT over post-processing compression methods is larger on WikiText-103 as compared to PTB, demonstrating that our end-to-end sparse embedding method is particularly suitable for tasks with large vocabularies. We conjecture that as vocabulary size increases, running clustering becomes harder, e.g. good initializations like KMeans++ become prohibitively expensive. We describe implementation and hyperparameter details for these 2 post-processing baselines in appendix E2. We also improve compression for WikiText-103 language modeling task for ANT by a better hyper-parameter search. New result is updated in Table 3.\\n\\nSubsection 3.4: we have added details regarding the time and memory complexity of training our sparse embedding layer. We implement our method using a dense matrix for the anchor embeddings A and a **sparse matrix** for the sparse transformations T. Although, naively deep learning frameworks do not fully support backprop on such sparse matrix (basically change of non-zero locations in the sparse matrix is not supported ) and we had do some engineering around it. In particular, for T we store only the non-zero positions and their values in a sparse format that allow efficient row slicing (adjacency list or CSOO format). The memory usage during training, storage, and evaluation are proportional to the size of A and the number of non-zero entries in T: size(A) + nnz(T). Time complexity is hard to analyse empirically, but the runtime for training does increase by 1.6 times on WikiText-103 language modeling task, but its mostly due to our unoptimized engineering. However, during inference time we see negligible difference because now native sparse ops for the T matrix can be utilized. In our experiments, we find that T is indeed very sparse, allowing us to obtain 10-100x compression of the embedding matrix, which in our opinion is a good trade-off. We also outlined several tips to further speedup training in Appendix C and ways to incorporate our method with existing speedup techniques like softmax sampling or noise-contrastive estimation.\\n\\n\\n[1] Y. Guo et al., Learning to hash with optimized anchor embedding for scalable retrieval, TIP, 2017.\\n[2] Aharon et al., K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation, TSP 2006\"}",
"{\"title\": \"Reply to Reviewer #3 part 2\", \"comment\": \"[R3 #Emb] #Emb represents the number of non-zero embedding parameters, which is computed as size(A) + nnz(T). Our methods are implemented using a dense matrix for the anchor embeddings A and a sparse matrix for the sparse transformations T. The memory usage and time complexity during training, storage, and evaluation are therefore proportional to the size of A (since A is dense) and the number of **non-zero** entries in T (since T is very sparse). We also compute #Emb for other baselines in the same way: the total number of non-zero entries used in the embedding parameters. For methods that compress the embedding matrix into dense embedding parameters (e.g. low-rank [1], vocabulary selection [2]), this reduces to the number of parameters in the compressed form. Our method achieves the **fewest** non-zero parameters after compression (lowest #Emb) while retaining the **best** accuracy/lowest perplexity metric as compared to the baselines.\\n\\n[R3 ablations] Table 1 shows the importance of sparsity and non-negativity (relu) on T towards better compression and performance as compared to the baselines that do not use sparsity and non-negativity. We observe that sparsity is important: baseline methods that only perform lower-rank compression with dense factors (e.g. low-rank, vocabulary selection) tend to suffer in performance while using more parameters, while our method retains performance with better compression. Tables 1 and 2 (second half) also demonstrate that domain knowledge, when available, gives further boosts in performance.\\n\\n[R3 anchors] Depending on the initialization, the anchors may or may not store embeddings for particular objects. When initializing A using clustering or frequency, the anchors directly correspond to embeddings for frequent words or words at cluster centers. The case when A is initialized randomly gives anchors that do not represent any particular words as the reviewer pointed out.\\n\\n[R3 k-means] In some cases, we often start with pre-trained embedding spaces such as Glove and fine-tune for specific tasks. Instead of maintaining the entire Glove embedding matrix during training, storage, and inference, we can simply use Glove **once** to obtain initial cluster centers to initialize A. This still reduces time and space complexity during training, storage, and inference. In addition, our results show that using a dynamic (random) basis performs also well. We also note that in certain scenarios, instead of clustering across the entire vocabulary, we can take the frequent (say 20%) objects and cluster on top of those to get better coverage of the important objects. All the initialization strategies we proposed are flexible and can be optimized for specific downstream tasks.\\n\\n[R3 knowledge] T is a |V| x |A| matrix where T_{ij} represents the transformation parameter from anchor object j to object i. Ideally, we want T to be row sparse such that each object i is induced from only a few anchor objects, so we perform proximal gradient updates on T. If we know that object i is related to anchor object j (e.g. object i = canary, anchor object j = bird), then entry T_{ij} should not be sparse/equal to zero. 
This method implicitly takes into account both positive pairs (T_{ij} need not be sparse) and negative pairs (T_{ij} constrained to be sparse).\", \"references\": \"[1] Grachev et al., Compression of recurrent neural networks for efficient language modeling, arXiv 2019\\n[2] Chen et al., How large a vocabulary does text classification need? A variational approach to vocabulary selection, NAACL 2019\\n[3] Y. Guo et al., Learning to hash with optimized anchor embedding for scalable retrieval, TIP 2017.\\n[4] Aharon et al., K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation, TSP 2006\"}",
"{\"title\": \"Reply to Reviewer #3 part 1\", \"comment\": \"Thank you for your detailed comments and suggestions for improvements. We answer your questions and provide more experimental comparisons with baselines below.\\n\\n[R3 usefulness] Our methods are implemented using a dense matrix for the anchor embeddings A and a **sparse matrix** for the transformations T. Although, naively deep learning frameworks do not fully support backprop on such sparse matrix (basically change of non-zero locations in the sparse matrix is not supported) and we had do some engineering around it. In particular, for T we store only the non-zero positions and their values in a sparse format that allow efficient row slicing (adjacency list or CSOO format). The memory usage during training, storage, and evaluation are proportional to the size of A and the number of non-zero entries in T: size(A) + nnz(T). Time complexity is hard to analyse, but empirically the runtime for training does increase by 1.6 times on WikiText-103 language modeling task, but its mostly due to our unoptimized engineering. However, during inference time we see negligible difference because now native sparse ops for the T matrix can be utilized. We do not require that V < N for our method to work, au contraire typically V >> N. In our experiments, we find that T is indeed very sparse, allowing us to obtain 10-100x compression of the embedding matrix, which in our opinion is a good trade-off. We have added these details to subsection 3.4 in the paper. We also outlined several tips to further speedup training in Appendix C and ways to incorporate our method with existing speedup techniques like softmax sampling or noise-contrastive estimation.\\n\\nSimply applying l1 sparsity to the entire V x N embedding matrix can be seen as a special case of our method where we use **no** anchors. This is undesirable since 1) each object is also modeled independently without information sharing between objects (from a statistical perspective, no strength in parameter sharing), and 2) there are no underlying anchors to induce the remaining representations.\\n\\n[R3 compression techniques] For the purposes of comparison, we selected a method based on hashing [3] as a post-processing step after training the embedding matrix. Specifically, we call Post-SH baseline where we take the trained embedding matrix from a language model trained on PTB or WikiText-103, compress the matrix using the method from [3] (k-means to obtain the anchors + sparse representation the remaining points as in Alg 1 of [3]), and use the reconstructed matrix for evaluation. As performance was not good, we tried to improve the method. In particular, we use k-SVD [4] to solve for a sparse representation instead of using ad-hoc projection methods (eq 8-9) from [3] and report it as an additional baseline which we call Post-SH+k-SVD. 
Comparing to these post-processing methods we demonstrate that end-to-end joint training of sparse embedding matrices is beneficial over post-processing compression.\", \"we_present_these_results_as_follows\": \"\", \"using_awd_lstm_on_ptb_language_modeling\": \"#anchors\\tperplexity\\t#params (M)\\nPost-SH \\t\\t 1,000\\t\\t118.8\\t\\t0.60\\nPost-SH\\t\\t 500 \\t\\t 166.8\\t\\t0.30\\nPost-SH+k-SVD\\t1,000 \\t\\t78.0 \\t\\t0.60\\nPost-SH+k-SVD\\t500 \\t\\t 103.5 \\t\\t0.30\\nANT (ours)\\t\\t1000\\t\\t72.0\\t\\t 0.44\\nANT (ours)\\t\\t500\\t\\t 74.0\\t\\t 0.26\", \"using_awd_lstm_on_wikitext_103_language_modeling\": \"#anchors\\tperplexity\\t#params (M)\\nPost-SH \\t\\t 1,000\\t\\t764.7\\t\\t5.7\\nPost-SH\\t\\t 500 \\t\\t 926.8\\t\\t2.9\\nPost-SH+k-SVD\\t1,000 \\t\\t73.7 \\t\\t5.7\\nPost-SH+k-SVD\\t500 \\t\\t 148.3 \\t\\t2.9\\nANT (ours)\\t\\t1000 \\t\\t39.7 \\t\\t3.1\\nANT (ours) \\t\\t500 \\t\\t 54.2 \\t\\t0.4\\n\\nWe have also updated Tables 2 and 3 in the paper accordingly with these new baselines.\\n\\nThese empirical results show that joint end-to-end training of the sparse embedding matrices is beneficial over post-processing compression, where errors may accumulate in both downstream tasks as well as embedding reconstruction. We observe that the performance improvement of ANT over post-processing compression methods is larger on WikiText-103 as compared to PTB, demonstrating that our end-to-end sparse embedding method is particularly suitable for tasks with large vocabularies. We emphasize that we are the first to incorporate these ideas of anchor points and sparse transformations into modern neural models for discrete objects.\"}",
"{\"title\": \"Reply to Reviewer #1\", \"comment\": \"Thank you for your detailed comments and suggestions for improvements. We answer your questions and provide more experimental comparisons with baselines below.\\n\\n[R1 related work and contributions] While the works you cited do indeed use the concept of \\u2018anchors\\u2019 to represent a space of objects, our main contribution was to demonstrate how these anchors and sparse transformations can be **trained jointly** with neural models as a general input embedding layer, and how we can obtain better sparse representations using domain knowledge. The methods in papers listed by you apply when we have access to some similarity or distance function between the objects (or its proxy like ranked examples). One can definitely apply those methods as a post-processing step to reduce features while preserving the embedding space more or less (More on this in next bullet). We have added a paragraph and cited some of the works in this area. However, it is not clear how to apply those methods with arbitrary functions a deep net classifier or language model is learning. We are the first to present this general approach and demonstrate its effectiveness of a suite of tasks involving representation learning of discrete objects.\\n\\n[R1 baseline comparisons] We implemented the method based on hashing [1] as a post-processing step after training the embedding matrix. Specifically, we call Post-SH baseline where we take the trained embedding matrix from a language model trained on PTB or WikiText-103, compress the matrix using the method from [1] (k-means to obtain the anchors + sparse representation the remaining points as in Alg 1 of [1]), and use the reconstructed matrix for evaluation. The performance was not very good, so we tried to improve the method. In particular, we use k-SVD [2] to solve for a sparse representation instead of using ad-hoc projection methods (eq 8-9) from [1] and report it as an additional baseline which we call Post-SH+k-SVD. Comparing to these post-processing methods we demonstrate that end-to-end joint training of sparse embedding matrices is beneficial over post-processing compression.\", \"we_present_these_results_as_follows\": \"\", \"using_awd_lstm_on_ptb_language_modeling\": \"#anchors\\tperplexity\\t#params (M)\\nPost-SH \\t\\t 1,000\\t\\t118.8\\t\\t0.60\\nPost-SH\\t\\t 500 \\t\\t 166.8\\t\\t0.30\\nPost-SH+k-SVD\\t1,000 \\t\\t78.0 \\t\\t0.60\\nPost-SH+k-SVD\\t500 \\t\\t 103.5 \\t\\t0.30\\nANT (ours)\\t\\t1000\\t\\t72.0\\t\\t 0.44\\nANT (ours)\\t\\t500\\t\\t 74.0\\t\\t 0.26\", \"using_awd_lstm_on_wikitext_103_language_modeling\": \"#anchors\\tperplexity\\t#params (M)\\nPost-SH \\t\\t 1,000\\t\\t764.7\\t\\t5.7\\nPost-SH\\t\\t 500 \\t\\t 926.8\\t\\t2.9\\nPost-SH+k-SVD\\t1,000 \\t\\t73.7 \\t\\t5.7\\nPost-SH+k-SVD\\t500 \\t\\t 148.3 \\t\\t2.9\\nANT (ours)\\t\\t1,000 \\t\\t39.7 \\t\\t3.1\\nANT (ours) \\t\\t500 \\t\\t 54.2 \\t\\t0.4\\n\\nWe have also updated Tables 2 and 3 in the paper accordingly with these new baselines.\\n\\nThese empirical results show that joint end-to-end training of the sparse embedding matrices is beneficial over post-processing compression, where errors may accumulate in both downstream tasks as well as embedding reconstruction. We observe that the performance improvement of ANT over post-processing compression methods is larger on WikiText-103 as compared to PTB, demonstrating that our end-to-end sparse embedding method is **particularly suitable** for tasks with large vocabularies. 
We would like to emphasize that we are the first to incorporate these ideas of anchor points and sparse transformations into modern neural models for discrete objects.\", \"references\": \"[1] Y. Guo et al., Learning to hash with optimized anchor embedding for scalable retrieval, TIP, 2017.\\n[2] Aharon et al., K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation, TSP 2006\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Thank you for your detailed comments and suggestions for improvements. We answer your questions below.\\n\\n[R2 embedding matrix] Why is the large embedding matrix a problem?\\n\\nWhen training neural models for text, documents, URLs, users, queries, or any other problem involving large sets of discrete objects, the embedding matrix scales prohibitively with the number of objects and takes up most of the parameters across the entire model. It is therefore desirable to reduce the size of the embedding matrix for more efficient learning, storage, and inference. The reviews from both of the other reviewers, as well as the long line of related work in this area (section 2 of our paper), indicates that there is much interest in this area and train models with much fewer parameters that work as well as large models.\\n\\n[R2 other methods] Besides the low-rank form proposed, are there any other ways to compress it?\\n\\nSection 2 of our paper describes several lines of related work on compressing the embedding matrix. In summary, these baseline methods span methods based on low-rank approximations, quantization, hashing, vocabulary selection, and codebook learning. We also compared our proposed method with these baselines in our experiments (Tables 1, 2, 3), showing that our method outperforms the existing compression baselines on text classification and language modeling tasks. We would like to emphasize that we are the first to incorporate these ideas of anchor points and sparse transformations into modern neural models for discrete objects, demonstrating strong performance on several tasks involving representation learning of discrete objects.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a general embedding method, Anchor & Transform (ANT), that learns sparse representations of discrete objects by jointly learning a small set of anchor embeddings and a sparse transformation from anchor objects to all the others.\", \"strengths_of_the_paper\": \"1. The paper is well-written and easy to be followed.\\n2. The research problem is of great value to be investigated.\", \"weaknesses_of_the_paper\": \"1. The idea of utilizing anchors to reduce the size of features (in your case, the total embeddings of discrete objects to be inferred) has been widely studied in related fields in computer science. For instance, there are a number of papers in the field of manifold learning using anchors to reduce the size. The inherent connections and relationships between the proposed methods and other algorithms using anchors should be carefully discussed.\\n2. The contribution of the paper seems not significant, as the idea of utilizing anchors to reduce the number of parameters to be inferred has been widely studies in the related work. There are a number of papers utilizing anchors, such as the followings (just list some of them):\\nB. Xu et al., Efficient manifold ranking for image retrieval, in SIGIR 2011.\\nS. Liang et al., Manifold learning for rank aggregation, in WWW 2018.\\nY. Guo et al., Learning to hash with optimized anchor embedding for scalable retrieval, in TIP, 2017.\\nIn the last reference aforementioned above, both anchors and embeddings are jointly taken into account. \\n3. Baselines should include some manifold learning algorithms that take anchors into account.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This manuscript proposed to represent the embedding matrix as a small set of anchor embedding and sparse transformation. The paper is trying to be general-purpose, end-to-end trainable, and able to incorporate domain knowledge. Experimental results show that it is possible to compress the embedding in the proposed way without much loss of accuracy.\\n \\nThe authors propose to find anchor embedding by several methods, such as frequency, clustering, or random sampling. The sparsity on the transform is imposed by L_1. Although I get the basic idea and I am familiar with many of the techniques, it is unclear to me what is the main focus of this paper, and the technical contribution is quite vague. Why is the large embedding matrix a problem? Besides the low-rank form proposed, are there any other ways to compress it? This paper is not well motivated at all. Therefore, I think this manuscript is not ready to publish in its current form.\\n\\n#####\\nThank you for the response! I've increased my score to 3: Weak Reject. Although the idea of compressing the (word) embedding layer using low-rank structures is not new (even with the end-to-end training), the main technical contribution in this paper is to jointly learn the anchor embedding (anchor pre-selected with multiple schemes) and sparse transformation (sparsity achieved via Proximal GD). Moreover, domain knowledge can be incorporated by adding specialized constraints such as orthogonality and selective penalization. \\n\\nAt first glance, the idea presented in this paper seems not new, and I doubt many people are doing similar stuffs already in practice. I find the explanations on the technical points in Appendix C helpful. The empirical study in this paper looks strong. The authors considered experiments in text classification and language modeling with a number of baselines, which demonstrates the advantages of anchor and joint training in the proposed way. This paper presents several useful heuristics around, but I share the concern with other reviewers about whether the main point is compelling enough, given the existing body of work along with this line.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper describes a \\\"layer\\\" that aims at producing embeddings for discrete objects by using fewer parameters than classical embeddings layers. Indeed, the model proposes, instead of learning an embedding matrix of size VxN, to learn a matrix of embeddings of anchors (AxN) and a transformation matrix (VxA) such that the embedding of any object can be found by multiplying A with T. On top of that, they propose different regularization techniques to improve the quality of the learned embeddings, and particularly a proximal gradient method over a L1 normalization on T to reduce the number of parameters. They propose also different ways to initialize A and also a method for incorporating a priori information (e.g knowledge) into the model. They evaluate this model on different tasks: text classification and language modeling and show that they can achieve good performance while using fewer parameters than Sota methods.\\n\\nFirst of all, the paper is well written, and the description is very detailed and understandable. It was a pleasure to read such a paper! \\n\\nOne point which is unclear is the interest of using such a method, and more precisely in which cases, this method can be useful. Indeed, the overall number of parameters of ANT is AxN + VxA (N being the size of the embeddings, A the number of anchors and V the size of the vocabulary) while classical methods are VxN parameters. Said otherwise, we need to have V<N to really have less parameters to train in the model -- knowing that classical embeddings spaces size is usually between 256 and 1024, it means that we have to target a task where the number of anchors is quite low. I agree that the sparsity term on T is here to encourage to decrease the number of parameters but first, the same sparsity could be applied on the original VxN embedding matrix, and also, even if, at the end, the T matrix is sparse, during learning one has to maintain a large matrix in memory. I would like the authors to discuss more on this point which is crucial? Particularly, I am not sure to understand what the #Emb value is in the table (AxN + AxV or just AxN), and how to compare the models. (There is a discussion in Section 3, but the argumentation does not explain why having so many parameters at train time is not a problem). Also, since this is the crucial point in the paper, I would be interested in having a discussion about the use of neural models compression techniques after learning that could also \\\"do the job\\\" (even if they are not trained end-to-end). \\n\\nOne other remark concerns the different \\\"components\\\" added into the model (e.g sparsity, orthogonality, Relu...). It is difficult to measure the interest of each of them, and I would recommend the authors to provide an ablation study to make the effect of the different choices more understandable by the reader.\\n\\nThe notion of anchors also is misleading since it gives the impression that the A matrix will store embeddings for particular objects, while there is no constraint of that type. Each line of the A matrix is an embedding, but this embedding is not associated with one of the objects seen at train time (no direct mapping from anchors to words in the vocabulary). 
This has to be made more clear at the beginning of the paper. \\n\\nConcerning the initialization of A by K-means, it assumes that the space of objects has a particular metric. The authors say that this metric can come from a pretrained embedding space, but in that case, the problem in the number of parameters (which is the main justification of this work) is invalid (i.e if you already have an embedding matrix, then just let us fine-tune it). Could you clarify ? \\n\\nThe fact that the method would allow incorporating knowledge is certainly the most interesting point. The way it is done has to be better explained (I do not understand why positive pairs are taken into account by not enforcing sparsity on T at this particular point, the way negative pairs are handled seem more natural)\\n\\nThe paper is interesting and proposes a new simple model that could be used to keep good performance while reducing the number of parameters of the final model. Discussions have to be added to discuss the relevance of the approach since it still needs a large number of parameters at train time, and the role of each component could be studied more in depth.\"}"
]
} |
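To make the mechanics described in the ANT rebuttals above concrete, here is a minimal NumPy/SciPy sketch of the pieces they mention: a proximal (soft-thresholding) step that keeps T sparse and non-negative, sparse storage of T, and the size(A) + nnz(T) parameter count. All shapes, thresholds, and hyperparameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy import sparse

def prox_l1_nonneg(T, lam, lr):
    # Proximal step for an L1 penalty plus a non-negativity constraint:
    # shrink every entry by lr * lam and clip at zero.
    return np.maximum(T - lr * lam, 0.0)

# Illustrative sizes: |V| = 10000 objects, |A| = 100 anchors, dim N = 64.
rng = np.random.default_rng(0)
A_emb = rng.standard_normal((100, 64))                        # dense anchors
T = np.maximum(rng.standard_normal((10000, 100)) - 2.5, 0.0)  # mostly zero

# After each gradient step on T, apply the proximal update.
T = prox_l1_nonneg(T, lam=1e-3, lr=1e-2)

# Store only the non-zero entries of T; row slicing stays cheap in CSR.
T_sparse = sparse.csr_matrix(T)
num_params = A_emb.size + T_sparse.nnz                        # size(A) + nnz(T)

# The embedding for object i is its sparse row of T times the anchor matrix.
E_i = T_sparse[5].dot(A_emb)                                  # shape (1, 64)
```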
Skg2pkHFwS | Emergence of Collective Policies Inside Simulations with Biased Representations | [
"Jooyeon Kim",
"Alice Oh"
] | We consider a setting where biases are involved when agents internalise an environment. Agents have different biases, all of which result in imperfect evidence for taking optimal actions. Throughout the interactions, each agent asynchronously internalises their own predictive model of the environment and forms a virtual simulation within which the agent plays trials of the episodes in their entirety. In this research, we focus on developing a collective policy trained solely inside agents' simulations, which can then be transferred to the real-world environment. The key idea is to let agents imagine together: they take turns hosting virtual episodes within which all agents participate and interact with their own biased representations. Since agents' biases vary, the internal simulations complement one another's shortcomings, and the collective policy developed while sequentially visiting them benefits from this diversity. In our experiment, the collective policies consistently achieve significantly higher returns than the best individually trained policies. | [
"collective policy",
"biased representation",
"model-based RL",
"simulation",
"imagination",
"virtual environment"
] | Reject | https://openreview.net/pdf?id=Skg2pkHFwS | https://openreview.net/forum?id=Skg2pkHFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oPQZwXLK0",
"H1eTRv42sH",
"rJlsPbo0KH",
"Bkg92xTnFr",
"HJlU5046OH"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798738020,
1573828565204,
1571889506662,
1571766450135,
1570750094018
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2003/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2003/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2003/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2003/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents an ensemble method for reinforcement learning. The method trains an ensemble of transition and reward models. Each element of this ensemble has a different view of the data (for example, ablated observation pixels) and a different latent space for its models. A single (collective) policy is then trained, by learning from trajectories generated from each of the models in the ensemble. The collective policy makes direct use of the latent spaces and models in the ensemble by means of a translator that maps one latent space into all the other latent spaces, and an aggregator that combines all the model outputs. The method is evaluated on the CarRacing and VizDoom environments.\\n\\nThe reviewers raised several concerns about the paper. The evaluations were not convincing with artificially weak baselines and only worked well in one of the two tested environments (reviewer 2). The paper does not adequately connect to related work on model-based RL (reviewer 1 and 2). The paper does not motivate its artificial setting (reviewer 2 and 1). The paper's presentation lacks clarity from using non-standard terminology and notation without adequate explanation (reviewer 1 and 3). Technical aspects of the translator component were also unclear to multiple reviewers (reviewers 1, 2 and 3). The authors found the review comments to be helpful for future work, but provided no additional clarifications.\\n\\nThe paper is not ready for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you all the reviewers for your comments.\", \"comment\": \"We appreciate your time and effort to review our paper. In particular, thank you for the comments about the contrived setting, missing crucial citations, and the translator component. Those comments will be very helpful when we improve our research in the future.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new approach to conducting RL: it proposes to first train a collection of individual agents independently, and then exploit them to learn one collective policy. Each agent has an observation space that is differently impaired, such as by blocking out some specific set of pixels in its image observations. Each such agent learns its own dynamics and reward models based on its own observations, and goes on to train a policy based on simulated rollouts using these learned models. In the second phase, a collective policy is trained as follows: every training episode happens inside a different \\\"host\\\" agent's simulation, and the collective policy is thus effectively trained on this mixture of variously inaccurate simulations. Since different individuals have different observations, the collective policy receives an input observation that is the aggregate of all agents' individual observations. Throughout, this paper builds on top of the \\\"world models\\\" approach proposed in Ha & Schmidhuber 2018.\\n\\nMost sections of this paper are well-written and the high-level ideas are novel and interesting. My main issues with the paper are listed below, in rough order of priority:\\n(1) I find the experimental evaluations short of convincing.\\n * Baselines: The proposed collective policy is evaluated against the individual policies of agents in its population, which is a very weak baseline since the individual agents are artificially impaired. It is also evaluated against an upper bound: a policy trained directly in the real world, and shown to be only slightly weaker in performance. I would suggest the following additional baseline:\\n - individual agents without observation impairments, trained in simulation (essentially the world model paper this approach builds on). [\\\"world models\\\"]\\n - the average of the individual policies [\\\"policy ensemble''].\\n - a policy trained with a model that is a composite average of the individual world models [\\\"model ensemble\\\"]. This is the in the spirit of prior work on model ensembling, such as Chua et al 2018, \\\" ... handful of trials.\\\"\\n * Environments: The results presented so far indicate the proposed approach works on one environment (CarRacing) and does not on another (VizDoom). I would like to see experiments on more environments to figure out whether CarRacing is the exception or the rule.\\n * choice of observations for individual agents: in a general setting, how are these to be engineered? \\n\\n(2) It does not sufficiently acknowledge or clarify connections to other work in the field, such as on model bootstrapping in model ensembles (e.g. Chua et al 2018, \\\"... handful of trials.\\\") or evaluate against them. In particular, I think the proposed approach is a novel way to achieve model bootstrapping, plus some additional tweaks: rather than train different models on different subsets of training experience, it trains different models on different feature views of the same training episodes. \\n\\n(3) It does not sufficiently motivate its setting: when is it true in realistic settings that it would make sense for different agents in an environment to be artificially impaired in different ways? 
Experiments are only conducted in contrived settings where portions of the environment are deliberately removed from the observation for different agents. I also have a somewhat related suggestion: perhaps future versions of the paper might focus on using different modalities (such as RGB images and depth) for the different agents.\\n\\n(4) From my understanding, there seems to be a tradeoff between how much overlap there exists between different agents' observations and how translatable different agents' views are to each other. In other words, the method requires the translator T to take one view of an agent and transform it into (a feature representation of) the view of another agent. This seems ill-defined in general settings. \\n\\n(5) Related to the above, how is the aggregated vector [z_{ct}, h_{ct}] generated for the collective policy when it is eventually evaluated in the real world? Are the features computed separately by each agent before aggregation, or is one agent selected and the translator T used again? If the former, then does this induce a domain shift between training time when the policy is trained on predicted features from the translator, and testing time when the policy sees true features corresponding to different agents?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a model-based RL architecture based on World Models which additionally makes use of an ensemble of models. Specifically, each model in the ensemble is independently trained models and has different amounts and types of occlusions (i.e. part of the image cropped out or a small piece of the image is magnified). The ensemble of models are used to train what is referred to as a \\u201ccollective\\u201d policy in the CarRacing and VizDoom environments. The collective policy achieves better performance than policies trained using the individual models and achieves higher data efficiency than model-free RL.\\n\\nWhile I think the paper tackles an interesting question---how to leverage multiple models of the world that may be trained on different data and with different capabilities---I unfortunately do not feel that this paper is ready for publication at ICLR, and thus recommend rejection. My justifications are that (1) the paper suffers from clarity issues and several cases makes claims or statements which are incorrect, (2) the proposed evaluation is somewhat contrived and the method does not seem to generalize well to other domains, and (3) there is not much engagement with the rest of the literature on model-based RL. I think that addressing #1 and #3 alone would substantially improve the paper. An additional change that would make the paper much more compelling would be to train the models on more ecologically valid observations, such as from multiple modalities or different camera angles (see below for discussion).\\n\\n\\nMy first concern with the paper has to do with clarity. Overall, I found the paper quite difficult to read as it uses a lot of unusual terminology to explain what it does. For example, the paper uses \\u201crule component\\u201d to refer to what would normally be a \\u201creward function\\u201d. Similarly, although this is an ensemble method, the term \\u201censemble\\u201d is never used even once. The word \\u201cbias\\u201d is also used in a way which is different from how it is normally used---here it seems to refer to the fact that the agents see different things, whereas in ML bias typically has a precise meaning in terms of the bias-variance tradeoff. I also noticed several places throughout the paper where citations or related work seem to be misunderstood:\\n \\n- 1st page, 2nd paragraph: this is not what Griffiths (2010) is about, and is not really an appropriate use of the term \\u201cinductive bias\\u201d\\n- 2nd page, contribution 1: the paper claims that it is not dealing with a partially observable setting, but this is exactly what the paper is doing. Specifically, each model in the ensemble receives different partially observed observations from the environment. 
Although in traditional RL partially observed usually means \\u201cone agent, one observation of a state\\u201d, there is nothing limiting the general formulation of partial observability to cover the case of \\u201cmultiple agents, multiple observations of the same state\\u201d.\\n- 8th page, line 2: Lake et al (2017) is not really about \\u201clearning to think\\u201d.\\n- Section 4, last paragraph: this seems to be not very relevant to this paper, as the present paper is not about a multiagent system or about graph-based representations.\\n \\nThere are some additional issues with clarity of the model explanation; for example, it\\u2019s not clear how the translator component is trained (the paper states that it is trained with meta-learning, but this is vague).\\n \\nMy second concern is that the approach is somewhat contrived and does not seem to generalize well to other domains. In particular, under what circumstances would you want to train multiple models that receive inputs that are missing different pieces? I can see the appeal of learning different models in some cases (e.g. different camera angles, or different sensory modalities) but these have not been evaluated here which makes the whole thing seem a bit arbitrary. Moreover, while it seems to work ok in the CarRacing domain (though the ensemble-trained policy does not perform as well as a non-ensemble agent which is trained with full observations) it does not seem to work well in the VizDoom environment. On the basis of these two issues it is not at all clear to me how well the proposed method will work in more ecologically valid settings requiring multiple models.\\n \\nFinally, my third concern is that the engagement with the rest of the model-based RL literature is lacking. As mentioned above, this method is an ensemble method but there is no discussion of other model-based RL ensemble methods (e.g. [1] and [2]). Similarly, the paper dismisses other work on partial observations but this work is quite relevant to the present paper which does indeed include partial observations (see above). While there is a lot of text spent discussing other work such as literature from cognitive science, this is not really that relevant to the present paper and that space would be better spent discussing related work from model-based RL.\", \"some_additional_minor_comments\": \"It would be interesting to see a comparison between the ensemble policy trained with models that always receive the same (full) observations versus the present approach (where they receive partial observations). This might still provide some benefit as the models will still result in slightly different latent representations due to different initialization and training data, so it would be interesting to see whether this helps as well even if the observations are the same.\\n\\nWhat does \\u201cmodel tr\\u201d and \\u201cpolicy lr\\u201d mean in Table 1?\\n\\nIn the various tables and figures, how are the error bars computed? Are they over episodes, or agent seeds? In general, how many seeds were used to perform evaluations? 
(Ideally it would be at least 3).\\n\\nPage 2, section 2.1: \\u201cV and M use sequential image frame data from multiple rollouts from a random policy and apply variational inference and backpropagation algorithm for inference and learning.\\u201d \\u2192 this is imprecise; they do not apply both variational inference and backpropagation; backpropagation is used to implement variational inference.\\n\\nI found the results of Figure 3d quite interesting, though am wondering if you controlled for the number of evaluations of each model? E.g. if you only train the policy on one model at a time versus training them on all models at a time, do you train for longer in the first case so that the policy ultimately receives the same amount of information from all the models?\\n\\n[1] Kurutach, T., Clavera, I., Duan, Y., Tamar, A., & Abbeel, P. (2018). Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592.\\n[2] Chua, K., Calandra, R., McAllister, R., & Levine, S. (2018). Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems (pp. 4754-4765).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies the settings where multiple agents each act independently in different copies of an environment, without interacting with each other. Each agent uses model-based learning, learning a representation of the world, then learning a controller against the learned world model (learned with an evolutionary strategy of CMA-ES). Different agents will naturally learn world models with different biases - this paper proposes a method that learns a collective policy from the biased models of the different agents, by letting each agent observe imagined rollouts from other agents' world models, training on that data. They find this improves performance over learning within each agent's individual model.\\n\\nFrom a style perspective, I found this paper hard to read. Many terms are immediately abbreviated into non-conventional single letter abbreviations (C for controller, T for Translator). This extends to the results table - it's quite difficult to understand the results in Table 1 and Table 2 without a careful reading to find what what the abbreviations mean, and I still don't understand what the Overlap (%) and Cov. (%) columns mean in Table 2, despite reading the paragraph explaining it several times. The analysis section also makes a distinction between delusional agents and cheating agents, which I also didn't understand. The GIFs provided demonstrate that the model moves towards the occluded region, and this is described as a novel delusion that hasn't been observed before, distinct from the cheating behavior from Ha and Schmidhuber's world models paper. I don't understand why these are different - in both cases, the agent travels to a region of space where no fireballs appear, and since reward is defined as not getting hit by a fireball, this is optimal behavior for the imperfect world model.\\n\\nTo created biased representations, the authors apply cutout or zoom in on specific parts of the state. I found this surprising - shouldn't different models naturally lead to biased representation by themselves? The VAEs used should naturally be inaccurate enough to lead to different imperfections.\\n\\nThere are very few details for the translator T, that translates latent z_i from world model M_i into the correspond latent x_j for world model M_j. This seems like a very important detail of the method, I could see different translators T leading to very different results, and would have like to see ablations for how different Ts affected the training procedure.\\n\\nOverall, the question I found myself asking the entire paper was, \\\"what does this add on top of existing work on training against an ensemble of world models?\\\" This question has been studied in previous work (a quick search turned up https://arxiv.org/abs/1809.05214 and https://arxiv.org/abs/1802.10592). In these prior works, instead of learning a translator T between representations, we decode an imagined rollout from the VAE for each world model, in the original high-dimensional input state (images, in the case of this paper), and simply train on that data instead. This does not require learning O(N^2) translator models translating z_i to z_j for each pair of world models. 
I would have liked to see a comparison to this approach, because right now I don't see why you would want to learn translators T in the first place. Simply aggregating rollouts across models would still lead to individually biased world models that can be aggregated to learn a single controller that performs better than any single model.\", \"edit\": \"have read author reply, no changes to review.\"}"
]
} |
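
Note on the record above (Skg2pkHFwS): the training scheme the abstract and reviews debate — one shared controller improved across virtual episodes hosted, in turn, by several independently learned and differently biased world models — can be illustrated with a small toy sketch. Everything below is an assumption made for illustration: linear latent dynamics stand in for the paper's VAE+RNN world models, a crude finite-difference update stands in for CMA-ES, and the translator/aggregator components the reviewers ask about are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_biased_model(mask):
    # Toy stand-in for one agent's learned world model: linear latent
    # dynamics whose reward is computed through an occlusion `mask`
    # (the agent's bias), so each model is wrong in a different way.
    A = rng.normal(scale=0.1, size=(4, 4))
    def step(z, a):
        z_next = z @ A + 0.1 * a
        reward = -np.linalg.norm(z_next * mask)  # biased view of the state
        return z_next, reward
    return step

def train_collective_policy(models, theta, episodes=300, horizon=20, lr=0.05):
    # Agents "take turns hosting": every virtual episode runs inside a
    # different model's simulation, and one shared linear policy is
    # improved with a crude finite-difference update on the return.
    for ep in range(episodes):
        host = models[ep % len(models)]
        def episode_return(th):
            z, total = np.ones(4), 0.0
            for _ in range(horizon):
                z, r = host(z, np.tanh(z @ th))
                total += r
            return total
        d = rng.normal(size=theta.shape)  # random search direction
        theta = theta + lr * (episode_return(theta + 0.1 * d)
                              - episode_return(theta - 0.1 * d)) * d
    return theta

models = [make_biased_model(np.array([1.0, 1.0, 0.0, 0.0])),
          make_biased_model(np.array([0.0, 0.0, 1.0, 1.0]))]
theta = train_collective_policy(models, 0.1 * rng.normal(size=(4, 4)))
```

The design point the sketch preserves is only the host rotation: because each model is wrong in a different way, the shared policy cannot overfit any single model's delusion.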
rke3TJrtPS | Projection-Based Constrained Policy Optimization | [
"Tsung-Yen Yang",
"Justinian Rosca",
"Karthik Narasimhan",
"Peter J. Ramadge"
] | We consider the problem of learning control policies that optimize a reward function while satisfying constraints due to considerations of safety, fairness, or other costs. We propose a new algorithm - Projection-Based Constrained Policy Optimization (PCPO), an iterative method for optimizing policies in a two-step process - the first step performs an unconstrained update while the second step reconciles the constraint violation by projecting the policy back onto the constraint set. We theoretically analyze PCPO and provide a lower bound on reward improvement, as well as an upper bound on constraint violation for each policy update. We further characterize the convergence of PCPO with projection based on two different metrics - L2 norm and Kullback-Leibler divergence. Our empirical results over several control tasks demonstrate that our algorithm achieves superior performance, averaging more than 3.5 times less constraint violation and around 15% higher reward compared to state-of-the-art methods. | [
"Reinforcement learning with constraints",
"Safe reinforcement learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rke3TJrtPS | https://openreview.net/forum?id=rke3TJrtPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"cOtUbdqMZO",
"HJx656pYjH",
"rkeuT36KjH",
"SyemnsTFoH",
"S1eaRqTtjH",
"BkgdO3S0FS",
"B1xGfhvptS",
"rJxNx-bBKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737986,
1573670293399,
1573670079568,
1573669802683,
1573669589381,
1571867759623,
1571810313898,
1571258603794
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2002/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2002/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2002/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2002/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2002/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2002/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2002/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a new algorithm for solving constrained MDPs called Projection Based Constrained Policy Optimization. Compared to CPO, it projects the solution back to the feasible region after each step, which results in improvements on some of the tasks considered.\\n\\nThe problem addressed is relevant, as many tasks could have important constraints e.g. concerning fairness or safety. \\n\\nThe method is supported through theory and empirical results. It is great to have theoretical bounds on the policy improvement and constraint violation of the algorithm, although they only apply to the intractable version of the algorithm (another approximate algorithm is proposed that is used in practice). The experimental evidence is a bit mixed, with the best of the proposed projections (based on the KL approach) sometimes beating CPO but also sometimes being beaten by it, both on the obtained reward and on constraint satisfaction. \\n\\nThe method only considers a single constraint. I'm not sure how trivial it would be to add more than one constraint. The reviewers also mention that the paper does not implement TRPO as in the original paper, as in the original paper the step size in the direction of the natural gradient is refined with a line search if the original step size (calculated using the quadratic expansion of the expected KL) does violate the original constraints. (Line search on the constraint as mentioned by the authors would be a different issue). Futhermore, the quadratic expansion of the KL is symmetric around the current policy in parameter space. This means that starting from a feasible solution the trust region should always overlap with the constraint set when feasibility is maintained, going somewhat agains the argument for PCPO as opposed to CPO brought up by the authors in the discussion with R2. I would also show this symmetry in illustrations such as Fig 1 to aid understanding.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General Reponse to All Reviewers\", \"comment\": \"We thank all the reviewers for the valuable feedback and constructive suggestions. We have updated a version of our paper, and provided clarification.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank Reviewer #3 for the helpful and insightful feedback. We have updated a version based on your suggestions. We provide answers to individual questions below.\\n\\nReviewer #3\\u2019s concern #1: SQP for tractable version\", \"response\": \"We include Lemma D.1 for completeness and the proof of Theorem D.2. As you suggest, we have updated Lemma D.1.\", \"smaller_issues\": \"#1: Mention that the proofs are in Appendix.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank Reviewer #2 for the helpful and insightful feedback. We have updated a version based on your suggestions. We provide answers to individual questions below.\\n\\nReviewer #2\\u2019s comment #1: Incremental work\", \"response\": \"We do not use line search. Instead our algorithm reconciles the constraint violation (if any) by directly projecting the policy back onto the constraint set. This allows us to perform efficient updates in learning constraint-satisfying policies while not violating the constraints. We agree that we could use line search but it is not necessary.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank Reviewer #1 for the helpful and insightful feedback. We have updated a version based on your suggestions. We provide answers to individual questions below.\\n\\nReviewer #1\\u2019s comment #1: Comparison between theorem 3.1 and theorem 3.2, and the proof\", \"response\": \"We agree that our algorithm has two approximations, which adds computation. However, we use the following approach to reduce this computational cost while ensuring safe policy improvement.\\n(1) We only compute the Fisher information matrix once. The reason we can do that is the step size is small. Hence, we can reuse the Fisher matrix of the reward improvement step also in the KL divergence projection step. \\nThe experimental results imply that this approach does not hurt the performance of PCPO. We have updated the paper in Section 6 to illustrate this approach.\\n\\n(2) We use the conjugate gradient method to avoid taking the inverse of the Fisher matrix. This further reduces the computational cost. The convergence of the conjugate gradient method is controlled by the condition number of the Fisher matrix: the larger the condition number, the more epochs needed to obtain an accurate approximation. Hence there is a tradeoff between the computational cost and the approximation accuracy. Please see Appendix F for an example of the approximation error and the computational cost of the conjugate gradient method.\\n\\nReviewer #1\\u2019s comment #3: Presentation of the experimental results section\", \"reponse\": \"Thank you for your suggestion. We have updated the paper in Section 6.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a technique to handle a certain type of constraints involved in Markov Decision Processes (MDP). The problem is well-motivated, and according to the authors, there is not much relevant work. The authors compare with the competing methods that they think are most appropriate. The numerical experiments seem to show superiority of their method in most of the cases. The proposed method has 4 main variants: (1) define projection in terms of Euclidean distance or (2) KL-divergence, and (a) solve the projection problem exactly (usually intractable) or (b) solve a Taylor-expanded variant (so there are variants 1a,1b,2a,2b).\\n\\nUnfortunately, I do not feel well-qualified enough in the MDP literature to comment on the novelty, and appropriateness of comparisons. For now, I will take the authors' word, and rely on other reviewers. The motivation of necessity of including constraints did seem persuasive to me.\\n\\nOverall, this seems like a nice contribution based on the importance of the problem and the good experimental results, hence I lean toward accepting. I do have some concerns that I mention below (chiefly that the theory presented is a bit of a red herring), but it may be that the overall novelty/contribution outweight these concerns:\\n\\n(1) Concern 1: the theorems (Thm 3.1, 3.2) apply to the intractable version, and so are not relevant to the actual tractable version of the algorithm. These are nice motivations, but ultimately we're left with a heuristic method. Perhaps you can borrow ideas from the SQP literature?\\n\\n(2) Concern 2: Fig 3(e), \\\"Grid\\\" data, your algo with KL projection does worse in Reward than TRPO, which is not unexpected since TRPO ignores constraints. But the lower plot shows that TRPO actually outperforms your KL projection-based algorithm even in terms of constraint! By trying to respect the constraint, your algorithm has made things worse. Can you explain this phenomenon?\\n\\n(3) Concern 3: Thm 4.1 and Thm D.2, I don't know what you're proving because I don't know what f is. Please relate it to your problem and to your updates (e.g., Algo 1). If you are talking about just minimizing f(x) with convex quadratic constraints, then I think you are re-inventing the wheel (overall, your proof looks like you are re-inventing the wheel -- doesn't everything follow from the fact that projections operators are non-expansive? If you scale with H (and assume it is positive definite) then you're still working in a Hilbert space, just with a non-Euclidean norm, and so you can re-use existing standard results on the convergence of projected gradient methods in Hilbert space to stationary points.\", \"smaller_issues\": [\"The appendix was never referenced in the main paper. At the appropriate places in the main text, please mention that the proofs are in the appendix, and mention appendix C when discussing the PCPO update.\", \"For the PCPO update, the theorem needs to mention that H is positive definite. 
Using H as the Fisher Information matrix automatically guarantees it is positive semi-definite (please mention that), so the problem is at least convex, and then you assume it is invertible to get a unique solution.\", \"It wasn't obvious to me when the constraint set is closed and convex. Please discuss.\", \"When H is ill-conditioned, why not regularize with the identity? You switch between H and the identity, but why not make this more continuous, and look at H + mu I for small values of mu?\", \"Lemma D.1 is trivial if you use the definition of normal cones and subgradients. You also don't need to exclude theta from the set C, since if it is in the set C, then the quadratic term will be zero, hence less than/equal to zero.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new algorithm - Projection based Constrained Policy Optimization, that is able to learn policies with constraints, i.e., for CMDPs. The algorithm consists of two stages: first an unconstrained update for maximizing reward, and the second step for projecting the policy back to the constraint set. The authors provide analysis in terms of bounds for reward improvement and constraint violation. The authors characterize the convergence with two projection metrics: KL divergence and L2 norm. The new algorithm is tested on four control tasks: two mujoco environments with safety constraints, and two traffic management tasks, where it outperforms the CPO and lagrangian based approaches.\\n\\n\\nThis is an interesting work with impressive results. However, this work still has a few components that need to be addressed and further clarification on novelty. Given these clarifications in an author's response, I would be willing to increase the score.\\n\\n\\n1) Incremental work\\nThe work extends the CPO [1] with a different update rule. Instead of having the update rule of CPO that does reward maximization and constraint satisfaction in the same step, the proposed update does that in two steps. The theory and the algorithm stem directly from the original CPO work, including appendix A-C. The authors claim that another benefit of PCPO is that it requires no hyper-tuning, but same is true for CPO (in the sense that they both don\\u2019t need Lagrange multiplier) . \\n\\n\\n2) The utility of the performance bounds and fixed point\\nThe performance bounds depend on the variable $\\\\delta$, which is never explained. I\\u2019m assuming it is the same $\\\\delta$ that is used in Lemma A.1. In that case, Theorem 4.1 tells about the existence of the fixed point of the algorithm under the assumptions specified Sec 4 (smooth objective function, twice differentiable, Hessian is positive definite). There is no discussion regarding the comparison of the fixed-point of the algorithm with the optimal value function/policies. Also, all the analysis is with Hessian, whereas in the algorithm the Hessian is approximated via conjugate descent. \\n\\n\\n3) How is line-search eliminated? \\nOne of the benefits of the proposed algorithm is that it doesn\\u2019t require line search (Sec 1). The underlying algorithm is still based on monotonic policy improvement theory in general, and more specifically on TRPO, so it should still have line-search as part of the optimization procedure.\", \"references\": \"[1] Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In Proceedings of International Conference on Machine Learning, pp. 22\\u201331, 2017.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary :\\n\\nThis paper introduces a constrained policy optimization algorithm by introducing a two-step optimization process, where policies that do not satisfy the constraint can be projected back into the constraint set. The proposed PCPO algorithm is theoretically analyzed to provide an upper bound on the constraint violation. The proposed constrained policy optimization algorithm is shown to be useful on a range of control tasks, satisfying constraints or avoiding constraint violation significantly better than the compared baselines.\", \"comments_and_questions\": [\"The key idea is to propose an approach to avoid constraint violation in a constrained policy gradient method, where the constraint violation is avoided by first projecting the policy into the constrained set and then choosing a policy from within this set that is guaranteed to satisfy constraints.\", \"Existing TRPO method already proposes a constrained optimization method (equation 2 as discussed), where the constraint is within the policy changes. This paper further introduces additional constraints (in the form of expected cost or safety measures) where the intermediary policy from TRPO is further projected into a constraint set and the overall policy improvement is based on the constraint satisfied between the intermediary policy and the improved policy. In other words, there are two levels of constraint satisfaction that is required now for the overall PCPO update.\", \"The authors propose two separate distance measures for the secondary constraint update, based on the L2 and KL divergence.\", \"I am not sure of the significance of theorem 3.2 in comparison of theorem 3.1? Proof of theorem 3.1 is easy to follow from the appendix, and as noted, follows from Achiam et al., 2017\", \"Section 4 discusses similar approximations required as in TRPO for approximating the KL divergence constraint. Similar approximations are requires as in TRPO, with an additional second-order approximation for the KL projection step. The reward improvement step follows similar approximations as in TRPO, and the projection step requires Hessian approximations considering the KL divergence approximation.\", \"This seems to be the main bottleneck of the approach? The fact it requires two approximations, mainly for the projection step seems to add further complexity to the propoed approach? The trade-off therefore is to what extent this approximation is required for safe policy improvement in a constrained PO problem versus computational efficiency?\", \"Two baselines are mainly used for comparison of results, mainly CPO and PDO, both of which are constrained policy optimization approaches. The experimental results section requires more clarity and ease of presentation, as it is a bit difficult to follow what the results are trying to show exactly. However, the main conclusion is that PCPO significantly satisfies constraints in all the propoed benchmarks compared to the baselines. 
The authors compare to the sota baselines too for evaluating the significance of their approach.\", \"Overall, I think the paper has useful merits - although it seems to be a computational ly challenging approach requiring second order approximations for both the KL terms (reward improvement and project step). It may be useful to see if there is a computationally simpler, PPO form of the variant that can be introduced for this proposed approach. I think it is useful to introduce such policy optimization methods satisfying constraints - and the authors in this work propose a simple approach based on projecting the policies into the constraint set, and solving the overall problem with convex optimization tools. Experimental results are also evaluated with standard baselines, demonstrating the significance of the approach.\"]}"
]
} |
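
Note on the record above (rke3TJrtPS): the two-step PCPO update described in the abstract and reviews is compact enough to sketch. The formulas follow the stated scheme — a TRPO-style reward step inside a KL trust region, then a projection onto the linearized constraint set under an L2 or Fisher (KL) metric — but the explicit matrix inverse, the toy inputs, and all names are illustrative assumptions; the paper approximates these solves with conjugate gradient and estimates g, a, b, and H from rollouts.

```python
import numpy as np

def pcpo_update(theta, g, a, b, H, delta, metric="KL"):
    # Step 1 (reward improvement): TRPO-style step maximizing the linearized
    # reward subject to a KL trust region of radius delta, where g is the
    # reward gradient and H the Fisher information matrix.
    H_inv = np.linalg.inv(H)  # sketch only; the paper uses conjugate gradient
    theta_mid = theta + np.sqrt(2.0 * delta / (g @ H_inv @ g)) * (H_inv @ g)

    # Step 2 (projection): project theta_mid back onto the linearized
    # constraint set {x : a^T (x - theta) + b <= 0}, where a is the cost
    # gradient and b the current constraint violation J_C(pi_k) - h.
    # metric="L2" projects in Euclidean norm; metric="KL" uses H instead.
    L_inv = H_inv if metric == "KL" else np.eye(len(theta))
    violation = a @ (theta_mid - theta) + b
    coef = max(0.0, violation / (a @ L_inv @ a))  # zero if already feasible
    return theta_mid - coef * (L_inv @ a)

# Toy usage with made-up gradients (not from an actual policy).
rng = np.random.default_rng(0)
theta = np.zeros(3)
g, a = rng.normal(size=3), rng.normal(size=3)
theta_next = pcpo_update(theta, g, a, b=0.2, H=np.eye(3), delta=0.01)
```

The split is the point the reviews argue over: because infeasibility is repaired by a closed-form projection rather than folded into one step (as in CPO) or handled by line search (as in TRPO), the update stays well-defined even when the trust region and constraint set do not intersect.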
BJliakStvH | Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning | [
"Dexter R.R. Scobee",
"S. Shankar Sastry"
] | While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent’s policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints. In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior. We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent’s behavior. Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent’s demonstrations given our knowledge of an MDP. Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations. We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle. | [
"learning from demonstration",
"inverse reinforcement learning",
"constraint inference"
] | Accept (Spotlight) | https://openreview.net/pdf?id=BJliakStvH | https://openreview.net/forum?id=BJliakStvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JVJayhEGu3",
"H1e1ksZioB",
"HJxa7qWisr",
"HygPYt-isH",
"B1x8n_bior",
"HkeSgyVa9r",
"S1xYIh4v9r",
"rJlIrVJk9B",
"SkxBRWYCFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737958,
1573751510775,
1573751333319,
1573751166527,
1573750958054,
1572843244852,
1572453457127,
1571906621891,
1571881421263
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2000/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2000/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2000/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2000/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2000/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2000/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2000/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2000/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper introduces a novel way of doing IRL based on learning constraints. The topic of IRL is an important one in RL and the approach introduced is interesting and forms a fundamental contribution that could lead to relevant follow-up work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your review and for pointing out that this is an interesting problem to address! We address your question about applications below, and we\\u2019ve added a bit of these ideas to the introduction of the updated version as well.\\n\\nTo continue with the car example introduced in the text, it could be possible to automatically learn about changes in road accessibility, such as temporary obstacles or detours, when other cars unexpectedly avoid an area and take seemingly sub-optimal routes. Moreover, in driving tasks, collisions are an unacceptable behavior, so we argue that they are better modeled by hard constraints than by soft penalties in the reward function. More broadly speaking, constraints are an important part of defining safe behavior in safety-critical systems. For instance, we would certainly want a robot learning to assist with surgical tasks to be able to learn that certain actions must always be avoided.\\n\\nAnother specific application that we\\u2019re interested in exploring is in automated diagnosis of human motor impairments. The nominal model can be based on the range of motion / abilities of healthy individuals (or a baseline from a patient), and demonstrations can be provided by patients over time. Our method could then detect a decline in motor abilities (i.e. the presence of new constraints) and assist clinicians in inferring if an individual\\u2019s motor impairment is more likely to be caused by limited joint ranges (state constraint) or loss of strength (action constraint).\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for your detailed and constructive feedback! Discussing these points will help us clarify and improve the paper. You seem to raise two main points: looking for more detail in the problem formulation and looking for expanded discussion on the limitations of the approach.\", \"to_your_point_on_the_problem_formulation\": \"you are correct that our goal is to try to identify constraints from demonstrations and the nominal MDP to attempt to recover the true MDP, and we believe that it is a fair and useful analogy to think of adding these constraints as \\u201cshrinking\\u201d the nominal MDP to the true MDP. We approach this problem from the perspective of selecting the most likely constraints (to shrink the MDP) given the available information, namely the nominal MDP and the demonstrations. If we take the nominal model as given and assume a uniform prior over possible constraints, then choosing the most likely constraints given the demonstrations is equivalent to choosing the constraints that maximize the likelihood of the demonstrations, which is why we pursue this formulation. We\\u2019ve added some additional language at the end of sec. 3.1 and beginning of sec. 3.3 to clarify that maximizing demonstrations likelihood is not the goal itself, but an approach to achieve the goal of finding the most likely constraints by solving an equivalent problem.\\n\\nWith regards to your comment on over-fitting, we believe that this term is still well suited to the demonstration likelihood maximization formulation we present. For example, if we learn constraints from a training set of demonstrations, but these constraints conflict with observations in a test set of demonstrations (and thus reduce their likelihood to zero), then we observe the overfitting effect on the constraints themselves (we\\u2019ve chosen constraints that do not exist in the true MDP). On the other hand, if we do identify a solution that generalizes perfectly to all new demonstrations, then we have not selected a false constraint that conflicts with a demonstration, and thus not over-fit.\\n\\nYou also mentioned that more real-world experiments would be welcome. While we believe that the presented results do illustrate the usefulness and functionality of the method, we agree that more involved scenarios could better \\u201cstress test\\u201d the method. However, we have no additional real-world results to report at this time.\\n\\nFinally, to your point on discussing the limitations of the work, we agree that it is important to be upfront about these points, so we\\u2019ve added some discussion on this to the end of sec. 3.3. However, we would like to clarify the point you mention about \\u201cconstraining the possibility to drive slowly.\\u201d In this case, a constraint preventing slow driving would only be learned if the task incentivized slow driving, but none was observed. If the task incentivized fast driving (such as reaching a destination quickly), then it would be unexpected to see slow driving anyway, making it an unlikely constraint candidate (though it would still be a valid candidate).\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your review and for pointing out that you find the problem interesting and well-motivated! We address your two comments below, and we believe this discussion will lead to a better paper overall.\\n\\n\\u201c(i) Does the constraint semantically similar to the domain of the MDP? In my intuition, one can create a convex hull over the state and action representations to actually estimate the constraint.\\u201d\\nSince we define constraints as sets of state-action pairs which we prohibit, it is possible to interpret the addition of constraints as modifying the domain of the MDP by removing states and/or actions. We believe the idea of considering convex hulls over the state/action space actually aligns well with our set-based notion of constraints. For instance, if you define a feature that indicates whether an agent is inside (or outside) of some convex hull, then the space within this convex hull (or the space outside of it) would be equivalent to the set defining a feature constraint in our formulation. If all demonstrations remain within this convex hull (even though we expect some to exit), then our method can infer that remaining within the convex hull is a constraint.\\n\\n\\u201c(ii) Suppose, the reward function is unknown, how your method will fare in this case?\\u201d\\nThank you for mentioning this, as this is a point that we can further clarify. Our algorithm does require some nominal reward function to operate, but this reward function does not need to be perfect. The purpose of the reward function is to inform what behavior we would expect to see, so that we can quantify how demonstrations deviate from these expectations, which allows us to infer the constraints most likely to cause this difference. The more accurate the nominal reward is, the more accurate our expectations will be, leading to better constraint inference.\\n\\tIn the absence of an a priori known reward, we see two possible options to choose / recover a reward so that our method can be used. 1) Use general knowledge / intuition about the task to propose a simple nominal reward (e.g. \\u201cI know this is a navigation task, so incentivize short paths\\u201d or \\u201cI know resources are constrained, so penalize energy usage\\u201d). 2) Gather demonstrations from a baseline nominal condition (such as a well-characterized region where all constraints are known beforehand) and use existing IRL techniques to estimate a reward function. Our method can then be used to estimate the unknown constraints in previously unseen conditions sharing that basic reward structure. We followed approach 2) in our human obstacle avoidance example (sec. 4.2), by performing MaxEnt IRL on demonstrations of humans navigating the space without an obstacle. In the novel condition where the obstacle is added, our method detects that the human trajectories deviate from expectations and infers the presence of the previously unknown obstacle.\\n\\tWe\\u2019ve added some discussion on this point in a new subsection 3.2.2 that discusses when we might use 1) or 2) to choose a reward if one is not already available.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thank you for your detailed and encouraging review, which provides an excellent summary of our work! We are glad that you find this work impactful, and we appreciate your constructive feedback. We address your comments below, and we believe this discussion will lead to a stronger paper overall.\\n\\n\\u201c- The exact definition and usage of nominal environments and rewards are still unclear to me. For example in Figure 2 (b), how did you define and get nominal MDP?\\u201d\\nBroadly speaking, the nominal environments and rewards can be thought of as either \\u201cgeneric\\u201d or \\u201cbaseline\\u201d conditions.\\nWe talk about the generic case at the start of section 3.2, where you design a model flexible enough to describe all reasonable car behaviors, then you can use our method to infer the particular constraints applying to a model of car, a type of roadway, a specific driver, or a combination of these specific factors. The nominal model from Figure 2(b) falls into this \\u201cgeneric\\u201d category: we assume that we know the basic reward (reaching the goal quickly is incentivized) as well as the structure of the environmental elements (states, features, and actions). However, we assume that we do not know which of these elements our demonstrators might consider constraints, so we learn the particular relationship through their demonstrations.\\nThe other perspective is to think of the nominal model/reward as a \\u201cbaseline\\u201d condition. If we think of nominal models as coming from a baseline condition, then we would derive them from observations of a system in a known, well-characterized configuration. Our method is then useful for detecting new, unknown constraints that alter the system, such as a road closure or a new obstacle entering the space. For instance, in our human obstacle avoidance example, we first learn a nominal reward from the baseline condition of humans navigating the empty space, and we use the expectations from this baseline (along with the new demonstrations) to detect the presence of a new obstacle (due to deviations from \\u201cbaseline\\u201d behavior). The baseline type of nominal model is useful anytime you want to detect new constraints altering agent behavior.\\nWe\\u2019ve made these points explicit in the updated paper in a new subsection 3.2.2.\\n\\n\\u201c- Since Figure 1 is related to the second experiment, I recommend moving it to the experiment section.\\u201d\\nIn the updated version, we\\u2019ve moved the original Figure 1 to the experiment section, and replaced it with another figure showing how constraints affect behavior in sec. 3.2.\\n\\n\\u201c- At 3.2.1., \\u201cBecause constraints are sets of state-action pairs, imposing a\\nconstraint within an MDP means restricting the set of actions that can be taken from certain states.\\u201d -> Need clarification\\u201d\\nWe\\u2019ve rephrased this idea in the updated version to make this clearer, instead stating that:\\nIf we impose a constraint on an MDP, then none of the state-action pairs in that constraint set may appear in a trajectory of the constrained MDP. 
To enforce this condition, we must restrict the actions available in each state so that it is not possible to produce one of the constrained state-action pairs.\\n\\n\\u201c- At 3.2.1., \\u201cFor MDPs with deterministic transitions, it is clear that any agent respecting these constraints will not visit an empty state.\\u201d -> Why?\\u201d\\nThe assertion follows from the fact that, with deterministic transitions, agents are able to know exactly the state to which they will transition given their chosen action. If an action deterministically leads to an empty state, then the agent knows that choosing such an action will put them in a position (the next state) where they have no available future actions which respect the constraints. Therefore, any demonstrations from an agent which respects the constraints must also avoid empty states. We\\u2019ve reworded this text in the updated version to clarify this point.\\n\\n\\u201c- At (8), {} to $\\\\emptyset$\\u201c\\nWe have followed your suggestion to update our notation for the empty set.\\n\\n\\u201c- In Figure 3, I was a bit confused about the relationship between the threshold and the false positive rate at first glance. What I understood is that a small threshold leads to lots of iteration for constraint selection, which increases the false positive. I want authors to add some comments on that.\\u201d\\nYes, your understanding is exactly right on that point. We\\u2019ve added some additional comments to the paper that clarify this directly.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this work, a novel inverse constraint learning method is proposed, where the goal is to find out the constraints over state-action pairs for given demonstration and MDP **including a reward function** (so different from inverse cost learning). The novelty of this work comes from introducing maximum entropy inverse reinforcement learning (MaxEntIRL) framework to previous works [1, 2], and this work mainly focused on the tabular setting. The objective of this work is to solve the optimization in (8), which tries to find out the constraint that maximizes the probability of trajectories that cannot be generated if that constraint is applied. (Such an objective minimizes the normalization constant in (5) and results in maximization of the demonstration likelihood under the constraint.) To solve this objective, the proposed algorithm first computes the feature occupancy (Algorithm 1), and then use those feature occupancy with greedy iterative constraint inference (Algorithm 2 that motivated by maximum coverage problem) to get constraints. Two experiments in the GridWorld show that the proposed method effectively works.\\n\\nI think this work is quite fundamental, impactful to be accepted at the conference and is possibly extended to practical scenarios (like explainable and safety RL and imitation learning) in the future. One thing I\\u2019d like to point out is to enhance the readability by reordering contents and adding some additional explanations to clarify their arguments. There are a few comments and questions I have:\\n\\n- The exact definition and usage of nominal environments and rewards are still unclear to me. For example in Figure 2 (b), how did you define and get nominal MDP?\\n- Since Figure 1 is related to the second experiment, I recommend moving it to the experiment section. \\n- At 3.2.1., \\u201cBecause constraints are sets of state-action pairs, imposing a\\nconstraint within an MDP means restricting the set of actions that can be taken from certain states.\\u201d -> Need clarification\\n- At 3.2.1., \\u201cFor MDPs with deterministic transitions, it is clear that any agent respecting these constraints will not visit an empty state.\\u201d -> Why?\\n- At (8), $\\\\{\\\\}$ to $\\\\emptyset$\\n- In Figure 3, I was a bit confused about the relationship between the threshold and the false positive rate at first glance. What I understood is that a small threshold leads to lots of iteration for constraint selection, which increases the false positive. I want authors to add some comments on that.\\n\\n Reference\\n[1] Chou, Bereson, Ozay, \\u201cLearning Constraints from Demonstrations,\\u201d arXiv 2019\\n[2] Chou, Bereson, Ozay, \\u201cLearning Parametric Constraints in High Dimensions from Demonstrations,\\u201d CoRL 2019\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper aims to address a new method for inverse reinforcement learning based on maximum likelihood constrained inference. In general, I find the problem very interesting and the motivation of the work is quite reasonable. However, I have two major comments:\\n\\n(i) Does the constraint semantically similar to the domain of the MDP? In my intuition, one can create a convex hull over the state and action representations to actually estimate the constraint.\\n\\n(ii) Suppose, the reward function is unknown, how your method will fare in this case?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper considers learning of constraints in MDPs in an IRL setting with the goal of maximizing the likelihood of demonstrations (in the constrained MDP). The constraints come in the form of avoiding certain states, actions or features. The authors propose an algorithm for learning the constraints and evaluate their approach in synthetic and a real-world experiment.\\n\\nThe paper is mainly well written and considers a relevant problem. However, at a conceptual level, I am missing a more precise problem formulation and an extended discussion of the possible failure cases/limitations of the approach. Also more real-world experiments would be welcome (the presented real-world problem is relatively easy, expert's trajectories are likely to agree and not make any \\\"mistakes\\\").\\n\\nIn more detail, I think the paper can be improved by spelling out the problem formulation more precisely (Section 3.1) -- in its current form I think it somewhat fuzzy. What I mean by this is that there is a nominal MDP (which is at least as \\\"large\\\" as the true MDP) and there are demonstrations. The goal seems to be to identify constraints from the demonstrations and the MDP such that if added to the nominal MDP it \\\"shrinks\\\" to the true MDP. Is that the actual underlying problem? (I understand the current formulation in the terms of the likelihood of demonstrations). In this formulation, I think the definition of over-fitting arises naturally. In the current formulation though, over-fitting seems somewhat disconnected in the sense that even if we identify a solution which generalizes perfectly to new demonstrations, we would talk about over-fitting (the optimization of the likelihood maximization is not the problem that we actually want to solve). \\n\\nFurther, as also mentioned by the authors, there is a problem with sub-optimal demonstrators. Considering the example of cars given in the paper, there might be drivers that do violate speed limits. In that case, the proposed approach will fail to identify some constraints. On the other hand, if all \\\"optimal\\\" demonstrations are to go fast, the approach would constraint the possibility to drive slowly. All this is fine, but I think it warrants a broader discussion which is not deferred to the Conclusion/Future Work section as it is very crucial regarding the applicability of the proposed approach.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The submission considers estimating the constraints on the state, action and feature in the provided demonstrations, instead of learning rewards. The authors use the likelihood as MaxEnt IRL methods to evaluate the \\\"correctness\\\" of the constraints, and find the most likely constraints given the demonstrations. While the problem is challenging (NP-hard), suboptimality of the proposed algorithm is analyzed. Experiments are provided to demonstrate the performance of the proposed method.\\n\\nThe problem considered is interesting, and the authors provide a straightforward but empirically effective method. However, the motivation is a little unclear to me. Specifically, what will be the practical cases, where the learning the constraints is important and necessary? Can authors further motivate this topic by providing more real-world applications?\"}"
]
} |
H1eKT1SFvH | Towards Effective 2-bit Quantization: Pareto-optimal Bit Allocation for Deep CNNs Compression | [
"Zhe Wang",
"Jie Lin",
"Mohamed M. Sabry Aly",
"Sean I Young",
"Vijay Chandrasekhar",
"Bernd Girod"
] | State-of-the-art quantization methods can compress deep neural networks down to 4 bits without losing accuracy. However, when it comes to 2 bits, the performance drop is still noticeable. One problem in these methods is that they assign an equal bit rate to quantize weights and activations in all layers, which is not reasonable in the case of high-rate compression (such as 2-bit quantization), as some of the layers in deep neural networks are sensitive to quantization and performing coarse quantization on these layers can hurt the accuracy. In this paper, we address the important problem of how to optimize the bit allocation of weights and activations for deep CNNs compression. We first explore the additivity of output error caused by quantization and find that the additivity property holds for deep neural networks which are continuously differentiable in the layers. Based on this observation, we formulate the optimal bit allocation problem of weights and activations in a joint framework and propose a very efficient method to solve the optimization problem via a Lagrangian formulation. Our method obtains excellent results on deep neural networks. It can compress deep CNN ResNet-50 down to 2 bits with only 0.7% accuracy loss. To the best of our knowledge, this is the first paper that reports 2-bit results on deep CNNs without hurting the accuracy. | [
"quantization",
"deep neural networks",
"layers",
"bit allocation",
"bits",
"accuracy",
"weights",
"activations",
"deep cnns compression",
"towards effective"
] | Reject | https://openreview.net/pdf?id=H1eKT1SFvH | https://openreview.net/forum?id=H1eKT1SFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ZnsDK63P4h",
"SkgGDCrhiS",
"rJlLBT1Yjr",
"SkgvAjJKsr",
"rJehZskYoB",
"B1etb9GNjr",
"SyeB8pofiS",
"SkxS-gdzjH",
"BklwCkdGoH",
"Hkx3g0MZoS",
"HJeAzEJQqS",
"rJx2sPqAFS",
"rylTYKApFB",
"BJezfXDH_S",
"BJl5neg1_H",
"Sygq4eUhDH",
"Skg_McpoPH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1576798737929,
1573834329797,
1573612861793,
1573612494613,
1573612292457,
1573296640710,
1573203276639,
1573187580895,
1573187535272,
1573101043858,
1572168726093,
1571887011745,
1571838341240,
1570235146003,
1569812657984,
1569640497713,
1569606159822
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1998/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1998/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1998/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1998/AnonReviewer3"
],
[
"~Saeed_Ranjbar1"
],
[
"ICLR.cc/2020/Conference/Paper1998/Authors"
],
[
"~Kevin_Zhang2"
],
[
"~Evgenii_Zheltonozhskii1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This works presents a method for inferring the optimal bit allocation for quantization of weights and activations in CNNs. The formulation is sound and the experiments are complete. However, the main concern is that the paper is very similar to a recent work by the authors, which is not cited.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Acknowledge the rebuttal\", \"comment\": \"I appreciate the authors' responses to my questions. I can see that the paper now considers the efficiency issue of the mixed-precision feedforward more fairly.\"}",
"{\"title\": \"Summary of Revisions\", \"comment\": [\"We thank all reviewers for their careful reviews, insightful comments and feedback on our paper. The draft has been revised accordingly. The revised draft has been uploaded. The main revisions are summarized below:\", \"We cited the ICIP paper mentioned by Reviewer 1 and discussed the differences with the ICIP paper in the related work section. We added the results of the ICIP paper in Table 1 and compared our method with the ICIP paper in the experiment section. (response to Reviewer 1)\", \"We added a section (section 6.1) to clarify the computational complexity of our method. We made a comparison of the amount of arithmetic operations to the equally quantized method in section 6.1. (response to Reviewer 2)\", \"The optimization method is elaborated in section 4.2. A figure is also added to show an example of the optimization method (Fig. 3). (response to Reviewer 2)\", \"We discussed the relationship between the variances and the bitrates in the supplementary material. (response to Reviewer 3)\"], \"here_we_summarize_the_main_differences_with_the_icip_paper_mentioned_by_reviewer_1\": [\"Our ICLR submission shows the additivity property of the output error and provides a mathematical derivation for the additivity property.\", \"Our quantization framework differs from the ICIP paper in two-fold. First, we adopt a dead zone to the quantization function of weights. Second, we provide a scheme to support the retraining for both quantized weights and activations.\", \"Our work demonstrates that the optimal bit allocation solved by our method has very positive impacts on inference speed, which has been verified by the hardware simulation experiments.\", \"By combining the dead-zone quantization and STE based retraining with the optimal bit allocation strategy, our quantization framework achieves state-of-the-art results. To the best of our knowledge, our work is the first to report 2-bit results on deep neural network ResNet-50 without hurting the accuracy on ImageNet.\", \"Best,\", \"Authors\"]}",
"{\"title\": \"Revisions to the Draft\", \"comment\": [\"We revised the draft based on the comments:\", \"We added section 6.1 to clarify the computational complexity of our method.\", \"We made a comparison of the amount of arithmetic operations to the equally quantized method in section 6.1.\", \"The optimization method is elaborated in section 4.2. A figure is also added to show an example of the optimization method (Fig. 3).\", \"The updated version has been uploaded.\"]}",
"{\"title\": \"Revisions to the Draft\", \"comment\": \"We revised the draft based on the comments:\\n-\\tWe cited the ICIP paper and discussed the differences with the ICIP paper in the related work section. \\n-\\tWe added the results of the ICIP paper in Table 1 and compared our method with the ICIP paper in the experiment section. \\nThe updated version has been uploaded.\"}",
"{\"title\": \"Response to Reviewer #3 Questions\", \"comment\": \"Thank you for the careful reviews and for the comments. We answer your questions below.\", \"q1\": \"Do the authors mean to conclude that layers with large number of weights hold a lot of redundancy and don't have a significant impact on the overall accuracy of the model? This needs to be clarified further.\\n\\nWe empirically found that the layers having a larger number of weights receive lower bitrates (and vice-versa). The reason could be the values of the variances of the layers. According to the classical Lagrangian rate-distortion formulation, the optimal bit allocation follows a rule,\\n\\nRate = G( - 1 / sigma^2 ), \\n\\nwhere G(.) is a strictly increasing function and sigma^2 is the variance of the variables. Based on the rule above, the layers with larger variances receive larger bitrates (and vice-versa). \\n\\nWe calculated the variances of layers for two deep networks ResNet-50 and MobileNet-v1 (see Table 1 and Table 2 below). The results show that the layers with a smaller number of weights typically have a larger variance, and thus these layers receive larger bitrates. We added a paragraph and a figure in the supplementary material to discuss the relationship between variances and bitrates. The draft has been updated accordingly. Thank you for this good question.\\n\\nTable 1 \\u2013 Variances of Weights across Layers on ResNet-50\\n+-------------------------+----------+----------+----------+-----------+------------+\\n| Layer Index | 5 | 15 | 25 | 35 | 45 |\\n+-------------------------+----------+----------+----------+-----------+------------+\\n| #Weights (10^5) | 0.16 | 1.47 | 2.62 | 2.62 | 23.6 |\\n+-------------------------+----------+----------+----------+-----------+------------+\\n| Variance (10^3) | 1.33 | 0.59 | 0.51 | 0.37 | 0.11 | \\n+-------------------------+----------+----------+----------+-----------+------------+\\n| Bitrate | 4.2 | 3.5 | 3.3 | 2.8 | 1.2 |\\n+-------------------------+----------+----------+----------+-----------+------------+ \\n\\nTable 2 \\u2013 Variances of Weights across Layers on MobileNet-v1\\n+-------------------------+----------+----------+----------+-----------+------------+\\n| Layer Index | 2 | 8 | 14 | 20 | 26 |\\n+-------------------------+--------- +----------+----------+-----------+------------+\\n| #Weights (10^7) | 0.29 | 1.15 | 4.61 | 4.61 | 9.22 |\\n+-------------------------+----------+----------+----------+-----------+------------+\\n| Variance | 7.83 | 0.49 | 0.54 | 0.23 | 0.07 |\\n+-------------------------+----------+----------+----------+-----------+------------+\\n| Bitrate | 7.9 | 5.38 | 5.86 | 5.29 | 4.1 |\\n+-------------------------+----------+----------+----------+-----------+------------+\"}",
"{\"title\": \"Response to Question 1 and 2\", \"comment\": \"Thank you for your interest in this paper and for your comments. Below we answer your questions.\", \"q1\": \"Uniformity of quantization\\n\\nDead zone quantization can also support integer arithmetic if we set the values of the first negative and the first positive quantization centroids as k * delta, where k is an integer value and delta is the length of quantization interval. By doing this, every quantization centroid becomes an integral multiple of delta and we can use the integral multiplier for integer arithmetic.\\n\\nFor example, if we set k = 2, the corresponding quantization centroids are:\\n\\n ... , -n * delta , ... , -3 * delta , -2 * delta , 0 , 2 * delta , 3 * delta , ... , n * delta , ...\\n\\nand the integral multipliers \\\"... , -n , ... , -3 , -2 , 0 , 2 , 3 , ... , n , ...\\\" can be used for integer arithmetic.\\n\\nWe will introduce how to apply integer arithmetic with dead zone quantization and update the paper accordingly. Thank you for this good point.\", \"q2\": \"Computational complexity vs. memory requirements\\n\\nWe calculated the amount of arithmetic operations of our method required to perform a single inference on ResNet-50, and did a comparison with the equally quantized method (please see our response to Q1 of Reviewer #2). The results show that our method has fewer operations than the equally quantized method at 4 bits and 6 bits. While, at 2 bits, our method has 1.4x more operations than the equally quantized method. \\n\\nIn practice, the inference time on hardware devices is constrained by both compute and memory access. The Pareto-optimal bit allocation tends to allocate fewer bits per weight for layers that have a lot of weights. As a result, it helps to reduce the corresponding memory-access time which in turn reduces compute idle time and improves the overall inference speed. The simulation results on the Google TPU platform show that our method is 1.5x faster than the equally quantize method on ResNet-50 at 2 bits. Please see our response to Reviewer #2 for the details.\"}",
"{\"title\": \"Response to Reviewer #2 Comments (part 2)\", \"comment\": \"\", \"q2\": \"I wish the actual optimization part briefly mentioned in section 4.2 could be elaborated more. It is a crucial part but somewhat understated.\\n \\nThank you for this suggestion. We will elaborate section 4.2 to show the optimization steps in more details. We will respond to this comment again once we finish the revision.\", \"q3\": \"I also wonder what\\u2019s the effect or limitation of using MSE for this optimization, where cross-entropy is a more suitable choice. I know that the objective function in eq 5 is just to find the best combination of bit allocations per layer, but still, the error space might not be the best for this classification problem.\\n \\nWe agree that MSE does not directly optimize accuracy and thus may not be the best choice for classification problem. The reason we choose MSE as the measurement of the quantization impact is mainly because it ensures that the additivity property of output error holds, from both empirical observations and mathematical derivations (as shown in the draft), and the additivity property is essential for Pareto condition. Besides, optimization with MSE not only supports classification tasks but also can be applied to any other tasks like object detection and semantic segmentation where regression loss is also required.\\n\\n+-----------------------+-----------+----------+-----------+-----------+\\n | size | 4 bits | 6 bits | 8 bits | 10 bits | \\n+-----------------------+-----------+----------+-----------+-----------+ \\n | cross-entropy | 41.3 | 43.6 | 46.0 | 57.0 |\\n+-----------------------+-----------+----------+-----------+-----------+\\n | MSE | 63.6 | 70.8 | 70.9 | 70.9 |\\n+-----------------------+-----------+----------+-----------+-----------+ \\n\\nCross-entropy is a more suitable choice for classification, but our empirical observations show that it is not compatible with the Pareto optimal bit allocation framework. The table above shows the results on MobileNet-v1 when replacing MSE in Eq. 3 with cross-entropy for optimization. One can see that there is a noticeable accuracy drop using cross-entropy in the optimization framework. We also observed that the additivity property doesn\\u2019t hold anymore if we use cross-entropy as the measurement. From the mathematical point of view, it is unclear whether or not the additivity property is still valid for metrics beyond MSE, we would like to leave it for future study. Thank you for this insight.\"}",
"{\"title\": \"Response to Reviewer #2 Comments\", \"comment\": \"Thank you for the careful reviews and for the comments. We answer your questions below.\", \"q1\": \"First of all, the paper is not about 2bit quantization. It seeks an \\u201caverage\\u201d 2bit quantization. \\u2026 Is it really more efficient to do multiplication-and-addition between 2 bit weights and 5 bit input (the output of the previous activation) than between 4bit weights and 4bit input?\\n\\nWe did a comparison of the computational complexity of our method and the equally quantized method on ResNet-50. We calculated the number of arithmetic operations of both methods required to perform single inference.\\n\\nSpecifically, we define a 32-bit multiplication/addition operation as one operation. To count the number of operations of the mixed-precision computation (e.g., 3 bit weight and 5 bit input), we follow the protocol defined by MicroNet challenge (https://micronet-challenge.github.io/scoring_and_submission.html) and consider the resolution of an operation to be the maximum bit-width of the 2 operands of this operation. For example, a multiplication operation with one 3-bit and one 5-bit operand will count as 5/32 of an operation.\\n \\n+-------------------------+-------------------+-------------------+--------------------+\\n| size | 2 bits | 4 bits | 6 bits |\\n+-------------------------+-------------------+-------------------+--------------------+\\n| equally quantized| 3.6 x 10^8 | 7.2 x 10^8 | 10.8 x 10^8 |\\n+-------------------------+-------------------+-------------------+--------------------+\\n| our method | 5.1 x 10^8 | 7.0 x 10^8 | 9.4 x 10^8 |\\n+-------------------------+-------------------+-------------------+--------------------+\\n \\nThe table above shows the amount of operations required when weights and activations are compressed to 2 bits, 4 bits and 6 bits respectively. Our unequally quantized method has less amount of operations than the equally quantized method when weights and activations are compressed to 4 bits and 6 bits on average. While, at 2 bits, our method has 1.4x more operations than the equally quantized method. \\n \\nOn the other hand, the amount of operations does not imply equivalent inference speed in practice, as the processing of deep networks on hardware devices is constrained by both compute and memory access. We would like to reiterate that our method is effective to reduce the memory access time and thus provide higher inference rate compared to the equally quantized method, particularly for memory-bound hardware platforms where data movements are much slower and less energy efficient than compute. This is achieved by Pareto-optimal bit allocation which tends to allocate fewer bits per weight for layers that have a lot of weights. Thus, given fixed bandwidth, more weights can be loaded from off-chip memory to on-chip memory when processing the layers with a lot of weights, which in turn reduces compute idle time and improves the overall inference rate.\\n \\nTo verify the point above, we simulated the inference speed on Google TPU v1 at 2 bits for both equally and unequally quantized methods with ResNet50. As existing hardware does not well support mixed-precision operations, we assume the weights and activations with unequal bit-widths are fetched from off-chip memory, then decoded to fixed 8-bit stream and fed to compute unit that supports 8-bit multiplications (e.g. TPU). 
The simulation results on Google TPU platform show that our method is 1.5x faster than the equally quantize method.\\n\\nWe will add a paragraph to clarify the computational complexity of our method and the equally quantized method.\"}",
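The operation-counting convention described above is easy to reproduce in a few lines. A minimal sketch; the layer profile in the usage example is made up, not ResNet-50's actual profile.

```python
def op_cost(bits_a, bits_b):
    # MicroNet-style convention quoted in the comment above: an operation
    # between a bits_a-bit and a bits_b-bit operand counts as
    # max(bits_a, bits_b)/32 of one 32-bit operation.
    return max(bits_a, bits_b) / 32.0

def total_ops(layers):
    # layers: iterable of (num_ops, weight_bits, activation_bits) per layer.
    return sum(n * op_cost(wb, ab) for n, wb, ab in layers)

# Hypothetical mixed-precision profile: many low-precision ops in wide
# layers, fewer higher-precision ops in sensitive layers.
print(total_ops([(3e8, 2, 5), (1e8, 4, 4), (5e7, 6, 8)]))
```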
"{\"title\": \"Clarification - the Main Differences with the ICIP paper\", \"comment\": \"Thank you for your reviews and for pointing out the ICIP paper. We notice that the ICIP paper was posted on IEEE Xplore website on 22 Sept, which is 3 days prior to the ICLR deadline (Sept 25). According to the ICLR 2019 reviewer guideline, \\u201cno paper will be considered prior work if it appeared on arxiv, or another online venue, less than 30 days prior to the ICLR deadline\\u201d. We believe that our submission meets the ICLR regulations and rules.\\n\\nOur ICLR submission has substantial differences with the mentioned ICIP paper including the theoretical analysis, methods and insights, and experimental results. Moreover, with the new compression framework, the ICLR submission achieves 2-bit quantization results on deep architecture ResNet-50. To our best knowledge, this is the first work that reports 2-bit results without hurting the accuracy. Below we summarize the main differences with the ICIP paper:\\n\\n(1) Our ICLR submission provides a mathematical derivation for the additivity property. With two reasonable assumptions, we demonstrate that the additivity property holds for any neural networks which are continuously differentiable in the layers.\\n\\n(2) Our quantization framework differs from the ICIP paper in two-fold. First, we adopt a dead zone to the quantization function of weights. Second, we apply the straight-through estimator (STE) to perform back-propagation on the retraining stage for both quantized weights and activations. The ICIP paper uses the simple uniform quantizer and the framework does not provide a scheme to support the retraining for quantized weights and activations. However, as we illustrated in the experiment section, dead zone and STE retraining are critical for improving the accuracy.\\n\\n(3) In our ICLR submission, we reveal that the pattern of Pareto-optimal bit allocation across layers has positive impacts on neural network inference rate in practice. It tends to allocate fewer bits per weight for layers that have a lot of weights, which helps to reduce memory-access time which in turn reduces compute idle time and improves the overall inference rate. We verified this point by designing hardware simulation experiments on Google TPU v1 platform. Results show that the Pareto-optimal bit allocation improves the inference rate on ResNet50 by 1.5x compared to its equal bit allocation counterpart.\\n\\n(4) Combined the dead-zone quantization and STE based retraining with the optimal bit allocation strategy, our quantization framework achieves state-of-the-art result on deep neural network ResNet-50 at 2 bits. To the best of our knowledge, this is the first work that reports 2-bit results without hurting the accuracy. The ICIP paper can only compress ResNet-50 down to 4 bits and the accuracy drops significantly at 2 bits.\\n\\nWe will change our ICLR draft accordingly, and then upload it to the review website. We would also like to answer any other questions.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This works presents a method for inferring the optimal bit allocation for quantization of weights and activations in CNNs. The formulation is sound and the experiments are complete. My main concern is regarding the related work and experimental validation being incomplete, as they don't mention a very recent and similar work published in ICIP19 https://ieeexplore.ieee.org/document/8803498: \\\"Optimizing the bit allocation for compression of weights and activations of deep neural networks\\\". A reference in related work as well as a comparison in experimental validation would be necessary and the novelty of this work is rather weak given the above mentioned 2019 publication.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work nicely proposes a new theoretically-sound unequal bit allocation algorithm, which is based on the Lagrangian rate-distortion formulation. Surprisingly, the simple Lagrange multiplier on the constraints leads us to the convenient conclusion that the rate distortion curves for the weight quantization and the activation quantization have to match. Based on this conclusion, the authors claim that their search for the best bit allocation strategy is with a less complexity.\\n\\nI found this paper interesting and enjoyed reading it. However, I wish the paper could address some issues that are a little bit confusing to me. \\n\\nFirst of all, the paper is not about 2bit quantization. It seeks an \\u201caverage\\u201d 2bit quantization. It means that some weights in some layers can be quantized with higher or lower bits per weight. Same story goes on for the activation quantization. I don\\u2019t exactly know the implication of this, but it seems that the hardware implementation of a convolution layer could be either too complicated to benefit from this quantization scheme, or doesn\\u2019t really improve the efficiency of, say 4bit quantization for all layers. Is it really more efficient to do multiplication-and-addition between 2 bit weights and 5 bit input (the output of the previous activation) than between 4bit weights and 4bit input? I\\u2019m not a hardware person, but this part needs to be clearly addressed. Storage-wise, lowering the bitrate might be a clear benefit (I guess)\\n\\nI wish the actual optimization part briefly mentioned in section 4.2 could be elaborated more. It is a crucial part but somewhat understated. \\n\\nI also wonder what\\u2019s the effect or limitation of using MSE for this optimization, where cross-entropy is a more suitable choice. I know that the objective function in eq 5 is just to find the best combination of bit allocations per layer, but still, the error space might not be the best for this classification problem.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Very good paper that studies the error rate of low-bit quantized networks and uses Pareto condition for optimization to find the best allocation of weights over all layers of a network. The theoretical claims are strongly supported by experiments, and the experimental analysis covers state-of-the-art architectures and demonstrates competitive results. The paper in addition also analyzes the inference cost of their approach (in addition to the accuracy results), and shows positive results on ResNet and MobileNet architectures.\\n\\nThe paper primarily shows that the mean squared error of the final output of a quantized network has the additive property of being equal to the sum of squared errors of the outputs obtained by quantizing each layer individually. Although there is no reason why this should be case, experimental results from the authors on AlexNet and VGG-16 validate this. Based on this assumption, the authors then use a Lagrangian based constrained optimization to minimize the sum of squared errors of outputs when individual weights/activations are quantized, with the constraint being the total bit budget for weights and activations. The authors show that this can be optimized under the Pareto condition easily.\\n\\nThe experimental section is quite detailed and covers the popular architectures instead of toy ones. The accuracy results compared to other 2-bit and 4-bit approaches are competitive. It's also nice to see analysis of inference cost where unequal bitrate allocation performs better than other methods.\\n\\nThe authors show that given the constrained optimization, layers that have a large number of weights receive lower bitrates and vice-versa. While it makes sense that this would contribute to stronger inference speedup compared to methods with either equal bitrate allocation across layers or those that allocate higher bitrate to layers with large number of weights, it's not entirely clear why the optimization would produce this allocation in the first place. Do the authors mean to conclude that layers with large number of weights hold a lot of redundancy and don't have a significant impact on the overall accuracy of the model? This needs to be clarified further.\"}",
"{\"comment\": \"Dear Authors,\\nThanks for your comment. From review of your code, I cannot see the code related to your optimization section. The details regarding how you calculated the slopes and how you considered the rate constraints are missing.\", \"title\": \"code is not available for OPTIMIZATION UNDER PARETO CONDITION\"}",
"{\"comment\": \"Thanks for your interest in our work. Apologies that we didn't have enough time to clean up our code before the deadline, also, we felt it is necessary to double-check the quantized models and redo evaluations again to make sure all the results reported here are reproducible.\\n\\nFinally, we have uploaded the evaluation code and quantized models to the dropbox link provided earlier. Kindly let us know if there is any question regarding the code.\", \"title\": \"Codes are available\"}",
"{\"comment\": \"Hi,\\nAs of close to 56 hours after submission deadline , no code is present in the provided dropbox link. It is not fair to provide a placeholder link for code submissions (which impact the review process) and submit code taking considerable buffer time after submission deadline.\", \"title\": \"No code in provided dropbox link even after 56 hours of submission deadline\"}",
"{\"comment\": \"Nice work. The result you have achieved are really spectacular.\\nIn the simulations you assume that all calculations are performed in full precision available to the accelerator (8 or 16 bits), while for custom hardware it would make more sense to use lower-precision arithmetic, which can provide significant speedups. However, I have two concerns regarding application of this approach to the proposed method which I hope you can address:\\n\\n1. Uniformity of quantization.\\nThe main advantage of uniform quantization is the fact it is isomorphic, which allows to apply integer arithmetic to operate over bin indices rather than values themselves. If I understand correctly, dead zone quantization, unfortunately, lacks this property, meaning it would require some more complicated implementation to perform matrix multiplication, for example, using lookup tables. \\nPossibly it can be done by taking account of beta separately, but it's not 100% clear for me and might introduce additional computational overhead. \\n2. Computational complexity vs. memory requirements.\\nMost of work you compare to, do not perform additional compression except quantization. That means, the number of bits provided by those works applies both to computational complexity and memory requirements. On the other hand, since your work performs additional compression of the weights and activation, in your case those numbers are not equal anymore. From my understanding, provided numbers are the average amount of storage required for one value (of either weights or activations). That would probably mean that amount of computation required for the network would not be equivalent to an equally quantized two-bit network. \\nCould you provide some number regarding computational requirements of inference for your method (for example amount of bit-operations required for single inference)?\", \"title\": \"Strong results, but some concerns about low precision arithmetics\"}"
]
} |
HyxY6JHKwr | You Only Train Once: Loss-Conditional Training of Deep Networks | [
"Alexey Dosovitskiy",
"Josip Djolonga"
] | In many machine learning problems, loss functions are weighted sums of several terms. A typical approach to dealing with these is to train multiple separate models with different selections of weights and then either choose the best one according to some criterion or keep multiple models if it is desirable to maintain a diverse set of solutions. This is inefficient both at training and at inference time. We propose a method that allows replacing multiple models trained on one loss function each by a single model trained on a distribution of losses. At test time a model trained this way can be conditioned to generate outputs corresponding to any loss from the training distribution of losses. We demonstrate this approach on three tasks with parametrized losses: beta-VAE, learned image compression, and fast style transfer. | [
"deep learning",
"image generation"
] | Accept (Poster) | https://openreview.net/pdf?id=HyxY6JHKwr | https://openreview.net/forum?id=HyxY6JHKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"VCj0bB3Pxu",
"rJxU3GVhsH",
"ByxPVGVhiS",
"HkgibGN3oS",
"S1etcbNhjS",
"r1lkpJl6qS",
"HJegbBKCFr",
"B1gpCylRFH",
"S1gGP5untH",
"rJeHvbXItr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"comment"
],
"note_created": [
1576798737900,
1573827246153,
1573827118690,
1573827074856,
1573826961211,
1572827063157,
1571882232513,
1571844052818,
1571748442123,
1571332445357
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1997/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1997/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1997/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1997/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1997/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1997/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1997/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1997/AnonReviewer5"
],
[
"~Vincent_Dumoulin1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes and validates a simple idea of training a neural network for a parametric family of losses, using a popular AdaIN mechanism.\\nFollowing the rebuttal and the revision, all three reviewers recommend acceptance (though weakly). There is a valid concern about the overlap with an ICLR19-workshop paper with essentially the same idea, however the submission is broader in scope and validates the idea on several applications.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper revision\", \"comment\": [\"We have uploaded an updated version of the paper. Here are the main changes:\", \"Cited Brault et al. and Babaeizadeh & Ghiasi\", \"Re-ran all models at least 3 times and reported standard deviations\", \"Ran beta-VAE experiments with models of varying capacity, by changing the width of the networks. The results are reported in Figures 2 and 4\", \"Increased the latent state size to 256 for VAEs on CIFAR-10, resulting in better qualitative and quantitative results\", \"Timed the proposed method compared to a fixed-weight model. The proposed method is only 8% slower\", \"Removed batch16 models from the compression experiment and trained all compression models for 2 million iterations\", \"Added a model with per-batch sampling of loss weights to the ablation study\"]}",
"{\"title\": \"Official response\", \"comment\": \"We thank the reviewer for the positive feedback and useful suggestions.\\n\\nWe acknowledged the work of Brault et al. in the updated version of the manuscript.\\n\\nWe will release code associated with the paper after it is accepted for publication.\\n\\nThe proposed experiments are indeed interesting. We now comment on each one separately:\\n1) Timing results are provided in the appendix of the updated manuscript. When using the same network architecture, the proposed method slows down training by only 8%.\\n2) There might have been some misunderstanding here. In most of our experiment we do sample the weights per training sample, not per SGD iteration. We have also tried sampling weights per mini-batch and observed the performance is very close, but on average slightly worse. We added this model to the ablation study in the appendix.\\n3) We agree that his would be a very interesting experiment to run, but unfortunately we were not able to complete it during the rebuttal period. We will strive to do it for the final version of the paper.\"}",
"{\"title\": \"Official response\", \"comment\": \"We thank the reviewer for the positive feedback and useful suggestions. Below we comment on the raised concerns.\\n\\nWe respectfully disagree that the focus should always be on maximizing a single metric. While in some applications this may be desirable, in others it is important to obtain a family of models that covers the full loss frontier. For instance, this is the case both for image compression and style transfer: it is desirable to be able to vary the parameters \\u2014 the compression rate and the degree of stylization, respectively \\u2014 at inference time. We do agree that for some other tasks only a single best-performing model may be of interest, in which case the proposed metric would be useful.\\n\\nWe agree that it would be interesting to fine-tune a YOTO-trained model with a fixed-weight loss to get a further improvement in performance. We were not able to perform this experiment during the rebuttal time, but will strive to include them in the final paper. However, in the cases we study, even the models trained without such fine-tuning can perform very close to the single-weight models, especially if the YOTO model has higher capacity than the fixed-weight models.\\n\\nWe are unfortunately unsure about how to exactly interpret the comment about weight sampling. Indeed we have found that sampling from a uniform distribution over the weights does not perform very well, but log-uniform worked well in our experiments, even if the sampling range was quite large, up to 3 orders of magnitude. Re-training with a narrower range could indeed further improve the results, but we are unsure what this has to do with uniformity vs log-unformity of the weight distribution.\", \"comments_about_the_minor_points\": [\"It would be interesting to also report VAE results on ImageNet, but VAEs are not commonly evaluated on ImageNet, so we choose to stick to more standard datasets and rather perform more in-depth experiments on these.\", \"We have done our best making the plots as readable as possible, including plotting both the frontiers and the full losses, subtractive normalization of the full losses, and experimenting with log-scales of the axes (the latter unfortunately did not always improve the readability of the plots). The plots currently presented in the paper are our best attempt. We would be grateful for specific advice on improving the plots.\", \"Some images are missing in Fig. 7 because producing each of the images in the bottom row requires training an additional fixed-weight model, and we chose to only train a subset of them to save computation.\"]}",
"{\"title\": \"Official response\", \"comment\": \"We thank the reviewer for useful comments. We agree that the experimental evaluation had certain shortcomings, and we think the reviewer\\u2019s feedback allowed us to substantially improve it. We have uploaded an updated version of the paper addressing the issues raised. Further details are provided below.\\n\\nQ1. Overall, to our knowledge, vanilla VAEs are known to not perform very well on CIFAR-10. The model the reviewer pointed at, \\u201cImproved Variational Inference with Inverse Autoregressive Flow\\u201d, involves a few modifications, most notably a ResNet architecture with hierarchical latents, an associated inference scheme, as well as an inverse autoregressive flow posterior. It would be interesting to apply our method with this model, but in this paper for the sake of simplicity we experiment with a usual encoder-decoder convnet with one layer of latents. We verified that the networks are well converged. However, we have found that (perhaps unsurprisingly) a substantial improvement in performance can be gained by simply increasing the dimensionality of the latent representation and increasing the network capacity: both the reconstructions and the samples become much sharper. We report the results with these higher-capacity architectures in the updated manuscript.\\n \\nQ2. Thanks for this question, we now performed a controlled experiment on the impact of network capacity on the performance of the method. To this end, we trained a set of networks with gradually increasing widths. The results are shown in Figure 2 and Figure 4. For small-capacity models, fixed-weight networks perform substantially better than YOTO. This is to be expected, since with limited capacity it is difficult for a single YOTO network to cover different parameter settings. However, when increasing the network capacity, YOTO catches up and almost matches the performance of per-weight trained networks. \\n\\nQ3. We performed this selection on the validation set.\\n\\nQ4. We agree that the batch16 models were confusing, and we removed them altogether. Now both the fixed-weight and the YOTO models are trained in exactly the same way.\\n\\nQ5. We re-trained every model 3 or 4 times, and report standard deviations in most experiments in the updated paper. We found training to be quite stable on most tasks.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n-------------\\nThe authors propose a methodology to train a single Deep Neural Network (DNN) that minimize a parametrized (weighted) sum of different losses. Since the model is itself conditioned by the weights, it allows to train a single model for all weights values, instead of retraining when the weights change.\\n\\nExperiments suggest that this methodology does not degrade the performances to much w.r.t. retraining on every weight changes.\\n\\nThe proposed conditioning of the layers is done via a reparametrization (FiLM, Perez et al. 2018) of the weights with a scale $\\\\sigma(\\\\lambda)$ and a bias $\\\\mu(\\\\lambda)$ where $\\\\mu$ and $\\\\sigma$ are MLP. This allows to condition the layer on $\\\\lambda$, while keeping the number of parameters low.\\n\\nNovelty\\n----------\\nThe idea of integrating a family of loss functions with model conditioning has also been proposed by Brault et al. [1], in the context of multi-task kernel learning. Hence I believe this work should be acknowledged.\\n\\nAs the product of kernels is the tensor product of their feature maps, it would suggest to condition the network's layers by taking the tensor product of the weights with respect to an MLP on $\\\\lambda$. This could be applied on each layers or simply on the last Fully Connected layer. Note however that it would drastically increase the number of parameters and hence not be a viable solution (or maybe with some channel pooling?).\", \"references\": \"[1] Infinite Task Learning in RKHSs; Brault, Romain and Lambert, Alex and Szabo, Zoltan and Sangnier, Maxime and d'Alche-Buc, Florence; Proceedings of Machine Learning Research; 2019.\\n\\nQuality\\n----------\\nThe paper is self content and well written.\\n\\nExperiments are well detailed and seems to be reproducible. I would be a great addition to release the code in a public repository (with a link in the paper or appendices) if the paper is accepted.\", \"i_would_also_suggest_the_following_experiments\": [\"An experiment showing the time penalty induce by training the loss conditional model. The authors claims that training multiple separate models is inefficient compared to their proposed method. While it seems obvious, it deserve an experiment as one of the claim.\", \"The authors propose to sample one $\\\\lambda$ per SGD iteration. However it ma be useful to sample more of them. Especially when the set of $\\\\lambda$ is large (high dimensional)\", \"Possibly use a pre-trained model and only tune the $\\\\sigma(\\\\lambda)$, $\\\\mu(\\\\lambda)$ MLPs\", \"Overall my decision is weak accept, the paper lacks of novelty and the experiments could be more extensive.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The problem tackled by the paper is related to the sensitivity of deep learning models to hyperparameters. While most of the hyperparameters correspond to the choice of architecture and optimization scheme, some influence the loss function. This paper assumes that the loss function consists of multiple weighted terms and proposes the method of finding the optimal neural network for each set of parameters by only training it once.\", \"the_proposed_method_consists_of_two_aspects\": \"the conditioning of the neural network and the sampling of the loss functions' weights. Feature-wise Linear Modulation is used for conditioning and log-uniform distribution -- for sampling.\\n\\nMy decision is a weak accept.\\n\\nIt is not clear to me if the choice of performance metrics is correct. In many practical scenarios, we would prefer a single network that performs best under a quality metric of choice (for example, perceptual image quality) to an ensemble of networks that all are good at minimizing their respective loss functions. Therefore, the main performance metric should be the following: how much computation is required to achieve the desired performance with respect to a chosen test metric.\\n\\nMoreover, it might be obvious that the proposed method would be the best w.r.t. this metric, compared to other hyperparameters optimization methods, since it only requires a neural network to be trained once with little computational overhead on top. But then its performance falls short of the \\\"fixed weight\\\" scenario, where a neural network is trained on a fixed loss function and requires to raise the complexity of the network to achieve similar performance.\\n\\nTherefore, obtaining a neural network that would match the desired performance in the test time and would have a similar computational complexity requires more than \\\"only training once\\\", with more components, such as distillation, required to be built on top of the proposed method. The title of the paper is, therefore, slightly misleading, considering its contents.\\n\\nAlso, it is slightly disappointing that the practical implementation of the method does not allow a more fine-grained sampling of weights, with uniform weights sampling shown to be degrading the performance. This implies that the method would have to be either applied multiple times, each time searching for a more fine-grained approximation for the best hyperparameters, or achieve a suboptimal solution.\", \"below_are_other_minor_points_to_improve_that_did_not_affect_the_decision\": \"-- no ImageNet experiments for VAE\\n-- make plots more readable (maybe by using log-scale)\\n-- some images are missing from fig. 7 comparison\"}",
"{\"comment\": \"Thanks for pointing out this very relevant paper, unfortunately we missed it during the search for related work. We will properly cite it and otherwise modify our submission accordingly.\", \"title\": \"Thanks\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"Due to the late rebuttal, I was not able to respond during discussion time.\\n\\nQ1, Q2) sound and look convincing.\\nQ3) I still cannot find details on this in the paper. How the validation set was chosen? What was the size? The authors need to make all the experiments fully reproducible.\\nQ4, Q5) ok\\n\\nMy concerns were sufficiently addressed in the current revision, and I will increase my score. However, the paper still feels close to a borderline, but probably, tending to \\\"accept\\\". \\n\\nAlso, I agree with Review #4, that also wondering about the application of the proposed method to hyperparameter search (Suggestion 1 in my review). Even if this would not be a sota in hyperparameter search, it feels like missing the opportunity to make the paper much stronger, by adding one more nice property to the proposed model.\\n\\n-----\\n\\nGenerative models often use a loss function that is a weighted sum of different terms, e.g., data-term and regularizer. Let's denote these weights as \\u03bb. The paper proposes a method for learning a single model that approximates the result produced by a generative model for a range of loss-term weights \\u03bb. The method uses the following mechanisms i) \\u03bb-conditioned layers ii) training with a stochastic loss function, that is induced by a (log-uniform) distribution over \\u03bb. The performance of the model is demonstrated on the following problems learning \\u03b2-VAE, image compression, and style transfer. The models clearly demonstrate an ability to approximate problem solutions for a range of coefficients. The paper is clearly written. The experiments, however, need future discussion.\\n\\n1) The beta-VAE experiments (sec. 4.1)\\n\\nQ1. While models demonstrate a reasonable behavior on Shapes3d dataset. The samples and reconstructions on CIFAR10 (Figure 8) indicate that all models are not trained well. If this is the case, conclusions might be misleading, since the approximating output of undertrained models might be much simple comparing to well-trained ones. Authors may want to provide a comparison of the trained models with conventional VAEs (with \\u03b2=1), the reference figures for CIFAR10 are provided, for example, in https://arxiv.org/abs/1606.04934.\\n\\nQ2. Wider YOTO seems to help a lot, but, what happens to the baseline models of increased size?\\n\\n\\\"We select the fixed \\u03b2 so that it minimizes the average loss over all \\u03b2 values.\\\"\\n\\nQ3. Was it done directly on a test set, or were validation-data used?\\n\\n2) Image compression (sec. 4.3)\\n\\n\\\"Finally, a wider model trained with a larger batch size (\\u201cYOTO wider batch16\\u201d) closely follows the fixed weight models in the high compression regime and outperforms them in the high quality regime.\\\" (Figure 5)\\n\\nQ4. How is this compared to the baseline with batch16?\\n\\nQ5. Authors also may want to provide std for provided metrics. The difference does not look statistically significant.\", \"suggestion_1\": \"It might also be interesting to see if we can use this technique to perform a hyperparameter search. 
Train the model, select one the best performing set of hyperparameters, and then train models with this best value.\\n\\nOverall, the paper proposes an interesting technique, that surprisingly, can work for a range of hyperparameters, and potentially have a high practical impact. However, the empirical evaluation is half-baked, specifically has certain methodological drawbacks e.g., perhaps undertrained beta-VAE model, absence of standard deviations while comparing (close) numerical results, and comparing models with different optimization parameters -- the performance difference might be due to optimization. \\n\\nI recommend to reject the paper, however, I will appreciate discussions with authors and other reviewers, and will consider changing my score in case of reasonable argumentation.\"}",
"{\"comment\": \"The submission is a really interesting application of feature-wise transformations. There exists prior work on using conditional instance normalization to condition a style transfer network on content and style loss coefficients (Babaeizadeh and Ghiasi, 2019) that should be acknowledged. As a result, I don\\u2019t think this submission can claim novelty on the idea of conditioning a network on loss coefficients, but it is still a valuable contribution in that it demonstrates the general applicability of this idea beyond the style transfer domain.\", \"references\": \"Babaeizadeh, M. & Ghiasi, G. (2019). Adjustable Real-time Style Transfer. In ICLR Workshop on Deep Generative Models for Highly Structured Data.\", \"title\": \"Relevant work\"}"
]
} |
ryeYpJSKwr | Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization | [
"Michael Volpp",
"Lukas P. Fröhlich",
"Kirsten Fischer",
"Andreas Doerr",
"Stefan Falkner",
"Frank Hutter",
"Christian Daniel"
] | Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the field of global black-box optimization. Readily available algorithms are typically designed to be universal optimizers and, therefore, often suboptimal for specific tasks. We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian processes. Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency. We present experiments on a simulation-to-real transfer task as well as on several synthetic functions and on two hyperparameter search problems. The results show that our algorithm (1) automatically identifies structural properties of objective functions from available source tasks or simulations, (2) performs favourably in settings with both scarce and abundant source data, and (3) falls back to the performance level of general AFs if no particular structure is present. | [
"Transfer Learning",
"Meta Learning",
"Bayesian Optimization",
"Reinforcement Learning"
] | Accept (Spotlight) | https://openreview.net/pdf?id=ryeYpJSKwr | https://openreview.net/forum?id=ryeYpJSKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"r2PDRSwode",
"B1xy9hr3sB",
"H1xSRBvior",
"r1l_8VDiiH",
"HkgRB7vsoS",
"S1xV6bwjsS",
"S1xux0YRKr",
"BJl7k5fPtH",
"SkgpLghVYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737870,
1573833862922,
1573774796762,
1573774416234,
1573774149757,
1573773756202,
1571884528007,
1571396059276,
1571237973165
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1996/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1996/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1996/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1996/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1996/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1996/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1996/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1996/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper explores the idea of using meta-learning for acquisition functions. It is an interesting and novel research direction with promising results.\\n\\nThe paper could be strengthened by adding more insights about the new acquisition function and performing more comparisons e.g. to Chen et al. 2017. But in any case, the current form of the paper should already be of high interest to the community\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Code update, Minor corrections\", \"comment\": [\"Revision 15th Nov, 2019.\", \"We updated the code in the anonymous repository.\", \"We corrected some minor spelling mistakes in the first revision version of the PDF.\"]}",
"{\"title\": \"Author response to official blind review #2\", \"comment\": \"We are grateful for your detailed comments and suggestions to further improve our paper. We hope that the following remarks and the updated version of our paper help to improve your impression of MetaBO. We highlighted the updated sections in yellow in the updated PDF.\\n \\n1.) Comparison with Chen et al. [1]\\n-----------------------------------\\nWe agree that Chen et al., \\\"Learning to learn *without* gradient descent by gradient descent\\\" [1], would be an interesting baseline for our method. We would like to clarify why we did not benchmark against this approach.\\n - There is no code available for this method. The link provided by the reviewer points to a repository for a *different paper with a similar name* (namely Andrychowicz et al., \\\"Learning to learn *by* gradient descent by gradient descent\\\" [2]). This tackles the problem of learning *local, gradient-based* optimization and is thus not applicable in our scope of *global, derivative-free* optimization. We have exchanged several emails with the first author of [1] (Yutian Chen) about availability of the code (already before the time of our submission), and when we emailed again this week based on the reviews, he kindly allowed us to quote his email answer to our question whether the code for \\\"Learning to learn without gradient by gradient descent\\\" could be made publicly available: \\\"Unfortunately I haven't been able to open source code due to lack of time. I'll check if there's some way of sharing part of the code with you, but I can't guarantee on that.\\\", Yutian Chen, Sept. 24th, 2019.\\n - We spent considerable effort trying to reproduce the results on our own. However, we were not able to reach the performance reported in the paper's supervised setting. Therefore, unfortunately, an adaptation of the proposed method to our transfer learning setting, which would require an even more complex RL-approach (due to the lack of gradients of the objective functions, as you correctly pointed), seems out of reach for us at the moment. In fact, this is one of the reasons why we chose to retain the GP-surrogate model and only tackle the less complex problem of learning solely the AF.\\n\\n2.) Investigation of generalization performance\\n-----------------------------------------------\\nAs suggested, we extended and improved the experiments assessing MetaBO's generalization performance (App. A.2).\\n - We included scalings and translations in our experiments on the generalization performance of MetaBO on the global optimization benchmark functions. We present the proposed heatmap visualization in the updated PDF (App. A.2, Fig. 8).\\n - We also present the generalization performance on the simulation to real task (App. A.2, Fig. 9).\\n - We now *do not include* the training distribution as a subset of the test distribution anymore in these experiments. We emphasize however that the intended use case of our method is an evaluation on functions from the training distribution. In the simulation-to-experiment task, for example, the training distribution is constructed on a range around measured physical parameters which is chosen such that the true parameters of the hardware system lie in this range with high confidence. Nevertheless, these experiments can give interesting insights into the nature of the tasks and we gladly add them to the paper. \\n\\n3.) 
Dependence on the number of source tasks\n--------------------------------------------\nAs you suggested, we extended our experiments on the dependence of MetaBO's performance on the number of training tasks (App. A.3, Fig. 10). The new experiments underline again that MetaBO performs favorably both in the regime of scarce and abundant source data. Furthermore, MetaBO scales much better to a large number of source tasks than the baseline methods (App. A.3, Tab. 3).\n\n4.) Minor comments\n------------------\n - We reorganized and improved Sec. 4 explaining our method and hope that this helps to clarify your questions.\n - We added the proposed references.\n - We corrected some minor spelling mistakes.", "references": "[1] Chen et al., \"Learning to learn without gradient descent by gradient descent\", ICML 2017\n[2] Andrychowicz et al., \"Learning to learn by gradient descent by gradient descent\", NIPS 2016"}",
"{\"title\": \"Author response 1/2 to official blind review #3\", \"comment\": \"We thank you for the overall positive feedback and are grateful for numerous and detailed suggestions to improve our paper. We hope that the following remarks and the new results and clarifications in the updated version of the paper remedy your concerns and can convince you to amend your score. We highlighted the updated sections in yellow in the updated PDF.\\n\\n1.) Suggested baseline methods, Visualizing MetaBO's search strategies\\n----------------------------------------------------------------------\\nWe gladly followed your suggestion to provide additional experiments to gain insight into the search strategies MetaBO produces. Please refer to App. A.1 for details.\\n - We implemented the two suggested baseline methods (GMM-UCB, eps-greedy) to determine whether MetaBO learns representations that go beyond standard AFs combined with a prior over x. To obtain an upper bound on what could be achieved by tuning the parameters w (of GMM-UCB) as well as \\\\epsilon (of eps-greedy), we selected their best value on the test set (of course, this can't be done in practice, but even when this approach is allowed to \\\"cheat\\\" like this, MetaBO still performs better), cf. App. A.1, Tab. 2. \\nOn top of your suggestion, we also additionally considered a schedule which gradually decreases w and \\\\epsilon over the course of an optimization episode in order to reduce the impact of the prior as data on the target task becomes more and more reliable (we saw this as the natural extension of your proposed baselines). MetaBO still outperforms the proposed baseline methods, indicating that it learns search strategies which go beyond a prior over x combined with standard AFs, cf. App. A.1, Fig. 7. \\n - We would like to point out that this was to be expected at least for GMM-UCB as this method is very closely related to the TAF-approach which served as a baseline in our paper. Indeed, TAF is also a weighted superposition of a prior from the source tasks (observed improvement according to the source GPs) and a standard AF (EI) on the target task. Moreover, TAF employs principled mechanisms to adjust the weights of this superposition according to the relevance of source data on the target tasks (resulting in the presented versions TAF-ME and TAF-RANKING).\\n - To shed further light on the search strategies MetaBO produces, we devised two simple one-dimensional toy problems (Rhino-1 (App. A.1, Fig. 5), Rhino-2 (App. A.1, Fig. 6)) to demonstrate that MetaBO learns to use non-greedy evaluations in the beginning of an episode to obtain high information gain (rather than low regret) about the target function. This results in more efficient search strategies compared to approaches which simply favour specific zones in the search space.\\n\\n2.) Extended experiments on functions drawn from GP priors, dimensionality-agnostic NAFs\\n----------------------------------------------------------------------------------------\\nAs suggested, we extended and improved the experiments on objective functions sampled from GP priors. Please refer to App. A.4 for details.\\n - To increase the complexity of the tasks, we performed experiments on smaller lengthscales with RBF-kernel and additionally performed experiments using the Matern-5/2 kernel (App. A.4, Fig. 11).\\n - We would like to emphasize that the experiments on GP priors merely serve as a sanity check in our paper. 
The focus of our work has been the transfer-learning setting in which the x-feature plays a central role as it enables MetaBO to recognize structure in the source tasks to learn sophisticated sampling strategies (as exemplified by our new Rhino-experiments). Note that all other considered transfer learning methods (including GMM-UCB, eps-greedy and TAF) also rely on this input feature (through the GMM, the best source designs, and the source GPs, respectively). Therefore, we did not further investigate dimension-agnostic versions of MetaBO in the paper. Nevertheless, we agree that this is an interesting route of research which we consider to address in more detail in future work. \\n - We now plot the results in log-scale.\\n\\n3.) Applicability of MetaBO regarding test- and training-time\\n-------------------------------------------------------------\\nWe would like to point out that NAFs can be used as a plug-in feature in any BO framework as it has the exact same interface as standard AFs. Furthermore, gradients for AF-optimization can be obtained effortlessly using automatic differentiation frameworks (we used the standard PyTorch framework for our implementation). To demonstrate that test-time runtime is not increased considerably compared to standard EI and to show that MetaBO scales much better w.r.t. the amount of source data compared to TAF, we added a table (App. A.3, Tab. 3) to our paper which compares the presented AFs with respect to test-time runtime. Regarding training time, we would like to point to the second paragraph of Sec. 5, where we now detail the computational resources for NAF-training.\"}",
"{\"title\": \"Author response 2/2 to official blind review #3\", \"comment\": \"*** Second part of author's response, please also refer to first part ***\\n\\n4.) Tracked regret level in generalization experiments\\n------------------------------------------------------\\nFollowing suggestions of AnonReviewer2, we extended the experiments investigating MetaBO's generalization performance on the global optimization benchmark functions (App. A.2, Fig. 8). In these new experiments, we consistently used the 1%-percentile of evaluations of the respective objective functions on a Sobol grid with one million points as the regret threshold. (We used different regret thresholds for different functions, since an error of 0.1 might be a lot for one function but only little for another; the 1%-percentile does not suffer from this issue, as it does already adapt to the range of outputs of the function at hand.)\\n\\n5.) Multi-task GPs\\n------------------\\nWe did not consider multi-task GPs as proposed by Swersky et al. [2] as a baseline method in our paper because it is reported in the literature [3] that the performance of this method degrades for more than approximately M=5 tasks. While it is indeed correct that MTBO's global probabilistic model should scale to M=20 tasks with a few tens of data points each, it is reported to be infeasible to \\\"correctly\\\" determine (using MCMC) the MxM parameters of the task-correlation kernel.\\n\\n6.) Incumbent as input feature\\n------------------------------\\nWe thank you for the suggestion to add the incumbent to the set of input features of NAF. We will consider adding this feature in future experiments, but we note that this should only improve the performance of our method. \\n\\n7.) Include flips and rotations\\n-------------------------------\\nWe agree that this would be one of many further interesting experiments to perform. However, due to time constraints in the rebuttal phase, we decided to focus on your other suggestions, as we feel they might give a better impression of MetaBO's capabilities.\\n\\n8.) Nitpicks, spelling, and grammar\\n-----------------------------------\\n- We went through the paper again and corrected some minor spelling and grammatic mistakes.\\n- The loss function of Chen et al. [1] consists of the sum of the losses incurred over the optimization episodes performed during training. To be able to train this in a supervised fashion, one has to backpropagate gradients through the whole optimization episode which includes the objective function evaluations, the GP, as well as EI. Therefore, in the original setting of [1], gradients of the objective functions are necessary. It is indeed correct that Chen et al. used samples from a GP-prior as objective functions during training. However, to apply their method in a transfer learning setting, one would have to use the available source objective functions as the training distribution. Therefore, the gradients of these objectives would have to be available. Please refer also to our discussion of this point in the answer to AnonReviewer2's review.\", \"references\": \"[1] Chen et al., \\\"Learning to learn without gradient descent by gradient descent\\\", ICML 2017\\n[2] Swersky et al., \\\"Multi-task bayesian optimization\\\", NIPS 2013\\n[3] Klein et al., \\\"Fast bayesian optimization of machine learning hyperparameters on large datasets\\\", AISTATS 2017\"}",
"{\"title\": \"Author response to official blind review #1\", \"comment\": \"Thanks for your very positive feedback and the acceptance score. We gladly answer the remaining open questions. We highlighted the updated sections in yellow in the updated PDF.\\n\\n1.) Gain insights into behavior of NAFs\\n---------------------------------------\\nWe provide new experiments to give more insights into the behavior of our neural acquisition functions (NAFs). The results show that MetaBO's NAFs indeed learn representations that go beyond standard AFs combined with a prior over x. Please refer to Appendix A.1 of the updated PDF for details.\\n - We devised two one-dimensional toy problems (Rhino-1 (App. A.1, Fig. 5), Rhino-2 (App. A.1, Fig. 6)) to demonstrate that MetaBO learns to use non-greedy evaluations in the beginning of an episode to obtain high information gain (rather than low regret) about the target function. This results in more efficient search strategies compared to approaches which simply favor specific zones in the search space. \\n - Note that this effect can already be observed in the original results in our paper (Fig. 2, Fig. 3(a)), where MetaBO starts episodes with evaluations yielding higher regret than other pre-informed AFs (TAF) but quickly surpasses their performance by using the information obtained through these non-greedy evaluations. \\n - We further implemented two additional easily-interpretable baseline methods (GMM-UCB, eps-greedy) as proposed by AnonReviewer3 which rely solely on a prior over x. The results (App. A.1, Fig. 7) show that such simple approaches are not able to reach MetaBO's performance, underlining that MetaBO produces more sophisticated search strategies.\\n\\n2.) Difference to learning-to-learn\\n-----------------------------------\\nRegarding your question of the difference of our approach to a learning-to-learn type approach such as Chen et al. [1], we would like to point you to our answer to AnonReviewer2, where we discuss this question in detail.\\n\\n3.) Architecture\\n----------------\\nWe thank you for the remark that the right-most panel of Fig. 1 is inaccurate. It indeed shows a continuous distribution, while our policy defines a categorical distribution. However, our NAFs can be evaluated at any point in the domain and our method does indeed use an adaptive grid \\\\xi_t to form this categorical distribution from the AF outputs during training. We wanted to emphasize this through the shaded area in our figure. We improved Fig. 1 in the PDF to remove the inaccuracies and extended and improved our description of our architecture in Sec. 4. We hope that the additional explanations help to clarify your questions.\\n\\n4.) Minor mistakes\\n------------------\\nWe corrected the minor mistake. Furthermore, we explained that we did not carry out expensive hardware experiments for MetaBO-50 because it did not show promising performance in simulation compared to the full version of MetaBO.\", \"references\": \"[1] Chen et al., \\\"Learning to learn without gradient descent by gradient descent\\\", ICML 2017\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a framework for meta learning neural acquisition functions for the Bayesian optimization of various underivable functions. The neural acquisition functions are learned using proximal policy optimization in an outer loop on different problems on the same domain, and the learned acquisition function can be deployed at test time in a practically vanilla Bayesian optimization procedure. The authors demonstrate the performance of the method through benchmarks on four problems.\\n\\nI recommend that this paper be accepted for publication. The paper is well written and it proposes a novel direction for research. However, I think that the authors should look further inside their newly designed acquisition functions, not merely treat them as black boxes. Find below some questions and comments.\\n\\n\\nDue to the inclusion of the sample position x in the state tuple, I am curious as to what the authors think is the difference between their method and a learning-to-learn type of approach. Is the acquisition function learning to favor specific zones in the search space based on previous experiments? Some more experiments or insights on this would be useful to better understand what makes this method succesful.\\n\\nWhy was a categorical distribution used for the policy? These samples are located in D, aren't you getting rid of information by assuming they are completely independent? Aren't you also biasing the distribution by adding the local maxima to the set of \\u03be (Xi)?\\n\\nAlso, the right-most block in Figure 1 shows a continuous probability distribution, which is incorrect. If the distribution is indeed categorical, there is no continuity between points.\", \"minor_mistakes\": [\"page 5, paragraph 3: \\\"This choice does not penalize explorative evaluations which do not yield and immediate improvement\\\" should read \\\"an immediate improvement\\\"\", \"Figure 4b, MetaBO-50 is missing\", \"***********\"], \"post_rebuttal\": \"************\\n\\nI have read the other reviews and the various replies by the authors. I'd say you did a good job in answering most questions and added a lot of valuable information in the appendices. I maintain my score.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: The authors propose a meta-learning based alternative to standard acquisition functions (AFs), whereby a pretrained neural network outputs acquisition values as a function of hand-chosen features. These neural acquisition functions (NAFs) are trained on sets of related tasks using standard RL methods and, subsequently, employed as drop-in replacements for vanilla AFs at test-time.\", \"feedback\": \"Overall, the proposed method makes sense and would benefit from further experimental ablation. Using RL to automatically derive (N)AFs is a nice change of pace from the hand-crafted heuristics that dominate BO. I like the ideas at play here and hope that you will convince me to amend my score.\\n\\nResults on synthetic functions presented in the body of the paper demonstrate that NAF outperforms, e.g., EI when transferring between homogenous tasks. In contrast, results when transferring between relatively heterogenous functions (Fig. 9) indicate that the aforementioned performance gain reflect NAFs ability to specialize. Two things remain unclear however: \\n a. What types of regularity are NAFs able to exploit?\\n b. How quickly do NAFs benefits fall off as tasks become increasingly heterogenous?\\n\\n\\nRegarding (a), I am not yet convinced that NAFs learn representations that go beyond standard AFs combined with a prior over $x$. To help test this hypothesis, here is a sketch of a simple baseline algorithm:\\n 1. Fit, e.g., a Gaussian Mixture Model to the top $k=1$ designs $x^{*}_{i}$ on observed tasks $i \\\\in [1, N]$, \\n 2. Given a new task $f_{j}$, let log-likelihood $GMM(x)$ act as a 'prior' of sorts on $x$ \\n 3. Use cross-validation to tune the scalar parameter $w$ of a new AF defined as the convex combination:\\n\\n GMM-UCB(x_{k}) = w * GMM(x_{k}) + (1 - w) * UCB(x_{k})\\n = w * GMM(x_{k}) + (1 - w) * [\\\\mu_{k} + \\\\sqrt{\\\\beta} * \\\\sigma_{k}].\\n\\nI suggest using UCB both because NAF could easily learn it from its inputs and because EI values often decay dramatically over the course of BO (I usually set UCB's confidence parameter to a fixed value $\\\\beta = 2$). \\n\\nFurther simplifying this idea, you could instead use an $\\\\epsilon$-greedy style heuristic that, with probability $\\\\epsilon$, samples without replacement from the set of historical minimizers and otherwise uses a standard AF. These baselines are comparatively straightforward and easily interpreted, so I hope that you will consider adding something along these lines.\\n\\n\\nAdditionally, here are some questions/suggests to help probe (a-b):\\n 1. Another baseline: EI with multi-task GP? The cubic scaling should be fine for, e.g., 'xxx-20' multi-task variants.\\n 2. Extend experiments on functions drawn from GP priors (Fig 9):\\n i. How does homogeneity (as enforced via the GP hyperprior) impact performance when transferring knowledge?\\n ii. Rate of convergence suggests sampled tasks may be too easy; consider using Matern-5/2 and smaller lengthscales [*].\\n 3. What happens if you expand the task augmentation process to further include, e.g., flips and rotations?\\n 4. How do 'dimension-agnostic' versions of NAF (where $x$ is excluded from its input) perform on other synthetic tasks?\\n 5. 
Visualizing NAF (or the search strategies it produces) would be useful for building intuition.\\n 6. How were NAF input features chosen? Were alternatives, such as also passing the 'best seen' value, considered?\\n 7. How easy to use are NAFs in comparison to alternative AFs (both in terms of training and test-time maximization)?\\n 8. Please report regret in log-scale (in appendix); currently, it is hard to tell what is going on in some places. Similarly, the tracked regret level in Figures 3 & 7 changes between tasks without explanation.\\n\\n\\nIn summary, I genuinely want NAF to succeed but am not yet convinced of its performance. If you can provide empirical results to help extinguish my doubts, I will gladly change my assessment.\\n\\n\\nNitpicks, Spelling, & Grammar: \\n - Some minor spelling and/or grammatical errors, but the paper reads fairly well.\\n - On [Chen et al., 2017]: To the best of my knowledge, these RNN-based methods only require the gradient of the loss function. For example, using GP-based EI as the training signal only requires differentiating through EI + GP rather than through the target function $f$. Similarly, in cases where gradients are not available, the authors allude to the use of RL algorithms such as REINFORCE.\\n\\n[*] For Matern-5/2, just change the prior on your basis functions' weight parameters (https://github.com/metabo-iclr2020/MetaBO/blob/master/metabo/environment/objectives.py#L295) from standard normal to multivariate-t with 5 degrees of freedom.\"}",
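A minimal sketch of the GMM-UCB baseline proposed in the review above (illustrative only; `best_designs` is placeholder data standing in for the top designs x*_i from the source tasks, `mu` and `sigma` are the GP posterior mean and standard deviation at candidate points, and `n_components=3` is an arbitrary choice):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Step 1: fit a GMM to the best designs observed on the N source tasks.
best_designs = np.random.rand(50, 2)  # stand-in for the historical optimizers
gmm = GaussianMixture(n_components=3, random_state=0).fit(best_designs)

def gmm_ucb(x, mu, sigma, w=0.5, beta=2.0):
    """Steps 2-3: convex combination of the GMM 'prior' and fixed-beta UCB."""
    prior = gmm.score_samples(np.atleast_2d(x))  # log-likelihood GMM(x)
    return w * prior + (1.0 - w) * (mu + np.sqrt(beta) * sigma)
```

As in the review, the weight `w` would be tuned by cross-validation (or, as in the authors' added experiments, selected on the test set to obtain an upper bound on the baseline's performance).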
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors present MetaBO, which uses reinforcement learning to meta-learn the acquisition function (AF) for Bayesian Optimization (BO) instead of using a standard constant AF. The authors shows that MetaBO enables transferring knowledge between tasks and increasing sample efficiency on new tasks. The paper is mostly clearly written and I am not aware of existing work on meta-learning the AF for BO. However, the approach is related to Chen et al, which is cited in the text but not used as a baseline. It is also not shown clearly enough how the performance of MetaBO depends on the number of training tasks and distance between training and test tasks. I therefore consider the paper as borderline.\\n\\nMajor comments\\n=============\\n1. The presented approach is very similar to Chen et al, which is discussed in the related work section but not used as a baseline. Although Chen et al assumed that f(x) is differentiable, their approach can be easily generalized to non-differentiable functions by using RL as Chen et al discussed in the last paragraph of section 2.1. Chen et al does not depend on a GP and is therefore more scalable. The source code is publicly available (https://github.com/deepmind/learning-to-learn) and you can also adapt your implementation by removing the GP part.\\n\\n2. Global Optimization Benchmark Functions: How does the performance of MetaBO depend on the number of training samples (number of training tasks times the budget T)?\\n\\n3. Figure 3: How does MetaBO generalizes to functions that are translated and scaled at the same time? This can be visualized as a heatmap with the scaling and translation on the x and y axis, and using the color to show the number of steps to reach a certain reward. How does the generalization performance depend on the noise level, where the noise can be sampled from standard normal distribution? Why does EI perform better if the function is translated more?\\n\\n4. Simulation-to-Real task: How does the generalization performance of MetaBO depend on the distance between training and source tasks (x-axis: distance; y-axis: steps to reach a certain reward)? You sampled test tasks 10%-200% around the true parameters. Test tasks can therefore have identical or similar parameters than training tasks.\\n\\n5. Simulation-to-Real task: How does the performance depend on the number of training tasks (x-axis: # training tasks; y-axis: steps to reach a certain performance)?\\n\\nMinor comments\\n=============\\n6. Section 1, 2nd paragraph: The performance of BO also depends on the GP kernel and kernel hyper-parameters, not only the AF. Please mention this. Similarly, \\u2018no need to calibrate any hyperparameter\\u2019 in section 4 ignores GP hyper-parameters. Please clarify.\\n\\n7. Section 2, 4th paragraph: A Neural Process (https://arxiv.org/abs/1807.01622) is another scalable alternative to a GP. Please cite.\\n\\n8. Section 3, 2nd paragraph: Please cite standard AFs such as EI, PI, UCP. \\n\\n9. Section 4, last paragraph before \\u2018Training procedure\\u2019. The state s_t is undefined at this point. This section misses a clear description of the state, reward, and transition function of the MDB. Does the state s_t take previous function evaluations into account (e.g. 
via an RNN state), or only \\mu and \\sigma at the current step t? Does the state include the time step, as described in the text and in the section about the value network in the appendix but not in Table 1?\\n\\n10. Section 4, \u2018the state corresponds to the entire functions\u2019. It only depends on the first two moments (and the time step t?).\\n\\n11. Section 4: replace \u2018not to be available\u2019 by \u2018unavailable\u2019.\\n\\n12. Section 4: reference or describe \u2018Sobol grid\u2019.\\n\\n13. Section 4: The approach to maximize the AF on grid points does not scale to high-dimensional search spaces. Please also clarify how global and local grid points were chosen. In particular, \u2018local maximization\u2019 is unclear. Also, \u2018cheap approximation\u2019 of the global maximum of f(x) is infeasible if the search space is high-dimensional. \\n\\n14. Please move Figure 3 above Figure 4.\"}"
]
} |
B1xu6yStPH | Using Explainabilty to Detect Adversarial Attacks | [
"Ohad Amosy and Gal Chechik"
] | Deep learning models are often sensitive to adversarial attacks, where carefully-designed input samples can cause the system to produce incorrect decisions. Here we focus on the problem of detecting attacks, rather than robust classification, since detecting that an attack occurs may be even more important than avoiding misclassification. We build on advances in explainability, where activity-map-like explanations are used to justify and validate decisions, by highlighting features that are involved with a classification decision. The key observation is that it is hard to create explanations for incorrect decisions. We propose EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class. Specifically, we use SHAP, which uses Shapley values in the space of the input image, to identify which input features contribute to a class decision. Interestingly, this approach does not require modifying the attacked model, and it can be applied without modelling a specific attack. It can therefore be applied successfully to detect unfamiliar attacks that were unknown at the time the detection model was designed. We evaluate EXAID on two benchmark datasets, CIFAR-10 and SVHN, and against three leading attack techniques, FGSM, PGD and C&W. We find that EXAID improves over the SoTA detection methods by a large margin across a wide range of noise levels, improving detection from 70% to over 90% for small perturbations. | [
"adversarial",
"detection",
"explainability"
] | Reject | https://openreview.net/pdf?id=B1xu6yStPH | https://openreview.net/forum?id=B1xu6yStPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Rj0ZE8tyv",
"HJeuJ1NX9S",
"Skxtu9afqB",
"Bke1qCDAtB",
"r1xUKwB0FH",
"SylGovyRFS",
"rJeIFIAcYS",
"Hyxa0GMcFH",
"r1l5cbcsPB"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798737842,
1572187871767,
1572162160553,
1571876486961,
1571866494363,
1571841946212,
1571640957925,
1571590868945,
1569591698436
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1995/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1995/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1995/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1995/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1995/Authors"
],
[
"~Anthony_Wittmer1"
],
[
"ICLR.cc/2020/Conference/Paper1995/Authors"
],
[
"~Anthony_Wittmer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes EXAID, a method to detect adversarial attacks by building on the advances in explainability (particularly SHAP), where activity-map-like explanations are used to justify and validate decisions. Though it may have some valuable ideas, the execution is not satisfying, with various issues raised in comments. No rebuttal was provided.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1995\", \"review\": [\"The paper proposes a method to check whether a model is under attack by using state of the art explainability model, SHAP. They evaluated their technique using CIFAR-10 and SVHN w.r.t. 5 baseline techniques. They showed their method outperforms all the other baselines with a significant margin.\", \"Overall I think the paper made a valuable contribution to the adversarial ML literature. Using explainability to detect the presence of adversarial attacks seems like a nice intuitive idea and the results show that it indeed works.\", \"However, the contribution of the paper is rather incremental. They just used SHAPE to adversarial and negative examples. I do not see any insight while explaining the results.\", \"Why under PGD and FGSM attack under higher noise, the proposed technique is similar or slightly worse than Lid and Mohalonabis baselines?\", \"I would also like to see how these results hold good for a complicated dataset like ImageNet\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary: The authors propose an explanation-based adversarial example detection algorithm. The main idea is to train a discriminator to detect whether the explanatory saliency map is consistent with the input. Experiments have been conducted on CIFAR10 and SVHN to validate the method.\", \"comments\": \"+ The idea is straightforward and easy to follow.\\n\\n- The use of SHAP as the only explanation method is not well explained. There are a plenty of works on visual explanation methods, such as guided-backprop[1], excitation-backprop[2], integrated gradient[3], Grad-CAM[4], real-time saliency[5] and so on. And based on my expertise, SHAP cannot generate the most accurate saliency among these methods. If the proposed framework is general, why not to conduct ablation study on the different choice of explainer?\\n\\n- Doubts on the effectiveness of the proposed method. According to former works[6, 7], explanatory saliency methods are vulnerable and unreliable with respect to input perturbations. But in this paper, the authors assume that the explanation saliency map for normal examples are perfectly correct and used as positive instances for training the discriminator. I think they only focus on target attack, in which the attacking target label is semantically distinct from the original label, and the resulting saliency map distribution is very different from the correct one. However, considering a tabby cat image is perturbed to become tiger cat, since two classes are very close, the resulting saliency maps should be similar and the detector may fail to detect the adversarial example. Therefore, I encourage the authors to provide more results on this challenging scenario (for example, conduct un-target attack on imagenet dataset).\\n\\n- The reported results in Figure 2(e) is abnormal. First, the blue line (authors' method) is very close to AUC=1.0 across different noise levels, which means that the detector can perfectly classify all the adversarial examples in all the situation. Second, the reported values of other methods are not correct. For example, the black line (original Mahalanobis) is below AUC=0.5 across all the noise level. However, in the Table3 ResNet-CIFAR10 row of its original paper[8], the reported AUC under C&W attack is 95.84, which is much larger than those shown in the figure. Therefore, I think the comparison is invalid. Similar problems also appear in Figure 2(f).\\n\\n[1] J. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. (2015). Striving for simplicity: The all convolutional net. In ICLR (workshop track).\\n[2] J. Zhang, Z. Lin, J. Brandt, X. Shen, and S. Sclaroff. (2016). Top-down neural attention by excitation backprop. In ECCV.\\n[3] Sundararajan, M., Taly, A., & Yan, Q. (2017). Axiomatic attribution for deep networks. In ICML. \\n[4] Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV.\\n[5] Dabkowski, P., & Gal, Y. (2017). Real time image saliency for black box classifiers. In NeurIPS.\\n[6] Kindermans, P. J., Hooker, S., Adebayo, J., Alber, M., Sch\\u00fctt, K. T., D\\u00e4hne, S., ... & Kim, B. (2019). The (un) reliability of saliency methods. 
In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (pp. 267-280). Springer, Cham.\\n[7] Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. In NeurIPS\\n[8] Lee, K., Lee, K., Lee, H., & Shin, J. (2018). A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS\"}",
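The detection pipeline summarized in the review above can be sketched as follows (an assumed setup, not the authors' code: `classifier` is the attacked model, `background` a batch of reference inputs for SHAP, and `detector` a trained discriminator with a scikit-learn-style interface; `shap.DeepExplainer` is one of several possible explainers):

```python
import shap

explainer = shap.DeepExplainer(classifier, background)

def exaid_score(x, predicted_class, detector):
    """Score how likely the explanation of the predicted class is inconsistent
    with it, i.e., how likely x is adversarial."""
    shap_maps = explainer.shap_values(x)             # one attribution map per class
    phi = shap_maps[predicted_class].reshape(1, -1)  # explanation of the prediction
    return detector.predict_proba(phi)[0, 1]
```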
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"A Simple method to detect adversarial examples, but needs more work.\\n\\n#Summary:\\nThe paper proposed a method that utilizes the model\\u2019s explainability to detect adversarial images whose explanations that are not consistent with the predicted class. The explainability is generated by SHAP, which uses Shapley values to identify relative contributions of each input to a class decision. It designs two detection methods: EXAID familiar, which is aimed to detect the known attacks and EXAID unknown, which is against unknown attacks. Both of the two methods are evaluated on perturbed test data which are generated by FGSM, PGD and CW attack with perturbations of different magnitudes. Qualitative results also show that the proposed method can effectively detect adversaries, especially when the perturbation is relatively small.\\n\\n#Strength\\nThe method is easy to implement and using the idea of interpretation for detecting adversarial examples seems interesting.\\n\\nGood results are demonstrated compared with other comparators.\\n\\n#Weakness\\nThe idea of this paper is based on the interpretation method of DNN. However, it has been shown that these interpretation methods are not reliable and easy to be manipulated [1][2]. Therefore, although the method is simple to design, it also brings other security concerns.\\nUnfortunately, the paper does not address these issues. In addition, the comparators listed in the experiments are not state-of-art or common baselines. It is either not clear why authors modified the existing method and develop their own \\u201cunsupervised\\u201d version. \\nIn the experiments, many details are omitted. For example, how is the \\u201cnoise level\\u201d defined? Are they based on L1, L2 or L-inf perturbation? For PGD attack, how many iterations does the generation run and what is the step size? How many effective adversarial examples are generated for training and testing? And all the experiments are conducted in a relatively small dataset, it is also suggested to do experiments on large datasets, e.g. Imagenet.\\nIn the evaluation part, it looks strange to me why the EXAID familiar performs worse than EXAID unknown in evaluating FGSM attack on SVHN since the EXAID familiar is trained using FGSM attack.\\n\\n#Presentation\\nI think the authors used a wrong template to generate the article. The font looks strange and the headnote indicates it is prepared for ICLR2020. The paper contains many typos and even the title contains a misspelling. Poor coverage of citations. There are more works for detecting adversarial examples that are published, e.g. [3][4][5]. On the other hand, the paper does not have the literature review for work related to the model interpretation.\\n\\nOverall, I think the paper is not good enough for publication at ICLR.\\n[1] Dombrowski, Ann-Kathrin, et al. \\\"Explanations can be manipulated and geometry is to blame.\\\" arXiv preprint arXiv:1906.07983 (2019).\\n[2] Ghorbani, Amirata, Abubakar Abid, and James Zou. \\\"Interpretation of neural networks is fragile.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019.\\n[3] Meng, Dongyu, and Hao Chen. 
\\\"Magnet: a two-pronged defense against adversarial examples.\\\" In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135-147. ACM, 2017.\\n[4] Liao, Fangzhou, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. \\\"Defense against adversarial attacks using high-level representation guided denoiser.\\\" In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1778-1787. 2018.\\n[5] Ma, Shiqing, Yingqi Liu, Guanhong Tao, Wen-Chuan Lee, and Xiangyu Zhang. \\\"NIC: Detecting Adversarial Samples with Neural Network Invariant Checking.\\\" In NDSS. 2019.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper suggests a method for detecting adversarial attacks known as EXAID, which leverages deep learning explainability techniques to detect adversarial examples. The method works by looking at the prediction made by the classifier as well as the output of the explainability method, and labelling the input as an adversarial example if the predicted class is inconsistent with the model explanation. EXAID uses Shapley values as the explanation technique, and is shown to successfully detect many standard first-order attacks.\\n\\nThough method is well-presented and the evaluation is substantial, the threat model of the oblivious adversary is unconvincing. The paper makes the argument that oblivious adversaries are more prevalent in the real world, but several works [1,2,3,etc.] have shown that with only query access to input-label pairs from a deep learning-based system, it is possible to construct black-box adversarial attacks. Thus, it is unclear why an attacker cannot just treat the detection mechanism as part of this black box, and mount a successful query-based attack. \\n\\nThough I recognize that the task of detection is separate from the task of robust classification, in both cases the defender should at least operate in the case where the attacker has input-output access to the end-to-end system (including whatever detection mechanisms are present). In particular, it seems impossible to \\\"hide\\\" a detector from an end user (when the method detects an adversarial example, it will alert the user somehow that the input was rejected), and so the user will be able to use this information to fool the system. The authors should investigate the white-box accuracy of their detection system, or at the very least try black-box attacks against the detector. For this reason I do not recommend acceptance for the paper at this time.\\n\\n[1] https://arxiv.org/abs/1804.08598\\n[2] https://arxiv.org/abs/1807.04457\\n[3] https://arxiv.org/abs/1712.04248\"}",
"{\"comment\": \"Thank you for your comment.\", \"there_is_a_fundamental_distinction_that_should_be_stressed_between_two_different_tasks\": \"(A) Build robustness against an attack, and (B) detect that an attack was made. While the tasks are related, they are fundamentally different.\\n\\nThe current paper discusses attack detection. The question points out that the results differ from a model-robustness paper, which is expected.\", \"more_specifically\": \"Consistent with previous papers, we do find that running Nattack on our model, the success rate of the attack was 100% on CIFAR-10. However, the current paper aims to *detect* successful adversarial examples rather than make a model more robust. Also, we did not use Nattack to attack the robust LID model that was used in the Nattack paper (which has accuracy of 66.9%), but to attack our base model, which is unprotected, and has accuracy of 87%. We used our own model instead of the robust model to maintain consistency across the rest of the experiments. In this setup, which was justified in the paper, the attack was not aimed to evade LID detection, so it isn't surprising Nattack didn't completely evade LID detector.\", \"title\": \"Attack detection and Model robustness are different tasks\"}",
"{\"comment\": \"Sorry, I find the result about Nattack in terms of LID is strange and unconvincing.\\n\\nAs the reported result by the work of Nattack , Nattck has broken the detection of LID with the attack success rate of 100%. That is, the result of LID on Nattack is 0%. \\n\\nHowever, as the reply shown, the result of LID on Nattack reported by the authors is 67%, which is close with the clean accuracy (66.9%) reported by the work of Nattack and has a big gap with the previous result (0%). Maybe the minor adjustments make something wrong for Nattack.\", \"title\": \"Strange results\"}",
"{\"comment\": \"Thank you for your important feedback and helpful suggestion!\\n\\nWe originally discussed Nattack [1] when explaining the attack scenario, but did not compare with it directly to keep the focus on attack detection, rather than model robustness.\\n\\nFollowing your comment, we further evaluated our detection approach with the [1] attacks. Specifically, we used the implementation provided by the authors for attacking LID (github.com/Cold-Winter/Nattack/tree/master/lid) using their best published hyper parameters, and made minor adjustments to fit our pytorch model. We also reduced the population size from 300 to 200 so the attack model fit our K40 GPU RAM. We evaluated our defense using the successful adversarial images.\\n\\nFor Nattack, \\u00a0EXAID (our approach) again consistently outperforms other detection baselines, on both CIFAR and SVHN, while keeping detection rates at the same ball park as with other attacks. Specifically, on CIFAR, EXAID improves detection AUC over the baselines, from 0.70 (ANR), 0.68 (unsupervised LID), 0.67 (original LID), 0.43 (unsupervised Mahalanobis) and 0.46 (original Mahalanobis) to *0.96* (EXAID familiar) and 0.89 (EXAID unknown).\\n\\nSimilarly, on SVHN, EXAID improves from 0.53 (ANR), 0.63 (expand LID), 0.49 (original LID), 0.35 (expand Mahalanobis) and 0.56 (original Mahalanobis), to *0.95* (EXAID unknown) and 0.78 (EXAID familiar).\\n\\nWe will add detailed results to the next version of the paper.\", \"title\": \"EXAID also detects Nattack, outperforms baselines.\"}",
"{\"comment\": \"Hi, this paper is an interesting work. However, I have some questions about the evalation.\\n\\nI think a stronger attack is missing in the evalation, i.e., Nattack[1]. Since the baseline LID (Ma et al., 2018). has been broken by Nattck with the attack success rate of 100%, I have the question whether the proposed method provides the true robustness against the adversarial examples. That it, I am wondering does the proposed method suffer from the same attack?\\n\\nIt would be solid to include further experiments on the robustness against Nattack in the paper. \\n\\n[1] NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks. ICML 2019\", \"title\": \"Interesting work, how about evalating on Nattack?\"}"
]
} |
B1lda1HtvB | Feature Selection using Stochastic Gates | [
"Yutaro Yamada",
"Ofir Lindenbaum",
"Sahand Negahban",
"Yuval Kluger"
] | Feature selection problems have been extensively studied in the setting of linear estimation, for instance LASSO, but less emphasis has been placed on feature selection for non-linear functions. In this study, we propose a method for feature selection in high-dimensional non-linear function estimation problems. The new procedure is based on directly penalizing the $\ell_0$ norm of features, or the count of the number of selected features. Our $\ell_0$ based regularization relies on a continuous relaxation of the Bernoulli distribution, which allows our model to learn the parameters of the approximate Bernoulli distributions via gradient descent. The proposed framework simultaneously learns a non-linear regression or classification function while selecting a small subset of features. We provide an information-theoretic justification for incorporating Bernoulli distribution into our approach. Furthermore, we evaluate our method using synthetic and real-life data and demonstrate that our approach outperforms other embedded methods in terms of predictive performance and feature selection. | [
"Feature selection",
"classification",
"regression",
"survival analysis"
] | Reject | https://openreview.net/pdf?id=B1lda1HtvB | https://openreview.net/forum?id=B1lda1HtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_7XeEO8vrY",
"HylAbOm2iB",
"ByeWmvX3jH",
"rkeInLm3jr",
"H1e8061p9H",
"rye3z0ytcr",
"ryeWRH4TFr",
"SJetxqOMuB",
"SkeKx9U6vB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798737812,
1573824517732,
1573824280943,
1573824174444,
1572826573959,
1572564499819,
1571796424717,
1570044401036,
1569708528615
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1994/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1994/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1994/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1994/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1994/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1994/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1994/Authors"
],
[
"~Ian_Connick_Covert1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose a method for feature selection in non linear models by using an appropriate continuous relaxation of binary feature selection variables. The reviewers found that the paper contains several interesting methodological contributions. However, they thought that the foundations of the methodology make very strong assumptions. Moreover the experimental evaluation is lacking comparison with other methods for non linear feature selection such as that of Doquet et al and Chang et al.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Blind Review #1\", \"comment\": \"We thank the reviewer for the detailed and constructive comments.\\n We propose to use a neural network for feature selection, rather than to perform feature selection in neural networks. Clearly, CNN do not require feature selection since the inputs are pixels. There still isn\\u2019t an effective method for performing feature selection while learning nonlinear complex relationships between variables. Our solution is yet the first $\\\\ell_0$ based embedded method to achieve this task. We disagree with the reviewer's assumption that training a nonlinear model which uses all feature results in comparable performance. In all of the examples presented in the paper, the STG improves the accuracy dramatically compared to all alternatives and compared with a similar NN without feature selection (see DNN results added to plots and tables). Regardless of generalization, identifying a small subset of features that interact through a nonlinear model leads to a number of benefits: reducing experimental costs, enhancing interpretability, computational speed up and even improving model generalization on unseen data. In biomedicine, scientists collect multitude datasets comprising of many biomarkers (e.g., genes or proteins) that require the development of effective diagnostics or prognostics models. For instance, in Genome-wide association studies (GWAS), feature selection can help identify such models and lead to improved risk assessment and reduced cost. \\n \\n In what follows we address each comment following the presented order.\\nP1. We have added the performance of a standard neural network to all of the relevant examples (see DNN in tables and plots). Note that in the COX example DeepSurv is, in fact, a neural network without feature selection. P2. This is a very nice suggestion. We have examined the deterministic non-convex regularization presented by the reviewer and observed that is inferior to the proposed approach in various aspects. In a new section that appears in Section H in the appendix, we detail and demonstrate the differences between the deterministic and stochastic formulation of the proposed gates. We have observed that without stochasticity \\u201cdeterministic gates\\u201d converge to values in the range of (0,1), in contrast, the \\u201cstochastic gates\\u201d converge to {0,1}. Deterministic gates accompanied by thresholding somewhat improves this inherent problem. Importantly, stochastic gates achieve superior results to this two-step deterministic procedure as we now demonstrate. One clear additional advantage of stochastic gates is what we call a \\u201csecond chance\\u201d, where the injected noise allows revaluation of features even if their parameters reached 0/1 in an early training phase.\\n \\nP3. Following the reviewer's concern, we have added two new experiments consisting of a large number of features (Gisette dataset (5000 features) and Reuiter Corpus Volume 1 dataset (47,236 features)). Please see section J.4 and J.5 in the Appendix in which we present the details and results of these two high dimensional experiments. \\nP4. Our initial draft has been available online prior to the submission of [1]. The authors in [1] cite our original draft in their manuscript. P5. Thanks for spotting this, we have revised this unnumbered equation to correct this mistake. P6 Thanks again for this suggestion, we are now demonstrating applicability to ~50K features as pointed in in P3. P7. 
We have added a new section in the appendix (see section I), discussing and demonstrating how to tune the regularization parameter.\\n \\n \\n[1] \\\"Concrete Autoencoders for Differentiable Feature Selection and Reconstruction\\\" by Abid et al. [2019]\"}",
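A sketch of the stochastic gates and the resulting expected-L0 regularizer discussed in the responses, consistent with the abstract's description of a Gaussian-based continuous relaxation of the Bernoulli distribution (the PyTorch framing, the initialization at 0.5, and `sigma=0.5` are assumptions, not taken from the paper):

```python
import torch
from torch.distributions import Normal

class StochasticGates(torch.nn.Module):
    """Gaussian-based continuous relaxation of Bernoulli feature gates."""
    def __init__(self, n_features, sigma=0.5):
        super().__init__()
        self.mu = torch.nn.Parameter(0.5 * torch.ones(n_features))
        self.sigma = sigma

    def forward(self, x):
        # Injected noise gives features a "second chance" during training.
        eps = self.sigma * torch.randn_like(self.mu) if self.training else 0.0
        z = torch.clamp(self.mu + eps, 0.0, 1.0)  # gates converge to {0, 1}
        return x * z

    def expected_l0(self):
        # E[#open gates] = sum_d P(mu_d + eps_d > 0) = sum_d Phi(mu_d / sigma)
        return Normal(0.0, 1.0).cdf(self.mu / self.sigma).sum()
```

During training, a term `lam * gates.expected_l0()` would be added to the task loss, directly penalizing the expected number of selected features.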
"{\"title\": \"Response to Blind Review #2\", \"comment\": \"We thank the reviewer for the detailed and constructive comments. In the following, we address all the points raised by the reviewer.\\nP1. We agree that the STG is indeed one major contribution, which is a simple yet highly effective relaxation to the Bernoulli distribution. We argue that the method itself (embedded feature selection with the STG) is of major importance. To the best of our knowledge, it provides the first embedded non-linear feature selection solution. This is analogous to the contribution of LASSO to the statistical community, despite the fact the same $\\\\ell_1$ regularization was used earlier for basis pursuit.\\nFurthermore, we believe that the STG is useful for other applications. The Hard Concrete (HC) [3] improves upon the Gumbel-Softmax (or equivalently the Concrete distribution) by applying a hard thresholding function to the Concrete values, which allowed the authors in [3] to achieve network sparsification. The proposed STG differs from the HC, as the first is based on a Gaussian and the latter relies on a uniform distribution. We have demonstrated extensively that the HC is less suitable for the task of feature selection compared to STG. This is partially due to the higher empirical variance the HC suffers from. This is demonstrated in several experiments and in Appendix G.2. We have revised section 5 to include more details on the HC. \\nP2. This point is now incorporated in section (2.2) which has been edited to be more concise. \\nP3. This assumption is only used for providing a theoretical connection between the deterministic and stochastic objectives. For example, the true sparsity is used to provide a theoretical analysis of the LASSO in [1,2]. In practice, we do not need to know this number. \\nP4. It is an interesting suggestion to generalize the assumption in future work. \\nP5. We have rephrased subsection 6.1 to improve the flow in the experimental section. The structure of the experiments is changed, we start with experimental evaluation of the proposed approach in section 6 followed by applications in section 7. Acrene is a cancer dataset, We believe the reviewer is referring to the MADELON dataset. We have presented results using this data set in the original submission, which appears in the supplementary material. We have demonstrated that the proposed method achieves state of the art results on the MADELON data. \\nP6. We would be glad to compare to this method. We have emailed the authors to request the code. P7. Thanks for pointing this out. We have changed the citation style as suggested. \\n\\n [1] \\u201cSharp thresholds for high-dimensional and noisy recovery of sparsity\\u201d, Martin J. Wainwright [2009]\\n[2] \\\"On the prediction performance of the lasso.\\\" Dalalyan, Arnak et al. [2017]\"}",
"{\"title\": \"Response to Blind Review #4\", \"comment\": \"We thank the reviewer for the detailed and constructive comments. The reviewer organized his recommendations in 3 groups which we address in consecutive order. P.1) We agree that one main advantage of the proposed method is its ability to perform feature selection while learning a non-linear model. This is achieved by a non-convex objective. We demonstrate empirically using several examples that this is in fact computationally \\u2018benign\\u2019. This is similar to other recent successful results obtained by non-convex optimization via deep neural networks. P2. Indeed, a non-convex formulation is useful for various applications. However, such formulation alone is not sufficient for an embedded feature selection method. A new section was added (see Appendix H) to evaluate the effect of such deterministic non-convex regularization. We have observed that without stochasticity, \\u201cdeterministic gates\\u201d converge to values in the range of (0,1), while the \\u201cstochastic gates\\u201d converge to {0,1}. The deterministic gates accompanied by thresholding somewhat improve the problem. However, stochastic gates achieve superior results to this two-step deterministic procedure. One clear additional advantage of stochastic gates is what we call a \\u201csecond chance\\u201d, where the injected noise allows re-evaluation of features even if their parameters reached 0/1 in an early training phase. P3. Thanks for the suggestion we have added box plots of the median rank for the XOR experiment (see Fig. 1C) and MADELON (see Fig. 6C). This indeed demonstrates the superiority of our method over the alternatives. Furthermore, to evaluate our method in a high dimensional regime we experimented with two additional datasets (see Appendix J.4 and J.5). P4. We agree we believe that this type of continuous relaxation is useful for other applications as well (e.g. basis pursuit, robust representation variational inference and more). Furthermore, we demonstrate that our relaxation outperforms the previously suggested \\u201cHard-Concrete\\u201d, not to mention the \\u201cConcrete\\u201d relaxation which fails to sparsify the feature space.\\n \\nWe next respond to the improvement suggestions provided by the reviewer.\\nS1. We have revised the introduction, providing a more concise motivation and explanation of the proposed approach. \\nS2. Sections 6.4 and 6.5 (now 7.1 and 7.2) were abbreviated. We now refer the reader to the relevant citations for a description of the PBMC and COX datasets. \\nS3. We have re-organized the experimental section. Section 6 now provides experiments evaluating the method in the linear and non-linear setting. In section 7, we demonstrate its utility to biomedical applications, where reducing the number of features translates to cheaper and more accurate medical assays.\\nThe results provided in Table 1 augment the values provided in a recent paper (SRFF) [1]. In [1], the authors provide the optimal results (without referring to the number of features). Here, we intended to demonstrate that the proposed method competes with SRFF in their setting. We have added the number of selected features in Table 2. The description of Optuna is now expanded in the supplementary material. \\nFinally, regarding the minor comment 1, we have used the value of $alpha_N$ as presented in [2] (see section IV). The expression for alpha_N in this paper includes the \\\\sqrt \\\\log k term.\\n[1] Gregorov\\u00e1, Magda, et al. 
\\\"Large-scale nonlinear variable selection via kernel random features.\\\" Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Cham, 2018.\\u200f\\n [2] \\u201cSharp thresholds for high-dimensional and noisy recovery of sparsity, Martin J. Wainwright [2009]\\u201d\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The authors propose a feature selection method for high-dimensional datasets that attempts to fit a model while selecting relevant features.\", \"the_strategy_they_follow_is_below\": \"1. They formulate feature selection as an optimization problem by augmenting standard empirical risk minimization with zero-one variables associated with each feature representing the absence-presence, and adding a penalty proportional to the number of included features. They relax the discrete variables using a continuous relaxation and provide a simple unbiased estimator for the gradient of the relaxation. After training the relaxation is rounded to a zero-one solution by a simple scheme. \\n2. They provide an information theoretic motivation for their formulation of feature selection\\n3. They exhibit the performance of their method on a number of synthetic and real data scenarios: (i) linear models with a true underlying sparse parameter, (ii) binary classification with a small number of true determining features, (iii) regression performance post-feature selection with synthetic non-linear models (with a few determining features) and two real datasets. They also use the method for a classification problem with RNA-seq data on T-cells and a survival analysis based on a breast-cancer dataset called METABRIC. \\n\\nDespite my recommendation, there are a number of things that I like about the paper that I list below, along with directions where I believe the article can be improved. \\n1. At a certain abstraction, the main idea of the paper is to do feature selection at the same time as model fitting (as the LASSO for e.g. does) while ignoring constraints of convexity raised in the optimization, and simply using stochastic gradient with a reasonable unbiased estimate of the gradient. This is a reasonable idea, particularly if under some reasonable assumptions, the non-convex formulation that is obtained is expected to be computationally 'benign'. \\n2. In a number of the experiments, and particularly 6.1 (sparse linear model) 6.2 (noisy XOR classification) I suspect the non-convex formulation is what is providing a lot of the improvement. This has been observed empirically in a number of other settings, for e.g. in matrix completion/factorization problems. Verifying this hypothesis in a simple, synthetic (and therefore controlled) dataset would be a good contribution for a future version. \\n3. The authors have done a fairly good job of validating the method in a number of different settings, even if some of the presentation of their results can possibly be somewhat improved. For e.g. the median rank is better shown with box plots (as in the Chen et al 2018 paper cited by the authors).\\n4. There are a number of relaxations of discrete variables used in optimization and theoretical computer science literature. For instance, the approach of the authors is reminiscent to 'mean field' methods, or standard linear programming relaxation of combinatorial optimization problems (i.e. the first level of the Sherali-Adams LP hierarchy). On the other hand, naive versions of this are not likely to work well on (say) sparse linear regression. The current methods do which suggests that the continuous relaxation is useful. 
\\n\\nAt an expository level, I also think the paper could do with quite a bit of improvement:\\n1. The introduction is sparse and hurried, and does not provide sufficient motivation and intuition for the contributions of the article. \\n2. In 6.4, 6.5, the introduction about RNA-seq or Cox models can be removed and relevant work cited instead. \\n3. Organizing the experiments into real data and synthetic data might be semantically better, though that would necessitate splitting Table 1. I am also unclear on why the authors show performance in Tables 1, 2 independent of the number of features selected, while for the experiment on RNA-seq data the full accuracy/#features tradeoff is given. The sparse explanation about using the Optuna paper is certainly not enough.\", \"minor_comments_not_related_to_decision\": \"1. The value for \\\\alpha_N in the synthetic sparse linear model experiment of 6.1 likely has an extraneous \\\\sqrt{\\\\log k}\"}",
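For concreteness, the method the reviews above describe — per-feature zero-one gates, continuously relaxed, trained jointly with the model by stochastic gradient, and rounded after training — can be sketched in a few lines. This is a minimal illustrative sketch assuming a Gaussian-based stochastic gate; the class name, the fixed sigma, and the rounding rule are illustrative choices, not the authors' actual code.

```python
import math
import torch
import torch.nn as nn

class StochasticGates(nn.Module):
    """Sketch of per-feature stochastic gates with a relaxed L0 penalty.

    Each feature d gets a gate z_d = clip(mu_d + eps_d, 0, 1) with
    eps_d ~ N(0, sigma^2). The expected number of open gates,
    sum_d P(z_d > 0) = sum_d Phi(mu_d / sigma), is differentiable in mu
    and serves as the relaxed L0 regularizer.
    """
    def __init__(self, n_features, sigma=0.5):
        super().__init__()
        self.mu = nn.Parameter(0.5 * torch.ones(n_features))
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            eps = self.sigma * torch.randn_like(self.mu)
        else:
            eps = 0.0  # deterministic gates at evaluation time
        z = torch.clamp(self.mu + eps, 0.0, 1.0)
        return x * z  # soft feature selection

    def l0_penalty(self):
        # P(z_d > 0) = Phi(mu_d / sigma) under the Gaussian perturbation
        return torch.sum(0.5 * (1.0 + torch.erf(self.mu / (self.sigma * math.sqrt(2.0)))))

# usage sketch: loss = task_loss(model(gates(x)), y) + lam * gates.l0_penalty()
```

After training, a hard selection can be read off by keeping feature d iff mu_d > 0, matching the simple rounding scheme the review mentions.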
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper is concerned with embedding a supervised feature selection within a classification setting.\\nThe originality is to use an L_0 regularization (counting the number of retained features), besides the classification loss; the authors leverage the ability to include boolean variables in a neural network and to optimize their value using gradient descent through the reparameterization trick.\", \"i_am_mildly_convinced_by_the_paper\": [\"Out of the four contributions listed p. 2, STG is the most convincing one; still, the description thereof is not cristal clear: the reparametrization trick is not due to the authors. The discussion (section 5) needs be more detailed, adding the HC details (presently in appendix); could you comment upon the difference between the proposed STG and the Gumbel-Softmax due to Jang et al, cited ?\", \"Likewise the authors delve into details regarding the early state of the art, while omitting some key points. For instance, p. 3, the fact that many authors replaced an L_0 penalization with an L_1 one is rooted on the fact that, provided that the optimal L_0 solution is sparse enough, the L_0 and L_1 problems have same solutions. This section can be summarized;\", \"the sought sparsity is assumed to be known, which is bold;\", \"Assumption 2 is debatable; one would like to find at most the Markov blanket of the label variable. See Markov Blanket Feature Selection for Support Vector Machines, AAAI 08.\", \"There are digressions in the paper which make it harder to follow the argumentation (section 6.1); section 6.2 is not at the state of the art; in Guyon et al's Feature Selection Challenge (2003), the Arcene artificial problem involves a XOR with 5 key features, and 15 additional features are functions of the key features.\", \"Suggestion, you might compare with the L_0 inspired regularization setting used for unsupervised feature selection in Agnostic Feature Selection, Doquet et al, 2019.\"], \"details\": \"check the citation style: use \\\\citep instead of \\\\cite.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The author rebuttal sufficiently addresses my concerns, so I am upgrading my score.\\n\\n***\\n\\nThe paper considers the problem of embedded feature selection for supervised learning with nonlinear functions. A feature subset is evaluated via the loss function in a \\\"soft\\\" manner: a fraction of an individual feature can be \\\"selected\\\". Sparsity in the feature selection is enforced via a relaxation of l0 regularization. The resulting objective function is differentiable both in the feature selection and learned function making (simultaneous) gradient-based optimization possible. A variety of experiments in several supervised learning tasks demonstrates that the proposed method has superior performance to other embedded and wrapper methods.\\n\\nMy decision is to reject, but I'm on the fence regarding this paper. I'm not clearly seeing the motivation for an embedded feature selection method for neural network models: for the datasets considered in the paper, it would seem that training a nonlinear model that used all the features would result in performance at least as good as training the nonlinear model with a prepended STG layer. Perhaps there is evidence that filtering features, e.g., irrelevant features, results in higher accuracy and that the prepended STG layer achieves this accuracy, but that evidence is missing from the paper. Also, there could be downstream computational savings, e.g., at prediction time, if the dimension was very large, but this is not the setting tested in the experiments. I suppose interpretability could be considered motivation, but, even so, isn't there at least one simpler, deterministic approach (described below) that also \\\"solves\\\" the problem? Finally, it isn't clear how the method scales with increasing sample size and dimension as all the datasets tested are relatively small in these respects.\\n\\n***\", \"questions_and_suggestions_related_to_decision\": [\"The performance values using all features should be included in the experimental results so that the value added by STG can be assessed.\", \"Why not use the simpler deterministic and differentiable relaxation z = \\\\sigma(\\\\mu), where \\\\sigma() is a \\\"squashing\\\" function from the real numbers to [0,1] applied element-by-element to the vector \\\\mu? What specifically is/are the advantage(s) that the randomness in the definition of z at the bottom of pg. 3 provide over this deterministic alternative?\", \"Though well-described and methodologically rigorous, the experimental comparison is none-the-less a little disappointing: one dataset for classification and half the datasets for regression are synthetic and low-dimensional. The remaining regression datasets are real but also low-dimensional. The survival analysis dataset is also low-dimensional (as described in the supplementary material). This leaves one real classification dataset which was on the order of 20,000 examples and 2500 features. Why were larger sample-size and dimensionality datasets not tested? These should be readily available. For example, the gisette dataset from the NIPS 2003 feature selection challenge has 5000 features. 
See \\\"MISSION: Ultra Large-Scale Feature Selection using Count-Sketches\\\" by Aghazadeh & Spring et al. (2018) for other high-dimensional datasets. Even a single run for each large dataset would have provided some evidence of scalability.\", \"***\"], \"other_minor_comments_not_related_to_decision\": [\"\\\"Concrete Autoencoders for Differentiable Feature Selection and Reconstruction\\\" by Abid et al. (2019) targets unsupervised feature selection but has enough similarities in the approach that it should be considered related work.\", \"[Typo?] The unnumbered equation after (5) should not have a sum over d in the second term. Perhaps a sum over k was intended? Also, in this equation, the gradient of the loss wrt/ z samples, average of gradients over z samples times..., does not seem to match what the gradient would be given the algorithmic description in the supplementary material, a gradient of the (sample) average z times...\", \"The abstract states the paper is proposing a method for high-dimensional feature selection, but all of the experiments have datasets with max. dimensionality 2538.\", \"Some discussion of how the regularization parameter can be selected by a user of the proposed method would be good to include.\"]}",
"{\"comment\": \"Thank you for bringing these two papers to our attention. We will cite these papers in the manuscript.\\n\\nThe paper \\\"Dropout Feature Ranking for Deep Learning Models\\\" uses the original concrete distribution and proposes a method for feature ranking. As opposed to the Hard Concrete distribution, the original concrete distribution does not provide sparsity. Therefore, the authors propose to rank the features and then train a new network that uses the top-ranked features. The distinction is analogous to the difference between L2 and L1 regularization for linear regression problems where the former does not sparsify the variables. Furthermore, the method proposed in their paper is a wrapper method. \\n\\nOn the other hand, in our study, we focus on developing a fully embedded feature selection method. Specifically, we studied two candidate distributions: a) the Hard Concrete and b) our novel STG. We demonstrate that the Hard Concrete distribution results in feature sparsification, but suffers from high variance. Importantly, we empirically show that our novel STG distribution overcomes this limitation and resulting in high performance in terms of accuracy and feature selection. \\n\\n \\n\\nThe paper \\\"Adaptive Compressed Sensing MRI with Unsupervised Learning\\\" (Bahadir et al., 2019) addresses the problem of compressed sensing of MRI scans. The authors use a trick similar to the concrete distribution in order to undersample the number of Fourier coefficients needed for reconstruction of the MRI scan. The method is unsupervised and uses a different objective and regularization than the one proposed in our study. Furthermore, our manuscript has been available online prior to the work of Bahadir et al., 2019. In fact, the most related work to the study by Bahadir, is \\\"Concrete Autoencoders: Differentiable Feature Selection and Reconstruction.\\\" by Bal\\u0131n, Muhammed Fatih et. al 2019, which cites our original preprint.\", \"title\": \"Response to comment\"}",
"{\"comment\": \"Hi, this is nice work. However, I have a question about connections with a couple existing papers.\\n\\nCan you elaborate on how your method differs from \\\"Dropout Feature Ranking for Deep Learning Models\\\" (Chang et al., 2017)? Your objective (Eq. 4) seems exactly the same as theirs (Eq. 2). And while Chang et al. address the problem of feature ranking, not feature selection, they also note the objective's link with l0 regularization.\\n\\nThe only difference I noticed is a different parameterization for the continuous relaxation of Bernoulli samples. Your stochastic gate (STG) relaxation may lead to faster convergence, but I only saw a comparison with the Hard-Concrete. I'm curious what you would expect to find in a comparison with the original Concrete relaxation.\\n\\nEssentially the same method was also used in \\\"Adaptive Compressed Sensing MRI with Unsupervised Learning\\\" (Bahadir et al., 2019), see Eqs. 1-3.\\n\\nCould you explain what differentiates this work? If nothing else, it seems those papers should be cited.\", \"title\": \"Differences with prior work\"}"
]
} |
HyewT1BKvr | SpectroBank: A filter-bank convolutional layer for CNN-based audio applications | [
"Helena Peic Tukuljac",
"Benjamin Ricaud",
"Nicolas Aspert",
"Pierre Vandergheynst"
] | We propose and investigate the design of a new convolutional layer where kernels are parameterized functions. This layer aims at being the input layer of convolutional neural networks for audio applications. The kernels are defined as functions having a band-pass filter shape, with a limited number of trainable parameters. We show that networks having such an input layer can achieve state-of-the-art accuracy on several audio classification tasks. This approach, while reducing the number of weights to be trained along with network training time, enables larger kernel sizes, an advantage for audio applications. Furthermore, the learned filters bring additional interpretability and a better understanding of the data properties exploited by the network. | [
"audio",
"classification",
"convolutional neural network",
"deep learning",
"filter",
"filter-bank",
"raw waveform"
] | Reject | https://openreview.net/pdf?id=HyewT1BKvr | https://openreview.net/forum?id=HyewT1BKvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JSVNFjgTo",
"rkxEAorhsH",
"SJeDD_-nor",
"HJgqWxgsoH",
"B1etp1gisB",
"Syxf1vAciS",
"HygHvXCqjB",
"SygoIGCcoH",
"Bkg5AvUOor",
"BklbJDXJqB",
"rkge3Hj6tS",
"BJgpQEuhtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737782,
1573833675518,
1573816414812,
1573744642234,
1573744576676,
1573738202069,
1573737308739,
1573737043471,
1573574610193,
1571923672805,
1571825064178,
1571746853149
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1993/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1993/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1993/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1993/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1993/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1993/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1993/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1993/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1993/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1993/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1993/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposed a parameterized convolution layer using predefined filterbanks. It has the benefit of less parameters to optimize and better interpretability. The original submission failed to inlcude many related work into the discussion which was addressed during the rebutal.\", \"the_main_concerns_for_this_paper_is_the_limited_novelty_and_insufficient_experimental_validation_and_comprisons\": [\"There have been existing work using sinc parameterized filters, learnable Gammatones etc, which are very similar to the proposed method. Also in the rebutal, the authors acknowledged that \\\"We did not claim that cosine modulation was the novelty in our paper\\\" and it is \\\"just a way of simplifying implementation and dealing with real values instead of complex ones\\\" and \\\"addressing the question of convergence of parametric filter banks to perceptual scale\\\".\", \"Although the authors addressed the missing related work problem by including them into discussions, the expeirmental sections need more work to include comparisons to those methods and also more validations on difference datasets to address the concern on the generalization of the proposed method.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"Acknowledgement of the rebuttal\", \"comment\": \"The authors have cleared most of my technical concerns and lack of bibliography about the Wavelet-based scatter transform networks. It somewhat shows the merit of the learnable filterbanks in that the learned filter lengths are short unless they have to. However, while logically convincing, I still wish that there could be some experimental contrasts against the more deterministic methods, especially in terms of complexity.\"}",
"{\"title\": \"response to reviewer #3\", \"comment\": \"Thank you for your comments. We apologize for the late answer, however addressing your comments required us to perform additional experiments.\\n\\nFor the relationship with the work of Mallat\\u2019s group, thank you for your remark, please refer to the answer to reviewer #2 as he had similar remarks.\\n\\nThe results presented in Figure 1b suffered from an implementation issue (namely a rounding error) which have now been corrected, as well as their analysis, in the revised version of the paper. However, due to the short time available to address all reviewers\\u2019 comments, we did not have the time to fully reproduce the experiment from Fig. 1b, as some settings require (especially 90% overlap) large training times.\", \"influence_of_the_filter_length\": \"Looking more carefully at the bandwidth of the learned filters, a majority of them have a bandwidth close to 100Hz or greater. If we take the example of the Gaussian filter where a width (variance) of sigma in the frequency domain corresponds to a width of 1/sigma in the time domain, we obtain a Gaussian window with a width (variance) of 10 ms. Accordingly, if the frequency bandwidth is higher, the time width is smaller. So that a filter size of 10ms is sufficient to fit most of our functions. Filter length of 100 ms would be useful to capture low frequency components (lower than 100Hz) but it seems that it is not necessary for the dataset we analyze within the scope of the task at hand, which is audio classification and in particular speech signals. Most of the relevant information for classification can be found in the part of the spectrum above 100Hz. That is why, in our setting, an increase of the filter size beyond 10 ms has no influence on the accuracy (assuming a stride of one sample).\\nConcerning the influence of the overlap (stride) in the range of filter lengths 1 to 10ms, we have performed a new experiment to test it, with a corrected implementation. We replaced the plot (Fig. 1b) showing the evolution of the accuracy in this range. We observe an interesting behavior: the curves for the different overlap values cross in this range. We added the following text to the paper to explain the bad accuracy for large overlap with small kernel size < 4 ms: \\u201cOn the other extreme, short kernels (less than 4ms) with large overlap (or small stride), can render the network short-sighted in time. In that case, long temporal patterns require the combination of a large amount of successive output values. The convolutional layers following the SpectroBank layer, deeper inside the network, may not be able capture these long patterns. This results as well in a drop of the accuracy observed on Fig. 1b.\\u201d\\nThank you very much for your careful reading and pointing the confusion in Fig. 1b. We were able to correct it and the results look (to us) much more logical.\"}",
"{\"title\": \"Thanks!\", \"comment\": \"I apologize for the former message, this reply solves most of my concern. I appreciate your honesty. I will reconsider my review. Thanks.\\n\\nI think the authors should find a precise setting to highlight the speed gain (convergence/individual layers), yet I agree this is a difficult task. If this is not possible, then such claims should be removed from the manuscript.\\n\\nI have the same thinking w.r.t. the initialization (stated above), for which some clarifications could help to understand better the improvements due to this method, in addition to the analysis of section 3.5.(I'm still not sure if each NNs of the sections 3.2-3.4 have been initialized with a \\\"good\\\" initialization or not)\"}",
"{\"title\": \"addtional clarifications\", \"comment\": \"Thank you for the quick feedback.\\n\\n3) By smaller we mean \\\"having less trainable parameters\\\": smaller final dense layers and/or less intermediate convolution layers. \\n\\n4) Due to timing constraint and comments , we focused on SincNet only. We did not have time to reproduce all experiments with SincNet, only the AudioMNIST ones. However SincNet does not benefit specifically form the mel-scale initialization as accuracy remains very close from either a learnt Gammatone initialized with linear frequency scale or even with a SincNet modified with initialization also with a linear frequency scale. This is mentioned in the revised version.\"}",
"{\"title\": \"Thanks for some clarifications\", \"comment\": \"Dear authors,\\n\\nThanks for your reply. I answer paragraph per paragraph. Only a portion of my comments have been addressed.\\n\\n1/2/ Thanks for the clarifications. I agree with your statements, my point was more to help enriching the related works part.\\n\\n3/ Well, \\\"smaller\\\" in which sens? If I'm correct, the kernel sizes are similar yet the parametrization rely on less parameters. What about the speed?\\n\\n3/ Has this comparison been done systematically through the section 3.2, 3.3 and 3.4?(I saw a paragraph about the initialization but my understanding is that, for each experiment, it applies only to the original implementation on which the experiment is based on) If I'm correct, then, that could be highlighted more, and not only on the specific example of the SincNet.\"}",
"{\"title\": \"response to reviewer #2 (2/2)\", \"comment\": \"The choice of the optimizer: indeed a change of the optimizer affects the performances and we selected the one giving the best results for each network we trained. That is why we get a better accuracy for some of the networks, compared to the original paper where they were presented. The accuracy numbers reported for the re-implementation of AudioNet for AudioMNIST were higher than the ones presented in the original paper (92.5% with SGD in the original AudioMNIST paper, vs. 94.9% with Adam in our re-implementation), providing a more fair comparison between the original Audionet and the Spectrobank-enabled Audionet.\\n\\nThe number of filters is not of high relevance and is not a critical point of the solution, but we just wanted to observe the effect it has on accuracy. As mentioned in reviewer #1 response, the \\u2018optimal\\u2019 number of filters for the classification tasks we studied might prove to be quite different for other types of tasks.\\n\\nRegarding the faster training performance claim, when training Audionet, validation accuracy is greater than 93% for the first time after 13 epochs (and suffers from accuracy drops later) whereas when training Spectrobank-AudioNet, the 93% validation accuracy is reached after 4 epochs only (and does not become lower in the following epochs), but due to the non-negligible network settings, comparing convergence speed fairly is difficult.\\n\\nIt is however true that despite its lower number of parameters, a spectrobank layer might slower than a non-parametric convolution layer, since you must generate the filters from parameters, and then perform convolution with longer filters. In our experiments based on a modified SampleCNN, an epoch of Spectrobank-enabled network was 1.5 times slower (3 ms / step vs. 2 ms / step) than the non-Spectrobank equivalent (same number of filters in the first layer, same batch size). However, the number of filters in the first layer needed by a non-parametric layer is much larger than in the parametric case to achieve the same (or better) results. SampleCNN\\u2019s 1st layer uses stride=3 which corresponds to a 98% overlap for a 10 ms filter. Our results shown in the paper use 75% overlap, corresponding to a case where stride=40.\\n\\nThe overall speed gain in training we metion is partly caused by the reduced network architecture (less filters needed in the first layer) and quicker convergence of the learned parametric filters.\\n\\nIn section 3.4, we agree that it would be more interesting to compare our results using the Urbansounds dataset with state of the art performance, requiring to use data augmentation. We will add those results in the final version of the paper (those experiments require additional work that we do not have time to perform within the rebuttal period).\\n\\nConcerning the discussion presented in page 5 regarding the 99% overlap, the results computed are in fact incorrect (due to a rounding error in the implementation), making the discussion of this particular case irrelevant. This will be updated in the revised version.\"}",
"{\"title\": \"response to reviewer #2 comments (1/2)\", \"comment\": \"Thanks for your comments, please find below the answers to the points your raised.\\n\\nThe first part of the response mentions Wavelets and the scattering transform. We will add the proposed papers to the state of the art section, discussing why our work is different. Our paper focuses on learning parametrized functions and analyzing the learned parameters, to get some insights about the data and learning. The scattering transform makes use, in each layer, of a set of wavelets with fixed scale (not learnable). \\nAs stated in [1], section 2.3: \\u201cScattering uses a multi-layer cascade of a pre-defined wavelet filter bank with nonlinearity and pooling operators.\\u201c We do not use pre-defined wavelet filter banks, the filterbank is learnt through the learnable parameters. The approach we have (same approach as in the papers we cite) is half way between a fixed filter bank and free learnable kernels. Again in section 2.3 of the paper cited above: \\u201cIn contrast to Scattering, we learn linear combinations of a filter basis into effective filters and non-linear combinations thereof.\\u201c They learn linear combination of a filter basis while we learn the filters and their combinations. The set of learned filters may be a basis or not.\\n\\n\\nA more closely related work is the one of Khan et al (Neurips 2018) about learning parametrized Wavelets. We refer to it and use it as a baseline. However, we point out that Khan et al. do not cite any work on the scattering transform. This is unfortunate and we will add a reference to Mallat\\u2019s team work on the scattering transform. \\n\\nThe purpose of the comparison of the number of parameters was included in the paper as means of illustrating that our layer can enable training of smaller networks that will exceed the accuracy of the fixed (but more complex) representation networks.\\n\\nThe choice of a specific initialization has been investigated and we showed (cf. section 3.5) that the training process moves the learned filters toward a perceptual scale. Choosing specific parameters might speed up the training but does not affect overall accuracy. While performing tests with SincNet to address reviewer 1 concerns, we replaced the mel-based initialization from Ravanelli et al. by the one used in Spectrobanks and did not observe any significant difference in overall accuracy.\\n\\n\\n[1] Jacobsen et al. , Structured Receptive Fields in CNNs\", \"https\": \"//www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Jacobsen_Structured_Receptive_Fields_CVPR_2016_paper.pdf\"}",
"{\"title\": \"Response to comments from reviewer #1\", \"comment\": \"Thank you for your comments.\\nWe would like to address your response in detail, and try to clarify the misunderstandings. As can be seen at the first glimpse over the list of cited literature, we have cited papers [1-3] within the Motivation and Related Work sections and are familiar with the details of these papers:\\n\\nIn paper [1] authors have used the initialization on a Mel scale and then trained the filter in a non-parametric way. Unlike in their paper, we have tried to show that this type of initialization is not necessary, and that parametric filters learned converge to the Bark scale. So, the novelty is twofold, since we have gone beyond showing the convergence trends for the frequencies, but also for the corresponding bandwidth. In most papers having studied parametric filter banks, filter initialization is done following a psychoacoustic scale or using prior knowledge about the signal under consideration. While this can speed up training times, this is an unnecessary step.\\n\\nIn paper [2] authors have used single parameter filters in a filter bank, which are insufficiently adaptable to the task at hand, since the frequency-bandwidth relationship in Wavelets is not adapted to the perceptual models in audio applications. Running the AudioMNIST experiment using the learnable Wavelet filter bank in the \\u2018SpectroBank-Audionet\\u2019 from [2] gives 89.9%+-1.18% accuracy which is much lower than both the baseline and our experiment using a Gammatone learnable filterbank. When using the learnable Wavelet filter bank on the simplified Spectrobank network, accuracy is even worse, dropping to 88.9 % +- 1.43%. It is also the case for the experiments performed on GoogleSpeechCommand dataset, varying overlap ( see Fig. 1 (a)) the Wavelet filter-bank yields the worst results.\\n\\nIn paper [3] authors have used sinc parameterized filters. Using a SincNet first layer (with Mel-scale pre-initialization) in Spectrobank-Audionet yields an accuracy of 97.0% +- 0.5%, which is very close to the results presented in the paper, using learnable Gammatones. When using the simplified Spectrobank network, accuracy is 97.2%+-1%, also close to the performance of learned Gammatones. The Mel-based filter initialization used (also in [1]) by Ravanelli et al. has negligible impact on the results. \\nIn order to better show the impact of the first layer, we did another AudioMNIST experiment with a much simpler network, having only 50k trainable parameters. This very simple model architecture is made of the following layers:\\n- Spectrobank\\n- Maxpooling (4, stride=4)\\n- Dense (16) + Dropout(0.5)\\n- Softmax output layer (10 classes)\\nAgain with this setting, differences proved to be quite small between the two settings: 79.9%+-4.3% for Gammatones and 80.6%+- 4% for SincNet.\\nIn conclusion is that plugging SincNet as a learnable filter shows performance that fits into the observations made in Fig. 1a, showing very close results for Gammatone/Gammachirp/Gaussian filter banks as first layer, and we will add the SincNet accuracy curve on Fig. 1a in the revised version of our paper.\\n\\nWe did not claim that cosine modulation was the novelty in our paper. This is just a way of simplifying implementation and dealing with real values instead of complex ones. 
Thanks nevertheless for bringing reference [4] to our attention; it is, however, only marginally relevant to our work.\\nWe would like to emphasize that our paper addresses the question of convergence of parametric filter banks to the perceptual scale, without prior initialization using known perceptual models. \\n\\nRegarding the influence of the number of filters present in the filter bank, our findings with respect to a very specific classification task (GoogleSpeechCommand) indicate that the \\u2018sweet spot\\u2019 is close to 32 filters. However, when performing another task (e.g. source separation), it can be expected that this optimal value becomes larger.\"}",
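To make the object of discussion concrete, here is a minimal sketch of a first layer whose kernels are cosine-modulated Gaussian (Gabor-like) band-pass functions generated from two learnable scalars per filter, with a linear-frequency initialization (no perceptual prior). This is an illustration of the general idea only — the names, shapes, and the Gaussian window are our assumptions, not the authors' implementation (which covers Gammatone/Gammachirp/Gaussian variants):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametricFilterBank(nn.Module):
    """Each kernel is a cosine-modulated Gaussian window, so the layer trains
    only two scalars per filter (center frequency, bandwidth) instead of a
    free kernel of kernel_size weights."""
    def __init__(self, n_filters=32, kernel_size=160, sample_rate=16000):
        super().__init__()
        self.kernel_size = kernel_size
        self.sample_rate = sample_rate
        # linear-frequency initialization: no perceptual (mel/Bark) prior
        self.center_hz = nn.Parameter(torch.linspace(100.0, sample_rate / 2 - 100.0, n_filters))
        self.bandwidth_hz = nn.Parameter(100.0 * torch.ones(n_filters))

    def kernels(self):
        n = torch.arange(self.kernel_size, dtype=torch.float32, device=self.center_hz.device)
        t = (n - self.kernel_size // 2) / self.sample_rate  # centered time axis, in seconds
        # Gaussian Fourier pair: time width is reciprocal to frequency bandwidth
        sigma_t = 1.0 / (2.0 * math.pi * self.bandwidth_hz.abs() + 1e-6)
        window = torch.exp(-0.5 * (t[None, :] / sigma_t[:, None]) ** 2)
        carrier = torch.cos(2.0 * math.pi * self.center_hz[:, None] * t[None, :])
        return (window * carrier).unsqueeze(1)  # (n_filters, 1, kernel_size)

    def forward(self, x, stride=40):  # stride=40 -> 75% overlap for 10 ms kernels at 16 kHz
        return F.conv1d(x, self.kernels(), stride=stride)
```

The point of the parameterization is visible in the code: the kernel can be made arbitrarily long without adding trainable weights, since only center_hz and bandwidth_hz are learned.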
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents an interesting signal processing-based extension of CNNs, where the first layer convolution is replaced by some pre-defined filter banks. Since those filter banks are parameterized with a smaller number of parameters, while they have been proven to be effective in audio processing, I was convinced that this approach could produce better performance than a generic CNN with no such consideration.\\n\\nI am still wondering though, what is the main difference between this approach and Wavelet transform-based scatter transform networks that Stephane Mallat has proposed for years, for example in (And\\u00e9n and Mallat 2014). I figure the proposed method in this paper is more flexible as it does not use the pre-defined filterbanks; instead it tries to learn the parameters to specify the only necessary filters for the particular problem. But I think the authors may need to address the difference from this previous work done by Mallat's group, because they at least share a similar philosophy. \\n\\nAnother thing that's not entirely clear for me was the effect of the filter length. Obviously, it should depend on the particular classification problem. For example, for speech, there needs to be consideration about the shortest stationary period of speech, while in some other cases like music and urban sound, it should be in different lengths to capture the specifics. It's a bit hard for me to believe that the different choices of filter banks from 1 to 100 ms all gave the same results (in Figure 1b). I think, if there is an optimal filter length depending on the problem, which has to be found to guarantee the performance, it has to be better investigated in the paper. \\n\\nIt is a confusing message to me, because the paper claims that the first layer of their network can cover a large area, which responds to a large receptive field, with a single filter by using a different parameter. It is a clearly a different kind of observation than the computer vision networks where the large receptive fields are defined with a deeper architecthre and strides. However, the shortest filter (1ms) and the longest one (100ms) doesn't make any difference, empirically? More discussion is needed to resolve this confusion.\\n\\n\\nJ And\\u00e9n and S Mallat, \\\"Deep scattering spectrum\\\", IEEE Transactions on Signal Processing, 2014\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to specify the first layer of a CNN for audio applications with predefined filterbanks from the signal processing community. Those latter are only specified by a limited number of parameters, such as the bandwidth or the central frequency of the filter, and those parameters are then optimized through the standard back-propagation algorithm. Some accuracy improvements are obtained on non trivial datasets.\\n\\nI think there are a lot of interesting ideas and the numerical improvements seem consistent with the method. However, I find that this study would benefit of more careful comparisons to understand which particular component is responsible for some of their success! Also, I think some relevant papers are missing in the introduction.\", \"pros\": [\"Good numerical performances.\", \"Interesting study of the impact of predefined filters; an analysis at the end of the paper(bandwidth, principal frequency chosen by the algorithm) is shown, which is a positive aspect of the paper.\"], \"cons\": [\"Several attempts to employ hybrid architectures (as defined in the text) have been already proposed. References to hybrid architecture from Mallat's group are missing, e.g.:\"], \"https\": [\"//arxiv.org/abs/1809.06367/ https://arxiv.org/abs/1605.06644 . Another line of work concerns the steerable filters, which is another manner to parametrized the filters and learn them (e.g.: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Jacobsen_Structured_Receptive_Fields_CVPR_2016_paper.pdf ) Another manner could be to directly learn the filters as wavelets: https://arxiv.org/pdf/1811.06115.pdf . I agree some of those references are only considering images, but those methods are definitely not specific to them.\", \"Comparing the number of parameters of hybrid and non hybrid architectures is meaningless in this setting, as in all the experiments, the number of parameters of the layers above the first layer are kept identical: one only sees the difference due to the first layer, whose kernel is indeed relatively high-dimensional.\", \"Also, my understanding is that the general pipeline is slower: indeed, a parameter update aims to compute @f/@w=@f/@x*@x/@w. The computation of the term @f/@x is unavoidable and is identical to its non-hybrid counter-part. However, @x/@w might be sometimes higher because the computations can involve potentially more complex functions(e.g., exponential, cos, sin contrary to linear functions). Would you mind to clarify this thought?(A small fair timing comparison would be welcome!)\", \"Furthermore, the improvement in performances is clearly thanks to those a-priori incorporated. As stated in the text, many works propose to initialize the CNN with a specific filter bank. Have the authors tried to compare their performances if the first layer is simply initialized with those filters and then freely evolve? I feel this is missing and would make the claim of the paper stronger. 
If this has already been done, please highlight it in the text.\", \"Abstract: it is claimed that this technique leads to a training speedup (i.e., fewer epochs) but I do not understand where this is shown.\", \"Section 3: Sometimes (e.g., AudioMNIST), the hybrid training pipeline is quite different from the original implementation, for instance because of the use of ADAM when the original implementation was using SGD. Did the use of a different optimizer (e.g., SGD) affect the performance?\", \"Section 3.4: Why not compare with data-augmented settings?\"], \"suggestions_of_improvement\": [\"I would have liked to see a Littlewood-Paley plot (e.g., the sum of the moduli of the filters in the frequency domain) to better understand the distribution of the filters in the Fourier domain, in particular w.r.t. the high frequencies.\", \"\\\"the output may be too redundant\\\" (page 5) - I don't understand why this would be an issue. In this case, the network should decide which coefficients to discard if the classifier is good enough, shouldn't it?\"], \"post_discussion\": \"R1 made several relevant comments about the technical novelty and my concerns weren't fully resolved. Thus I decided to maintain my score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a new type of convolutional layer for (band-pass) filtering of input signals (e.g., audio recordings). The main benefit is that the layer can be specified with a small number of parameters (+ filter configurations that are typically fixed beforehand get to be tuned to improve inductive bias). This is achieved via modulated windows or wavelets. While this is interesting, I do not see any conceptual novelty. Namely, previous work has already proposed and considered such layers. More specifically, in [1] the authors have considered exponentially-modulated Gaussian windows (detailed experiments, influence of different initialization strategies, properties of learned filters, distribution of modulation frequencies etc.). In [2] the layer is realized using wavelets. In [3] the filter is expressed as a difference between two sinc functions. The authors might argue that the conceptual difference compared to [1] is cosine modulation (see Remark 2 on page 4). Well, cosine modulated filters were considered in [4] as Parzen filters (v1 was on arXiv in June 2019). The latter work has not even been cited by the authors. Moreover, the paper does not discuss the consequences of using cosine modulations instead of exponentials. Section 2.2 in [4] explains why the use of cosine modulations is well suited for real-valued signals. In particular, the moduli of Fourier coefficients are symmetric around the origin for real-valued signals and for this reason spectrograms are typically computed over positive frequencies only. Thus, from this perspective it does not make much difference whether one uses cosine or exponential modulation (when it comes to standard feature extraction approaches for speech processing).\\n\\nIn the empirical evaluation the focus is on showing the utility of filter optimization on different tasks. The first experiments investigates basic properties such as how the number of filters and their overlap influence the effectiveness of a model. It is unclear why a single learning task is sufficient to conclude that more than 30 filters does not amount to an improvement in accuracy (128 and 64 filters are used in [3] and [4], respectively). This lack of reference to findings in previous work make the analysis incomplete. The approach is evaluated in total on three datasets: audio-mnist, google speech command, and urban-sound. While the reported results indicate a good performance of the considered approach over different tasks, the experiments completely ignore previous approaches for filter learning. This lack of baselines and reference to related work makes the experiments inadequate.\\n\\nIn general, my main concern with the experiments is that the section is written as if this is the first work proposing filter learning. I feel that a comparison to at least on of the baselines [1-4] would be required for a non-trivial assessment of the approach.\\n\\n\\n[1] N. Zeghidour, N. Usunier, I. Kokkinos, T. Schatz, G. Synnaeve, and E. Dupoux (ICASSP 2018). Learning filterbanks from raw speech for phone recognition.\\n[2] H. Khan and B. Yener (NIPS 2018). Learning filter widths of spectral decompositions with wavelets.\\n[3] M. Ravanelli and Y. Bengio (arXiv:1812.05920 2018). 
Speech and speaker recognition from raw waveform with SincNet.\\n[4] D. Oglic, Z. Cvetkovic, P. Sollich (arXiv:1906.09526 2019). Bayesian Parznets for Robust Speech Recognition in the Waveform Domain.\"}"
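The symmetry argument referenced in review #1 (via [4], Section 2.2) is the standard conjugate symmetry of the Fourier transform of a real-valued signal:

```latex
x(t) \in \mathbb{R} \;\Rightarrow\; \hat{x}(-f) = \overline{\hat{x}(f)} \;\Rightarrow\; |\hat{x}(-f)| = |\hat{x}(f)|,
\qquad
\cos(2\pi f_0 t) = \tfrac{1}{2}\bigl(e^{\,i 2\pi f_0 t} + e^{-i 2\pi f_0 t}\bigr),
```

so a cosine-modulated real filter carries the same magnitude information as the corresponding pair of conjugate exponentially-modulated filters, which is why spectrogram-like features are usually computed over positive frequencies only.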
]
} |
SJev6JBtvH | Testing For Typicality with Respect to an Ensemble of Learned Distributions | [
"Forrest Laine",
"Claire Tomlin"
] | Good methods of performing anomaly detection on high-dimensional data sets are
needed, since algorithms which are trained on data are only expected to perform
well on data that is similar to the training data. There are theoretical results on the
ability to detect if a population of data is likely to come from a known base distribution,
which is known as the goodness-of-fit problem, but those results require
knowing a model of the base distribution. The ability to correctly reject anomalous
data hinges on the accuracy of the model of the base distribution. For high-dimensional
data, learning an accurate-enough model of the base distribution such that
anomaly detection works reliably is very challenging, as many researchers have
noted in recent years. Existing methods for the goodness-of-fit problem do not
account for the fact that a model of the base distribution is learned. To address that
gap, we offer a theoretically motivated approach to account for the density learning
procedure. In particular, we propose training an ensemble of density models,
considering data to be anomalous if the data is anomalous with respect to any
member of the ensemble. We provide a theoretical justification for this approach,
proving first that a test on typicality is a valid approach to the goodness-of-fit
problem, and then proving that for a correctly constructed ensemble of models,
the intersection of typical sets of the models lies in the interior of the typical set
of the base distribution. We present our method in the context of an example on
synthetic data in which the effects we consider can easily be seen. | [
"anomaly detection",
"density estimation",
"generative models"
] | Reject | https://openreview.net/pdf?id=SJev6JBtvH | https://openreview.net/forum?id=SJev6JBtvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0fqZFeKUgp",
"rJx8OwSnsH",
"H1l0rTEhjr",
"HJl_WFgjir",
"Syg0lMC9oH",
"Skxa7Wkmcr",
"Skeymf3J5B",
"BkxifG02KH",
"BylKbgx8KH",
"rygtsAw7YB",
"B1gQLJXzYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1576798737751,
1573832558088,
1573829958365,
1573746944003,
1573736949680,
1572167973157,
1571959319113,
1571770898580,
1571319808885,
1571155616709,
1571069771005
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1992/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1992/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1992/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1992/AnonReviewer1"
],
[
"~Shengyu_Zhu1"
],
[
"ICLR.cc/2020/Conference/Paper1992/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1992/AnonReviewer2"
],
[
"~Shengyu_Zhu1"
],
[
"ICLR.cc/2020/Conference/Paper1992/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1992/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a new method for testing whether new data comes from the same distribution as training data without having an a-priori density model of the training data. This is done by looking at the intersection of typical sets of an ensemble of learned models.\\n\\nOn the theoretical side, the paper was received positively by all reviewers. The theoretical results were deemed strong, and the ideas in the paper were considered novel. The problem setting was considered relevant, and seen as a good proposal to deal with the shortcoming of models on out of distribution data. \\n\\nHowever, the lack of empirical results on at least somewhat realistic datasets (e.g. MNIST) was commented on by all reviewers. The authors only present a toy experiment. The authors have explained their decision, but I agree with R1 that it would be appropriate in such situations to present the toy experiment next to a more realistic dataset. This also means that the effectiveness of the proposed method in real settings is as of yet unclear. Although the provided toy example was considered clear and illuminating, the clarity of the text could still be improved.\\n\\nAlthough the reviewers had a spread in their final score, I think they would all agree that the direction this paper takes is very exciting, but that the current version of the paper is somewhat premature. Thus, unfortunately, I have to recommend rejection at this point.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re Author Response\", \"comment\": \"Thank you for your response, authors. And thanks for providing insight into your experimental decisions. I agree that the Gaussian mixture simulation targets the issue directly. I like this experiment---as I hope I made clear in my review---and think it should be in the paper. However, I don't exactly see the tension between that simulation and a real-data experiment that your rebuttal seems to presume. As I'm sure you know, papers often have simulations performed under ideal conditions and then experiments where those conditions might be violated. Ultimately, there should be some practical consequence to the paper (in most cases, including this one). I wish you the best of luck on your revisions.\"}",
"{\"title\": \"Response to the reviewers\", \"comment\": \"We will address the response from all of you in a single comment here to avoid redundancy. First and foremost, thanks to each of you for taking the time to read the paper and offer your thoughts on our work.\\n\\nIt is clear that the main concern of all of the reviewers is that the paper lacks empirical results. We acknowledge that this is a fair criticism. However, we wish to defend a little our choice of this example, and respond to some of the proposals for other experiments. First, to evaluate the effectiveness of our proposed method, knowing the pdf of the base distribution is necessary to evaluate how well the learned distributions approximate the base. This makes evaluating on standard image datasets difficult. Being constrained to datasets in which the pdf of the base is known, we decided to stick with the most simple example in which the phenomenon we describe occurs. We made this choice for primarily three reasons: clarity, space, and computational resources. We thought it unnecessary to make the base distribution excessively complicated for fears that it would make the result seem contrived and potentially an artifact of the choice of base distribution. \\n\\nR3 suggested that high-dimensional examples are still needed, but we argue that the 100 dimensional example shown is at least not low-dimensional. We felt that this example was sufficiently high dimensional to exhibit the effects of high-dimensional distributions, but low enough as to not require excessive computation and a finicky learning procedure. \\n\\nAlso on this note, R1 suggested that we make a dataset-to-dataset comparison as is done in other anomaly detection works. We believe that such comparisons are in a sense tangential to this work. While the comparisons made in those works do demonstrate the ability for the evaluated methods to reject samples from other chosen distributions, such a demonstration does not give confidence that the method will reject samples from some other distribution that was not evaluated against. In other words, such a comparison does not show how a model will perform on unknown unknowns, as a theoretical argument must be made for such a claim. Because prior work has already evaluated the effectiveness of using the typical set of a distribution as an acceptance region for one-sample tests, we felt that the most valuable use of space in our paper for experiments should go towards demonstrating the effectiveness of our method in better approximating the typical set itself. \\n\\nFinally, we would also like to acknowledge the many other comments made by all of the reviewers about other or more minor things, which have all been received and for the most part we agree with. However, there are a few things we would like to respond to. \\n\\nFirst, equations 3 and 4 are correct. Equation 3 is simply an objective. The objective that we wish to minimize can be anything we want -- whether we can evaluate and optimize for it is another question. The fact that in practice what is actually minimized is the KL between the empirical distribution of p and the parameterized distribution q is one interpretation. Another equally valid interpretation is that the objective that is evaluate in practice is a sample approximation of the objective listed in (3), which is the view we take. We agree with R1 that to say this view is incorrect is itself an incorrect statement. 
\\n\\nFinally, there were some questions about the application of the theoretical results to practical use. For example, R1 wondered why the result of theorem 1 never showed up again. It more or less does show up in the form of theorem 3, which states that an ensemble of distributions which are sufficiently different will have low intersection. The metric that is used to show \\\"difference\\\" in distributions in this case is the same as that given in theorem 1. \\n\\nThe direct applicability of the theorems to practical situations, given as they are in a sense asymptotic results, is a fair thing to question, as R2 did. While we wish we were able to come up with strong bounds that are valid for any sample size, such analysis is very hard and might even be impossible without making further assumptions. This is left for future work. Instead, we just offer theorem 1, for example, as a means to motivate the test for typicality, which otherwise had no theoretical motivation that we are aware of, and theorems 2 and 3 as motivation for the idea of taking intersections of typical sets. \\n\\nNoting all of that, we wish to again thank all of the reviewers for taking the time to think about the results we present. We are thankful for the criticisms as they are important for elevating our work.\"}",
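The procedure defended in the responses above — accept a batch only if it is typical for every member of an ensemble of learned densities — can be sketched as follows. This is an illustrative sketch only: the `log_prob` interface is an assumption, and estimating each model's entropy by its average negative log-likelihood on held-out data is our simplification, not a step spelled out in the thread.

```python
import numpy as np

def multi_typicality_test(batch, models, held_out, eps=0.1):
    """Accept `batch` only if it is epsilon-typical w.r.t. EVERY ensemble member,
    i.e., it lies in the intersection of the models' typical sets.

    models:   list of density models exposing log_prob(x) -> per-sample log density
    held_out: samples from the training distribution, used to estimate each
              model's entropy H(q_k) ~= -mean log q_k on held-out data
    """
    for q in models:
        entropy_est = -np.mean(q.log_prob(held_out))
        nll_rate = -np.mean(q.log_prob(batch))  # -(1/n) log q(x^n)
        if abs(nll_rate - entropy_est) > eps:
            return False  # atypical for at least one member -> flag as anomalous
    return True

# usage sketch: is_typical = multi_typicality_test(new_batch, [q1, q2, q3], held_out)
```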
"{\"title\": \"Post-Reviews Update\", \"comment\": \"I have read the other two reviews and agree that the major deficiency is the lack of non-toy experiments.\\nI also agree with most of the points raised by R1 and R2.\\nEq.(3) becomes correct if p is an empirical distribution, while in the paper p is referred to as the ground truth distribution.\\nSince it usually does not mean the empirical distribution, this point should be clarified.\"}",
"{\"title\": \"Post-Reviews Update\", \"comment\": \"After reading the two other reviews, it seems that all reviewers agree that the lack of non-toy experiments is a major deficiency in the paper. I find R3 to be too harsh in claiming Eqs 3 and 4 are \\\"wrong\\\": the authors clearly show $p$ is approximated with samples in Eq 4. I also disagree with R3 that the included experiment is \\\"unclear\\\". Rather, I find its motivation and results easy to interpret (see my summary). I mostly concur with R2's review except for its final conclusion. The questions that R2 lists under 'cons' could indeed be answered with a more comprehensive and realistic set of experiments. And until high-dimensional experiments are included, I leave my recommendation at \\\"reject\\\".\"}",
"{\"title\": \"Thanks for response\", \"comment\": \"Dear authors,\\n\\nThanks very much for your detailed response. I'm glad to see that the previous questions are indeed helpful and I think most of them can be handled by a little effort of revision. BTW, I also guessed that the reason of not using two-sample testing was due to computation issues, which in my opinion shall be mentioned to make the motivation more convincing.\\n\\nMore about 'using entropy typical set of $p$ is far from optimal testing w.r.t. error probabilities': \\n\\nThis is about the problem of goodness of fit testing in the universal setting. Given a distribution $p$ and i.i.d. samples denoted by $x^n$, decide whether or not $x^n$ are from $p$. Assume that $q$ is the true yet unknown distribution of $x^n$, then this problem can be formulated as $H_0: p=q$ vs. $H_1: p\\\\neq q$. As the way you also used in the paper, for a fixed type-I error constraint, the question is: can we achieve the optimal type-II error probability for any $q$ even if we do not know $q$? For finite samples, I don't think it possible. There have been several (or many) works on the asymptotic case, that is, can we achieve the same type-II error exponent for any $q\\\\neq p$?\\n\\nThis problem probably dates back to W. Hoeffding in 1965 [1], where he showed the empirical KLD is indeed universally optimal for finite sample spaces. For more general space, like $\\\\mathbb R$, only some weaker optimality results exist. In fact, I couldn not find any reference for that statement about 'typical set'; I tried this entropy set to show achievability (existence) before but it was very hard (or impossible) to pick a universal $\\\\epsilon$. Our work [2] actually solved this problem for at least $\\\\mathbb R^n$. Please find more details and the above mentioned works in [2].\\n\\nBTW, this setting does not assume $\\\\text{KLD} \\\\geq d$. It basically assumes that $q\\\\neq p$ (and in fact $KLD<\\\\infty$ for regularization reasons). That said, as long as $q\\\\neq p$, then no matter how close they are, they can be identified with sufficiently many i.i.d. samples. If assuming $\\\\text{KLD}\\\\geq d$, then one have constructed minimax optimal tests (sorry, I could not remember a reference).\\n\\n[1] Hoeffding, W. (1965). Asymptotically optimal tests for multinomial distributions. The Annals of Mathematical Statistics, 369-401.\\n[2] Asymptotically Optimal One- and Two-Sample Testing with Kernels. https://arxiv.org/abs/1908.10037\"}",
"{\"title\": \"Thank you very much for your helpful comments\", \"comment\": \"Dear Dr. Zhu,\\n\\nThank you so much for your comments, and especially for linking your recent paper on the asymptotically optimal one- and two-sample kernel-based tests. In a final version of our paper we will be sure to reference your work, as it is very interesting and relevant. I will try to address all of your comments here, and will definitely address them in the final version of our paper, since they are all valid points.\\n\\n1. You are correct that the two-sample problem directly addresses the problem of discerning distributions when only samples from the base distribution are given. The angle that we approached our work from was in response to recent proposals in the deep learning community that learning a succinct representation of the base distribution from samples to then use in one-sample testing could be advantageous over two-sample testing for computational reasons. I think that such a proposal is not made explicit in those works (nor as you point out, in our own), perhaps due to the known computational issues with sample-to-sample comparisons, e.g. nearest-neighbors. I agree that we made a mistake in not mentioning two-sample testing at all, or the computational motivation in the test we propose. That being said, I do still think that the test we propose is still well-motivated, since for applications with massive, high-dimensional datasets, computing MMD- or KSD-based tests could be prohibitively slow to compute, where as the test we propose would not. We will be sure to make this point very explicit in an updated version of the paper.\\n\\n2. Again, you are correct that \\\"error-rate\\\" might not be the most appropriate term. Instead \\\"probability of error\\\" is a more accurate way to describe the term. Either way, while there is an additive $3\\\\epsilon$ term on the bound, and a stronger bound would not include this additive term, the bound is still valid and gives some insight into what we can say about the power of the test on typicality proposed in (Nalisnick, 2019). I am curious about what you mentioned regarding tests based on the entropy typical set being far from optimal. Would you mind sharing a reference that includes that negative result? I would be very interested in reading about that. \\n\\n3. The mixture-of-gaussians example was chosen since it is the simplest example we could think of that demonstrated the phenomenon we were interested in showing. The same phenomenon can be seen for much more complicated examples, although in all examples the true base distribution must be known. This makes such an effect difficult to show on real-world datasets, and therefore examples can quickly become seemingly contrived. We thought that for clarity we would show that such effects are evident in one of the simplest of cases, but we understand that there are also limitations to focusing on such type of examples. Your comment is helpful, and in the final revision of our work we can instead/additionally demonstrate the effect on more complicated examples that are more similar to real-world datasets one might encounter in practice. \\n\\nYour comment regarding the optimization of parameters is also valid. In that example we optimized parameters using a gradient-based method with momentum (Adam) as is commonly used when optimizing the parameters of large flow-based generative models, and when the structure of the base-distribution is not known a priori.\\n\\n4. 
In the definition of multi-typicality, we do mean $\\\\max$. This results in an acceptance region which is the intersection of the typical sets of each member in the ensemble. Theorem 2 gives sufficient conditions for such an intersection of typical sets to also have non-zero intersection with the typical set of the base distribution. These conditions do not guarantee that the resulting multi-typical set has large probability with respect to the base distribution, but they provide a means to under-approximate the acceptance region defined by the test on entropy typicality (as opposed to over-approximating the acceptance region, which is the usual result of using a learned approximation of the base typical set). \\n\\n5. This condition is to indicate that it may be impossible to define tests with non-trivial power for differentiating from the null hypothesis if alternative distributions can be assumed to be arbitrarily close to the base distribution. Admittedly, as you point out, we did not give this detail adequate consideration in the submitted version, and we will address it in the revised version. \\n\\n6. All noted, and good points; we will fix them in the revised version.\\n\\nAgain, thank you very much for taking the time to read our submission and for making thoughtful and informed comments. Each of your points is well founded, and we look forward to incorporating them and, in doing so, strengthening our work. \\n\\nKindly, \\nThe authors of submission 1992\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\n\\nI machine learning, we often have training data representative of an underlying distribution, and we want to test whether additional data come from the same distribution as the training data (e.g. for outlier/anomaly detection, or model checking). One way to do this is to learn a model of the underlying distribution, and test whether the additional data fall within the typical set of the model. This paper points out that the typical set of the model may be very different from the typical set of the underlying distribution if the model is learned by maximum likelihood, in which case a test of typicality with respect to the model would be a poor test of typicality with respect to the underlying distribution. The paper shows theoretically that the intersection of the typical sets of an ensemble of models lies within the typical set of the underlying distribution, provided that (a) each model is a good enough approximation to the underlying distribution, and (b) the models are all sufficiently different from each other. Based on that, the paper argues that a better test of typicality would be to test whether the additional data fall within the intersection of the typical sets of the ensemble of models.\", \"pros\": \"The paper addresses an interesting problem in a sound and well motivated way. There is a lot of work on outlier/anomaly detection that uses the model's probability density to determine whether a dataset is out-of-distribution or not, which is known to not be a good proxy for typicality, because atypical data can have high probability density. In contrast, this paper uses a well-founded notion of typicality based on the information-theoretic definition of a typical set.\\n\\nThe toy example that is used to illustrate the problem is clear and illuminating, and motivates the paper well. In particular, the example clearly illustrates the issue of local minima when training models, and the mass-covering behaviour of maximum-likelihood training.\\n\\nThe idea of using the intersection of the typical sets of an ensemble of models is interesting and clever, and backed by strong theoretical results.\", \"cons\": \"Even though I appreciate the paper's theoretical contribution, there are no empirical results other than the motivating example. In particular, the paper proposes an idea and theory to back it up, but it doesn't really propose a practical method, and as a result it doesn't test the theory in practice.\\n\\nTheorems 2 and 3 provide a solid foundation for the proposed idea, but it's not clear how they can be used in practice. Specifically:\\n- How can we verify that in practice the KL between the models and the underlying distribution is small enough as required by theorem 2 when we can't usually evaluate it?\\n- In practice, how should we construct an ensemble such that the individual models in the ensemble are different enough from each other as required by theorem 3?\\n- Both theorem 2 and 3 are valid \\\"for large enough n\\\". However, in practice we may want to check e.g. individual datapoints for typicality (in which case n=1). 
Are the theorems relevant for small n?\\n\\nThe paper is generally well written, but some statements made are either inaccurate or subjective, and I worry that they might mislead readers. Later in my review I will point out exactly which statements I'm referring to. I strongly encourage the authors to fix or moderate these statements before the paper is published.\", \"decision\": \"I believe the paper to be an important contribution, but the work is clearly incomplete. For this reason, my recommendation is weak accept, with an encouragement to the authors to continue the good work.\\n\\nInaccuracies or subjective statements that I encourage the authors to fix/moderate:\\n\\n\\\"we are still bad at reliably predicting when those models will fail\\\"\\n\\\"we are unable to detect when the models are presented with out-of-distribution data\\\"\\nThese statements may come across as too strong. I suggest making the statements about our current methods, rather than about the ability of the research community, and being more specific about the ways in which the current methods are inadequate.\\n\\n\\\"detecting out-of-distribution data [...] is formally known as the goodness-of-fit problem\\\"\\nI'm not sure that detecting out-of-distribution data and goodness-of-fit are synonymous. Goodness-of-fit testing can be used in situations other than outlier detection, e.g. for testing whether a proposed model is a good fit to a dataset.\\n\\n(Second bullet-point of section 1) \\\"distributions having low KL divergence must have non-zero intersection\\\"\\nTo be more precise, the typical sets must have non-zero intersection, not the distributions.\\n\\n\\\"determining which of two hypotheses are more probable\\\"\\n\\\"H0 is deemed more probable\\\"\\nClassical hypothesis testing does not assign a probability to a hypothesis, which would be a Bayesian approach instead. Therefore, it's technically incorrect to talk about the probability of a hypothesis in this context.\\n\\n\\\"which accepts the null-hypothesis\\\"\\n\\\"f correctly accepting H0\\\"\\nHypothesis testing doesn't accept a hypothesis; it merely decides whether to reject the null hypothesis in favour of the alternative hypothesis. Therefore, it may \\\"fail to reject\\\" the null hypothesis, but it never accepts it.\\n\\n\\\"the KL-divergence is equal to zero if and only if p(x) = q(x; \\u03b8) \\u2200x \\u2208 X\\\"\\nThe KL is equal to zero if and only if the distributions are equal, but the densities may still differ on at most a set of measure zero. Therefore, it's not a requirement that the densities match for all x for the KL to be zero.\\n\\n\\\"For example, by looking at the form of the KL-divergence, there is no direct penalty for q(x; \\u03b8) in assigning a high density to points far away from any of the \\u00afxi\\u2019s\\\"\\nThe problem that this statement is talking about is the problem of overfitting, which is the problem of the model learning the specifics of the training data rather than the underlying distribution. However, the statement preceding the above is about the problem of local minima when optimizing the parameters of a model. 
These two problems are distinct and shouldn't be conflated, as they are here.\\n\\n\\\"this requires direct knowledge of p(x) to evaluate the objective\\\"\\nHowever, we can evaluate the objective up to an additive constant when p(x) is known up to a multiplicative constant, which is enough to optimize it.\\n\\n\\\"as do all divergences other than the forward KL, to the best of our knowledge\\\"\\n\\\"This makes the forward KL-divergence special in that it is the only divergence which can directly be optimized for.\\\"\\nI don't think this is true. For example, the Maximum Mean Discrepancy is a divergence, since it's non-negative and zero if and only if the two distributions are equal, but it only involves expectations under p(x) and can be directly optimized over the parameters of q(x; \\\\theta). Moreover, the second statement doesn't follow from the first: it's incorrect to conclude that the forward KL is the only one that can be directly optimized for, based only on one's state of knowledge.\\n\\n\\\"Variational Auto-encoders [...] map a lower-dimensional, latent random variable\\\"\\nThere is no fundamental reason why the latent variable of a VAE has to be low-dimensional. We may do this often in practice, but a VAE with a high-dimensional latent variable may also be used.\\n\\n\\\"Because the image of any non-surjective function necessarily has measure zero\\\"\\nThis is not true; the absolute-value function is not surjective but its image doesn't have measure zero in the set of real numbers. I understand what the statement is trying to say, but it's important that it's said accurately.\\n\\n\\\"autoregressive models, such as PixelCNN\\\"\\nAutoregressive models can also be used to model discrete variables, in which case they can't be thought of as flows. In fact, PixelCNN as first proposed is a model of discrete variables.\\n\\n\\\"all of these models rely on optimizing the forward KL-divergence in order to learn their parameters\\\"\\nNot necessarily: flow-based models don't have to be optimized by minimizing the forward KL. For example, they can be trained adversarially in the same way as GANs, and in principle can be trained with other divergences or integral probability metrics. The model and the loss are (at least in principle) orthogonal choices.\\n\\n\\\"advancements in the expressivity of the models are unlikely to fix the undesired effects\\\"\\nThis is a subjective assessment, and is not sufficiently backed by arguments where it first appears. I understand that the arguments are presented later in section 3, so I would at least suggest that a forward reference to the argumentation in section 3 is given here.\\n\\nFigure 5 gives the impression that the model samples have less variance than the ground-truth samples. Isn't that surprising given that the problem is that minimizing the forward KL leads to mass-covering behaviour? I suspect that the problem here is that there are more ground-truth samples than model samples, and the ground-truth samples saturate the scatter plot. If that's the case, I believe that figure 5 is very misleading.\\n\\n\\\"we see that the learning procedure converged\\\"\\nWe know, however, that the learning procedure hasn't really converged; instead it is stuck at a saddle point (where the model is using a single mode to cover two modes of the underlying distribution). 
In other words, it appears to us that the learning procedure has converged, even though it hasn't, and possibly if we wait for long enough we will see rapid improvement when the procedure escapes the saddle point. Therefore, I would at least say \\\"we see that the learning procedure has appeared to converge\\\".\\n\\nI would expect the bottom-right entry of table 1 to be higher than 90% like the other diagonal elements, so I suspect that it might be a typo.\\n\\nIn eq. (7), shouldn't each log q_k be divided by n?\\n\\n\\\"in practice we find that it is much easier to find an ensemble of models such that the multi-typical set approximates the ground-truth typical set than the bounds require\\\"\\nThere is no empirical evidence presented in the paper in support of this statement.\\n\\n\\\"least probable density\\\"\\n\\\"least typical density\\\"\\nI understand what the intended meaning of these terms is, but these terms make little sense mathematically nevertheless. I would suggest that the statement is rewritten in a more precise and direct way.\\n\\n\\\"This measure only corresponds to measuring typicality if the bijection is volume preserving\\\"\\nI'm not sure that the distance from a Gaussian mean is a valid measure of typicality. In high dimensions, the region around the mean is very atypical.\\n\\nMinor errors, typos, and suggestions for improvement:\\n\\nThe phrase \\\"the authors in Smith et al. (2019) propose\\\" is a bit awkward. Better say \\\"Smith et al. (2019) propose\\\", as Smith et al are indeed the authors.\\n\\nMissing full stop in first bullet-point of section 1.\\n\\nIt would be good to provide more details of the experiment in section 3. Specifically:\\n- What training algorithm was used to maximize the likelihood? SGD or EM?\\n- How many training datapoints were used?\\n\\n\\\"to index the 5 experiments ran\\\" --> run\\n\\n\\\"refer the k-th learned density\\\" --> refer to\\n\\ninterestig --> interesting\\n\\nMissing closing bracket in point 1 of section 4.\\n\\nCapital C in \\\"Consider\\\" in theorem 2.\\n\\n\\\"if every model in a density of learned distributions\\\" --> an ensemble of learned distributions\\n\\n\\\"where as the method we propose\\\" --> whereas\\n\\n\\\"can be found in in\\\", double \\\"in\\\"\"}",
"{\"comment\": \"It is really interesting to see that typicality, which I used quite a lot in my previous research, is also considered with machine learning. I have several questions with the current manuscript.\\n\\n1. Motivation: in your abstract and introduction, you mentioned \\n\\n'which is known as the goodness-of-fit problem, but those results require knowing a model of the base distribution. The ability to correctly reject anomalous data hinges on the accuracy of the model of the base distribution. For high dimensional data, learning an accurate-enough model of the base distribution such that anomaly detection works reliably is very challenging, as many researchers have noted in recent years. Existing methods for the goodness-of-fit problem do not account for the fact that a model of the base distribution is learned. '.\\n\\n'These type of bounds show that certain tests can be performed which are capable of discerning (with non-trivial probability) that populations of data sampled from distributions at least some positive distance away from the base distribution are anomalous. However, in order to perform the proposed tests, an explicit form of the probability density function (or probability mass function) describing the base distribution is needed. For most real-world data sets, this density is not known, and must be estimated. While there has been a lot of analysis on the ability to detect anomalous data, those analyses typically do not account for the fact that the base density for which the tests are designed is learned'\\n\\nThis is questionable. In the goodness of fit setting (also called one-sample problem), it is true that one has to consider the base distribution because this is the problem setting: a distribution is given and one tries to test how well this distribution fits observed data. However, if this distribution is not given but instead one has samples, it is straightforward to conduct a two-sample testing (e.g., using MMD). The current motivation of estimating the base distribution is not convincing.\\n\\n2. Theorem 1: in my experience with information theory (particularly with source coding and hypothesis testing), the result of this theorem cannot be called 'error rate', since you have a constant term $3\\\\epsilon$ on the r.h.s. In other words, it is not clear how you pick this $\\\\epsilon$ and this result does not indicate consistency (consistency: the type-II error probability goes to zero with $n\\\\to\\\\infty$, subject to a fixed type-I error probability). Indeed, many nonparametric goodness of fit tests are consistent, and using entropy typicality set of $p$ as acceptance region is far from optimal testing in either finite or asymptotic regime wrt. error probabilities. Actually a recent work has also shown the MMD based test, applied to goodness of fit testing, is universally optimal in the sense that it achieves the optimal type-II error exponent for any alternative distribution with $\\\\text{KLD}<\\\\infty$, with a fixed type-I error probability. Check https://arxiv.org/abs/1908.10037 if you are interested.\\n\\n3. The mixture of Gaussian example is also questionable. It is not clear how you optimize the parameters (I didn't find details on this part). Using EM method could lead to a much better estimate. By the way, picking the right parametric model is usually not easy in practice.\\n\\n4. Definition of multi-typicality in Eq. 
(7): I guess you mean $\\\\min$, rather than $\\\\max$; otherwise, as long as you have more than one $q_i$, you can find a small enough $\\\\epsilon>0$ so that this set has nearly zero probability with sufficiently many samples from distribution $p$.\\n\\n5. First line on Page 2: you assume $D_{\\\\text{KL}}(p\\\\|\\\\tilde{p})\\\\geq d$. What is $d$, and why is this condition placed? This isn't explained.\\n\\n6. Minor:\\n- No definition of $Vol()$; this concept may be new to people in the machine learning community\\n- No definition of 'intersection', so it is hard to verify your related claim\\n- Missing statement on the i.i.d. assumption on samples.\\n- Eq. (2) and Theorem 1: you use $\\\\max$, but I don\u2019t think it is easy to see that the maximum indeed exists. So perhaps use $\\\\sup$?\", \"title\": \"Interesting. Yet questionable motivation and example, as well as missing definition/detail\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes to use ensembles of estimated probability distributions in hypothesis testing for anomaly detection.\", \"While the problem of density estimation with its application to anomaly detection is relevant, I have a number of concerns listed below:\", \"Overall, this paper is not clearly written and it is difficult to follow.\", \"Discussion is not straightforward at many points.\", \"In particular, the objective of experiments on synthetic data in Section 3 is unclear. What is the proposal and how to evaluate it in the experiments?\", \"There are also many grammatical mistakes, which also deteriorates the quality of the paper.\", \"Technical quality is not high.\", \"Equations (3) and (4) are wrong. The distribution p should be not the ground truth but the empirical distribution.\", \"In experiments, only a simple Gaussian mixture model has been examined. A variety of distributions should be examined.\", \"How strong are the assumptions in Theorem 2 in practical situations?\", \"There is no experimental evaluation for the proposed method. Hence the effectiveness of the proposed method is not clear.\"], \"minor_comments\": [\"P.2, L.3 in Section 2: \\\", The\\\" -> \\\", the\\\"\", \"P.7, L.-6: \\\"q_1(x; \\\\theta_1,\\\" -> \\\"q_1(x; \\\\theta_1),\\\"\", \"P.8, L.1 in Theorem 2: \\\"x \\\\in X, Consider\\\" -> \\\"x \\\\in X, consider\\\"\"]}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: This paper analyzes and extends a recently proposed goodness-of-fit test based on typicality [Nalisnick et al., ArXiv 2019]. Firstly, the authors give bounds on the type-II error of this test, showing it can be characterized as a function of KLD[q || p_true] where p is the true data generating process and q is an alternative generative process. The paper then shifts to the main contribution: an in-depth study of a Gaussian mixture simulation along with accompanying theoretical results. The simulation shows that maximum likelihood estimation (MLE)---due to it optimizing KLD[p_true || p_model]---does not penalize the model for placing probability in places not occupied by p_true. This means that while samples from p_true should fall within the model\\u2019s typical set, the model typical set may be broader than p_true\\u2019s. Table 1 makes this clear by showing that only 30-40% of samples from the model fall within the typical set of p_true. Yet >93% of samples from p_true fall within the models\\u2019 typical sets. The paper then makes the observation that the models do not have high overlap in their typical sets, and thus p_true\\u2019s typical set could be well approximated by the intersection of the various models\\u2019 typical sets. Applying this procedure to the Gaussian mixture simulation, the authors observe that ~95% of samples drawn from the intersection of the ensemble fall within p_true\\u2019s typical set. Moreover, ~97% of samples from p_true are in the ensemble (intersection) typical set. The paper closes by proving that the diversity of the ensemble controls the overlap in their typical sets, and hence increasing diversity should only improve the approximation of p_true\\u2019s typical set.\\n\\n____\", \"pros\": \"This paper contributes some interesting ideas to a recent topic of interest in the community---namely, that deep generative models assign high likelihood to out-of-distribution (OOD) data [Nalisnick et al., ICLR 2019] and how should we address this problem if we are to use them for anomaly detection, model validation [Bishop, 1994], etc. This paper makes some careful distinctions between the true data process, the model, and the alternative distribution, which I have not seen done often in this literature. And while the mass-covering effect of MLE on the resulting model fit is well known, this paper is the first with which I am aware that translates that fact into a practical recommendation (i.e. their intersection method). Furthermore, this connection to ensembling may provide important theoretical grounding to other ensemble-based methods for OOD detection [Choi et al., ArXiv 2019].\\n\\n____\", \"cons\": \"The primary deficiency in the paper is experimental. While the text does make some compelling arguments in the Gaussian mixture simulations, some validation on real data must be provided. Ideally experiments on CIFAR-10 vs SVHN (OOD) and FashionMNIST vs MNIST (OOD) should be reported as these data set pairings have become the benchmark cases in this line of literature.\\n\\nBesides the lack of experiments on real data, I find the paper\\u2019s material to be a bit disjointed and ununified. For instance, Theorem 1 is never discussed again after it is presented in Section 2.1. 
I thought for sure the presence of the KLD-term would be referenced again to relate the ensembling methodology back to the bound on the type-II error. For another example, normalizing flows are discussed in Section 2.3 and the change-of-variables formula given in Equation 5. However, normalizing flows are never mentioned again except in passing in the Related Work section. \\n\\n____\", \"final_evaluation\": \"While I find the paper to contain interesting ideas, it is too unfinished for me to recommend acceptance at this time. Experiments on real data must be included and the overall coherence of the draft improved.\"}"
]
} |
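Note: the thread above centers on the test of entropy typicality and the "multi-typicality" variant in Eq. 7 of the submission, where the max over ensemble members yields the intersection of the members' typical sets. Below is a minimal sketch of that statistic, assuming the interpretation the reviewers converge on (the per-sample average log-density compared against each member's entropy, with a single shared epsilon); the function names and the availability of entropy estimates are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def multi_typicality_stat(x, log_densities, entropies):
    # Statistic from Eq. 7 as discussed above: the max over ensemble members k of
    # |(-1/n) * sum_i log q_k(x_i) - H(q_k)|. Taking the max means a sample set is
    # accepted only if it is epsilon-typical for every member, i.e. it lies in the
    # intersection of the members' typical sets.
    stats = [abs(-np.mean(logq(x)) - H) for logq, H in zip(log_densities, entropies)]
    return max(stats)

def is_multi_typical(x, log_densities, entropies, eps):
    # Fail to reject H0 ("x^n is typical of the base distribution") iff the
    # statistic is at most eps; a smaller eps gives a tighter under-approximation
    # of the base typical set, at the cost of a higher type-I error.
    return multi_typicality_stat(x, log_densities, entropies) <= eps
```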
BJe8pkHFwS | GraphSAINT: Graph Sampling Based Inductive Learning Method | [
"Hanqing Zeng",
"Hongkuan Zhou",
"Ajitesh Srivastava",
"Rajgopal Kannan",
"Viktor Prasanna"
] | Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the "neighbor explosion" problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. In each iteration, a complete GCN is built from a properly sampled subgraph. Thus, we ensure a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970). | [
"Graph Convolutional Networks",
"Graph sampling",
"Network embedding"
] | Accept (Poster) | https://openreview.net/pdf?id=BJe8pkHFwS | https://openreview.net/forum?id=BJe8pkHFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QmaaJ-k17w",
"S1x9OrPhjH",
"BklyDVDhsH",
"rygiO3JFjS",
"SyeIxl6MiB",
"HJl2Jtsfor",
"rJlxKLYhcH",
"BJeVLY_aFS",
"SJe_auDjFr",
"rkgtzFwvFB",
"B1e-fZBLtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798737722,
1573840242460,
1573839959494,
1573612658669,
1573208046163,
1573202147681,
1572800120162,
1571813707804,
1571678400008,
1571416337402,
1571340553127
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1990/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1990/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1990/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1990/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1990/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1990/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1990/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1990/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1990/Authors"
],
[
"~Weilin_Cong1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"All three reviewers advocated acceptance. The AC agrees, feeling the paper is interesting.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"New Revision of the Paper Uploaded\", \"comment\": \"We have uploaded a new version of the paper which includes new experimental results.\\n\\nIn Appendix D.3, we have included the test set accuracy of baselines under various batch sizes in Table 11. The results support the \\\"point 1\\\" in our previous response. \\n\\nIn Appendix D.2, we have included an additional table comparing the total convergence time of GraphSAINT and ClusterGCN after considering the pre-processing cost and sampling cost. The results are in line with the \\\"point 2\\\" in our previous response.\"}",
"{\"title\": \"New Revision of the Paper Uploaded\", \"comment\": \"We have uploaded a new version of the paper after integrating your constructive suggestions.\", \"regarding_clarifying_the_theorem_statement\": \"We have updated the text in Section 3.2 as well as the statement of Proposition 3.1. Note that for given ${x}_u^{(\\\\ell)}$, Proposition 3.1 itself considers a single layer $\\\\ell+1$ and does not rely on the assumption that \\\"each layer learns embeddings independently\\\". On the other hand, as noted by the reviewer, such assumption is required when using the proposition to normalize the multi-layer GCN built by GraphSAINT. Therefore, in the updated paper, we clarify such assumption right before and after the statement of Proposition 3.1.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your valuable feedback. We agree that properly sampled subgraphs are critical to high accuracy, and sampling parameters need to be carefully chosen. In fact, we design samplers based on the theoretical analysis on bias and variance of the minibatch estimator.\\n\\nTo eliminate bias introduced by graph sampling, we derive the normalization on feature aggregator and minibatch loss. Note that such normalization ensures unbiasedness for an arbitrary graph sampler (Proposition 3.1) . To minimize the variance of the minibatch estimator, we derive the optimal sampling parameter for \\\"Edge\\\" sampler (Theorem 3.2). We further extend the proposed edge sampler to random walk samplers and determine the corresponding sampling parameters, based on insights into the GCN architecture.\\n\\nNote that our samplers derived from the theoretical analysis also satisfies the intuitive requirement for a \\\"proper\\\" sampler -- that nodes influential to each other should have high probability to be sampled together. Please see Section 3.3 for a detailed discussion.\\n\\nOur experiments show that the choices of normalization and graph samplers based on Proposition 3.1 and Theorem 3.2 do lead to improved accuracy. As shown in Table 2, accuracy results of GraphSAINT are indeed state-of-the-art.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We appreciate the valuable feedback from the reviewer! In the following, we would like to clarify the two \\\"Cons\\\":\\n\\n\\n1. Regarding batch size (defined by all the baselines as the number of node samples in the output layer):\\n\\nAs noted in the review, the four methods (GraphSAGE, FastGCN, S-GCN, AS-GCN) use samples of different sizes across different layers, even when the batch size is set to be the same. This leads to the intuition that the optimal batch size (w.r.t. accuracy) should be different for different methods. In the experiments, we have treated batch size as a hyperparameter dependent on the sampling method as well as the training graph topology.\\n\\nBy experiments on varying the batch sizes, we observe: for GraphSAGE, S-GCN and AS-GCN, their default batch sizes (512,1000 and 512, respectively) lead to the highest accuracy on all datasets. For FastGCN, increasing the default batch size (from 400 to 4000) leads to noticeable accuracy improvement. For ClusterGCN, different datasets correspond to different optimal batch sizes, and the accuracy in Section 5.1 is already tuned by identifying the optimal batch size on a per graph basis. See also Table 11 (or Table 10 in the original submission) of Appendix D.3 for experiment details. \\n\\n\\n2. Regarding sampling overhead:\\n\\nAs discussed in Appendix D.2, the two best samplers of GraphSAINT, \\\"Edge\\\" and \\\"RW\\\", are very light-weight. In addition, similar to ClusterGCN, our sampling can also be done offline since the sampler does not require node features. To be more specific, time to construct one subgraph by \\\"Edge\\\" or \\\"RW\\\" is always less than 25% of the time to perform one gradient update. On the other hand, as shown in Table 9, time to identify clusters by ClusterGCN can be much longer than its total training time if the training graph is large and dense. For example, clustering time on Amazon is over 5x the total training time of ClusterGCN. \\n\\nTaking into account the pre-processing time, sampling time and training time altogether, we summarize the total convergence time (in seconds) of GraphSAINT and ClusterGCN in the following (corresponding to Table 2 configuration):\\n---------------------------------------------------------------------------------------\\n PPI Flickr Reddit Yelp Amazon\\n---------------------------------------------------------------------------------------\\nGraphSAINT-Edge 91.0 7.0 16.6 273.9 401.0\\nGraphSAINT-RW 103.6 7.5 17.2 310.1 425.6\\nClusterGCN 163.2 12.9 55.3 256.0 2804.8\\n----------------------------------------------------------------------------------------\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thanks a lot for your valuable feedback. We will state our assumption on the theorem more explicitly in our next revision.\", \"answer_to_the_question\": \"Yes, there is a typo in Equation 3. Thanks for pointing this out! The correct expression should be $\\\\mathbb{E}(L_\\\\text{batch})=\\\\frac{1}{|\\\\mathbb{G}|} \\\\sum\\\\limits_{\\\\mathcal{G}_s \\\\in \\\\mathbb{G}}\\\\sum\\\\limits_{v\\\\in \\\\mathcal{V}_s} \\\\frac{L_v}{\\\\lambda_v}=\\\\frac{1}{|\\\\mathcal{V}|}\\\\sum\\\\limits_{v\\\\in\\\\mathcal{V}} L_v$.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Scaling GCNs to large graphs is important for real applications. Instead of sampling the nodes or edges across GCN layers, this paper proposes to sample the training graph to improve training efficiency and accuracy. It is a smart idea to construct a complete GCN from the sampled subgraphs. Convincing experiments can verify the effectiveness of the proposed method. It is a good work.\", \"question\": \"1. How can the authors guarantee that subgraphs are properly sampled? Are there any theoretical guarantee?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a training method for graph convolution networks on large graphs. The idea is to train a full GCN on partial samples of the graph. The graph samples are computed based on the graph connectivity, and the authors propose methods for reducing the bias and variance in the training procedure.\\n\\nThe idea is elegant and intuitive, and the fact that the approach can work with various graph sampling methods adds to its generality. The paper is well-written and the fact that code is published is valuable.\\n\\nThe results on bias and variance are under the assumption that each layer independently learns an embedding. This would be clearer if added explicitly in the theorem statements (and not as part of the main text). It would be interesting to discuss how realistic this assumption is, and how large the actual bias is. Perhaps this can be measured empirically? \\nNevertheless, the empirical result indeed support the claim that this simplifying assumption is enough to derive useful learning rules.\\n\\nOverall, I believe this is a solid contribution, and I can foresee future extensions that improve the results with more complex graph sampling methods.\", \"question_to_the_authors\": \"I did not understand the second equality in Eq. 3. Could there be a typo?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a new sampling method to train GCN in the mini-batch manner. In particular, unlike existing methods which samples the mini-batch in the node-wise way, GraphSAINT proposed to sample a mini-batch in the graph-wise way. As a result, GraphSAINT uses the same graph across different GCN layers, while most existing methods use different graphs across different GCN layers. In addition, the authors show that this sampling method is unbiased. Extensive experimental results have shown improvement over existing methods. Overall, this idea is interesting and well presented.\", \"pros\": \"1. A new sampling method for the stochastic training of GCN. Have good performance.\\n2. Extensive experiments to verify the performance of the proposed method.\\n3. The theoretical analysis looks sound.\", \"cons\": \"1. GraphSAGE and FastGCN use different graphs across different GCN layers, while ClusterGCN and GraphSAINT use the same graph across different GCN layers. To make a fair comparison, it is necessary to have the same batch size for different methods. How do you deal with this issue in your experiment?\\n2. For ClusterGCN, the clustering procedure is done before the training. So, it needs much less computational overhead for sampling in the training course. However, GraphSAINT needs to do the heavy sampling online. Thus, it may consume more time than ClusterGCN for large graphs. It's better to show the running time of these two methods.\"}",
"{\"comment\": \"Thanks for your interest in our paper. This is a valid concern.\\n\\nAs stated at the beginning of Section 3.2, \\u201canalysis of the complete multi-layer GCN is difficult due to non-linear activations. Thus, we analyze the embedding of each layer independently.\\u201d In other words, to derive Proposition 3.1, we followed the same assumption as AS-GCN and FastGCN, that each layer independently learns an embedding. Thus, the condition on the previous layer can be removed. This assumption also motivates the proposed edge sampler.\\n\\nAlternatively, in Section 3.4, we have also performed analysis by assuming no non-linear activations. Then we can collapse L layers of A into an equivalent 1 layer of A^L. Analysis based on this assumption leads to the proposed random walk sampler. \\n\\nIn practice, it is possible that neither of the above two assumptions are exactly true, but they provide an approach to normalize loss and choose samplers and their parameters. Our experiments show that the choices of normalization and samplers based on these assumptions do lead to improved accuracy.\", \"title\": \"GraphSAINT is unbiased under our assumption\"}",
"{\"comment\": \"Thanks for your paper, it is indeed a very interesting idea. However, I cannot agree with all the claims you made.\\n\\nFor example in Proposition 3.1 you claim as $\\\\xi_v^{(l+1)}$ is an unbiased estimator of the aggregation of $v$ in full GCN if $\\\\alpha_{u,v} = p_{u,v}/p_v$, i.e., $\\\\mathbb{E}(\\\\xi_v^{(l+1)}) = \\\\sum_{u\\\\in V} A_{v,u}x_u^{(l)}$.\\n\\nHowever, I think $\\\\xi_v^{(l+1)}$ is unbiased only condition on the last layer feature $x_u^{(l)}$, i.e., $\\\\mathbb{E}(\\\\xi_v^{(l+1)} | x_u^{(l)}) = \\\\sum_{u\\\\in V} A_{v,u}x_u^{(l)} $ due to the non-linear activations. You cannot ignore the non-linear activation in your proof since $\\\\mathbb{E}(\\\\sigma(x)) \\\\neq \\\\sigma(\\\\mathbb{E}(x))$ if $\\\\sigma()$ is an non-linear activation.\\n\\nPlease clarify if possible. Thanks.\", \"title\": \"GraphSaint is not unbiased\"}"
]
} |
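The corrected Eq. 3 in the rebuttal above implies a simple recipe: divide each sampled node's loss by lambda_v = |V| * p_v, where p_v is the probability that node v appears in a sampled subgraph, so the minibatch loss is an unbiased estimator of the average full-graph loss. Below is a minimal sketch, assuming p_v is estimated offline by pre-sampling (the rebuttal notes the samplers need no node features, so this is cheap); `sample_subgraph` and `per_node_loss` are hypothetical stand-ins, not the released code.

```python
import numpy as np

def estimate_node_probs(sample_subgraph, num_nodes, n_presample=200):
    # Pre-sample subgraphs offline to estimate p_v, the probability that
    # node v appears in a sampled subgraph; sample_subgraph() is assumed to
    # return an array of node ids. Assumes every training node has p_v > 0.
    counts = np.zeros(num_nodes)
    for _ in range(n_presample):
        counts[sample_subgraph()] += 1.0
    return counts / n_presample

def normalized_minibatch_loss(per_node_loss, subgraph_nodes, p_v, num_nodes):
    # Corrected Eq. 3: E[L_batch] = (1/|G|) sum_{G_s} sum_{v in V_s} L_v / lambda_v
    #                             = (1/|V|) sum_{v in V} L_v, with lambda_v = |V| * p_v,
    # i.e. dividing each sampled node's loss by lambda_v removes the sampling bias.
    lam = num_nodes * p_v[subgraph_nodes]
    return np.sum(per_node_loss / lam)
```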
H1g8p1BYvS | Adversarial Filters of Dataset Biases | [
"Ronan Le Bras",
"Swabha Swayamdipta",
"Chandra Bhagavatula",
"Rowan Zellers",
"Matthew Peters",
"Ashish Sabharwal",
"Yejin Choi"
] | Large-scale benchmark datasets have been among the major driving forces in AI, supporting training of models and measuring their progress. The key assumption is that these benchmarks are realistic approximations of the target tasks in the real world. However, while machine performance on these benchmarks advances rapidly --- often surpassing human performance --- it still struggles on the target tasks in the wild. This raises an important question: whether the surreally high performance on existing benchmarks is inflated due to spurious biases in them, and if so, how we can effectively revise these benchmarks to better simulate more realistic problem distributions in the real world.
In this paper, we posit that while real-world problems include a great deal of long-tail problems, existing benchmarks are overly populated with similar (thus non-tail) problems, which, in turn, leads to a major overestimation of true AI performance. To address this challenge, we present a novel framework of Adversarial Filters to investigate model-based reduction of dataset biases. We show that optimum bias reduction via AFOptimum is intractable, and thus propose AFLite, an iterative greedy algorithm that adversarially filters out data points to identify a reduced dataset with a more realistic problem distribution and considerably fewer spurious biases.
AFLite is lightweight and can in principle be applied to any task and dataset. We apply it to popular benchmarks that are practically solved --- ImageNet and Natural Language Inference (SNLI, MNLI, QNLI) --- and present filtered counterparts as new challenge datasets where the model performance drops considerably (e.g., from 84% to 24% for ImageNet and from 92% to 62% for SNLI), while human performance remains high. An extensive suite of analyses demonstrates that AFLite effectively reduces measurable dataset biases in both the synthetic and real datasets. Finally, we introduce new measures of dataset biases based on K-nearest-neighbors to help guide future research on dataset development and bias reduction. | [
"benchmarks",
"dataset biases",
"aflite",
"adversarial filters",
"target tasks",
"real world",
"human performance",
"spurious biases",
"realistic problem distributions",
"great deal"
] | Reject | https://openreview.net/pdf?id=H1g8p1BYvS | https://openreview.net/forum?id=H1g8p1BYvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"lcvzjGEwzwH",
"2ntjJ4kyH3",
"rJgdQ_85sS",
"HylVBv8csr",
"SJxEp8U9oB",
"rJeVCr85sS",
"H1xnLtZkcS",
"HkgJVxk6tS",
"S1gBu5PLtB"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1596658062640,
1576798737693,
1573705760148,
1573705531725,
1573705404174,
1573705164183,
1571916115851,
1571774503146,
1571351149364
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1989/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1989/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1989/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1989/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1989/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1989/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1989/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1989/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Updated version (Accepted at ICML 2020)\", \"comment\": \"We have updated the version of our paper with a version that is now in the proceedings of ICML 2020. Also found here: https://arxiv.org/abs/2002.04108\\n\\nNote that the changes from the ICLR submission include experiments demonstrating the ability of models trained on AFLite-filtered data to generalize to out-of-distribution tasks in both NLP as well as vision. We also include detailed information about the choice of hyperparameters.\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to address the issue of biases and artifacts in benchmark datasets through the use of adversarial filtering. That is, removing training and test examples that a baseline model or ensemble gets wright.\\n\\nThe paper is borderline, and could have flipped to an accept if the target acceptance rate for the conference were a bit higher. All three reviewers ultimately voted weakly in favor of it, especially after the addition of the new out-of-domain generalization results. However, reviewers found it confusing in places, and R2 wasn't fully convinced that this should be applied in the settings the authors suggest. This paper raises some interesting and controversial points, but after some private discussion, there wasn't a clear consensus that publishing it as is would do more good than harm.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Models trained on AFLite-filtered data indeed perform better in the wild\", \"comment\": \"We thank Reviewer 3 for their comments, and address their main points below.\", \"aflite_and_generalization\": \"We now have evidence suggesting resultant models trained on AFLite-filtered data indeed perform better in the wild. Our new results on three additional benchmarks show that a model trained on the AFLite-filtered dataset generalizes better than a model trained on the full SNLI dataset (Sec 3.2.1; please also see overall response https://openreview.net/forum?id=H1g8p1BYvS¬eId=rJeVCr85sS ).\", \"terminology\": \"The notion of predictability score is indeed related to classification error and its empirical estimate is indeed reminiscent of k-fold cross-validation. Yet, to avoid confusion with the term classification error, we use the term \\u201cpredictability score of an instance\\u201d for the out-of-sample classification accuracy averaged over several models trained on a number of random subsets of the data.\", \"others\": \"We have updated the contents of Page 3 to remove the repeated paragraphs, and have changed the description of q() to solely distributions with a non-zero support.\"}",
"{\"title\": \"AFLite recalibrates benchmarks and promotes generalization\", \"comment\": \"We thank Reviewer 1 for their comments, and address each of the concerns below:\", \"impact_1\": \"AFLite recalibrates benchmarks:\\n[R1 (2)] We would like to clarify that the goal of this work is to recalibrate benchmarks, for the purpose of reporting true model performance. This does not necessarily involve removing the most important instances from the training data, but those which are spuriously correlated with the ground truth, making the overall task trivially easier. Once such correlations are minimized (i.e. after filtering with AFLite), the performance reduces.\", \"impact_2\": \"Do models trained on AFLite-filtered data generalize better? YES! :\\nA second contribution of our work is demonstrating that the presence of instances with spurious correlations with the ground truth prevents models from generalizing to real world data. Our new results on three additional benchmarks show that a model trained on the AFLite-filtered dataset generalizes better than a model trained on the full SNLI dataset (Sec 3.2.1; also please see overall response: https://openreview.net/forum?id=H1g8p1BYvS¬eId=rJeVCr85sS ).\", \"computational_overhead\": \"[R1 (2)] Our approach operates on precomputed representations of the instances and relies on inexpensive logistic regressions. As a result, it is very efficient, scalable and parallelizable. It can efficiently run on CPU machines as well, and an effective value for m is a multiple of the number of available cores (e.g., 64 or 128).\", \"hyperparameters\": \"[R1 (3)] We have updated the draft to provide more information about the different hyperparameters (Sec 2; Implementation). These were selected based on the learning curves observed when training the model that we use to generate feature embeddings for the rest of the data, as well as the available computational budget (#CPU cores etc.). The training size, t, is kept constant throughout the algorithm, as R1 correctly points out; hence we do not modify the hyperparameters across iterations.\", \"new_baselines\": \"[R1 (4)] As Reviewer 1 suggested, we have now provided a baseline (Sec 3.2 paragraph 6) which filters out the most predictable examples in a single pass. This corresponds to a non-iterative version of AFLite. For the SNLI task, this baseline (dev acc = 72.1%), however, is not as powerful as the full AFLite model (dev acc = 62.6%). This demonstrates the need for an iterative procedure involving models trained on multiple partitions in each iteration.\", \"retraining_post_aflite\": \"[R1 (5)] We do indeed completely retrain models after creating the filtered dataset. With the exception of finetuning the same pretrained representations such as RoBERTa (publicly available), there is no sharing of parameters between the older model trained on the full dataset, and the newer model trained on the filtered dataset. Moreover, the parameters used during the AFLite filtering are not reused when reporting benchmark performance. As shown in our experiments, these new representations, however, are unable to recover the original performance.\", \"others\": \"[R1 (1)] We apologize for the duplicates, and have fixed the issue in the updated draft.\"}",
"{\"title\": \"Thank you for the positive feedback!\", \"comment\": \"We thank Reviewer 2 for their positive feedback. Please also see our overall response for some new experimental evidence explicitly addressing the robustness of models trained on AFLite-filtered data: https://openreview.net/forum?id=H1g8p1BYvS¬eId=rJeVCr85sS\"}",
"{\"title\": \"Overall Comments (For All Reviewers)\", \"comment\": \"We thank the reviewers for their helpful comments.\\n\\nOur AFLite algorithm filters instances exhibiting spurious correlations with the gold labels, from several popular benchmarks, suggesting that the benchmarks have collection biases [as noted by R1,R2]. Training on the filtered subset prevents models from overfitting to such correlations, hence yields decreased benchmark performance [as noted by R3]. This, in itself, is an important finding because a significant fraction of empirical, applied ML research is evaluated off of these benchmarks. \\n\\nIn addition, in response to R1 and R3, we have now explicitly tried to address this question: \\n- Do models trained on AF-Lite filtered data generalize better to data in the wild?\\n\\nWe found the answer to be \\u201cyes\\u201d! More concretely, we provide new experimental results on three additional benchmarks: zero-shot evaluation on two diagnostic Natural Language Inference (NLI) datasets (HANS; McCoy et al., 2019, NLI Diagnostics; Wang et al., 2018) as well as transfer learning on the newly released Adversarial-NLI dataset (Nie et al., 2019). We obtain the following results using a RoBERTa model, with all results in accuracy (also in our updated Section 3.2.1):\\n\\n HANS\\nDataset All Lex. Subsequence Constituent \\n100% of SNLI 70.7% 84.4% 35.4% 13.4%\\nAFLite-filtered SNLI 74.5% 96.3% 56.6% 57.4%\\n\\n NLI Diagnostics\\nDataset All Logic Knowledge\\n100% of SNLI 59.3% 52.8% 48.9% \\nAFLite-filtered SNLI 62.0% 53.2% 57.7%\\n\\n Adversarial NLI\\nDataset Rd1 Rd2 Rd3\\n100% of SNLI 58.5% 48.3% 50.1%\\nAFLite-filtered SNLI 65.1% 49.1% 52.8%\\n\\nIn particular, our model is very robust to the challenging examples in the HANS benchmark (upto 44% improvement), which is aimed at confusing models purely relying on simple linguistic constructions in the input. These results are very encouraging, considering that the AFLite-filtered data is a small subset of the original data. Moreover, we can infer that the AFLite-filtered distribution is closer to the real-world data distribution - training on just the AFLite-filtered data produces a more robust model than training on the entire dataset. \\n\\nThe reviewers also brought up other helpful changes and suggestions, we have addressed those in responses to each reviewer.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"the paper proposes an algorithm that adversarially filters out examples to reduce dataset-specific spurious bias. the key intuition is that the datasets are curated in a way that easy to obtain samples have higher probability to be admitted to the dataset. however, not all real world samples are easy to obtain. in other words, real world samples may follow a completely different distribution than curated samples with easy-to-obtain ones.\\n\\nthe proposed approach discounts the data-rich head of the datasets and emphasizes the data-low tail. they quantify data-rich / data-low by the best possible out-of-sample classification accuracy achievable by models when predicting. \\n\\nthen adjust the dataset via the expected out-of-sample classification accuracy. the idea of the paper is interesting and the experiments show a substantial reduction in the performance of existing algorithms. this make the paper a promising proposal.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: This paper hypothesizes that even though we are able to achieve very impressive performance on benchmark\\u00a0datasets as of now (e.g. image net), it might be due to the fact that benchmarks themselves have biases. They introduce an algorithm that selects more representative data points from the dataset that allow to get a better estimate of the performance\\u00a0in the wild. The algorithm ends up selecting more difficult/confusing instances.\\n\\nThis paper is easy to read and follow (apart from some hickup with a copy of three paragraphs), but in my opinion of limited use/impact.\", \"comments\": \"1) There is a repetition of the \\\" while this expression\\u00a0formalizes..\\\" paragraph and the next paragraph and the paragraph \\\"As opposed to ..\\\" is out of place. Please fix\\n2) I am not sure \\n- What applications the authors suggest. They seem to say that benchmark authors should run their algorithm and make benchmarks harder. To me it seems that benchmarks become harder because you remove most important instances from the training data (so Table 4 is not surprising - you remove the most representative instances so the model can't learn)\\n- how practically feasible it is. Even if in previous point I am wrong, the algo requires retraining the models on subsets (m iterations). How large is this m?\\n3) Other potential considerations:\\n- When you change the training size, the model potentially needs to be re-tuned (regularization etc) (although it might be not that severe since the size of the training data is preserved at t)\\n- How do u chose the values of hyperparams\\u00a0(t, m,k eta), how is performance of your algorithm depends on it\\n4) I don't see any good baselines to compare with - what if i just chose instances that get the highest prediction score on a model and remove these. How would that do? For NLP (SNLI) task i think this would be a more reasonable baseline than just randomly dropping the instances,\\n5) I wonder if you actually retrain the features after creating filtered dataset, new representation would be able to recover the performance.\\u00a0\\n\\nI read authors rebuttal and new experiments that show that the models trained on filtered data generalize better are proving the point, thanks. Changing to weak accept\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to learn a subset of a given dataset that acts as an adversary, that hurts the model performance when used as a training dataset. The central claim of the paper is that existing datasets on which models are trained are potentially biased, and are not reflective of real world scenarios. By discarding samples that add to this bias, the idea is to make the model perform better in the wild. The authors propose a method to do so, and then refine it so that the resulting solution is tractable. They implement the method on several datasets and show that by finding these adversarial samples, they indeed hurt model performance.\", \"comments\": [\"Overall the method seems to be something like what is done in k-fold CV, except here we want to find a subset that is the worst at predicting model performance. To this end, I find the introduction of terms like \\\"representation bias\\\" and \\\"predictability scores\\\" unnecessary. Why not model the entire problem in terms of classification error?\", \"Page 3 : the last 2 paragraphs are repeated from above.\"], \"e\": [\"i read the author responses and they addressed my concern about model performance in the wild. I have updated my score to reflect this.\", \"eqn (3) and the set of equations above: for the math to work, you need q(i) to have non-zero support on all samples. To that end, the sentence that says it works for \\\"any\\\" q() is incorrect.\", \"The experiments back your claim that your method makes the data more challenging to train on. But that does not address the central idea, that the resultant models do better in the wild. If the aim is to make the models robust to real world, you have provided no evidence that your method does so.\", \"Table 1: the D_92k column is good comparison to have. Thanks.\"]}"
]
} |
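The two reviews above describe the filtering procedure only at a high level: retrain models on random subsets m times, score each instance's "predictability", drop the top-k most predictable, and repeat until t instances remain. The sketch below is a minimal, hypothetical reading of that loop, not the paper's actual algorithm: the function name `predictability_filter`, the logistic-regression probe, and all default values are our assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predictability_filter(X, y, t, m=64, k=500, train_frac=0.8, seed=0):
    """Drop the most predictable instances until at most t remain.

    Hypothetical reading of the reviewed method: an instance's
    predictability score is the fraction of random train/heldout splits
    in which a freshly trained linear probe classifies it correctly.
    """
    rng = np.random.RandomState(seed)
    idx = np.arange(len(X))  # indices of the instances still kept
    while len(idx) > t:
        correct = np.zeros(len(idx))
        counts = np.zeros(len(idx))
        for _ in range(m):  # the m retraining iterations Reviewer #1 asks about
            perm = rng.permutation(len(idx))
            n_train = int(train_frac * len(idx))
            tr, ho = perm[:n_train], perm[n_train:]
            probe = LogisticRegression(max_iter=200).fit(X[idx[tr]], y[idx[tr]])
            correct[ho] += probe.predict(X[idx[ho]]) == y[idx[ho]]
            counts[ho] += 1.0
        scores = correct / np.maximum(counts, 1.0)  # per-instance predictability
        n_drop = min(k, len(idx) - t)  # drop the n_drop most predictable instances
        idx = idx[np.sort(np.argsort(scores)[: len(idx) - n_drop])]
    return idx
```

Under this reading, each round retrains m cheap probes, which is exactly the practicality cost Reviewer #1 flags, and the baseline the reviewer proposes (remove the instances a single model scores highest on) roughly corresponds to a single round of this loop with m = 1.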
rJxBa1HFvS | Value-Driven Hindsight Modelling | [
"Arthur Guez",
"Fabio Viola",
"Theophane Weber",
"Lars Buesing",
"Steven Kapturowski",
"Doina Precup",
"David Silver",
"Nicolas Heess"
] | Value estimation is a critical component of the reinforcement learning (RL) paradigm. The question of how to effectively learn predictors for value from data is one of the major problems studied by the RL community, and different approaches exploit structure in the problem domain in different ways. Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function. In contrast, model-free methods directly leverage the quantity of interest from the future but have to contend with a potentially weak scalar signal (an estimate of the return). In this paper we develop an approach for representation learning in RL that sits in between these two extremes: we propose to learn what to model in a way that can directly help value prediction. To this end we determine which features of the future trajectory provide useful information to predict the associated return. This provides us with tractable prediction targets that are directly relevant for a task, and can thus accelerate learning of the value function. The idea can be understood as reasoning, in hindsight, about which aspects of the future observations could help past value prediction. We show how this can help dramatically even in simple policy evaluation settings. We then test our approach at scale in challenging domains, including on 57 Atari 2600 games. | [
"hindsight",
"value prediction",
"value estimation",
"critical component",
"reinforcement learning",
"paradigm",
"question",
"predictors",
"value"
] | Reject | https://openreview.net/pdf?id=rJxBa1HFvS | https://openreview.net/forum?id=rJxBa1HFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"VfCNrPDP5H",
"Hkeh6HcjsB",
"ryezItX9jr",
"HyeFfYm5sS",
"Skg30_m5sB",
"Skg7ydsXcr",
"BylXiem0KH",
"Byev_nUoFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737664,
1573787075701,
1573693769683,
1573693713218,
1573693652308,
1572218843389,
1571856539197,
1571675246596
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1988/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1988/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1988/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1988/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1988/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1988/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1988/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies the problem of estimating the value function in an RL setting by learning a representation of the value function. While this topic is one of general interest to the ICLR community, the paper would benefit from a more careful revision and reorganization following the suggestions of the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Authors - Final Decision\", \"comment\": \"I agree that it is an interesting idea and shows promise. However, given the current exposition and investigation done in the paper about the approach, I feel that a 'weak accept' is the right decision for this manuscript. I hope this doesn't deter the authors from working on this in the future, and I hope to see a more polished version of this manuscript out soon :)\\n\\nThanks!\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for taking the time to review our paper.\", \"let_us_clarify_the_two_major_concerns_first\": \"\", \"re\": \"the proposal of giving h_{t-k} as input\\n\\nThe network in our experiments is recurrent, so providing past information as additional input would only help if there was some memory requirement that the network is not able to satisfy. \\nIn the portal task, the important decision doesn\\u2019t require any memory (everything is observed in the portal room to select the portal), but there is a memory demand when in the reward room. We ran an extra experiment where we gave h_{t-k} as an additional input to the policy and value for the baseline and it did not perform better than what is reported in Figure 5a for the actor-critic baseline.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for taking the time to review our paper and the detailed and specific feedback.\\n\\n1- Even when the model also predicts the reward, the observation model is usually asked to indifferently predict all of the high-dimensional observation, even if some aspects are not relevant to the task (i.e. to the rewards). The transition model does not focus on what is most important for the task, so it may not be data-efficient to learn (and it may be limited by capacity).\\n\\nNonetheless, we\\u2019ll attempt to clarify that specific sentence in the abstract. \\n\\n2- Thanks for pointing out this work, it is indeed relevant but the method is quite different. In the value-aware model learning work (iterative Value-Aware Model Learning), the model loss minimizes some form of value consistency: the difference between the value at some next state and the expected value from starting in the previous state, applying the model and computing the value. While this makes the model sensitive to the value, it only exploits the future real state through V as a learning signal (just like in bootstrapping). In contrast, our model is both sensitive to the value and can exploit a richer signal from the future observations. We\\u2019ll discuss that work in the related work section.\\n\\n3- This is a typo, the state shouldn\\u2019t be part of \\\\phi_{\\\\theta_2}\\u2019s arguments in Eq 3.\\n\\n4- Thanks for the feedback. We will update the legend of Fig 1 and the example description to make it clearer.\\n\\n5-7 We are sorry you found the current exposition of the work in these sections to not be ideal. We will attempt to improve the balance between certain sections and especially expand the discussion of the architecture. \\n\\n8- In all our experiments, the number of parameters and the computational cost of evaluating the network is the same in HiMo and the baseline because we use the exact same network architecture and only set some of the extra losses to 0 to obtain the baseline. We proposed two domains (the illustrative task and the portal example) where we have isolated some problem features where hindsight modelling is particularly relevant. We thought it was also important to test it in domains that were not specially conceived to test the idea and we chose the 57 Atari games for that. Of course, more extensive empirical investigations is always desirable but we believe this is sufficient to establish that this novel idea can be successful in practice.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for taking the time to review our paper.\", \"1__re\": \"large-scale experiment\\nWe ran a control experiment on the bowling Atari game using Impala (see Figure 7-c) that tested whether the gains using R2D2 were not specific to Q-value based methods. These results suggest the benefits at scale are not limited to the value-based R2D2 setting. Testing the approach more broadly (on dmlab or challenging continuous control tasks as you suggested) is certainly something we want to look at in the future.\", \"2__re\": \"sensitivity:\\nYes this is a good point. It is not overly sensitive to these exact values (a dimension of 16 for \\\\phi does fine for example) but much larger values did tend to perform worse when we were tuning the architecture. One hypothesis is that a \\\\phi with small dimensionality regularize the representation to only include relevant features, while larger dimensional \\\\phi may contain less relevant information that will distract the modeling effort on phi. We plan to investigate that aspect more in future work.\\n\\nThank you for the suggestions regarding the figures. We\\u2019ll include the learning curves for all the games in the appendix.\\n\\nWe take your point about Figure 3, we\\u2019ll think about a way to make it more useful without relying too much on the text.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Value-Driven Hindsight Modelling proposes a method to improve value function learning. The paper introduces the hindsight value function which estimates the expected return at a state conditioned on the future trajectory of the agent. How use this hindsight value function is not obvious, since an agent does not have access to the future states needed in order to take actions (for Q-Learning) and the hindsight value function is a biased gradient estimator for training policy gradient methods.\\n\\nThe authors train the standard value function (which does not have access to future information) to predict the features which the highsight value function learns to summarize the value relevant parts of the future trajectory. These predicted features can then be used in place of the actual hindsight value function, circumventing the issues discussed above. The authors argue that this auxiliary objective provides a richer training signal to the normal value function, helping it to better learn what information in a given state is relevant to predicting future rewards.\\n\\nThe paper is well structured and written, flowing from high level motivation and review into the core of the method, followed by analysis of the approach, and then proceeds through three experiments. The first two are toy / crafted experiments which build intuition and probe the behavior of the method and finally a large scale test on the Atari 57 benchmark demonstrating improvements when augmenting a state-of-the-art method with HiMo.\\n\\nThis reviewer recommends acceptance (I would give a 7 given more granularity) based on the contribution of a new auxiliary objective for value functions and the strength of the experimental suite. The Portal Choice environment is well crafted and instrumented with the graphs of figure 5b and 5c to show the behavior of the approach and the clean demonstration of an improvement over a previously SOTA method for Atari 57 is encouraging (the same architecture and the ablation simply sets the auxiliary objective\\u2019s weight to 0). However, the reviewer has some caution and concerns as follows:\\n\\n1) The lack of a large scale experiment demonstrating improvement with an actor-critic method. While the Portal Choice experiments are informative and use Impala, it is a bit toy, and it would increase the reviewer\\u2019s confidence in the generality and robustness of the approach if improvements were also demonstrated for an actor-critic method on a large environment suite. Atair 57 could work but ideally a different setting such as DMLab 30 or continuous control from pixels. Demonstrating improvements in one of these additional settings would raise the reviewer to a strong acceptance.\\n\\n2) The potential sensitivity of the approach to the two important hyperparameters that the authors mention, the dimensionality of the hindsight feature space (to reduce approximation error) and the # of future states it conditions on (to avoid just observing the full return directly). 
The very low dimensionality of the hindsight feature space (d=3 for Atari) seems a bit at odds with the explanation that the hindsight features provide a strong training signal for learning to better extract value relevant information from the state. Experiments that studied sensitivity to these would provide better perspective on the robustness of HiMo.\", \"questions_and_suggestions_for_improving_the_paper\": \"For Figure 6 the dynamic range gets squashed by a few games with relatively large performance improvements or regressions. Changing to a log-scale on the y-axis could be more informative? For instance, I find it pretty difficult to eyeball the ~1 human normalized score median improvement according to Table 1 from the chart.\\n\\nFigure 3 could also be improved. It requires significant context from definitions in the paper in order to understand. It could be reworked into a stand alone expository overview of HiMo that helps readers quickly grok the idea of the paper such that abstract + figure is enough.\\n\\nCould the authors consider showing / adding full learning curves (median human normalized score?) for HiMO vs the baseline on Atari 57? This would help readers get a qualitative feel for the learning dynamics of the algorithm instead of only having a final scalar measure at the end of training.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThe paper proposes a way to learn better representation for RL by employing a hindsight model-based approach. The reasoning is that during training, we can observe the future trajectory and use features from it to better predict past values/returns. However to make this practical, the proposed approach fits an approximator to predict these features of the future trajectory from the current state and then subsequently, use them to predict the value. The authors claim that this extra information can be used to learn a better representation in some problems and lead to faster learning of good policies (or optimal value functions)\", \"decision\": \"Weak Accept\\nMy decision is influenced by the following two key reasons\\n\\n(1) I like the idea of hindsight modeling a lot. It is true that a trajectory gives much more information than just a weak scalar signal indicating return from each state in the trajectory. Identifying a way to make use of all the extra information in the trajectory to aid in value prediction is useful. The proposed approach is a step towards that, and I think the community should be made aware of that for sake of future research in this direction.\\n\\n(2) Having said that, I am not super satisfied with the way the authors have presented their approach. The explanation is jumbled and confusing, at times. The paper needs careful rewriting to communicate ideas better and notation needs to be standardized earlier. Some of the sections are either redundant or lack insights. Even if they do have insights, they are not highlighted leaving the reader to search for them. The experimental setup is not clear and the authors could have spent more space in the paper dedicated to how the hindsight modeling approach can be implemented within an existing RL method.\", \"comments\": \"(1) The line in abstract \\\"but this approach is usually not sensitive to reward function\\\" doesn't make sense. Isn't reward function part of the model? So you are learning the reward function, so how is it not sensitive to it? I think I understand what the authors are saying but it took me until the end of Sec 3.2 to get that. \\n\\n(2) How does this work relate to Value-aware model learning works from Farahmand (AISTATS 2017, NeurIPS 2018). The premise seems to be similar: learn a model taking into account the underlying decision-making problem to be solved and the structure of the value function. The paper needs a discussion of these set of works\\n\\n(3) In Section 3.3, \\\\phi_{\\\\theta_2} has conflicting function parameters in eq (2) and (3). \\n\\n(4) Section 3.4 is very confusing. I understood the setup of the problem and it seemed like it was very illustrative of an example where proposed approach will excel. However, Fig 1 and its caption are unclear and I found it hard to understand what the figure is conveying. The paragraph underneath the figure had no explanation for the Fig 1, and instead directly jumped to the results in Fig 2. The paper could use a better explanation of Fig 1. 
and explain why the proposed approach can learn the structure of s' and better predict value at s\\n\\n(5) Section 3.5 partially answers the question \\\"when is it advantageous to model in hindsight?\\\" In cases, where L_model is low, of course its advantageous to model in hindsight! But the real question that needs to be answered is buried in the last paragraph. What if learning a good \\\\phi is as hard as predicting the return? In this case, do we still gain any advantage? I am not sure how having a limited view of future observations and low dimensional \\\\phi helps. If the feature that decides future return lies beyond the limited view of future observations, does it still not give any advantage? Questions like these might be useful in aiding the reader to understand why hindsight modeling is better\\n\\n(6) Section 4 needs more text to explain what components of the architecture are learnt using what losses, and provide intuitions for why that is the case. It seems like that is very crucial to ensure that \\\\phi doesn't learn something trivial and non-useful. I am surprised section 4 is so small, and Fig 3 is not useful. Maybe, you can combine section 3.4, 3.5 and condense them, and using the obtained space in expanding sec 4.\\n\\n(7) The experiments section immediately dives into the problem setup and results. It will be useful to have a subsection explaining how the proposed hindsight model is implemented within an RL algorithm. Currently, it is hard for the reader to connect what he/she has read until Section 4 with what's presented in Section 5.\\n\\n(8) The results are convincing. However, my biggest concern is the experiments were not designed carefully to analyze how much the hindsight modeling contributed in the increase of performance? Are the number of parameters in the value function approximator the same between the hindsight RL algorithm and the baseline? Can we have a simplistic example that is amenable to isolate the influence of hindsight modeling from other factors? Fig. 2 does a reasonable job at it but I think the hindsight modeling approach can achieve improvement in more diverse problems. In a way, the proposed feature is doing state space augmentation so that value can be easily predicted from the features of the augmented state. So, identifying the characteristics of the problems where this can be done is very useful to the RL practictioner.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a new model-based reinforcement learning method, termed hindsight modelling. The method works by training a value function which, in addition to depending on information available at the present time is conditioned on some learned embedding of a partial future trajectory. A model is then trained to predict the learned embedding based on information available at the current time-step. This predicted value is fed in place of the actual embedding to the same value model, to generate a value prediction for the current time-step. So instead of just learning a value function based on future returns, the method uses a two-step process of learning an embedding of value relevant information from the future and then learns to predict that embedding.\\n\\nThe paper gives some motivating examples of why and when such an approach could yield an advantage over standard value learning methods like Monte-Carlo or temporal difference learning. The basic idea is that when the returns obey some causal structure like X->Y->Z it may be easier to learn P(Y|X) and P(Z|Y) than to learn P(Z|X) directly. In particular, the authors point out that in the discrete case when Y takes relatively few values the size of the respective probability tables can be smaller in the former case than the latter. This motivates the approach of discovering a set of future variables to predict which are themselves predictive of return, rather than predicting the expected future return directly.\\n\\nOn a high level, I like the idea of specifically learning to model relevant aspects of the environment. However, I lean toward rejecting this work in its current form because I feel the motivation for when this particular method would be useful is unclear.\\n\\nIn particular, I don't really understand how this method could, in general, be expected to improve on regular bootstrapping. Why learn a prediction of the return at time t based on future information when we could just use the value function at a later time to improve the prediction at time t? It seems to me that the future value function itself concisely summarizes the information in the future state that is relevant for predicting the past return, while better exploiting the structure of the problem. Of course in cases like partial observability, it could be that the future value function lacks information from the past that is important for accurately predicting the return (for example in the portal example of this paper). However, if partial observability is really the case of interest, the method presented in this paper seems like a rather roundabout solution method. For example, instead of conditioning v+ on a future hidden state h_{t+k} (as the authors do in the experiments) perhaps one could simply condition the value function on a past hidden state h_{t-k} and obtain similar benefit from bootstrapping?\\n\\nAside from partial observability, for which I feel there are better approaches, the only situation I can understand the method having an advantage is when later states contain information which helps to predict earlier rewards. This is essentially the situation presented in the illustrative example. 
However, currently I feel such situations are rather contrived and unintuitive so I would need more supporting evidence to accept these situations as a good motivation.\\n\\nOn a deeper level, I don't see how the probability table motivation given in the introduction applies when what is being learned is an expectation (i.e. a value function) and not a distribution.\\n\\nThe approach also suffers from well-known issues with using the output of an expectation model of a variable as the input to a nonlinear function approximator in place of the variable itself. Namely, there is no guarantee that the expectation value of a variable is a possible value for the variable so giving it as input to a predictor trained on the variable itself could easily yield nonsense output in the stochastic case. As far as I can tell the method does nothing to mitigate this (please correct me if I'm wrong), so there is no reason to assume the method is generally applicable in settings with nontrivial stochasticity.\\n\\nDespite these concerns, I feel the experiments are for the most part quite well thought out and executed. The paper is also quite well written, motivation issues aside, so I would not be upset if it was accepted with the hope that it leads to future work addressing the above-mentioned concerns.\\n\\nIf possible I think this paper would benefit significantly from a detailed explanation of how and when the proposed approach should be expected to improve on bootstrapping, including bootstrapping off a value function which uses an analogous architecture to v+.\", \"questions_for_authors\": \"Given the hyper-parameters of R2D2 deviate somewhat from those used in the original paper, and nothing is said about how they were chosen, how confident can we be that the observed advantage of hindsight modelling is not simply due to hyper-parameters being selected which are more favourable for the proposed method?\\n\\nGiven that you are not learning distributions but expectations in the form of value functions, how pertinent is the motivation of learning P(Y|X) and P(Z|Y) instead of P(Z|X) directly described in the introduction?\\n\\nHow much of the benefit observed in the portal example an ATARI could also be gained from simply providing the value function approximation with h_{t-k} as input to help span larger time-gaps?\", \"update\": \"While I still feel the exposition could be improved to make the underlying idea clearer, I feel the authors did a good job of addressing my major concerns in their reply, hence I have raised my score to a weak accept.\\n\\nI have to admit I missed the point that v^+ and v^m were using entirely different parameter sets. In light of this, I agree that the expectation model issue I mentioned is not a major concern.\\n\\nI also appreciate the clarification of the hyperparameters, if they were really tuned to improve the baseline then this detail should be added to the paper and would negate my concern there. \\n\\nFinally, I thank the authors for providing the value-function oriented example. I found this example to be more illustrative than the one in the introduction of the paper, and I now feel that I have a better grasp of the motivation. I still have doubts about the general benefit of the approach over bootstrapping but since it is not entirely clear to me one way or the other I feel the idea at least warrants further exploration, and it would be reasonable to accept the paper to make the community aware of it.\"}"
]
} |
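The review thread above pins down the loss structure of hindsight modelling: a hindsight value v+ sees learned features phi of the future trajectory, a model is trained to predict phi from the present, and a model-based value consumes the prediction with entirely separate parameters from v+ (per the update in Review #2), while the authors state that zeroing the extra losses recovers the baseline. A minimal single-step PyTorch sketch of our reading follows; the module names, plain linear heads, Monte-Carlo return targets, and stop-gradient placement are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HiMoSketch(nn.Module):
    """Loss structure of hindsight modelling as described in the reviews.

    v_plus sees true hindsight features phi(h_future); a model predicts those
    features from the present; v_model consumes the prediction with separate
    parameters from v_plus (per the update in Review #2).
    """
    def __init__(self, h_dim, phi_dim=3):  # d=3 echoes the Atari setting
        super().__init__()
        self.phi = nn.Linear(h_dim, phi_dim)          # hindsight features of the future
        self.v_plus = nn.Linear(h_dim + phi_dim, 1)   # hindsight value
        self.model = nn.Linear(h_dim, phi_dim)        # predicts phi from h_t
        self.v_model = nn.Linear(h_dim + phi_dim, 1)  # value from predicted features
        self.v = nn.Linear(h_dim, 1)                  # ordinary model-free value head

    def losses(self, h_t, h_future, ret, aux_weight=1.0):
        phi = self.phi(h_future)
        v_plus = self.v_plus(torch.cat([h_t, phi], dim=-1)).squeeze(-1)
        phi_hat = self.model(h_t)
        # Stop-gradient: phi is shaped by the v_plus regression, not the model fit.
        l_model = ((phi_hat - phi.detach()) ** 2).mean()
        v_m = self.v_model(torch.cat([h_t, phi_hat.detach()], dim=-1)).squeeze(-1)
        l_value = ((self.v(h_t).squeeze(-1) - ret) ** 2).mean()
        l_plus = ((v_plus - ret) ** 2).mean()
        l_vm = ((v_m - ret) ** 2).mean()
        # aux_weight = 0 recovers the plain baseline, matching the ablation the
        # authors describe (same network, extra losses set to 0).
        return l_value + aux_weight * (l_plus + l_model + l_vm)
```

Because v_model has its own parameters and is trained on the predicted features themselves, it can partly absorb the expectation-model issue Review #2 raises, which is presumably why that concern was softened once the reviewer learned that v^+ and v^m do not share weights.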
H1gN6kSFwS | Learning Neural Causal Models from Unknown Interventions | [
"Nan Rosemary Ke",
"Olexa Bilaniuk",
"Anirudh Goyal",
"Stephan Bauer",
"Hugol Larochelle",
"Chris Pal",
"Yoshua Bengio"
] | Meta-learning over a set of distributions can be interpreted as learning different types of parameters corresponding to short-term vs long-term aspects of the mechanisms underlying the generation of data. These are respectively captured by quickly-changing \textit{parameters} and slowly-changing \textit{meta-parameters}. We present a new framework for meta-learning causal models where the relationship between each variable and its parents is modeled by a neural network, modulated by structural meta-parameters which capture the overall topology of a directed graphical model. Our approach avoids a discrete search over models in favour of a continuous optimization procedure. We study a setting where interventional distributions are induced as a result of a random intervention on a single unknown variable of an unknown ground truth causal model, and the observations arising after such an intervention constitute one meta-example. To disentangle the slow-changing aspects of each conditional from the fast-changing adaptations to each intervention, we parametrize the neural network into fast parameters and slow meta-parameters. We introduce a meta-learning objective that favours solutions \textit{robust} to frequent but sparse interventional distribution change, and which generalize well to previously unseen interventions. Optimizing this objective is shown experimentally to recover the structure of the causal graph. Finally, we find that when the learner is unaware of the intervention variable, it is able to infer that information, improving results further and focusing the parameter and meta-parameter updates where needed. | [
"deep learning",
"graphical models",
"meta learning"
] | Reject | https://openreview.net/pdf?id=H1gN6kSFwS | https://openreview.net/forum?id=H1gN6kSFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ERpMJZEjqI",
"rJlp-hUnjr",
"SJlO234hsr",
"BkgBSh0iiS",
"r1eGmbwssr",
"B1gMQPO5jS",
"BJlYsorcsS",
"ByxAt1yFsH",
"Hyg7Tw9OsH",
"BJgyDD5dsS",
"H1lA_85_oH",
"r1lSLH9diH",
"SJlarIt_sS",
"r1ekeUYOjH",
"rylbZRDpYS",
"Skx29AZdFH",
"HJxO6r5HYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737634,
1573837828629,
1573829807985,
1573805117311,
1573773594007,
1573713689962,
1573702560536,
1573609350439,
1573590971268,
1573590870521,
1573590646246,
1573590349431,
1573586500611,
1573586406577,
1571810809108,
1571458707959,
1571296704372
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1986/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1986/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1986/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a metalearning objective to infer causal graphs from data based on masked neural networks to capture arbitrary conditional relationships. While the authors agree that the paper contains various interesting ideas, the theoretical and conceptual underpinnings of the proposed methodology are still lacking and the experiments cannot sufficiently make up for this. The method is definitely worth exploring more and a revision is likely to be accepted at another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Rebuttal Discussion\", \"comment\": \"Dear Reviewer,\\n\\nCould you let us know if our response has addressed the concerns raised in your review? I think our response in point (b) above clarifies your main concern about insufficient comparisons (as in Asia graph, it was not simulated from MLP).\\n\\nWe would be happy to provide further revisions to address any remaining issues and would appreciate a response from you on the points that we raised (as rebuttal period is going to end soonish).\\n\\nThanks for taking time and discussing with the authors. We appreciate it. :)\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for his feedback. We would like to point out that:\\n\\n(a) We agree that the concern about hyper-parameter selection is valid for all neural network based methods but would like to point out that for each method we applied the same budget for hyper-parameter search. In addition, for full clarity and future benchmarking all our code will be released and made accessible. \\n\\n(b) We politely disagree with the reviewer that the comparison is inappropriate or unfair. The asia graph (which is defined in the BN Learn repository) dataset we use to evaluate all comparison methods is a real-world dataset and the underlying relationships are not known. In particular, they were not simulated from an MLP. \\n\\n(c) While we agree that the non-linearity in non-linear ICP adds flexible, the method has the fundamental problem that it relies on conditional independence testing, which is hard (Peters and Shah 2019). As pointed out by the author\\u2019s of non-linear ICP themselves and referenced in our response Part 3, a comparison is not recommended \\u201cIn practice\\u201d (Conclusion p.24 in non-linear ICP, Heinze-Deml et.al 2018). Nevertheless, we will try to add a comparison to non-linear ICP. \\n\\n(d) We would like to point out that we already compare against 3 methods, in particular to the state-of-the art method for causal induction from interventional data, as noted in [1]. In general, we examined an array of causal learning methods where an open-source implementation is available. However, many of these methods can only handle continuous data (not discrete data) e.g. LinGAM, while many others do not handle interventions. We compare against all methods, which are applicable in our case, provide open-source code and were the authors themselves do not provide alternative recommendations on how to proceed in practice. \\n\\n[1] Versteg, Boosting Local Causal Discovery in High-Dimensional Expression Data https://arxiv.org/pdf/1910.02505v2.pdf\"}",
"{\"title\": \"Comments re revision\", \"comment\": \"I applaud the authors for making good revisions to their paper.\\n\\nHowever, my main concern still stands. Without any theoretical/conceptual underpinnings of when the proposed methodology will/won't work, all support for the proposed methodology must come from the empirical experiments. Unfortunately, I still do not believe experiments are comprehensive enough to convince me of the\\u00a0strength of the proposed methodology. In particular, there are simply insufficient comparisons against other methods. The proposed methodology does not appear better than Eaton & Murphy (2007), and the ICP method is clearly inappropriate as a linear model being applied in a setting where the underlying (simulated) relationships are known to be nonlinear (in fact match the MLP used in the authors method which is even more unfair). Finally, the method of Zheng et al (2018) depends on various hyperparameters, and it is unclear whether these were set more or less favorably than the authors' method.\"}",
"{\"title\": \"Response to \\\"Official Blind Review #2\\\"\", \"comment\": \"We are grateful to the reviewer for their enthusiastic feedback and comments!\", \"q\": \"\\u201cIt would be informative if the paper had a paragraph discussing also the fundamental limitations of the approach more openly. For instance, the choice of the neural net architecture used for the structural assignment might have a huge impact on the outcome, especially because the same architecture is repetitively used for all variables of the SCM.\\u201d\\n\\nWe thank the reviewer for pointing this out. We had chosen to implement all variables using the same neural network architecture for computational reasons (vectorization, batching), but it indeed might have had a significant impact on the learning process. A wider variety of architectures, incorporating heterogeneity in each variable\\u2019s model, would strengthen the case for the approach.\\n\\nThere are, however, recent demonstrations that overparameterized neural networks can generalize well (with some regularization) [1]. This suggests that we may get away with deliberate over-parametrization, whether of each module separately or the whole network globally. The reviewer\\u2019s proposal below to allow some cross-variable parameter sharing is compatible with the latter; The right capacity and level of sharing for each variable would then be allocated according to the different pressures from the data and the training objective.\\n\\n[1]. Belkin, Mikhail, Daniel Hsu, Siyuan Ma, and Soumik Mandal. \\\"Reconciling modern machine-learning practice and the classical bias\\u2013variance trade-off.\\\" Proceedings of the National Academy of Sciences 116, no. 32 (2019): 15849-15854.\\n\\nQ. \\u201cThe paper makes a strong scalability claim across the variable size thanks to independent Bernoullis assigned on the adjacency matrix entries. However, it reports results only for very small SCMs. It is understandable that given the premature stage of the causal inference research might not grant standardized data sets at a larger scale, but at least lack of this quantitative scalability test could be acknowledged and the related claims could be a little bit softened.\\u201d\\n\\nWe thank the reviewer for pointing this out. We agree with the reviewer\\u2019s point and a necessary continuation of our work is to demonstrate scaling to larger graphs available from e.g. the Bayesian Networks Repository. We will soften our scalability claims to better accord with the size of the problems solved in the paper.\\n\\nQ. \\u201cI do not buy the argument in the first paragraph of Sec 3.5 about why the structural assignment functions need to be independent. As the model does not pose a distribution on neural net weights, sharing some weights (i.e. conditioning on them) would only bring conditional independence across the variables. I do not see a solid reason to try to avoid this. What is wrong for multiple variables to share some functional characteristics in their structural assignment? After all, some sort of conditional independence will be inevitable in modeling. If the variables share the same architecture, this is also conditional independence, not full independence. 
Relaxing the independence assumption and allowing some weight sharing could be beneficial at least for scalability of the model, could even bring about improved model fit due to cross-variable knowledge transfer.\\u201d\\n\\nWe appreciate the thought-provoking idea from the reviewer of cross-variable knowledge transfer via sharing. This is especially applicable to the real world setting, where it is likely that functional characteristics will be shared between variables of a similar nature. We will gladly investigate further this issue.\\n\\nOverall, we would like to thank the reviewer for the positive feedback and comments. We will perform the changes the reviewer recommends and relax the enforced independences to see if scalability or performance gains materialize.\"}",
"{\"title\": \"Revised paper uploaded\", \"comment\": \"Dear reviewer #1,\\n\\nWe\\u2019d like to thank you again for your review and feedback! We have updated our paper with your suggestions and those of others. In particular, we made the suggested citations to related work in section 4, included a section explaining the intervention in section 3.5, and began re-running the experiments with 5 random seeds each and reporting the error bars (Figure 5 Left is a beginning). We are also running experiments while varying the number of hidden states, as you have suggested. Would you have any other questions regarding the rebuttal? We would be happy to provide further revisions or experiments to address any remaining issues. Many thanks again for your review and feedback.\"}",
"{\"title\": \"Response to \\\"Opinion changed a little\\\"\", \"comment\": \"We thank the reviewer for the very prompt response and we thank the reviewer for noting our contributions. We have begun updating our paper\\u2019s existing figures with error bars and are running the reviewer\\u2019s suggested additional experiments.\\n\\nQ. \\u201cError bars: Thank you for this. I am interested in seeing how well MAML does.\\u201d\\n\\nA. We uploaded a new revision of the paper and updated Figure 5 Left in the paper to reflect the errors bars. We have conducted experiments for all 3-variable graphs with PRNG seeds 1 to 5. The graphs for remaining experiments will be updated in due course, but limited compute resources may delay them beyond the rebuttals deadline.\\n\\nQ. \\u201dHowever, the idea of masking was used in https://arxiv.org/pdf/1803.04929.pdf which your method also depends on. I believe that this same technique gives you both the ability to model the causal structure and avoids exponential search, correct? Could you also clarify why the comparison against this work was not done? Unless I am missing something, while Kalainathan et al. learn from observational data, the method can be run on your setup. And they do not suffer from the exponential time-complexity.\\u201d\\n\\nA. We appreciate the reviewer for pointing us to this work. It is true that this technique would also have the ability to model the causal structure and avoids the exponential search.\\n\\nThe main reason we did not compare to Kalainathan et al. was because their technique only handles continuous data and our method tackles discrete data. We settled on the discrete case because we needed datasets that are large and allow for interventions. We are aware of no large, multi-variable datasets for continuous variables, and an effort [1] to create such a dataset is only in its infancy: It supports only two variables (cause and effect pairs) and its authors themselves have made an urgent public call for far more validation pairs. By contrast, the Bayesian Networks Repository has a variety of discrete, multi-variable networks publicly available for benchmarking.\\n\\nAn additional complication is Kalainathan et al.\\u2019s use of a GAN framework, which is not trivial to extend to the discrete case. The authors themselves admit as much in the conclusion of their paper: \\u201cAn on-going extension regards the case of categorical and mixed variables, taking inspiration from discrete GANs (Hjelm et al., 2017).\\u201d As of today, no such extension has been published.\\n\\nWe compared to 3 other methods. In particular, we compared to ICP, which is one of the state-of-the-art methods for causal induction from interventional data, as noted in [2]. In general, we examined an array of causal learning methods where an open-source implementation is available. However, many of these methods can only handle continuous data (not discrete data) e.g. LinGAM, while many others do not handle interventions.\\n\\n[1] J. M. Mooij, J. Peters, D. Janzing, J. Zscheischler, B. Schoelkopf: \\\"Distinguishing cause from effect using observational data: methods and benchmarks\\\", Journal of Machine Learning Research 17(32):1-102, 2016\\n[2]. Versteg, Boosting Local Causal Discovery in High-Dimensional Expression Data https://arxiv.org/pdf/1910.02505v2.pdf\\n\\nQ. \\u201cUniformly sampling of intervention:\\nI think it is much more interesting to restrict the set of nodes you perform interventions on. 
This is sort of like a held-out intervention evaluation of your method. I think I was unclear about my issue here. It was not the uniformity, rather it was that the interventions were being done on all nodes.\\u201d\\n\\nA. We thank the reviewer for clarifying this point. Although many SCMs\\u2019 true causal structures can be recovered with only a restricted set of interventions, in the general case one needs the ability to intervene on all variables, and this is why we allow the method to do so. That being said, the reviewer\\u2019s proposed experiments would be a valuable addition to the paper, and although limited computing resources and a backlog of other suggested experiments might prevent us from inserting these before the close of the rebuttal period, we will dedicate a series of experiments to this topic.\"}",
"{\"title\": \"Opinion changed a little\", \"comment\": \"Regarding contributions:\\nI agree that these contributions are noteworthy. \\n\\nHowever, the idea of masking was used in https://arxiv.org/pdf/1803.04929.pdf which your method also depends on. I believe that this same technique gives you both the ability to model the causal structure and avoids exponential search, correct?\\n\\nCould you also clarify why the comparison against this work was not done? Unless I am missing something, while Kalainathan et al. learn from observational data, the method can be run on your setup. And they do not suffer from the exponential time-complexity.\", \"error_bars\": \"Thank you for this. I am interested in seeing how well MAML does.\", \"about_cyclic_regularizer\": \"Thank you for pointing this out.\", \"about_predicting_interventions\": \"You claim in the paper that \\\"We find that ignoring this issue considerably hurts or slows down meta-learning, suggesting that we should try to infer on which variable the intervention took place.\\\"\\nSo this seems strongly coupled with the ability of MAML to recover causal structure. Then, am I correct in saying that MAML's quality of causal discovery is maintained even when the intervention prediction quality goes down?\\n\\nMaybe I'm missing something but if the intervention cannot be predicted, does the setup not boil down to estimating structure from observational data?\", \"uniform_sampling_of_intervention\": \"I think it is more interesting to restrict the set of nodes you perform interventions on. This is sort of like a held-out intervention evaluation of your method. I think I was unclear about my issue here. It was not the uniformity, rather it was that the interventions were being done on all nodes.\\n\\nOverall, I believe this method shows promise but needs a little more evaluation and understanding.\"}",
"{\"title\": \"Response to Official Blind Review #1 (Part 4)\", \"comment\": \"Q. \\u201cWhy do the authors report cross entropy loss in Table 1?\\u201d\\n\\nWe appreciate the reviewer for pointing this out. We reported cross entropy because we learn the likelihood of edges between variables rather than iterating through all possible graphs (which is typically done e.g. in ICP). Hence we maintain a distribution over graphs and we need to score how good that distribution is. This loss thus gives a better indication of how our method learned and converged over time. There is also a direct comparison to the ground-truth graph and the cross entropy should converge close to 0 if our model has learned the correct structure. On top of reporting cross entropy, we also evaluate our model on predicted likelihood for out-of-distribution generalization as shown in Table 3. \\n \\nQ. \\u201cInstead of ICP (which is constrained to be linear which is unrealistically simple), why don't the authors compare against nonlinearICP (which is more flexible like their neural networks): \\u201c\\n \\nWhile we agree that the non-linearity in the non-linear version of ICP adds flexibility, it likewise increases the difficulty since it is unclear which non-linear and non-parameteric conditional independence test to use in practice. The performance of nonlinear ICP critically depends on the conditional independence tests. That is one reason why in the non-linear ICP paper it is explicitly recommended for practical purposes to use non-linear ICP (over its linear version, see discussion p24-p25 in (1)) only if all linear models are rejected. However, not all linear models were rejected by ICP in our case. Moreover conditional independence testing was shown to be hard [1] which might be one reason why our method shows superior performance over the state-of-the-art method. \\n\\n[1]. Shah, Rajen D., and Jonas Peters. \\\"The hardness of conditional independence testing and the generalised covariance measure.\\\" arXiv preprint arXiv:1804.07203 (2018).\"}",
"{\"title\": \"Response to Official Blind Review #1 (Part 3)\", \"comment\": \"Q. \\u201cWhy does one even care about the graph being acyclic in this setting?\\u201d\\n\\nWe defined our groundtruth SCM to be acyclic for the simplicity of sampling, otherwise we could not perform ancestral sampling. That being the case, an acyclic regularizer restricts the set of solutions, encouraging faster convergence of the model from a statistical point of view. Adding the regularizer speeds up convergence, but asymptotically both models with and without regularization converge towards the same point.\\n\\nQ. \\u201cOne main reason for SCM modeling in science and policy-making is for analysts to better understand the data generating phenomena. However your use of neural networks here seems to hamper interpretability, so how do you reconcile this issue?\\u201d\\n\\nIn the foreword, we had lightly touched on the general concerns raised about neural networks and their interpretability. We will dive into greater detail here.\\n\\nThe (learned) structural parameters, which define the causal structure of the solution, are directly interpretable as an adjacency matrix. Examples of the learned adjacency matrix extracted from our model can be found in Figure 3 and 4 in the paper.\\n\\nAs regards the MLP-parametrized conditionals, they are as interpretable as conditional probability tables. This is because the MLP\\u2019s learned functional parameters can always be reduced to such a table by querying the MLP for all possible discrete values of all possible ancestors.\\n\\nThere is a vast literature on interpretability of deep learning, of course, but we must admit that our main goal is to design better learning algorithms for autonomous intelligent systems (like robots) where the ability of those systems to understand the world is the primary goal (as opposed to extracting that knowledge for human consumption). Our neural networks\\u2019 interpretability by analysts was therefore only a secondary objective, although in the end the model remains quite interpretable.\\n\\nQ. \\u201cRelated papers that utilize the same idea of predicting a variable conditioned on subset of other variables via neural network + masking strategy:\\u201d\\n\\nWe thank the reviewers for pointing such a complete list of papers, they are indeed relevant and we will update our relevant work section with the list of papers.\\n\\nQ \\u201cour faith in the proposed methodology rests entirely on the empirical experiments. However, I find these a bit too basic to be very convincing, and would at least like to see more methods being compared (in particular for the simulated graphs as well).\\u201d\\n\\nWe thank the reviewers for pointing this out. There are several aspects that are relevant as listed below. \\n\\n a. As mentioned in other recent works e.g. [1], ICP is one of the state-of-the-art algorithms. We would refer to the extensive experiments in their paper for additional baselines comparisons and while we can likewise add more baselines, we do not expect any changes in results. Please likewise see our answer for the comparison against non-linear ICP below. \\n\\n [1]. Versteg, Boosting Local Causal Discovery in High-Dimensional Expression Data https://arxiv.org/pdf/1910.02505v2.pdf\\n\\n b. We have examined an array of causal learning methods where an open-source implementation is available. However, few of these are applicable to discrete data from interventions. 
Many of these methods can only handle continuous data (not discrete data) e.g. LinGAM and many others do not handle interventions. Hence we only compared to the ones that we had in the paper. If the reviewer is aware of an implementation of an algorithm applicable to our setup (leaving aside non-linear ICP, which we discuss in the answer below), we would be more than happy to run it.\\n\\n c. We also present experiments aimed at measuring generalization (in terms of predictive power and likelihood) using the learned causal structure. Ideally, If the model has learned the right structure, it should generalize better. This is shown in Table 3 of our paper.\\n\\n\\nQ. \\u201cThe authors should describe what are the underlying interventions in each dataset a bit more.\\u201d\\n\\nWe thank the reviewer for pointing this out, this is a good point and we will include this in the next revision.\"}",
"{\"title\": \"Response to Official Blind Review #1 (Part 2)\", \"comment\": \"Q. \\u201dIn this setting, how do the authors propose selecting hyperparameter values? How does the reader know the authors did not simply tune their hyperparameters to best match the underlying ground truth (I assume the proposed methodology has many more hyperparameters and thus more degrees of freedom here).\\u201d\", \"very_little_effort_was_required_for_tuning_the_neural_network_hyperparameters\": \"1. All of our experiments (synthetic and real data) use the same hyperparameters unless otherwise specified.\\n2. We used common strategies for training a neural network and this does usually include several hyperparameters. Among others, there are the specific architecture, activation, number of hidden layers, size of hidden layers, learning rate and optimizer.\\n a. The choice of the architecture was the simplest feedforward neural network, an MLP, with\\n b. The smallest possible number of hidden layers which is 1\\n c. The number of hidden neurons was designed only to be greater than the number of input or output neurons.\\n d. Given that ReLUs are standard in the literature, we selected a simple, well-known variant called LeakyReLU that avoids a common problem called the dying neuron problem [1].\\n i. The alpha parameter was arbitrarily set to 0.1 and never tuned.\\n ii. Since we are training (a set of) MLPs, we adapted some of the commonly used strategies for training MLPs. We used the Adam optimizer, one of the most successful ones in the literature [2] and selected the best learning rate from [0.01, 0.05, 0.001, 0.005].\\n 3. We are running additional experiments with various size of hidden units. We will update our paper with the new results once these experiments are completed.\\n 4. For reproducibility, future benchmarking and baseline comparisons, all code will be released. \\n \\n [1]. Lu, Lu, Yeonjong Shin, Yanhui Su, and George Em Karniadakis. \\\"Dying ReLU and Initialization: Theory and Numerical Examples.\\\" arXiv preprint arXiv:1903.06733 (2019).\\n\\n [2]. Kingma, Diederik P., and Jimmy Ba. \\\"Adam: A method for stochastic optimization.\\\" arXiv preprint arXiv:1412.6980 (2014).\\n\\nQ. \\u201cWhy does one even care about the graph being acyclic in this setting?\\u201d\\n\\nWe defined our groundtruth SCM to be acyclic for the simplicity of sampling, otherwise we could not perform ancestral sampling. That being the case, an acyclic regularizer restricts the set of solutions, encouraging faster convergence of the model from a statistical point of view. Adding the regularizer speeds up convergence, but asymptotically both models with and without regularization converge towards the same point.\\n\\nQ. \\u201cOne main reason for SCM modeling in science and policy-making is for analysts to better understand the data generating phenomena. However your use of neural networks here seems to hamper interpretability, so how do you reconcile this issue?\\u201d\\n\\nIn the foreword, we had lightly touched on the general concerns raised about neural networks and their interpretability. We will dive into greater detail here.\\n\\nThe (learned) structural parameters, which define the causal structure of the solution, are directly interpretable as an adjacency matrix. Examples of the learned adjacency matrix extracted from our model can be found in Figure 3 and 4 in the paper.\\n\\nAs regards the MLP-parametrized conditionals, they are as interpretable as conditional probability tables. 
This is because the MLP\\u2019s learned functional parameters can always be reduced to such a table by querying the MLP for all possible discrete values of all possible ancestors.\\n\\nThere is a vast literature on interpretability of deep learning, of course, but we must admit that our main goal is to design better learning algorithms for autonomous intelligent systems (like robots) where the ability of those systems to understand the world is the primary goal (as opposed to extracting that knowledge for human consumption). Our neural networks\\u2019 interpretability by analysts was therefore only a secondary objective, although in the end the model remains quite interpretable.\"}",
"{\"title\": \"Response to Official Blind Review #1 (Part 1)\", \"comment\": \"We thank the reviewer for such detailed feedback. We are conducting additional experiments based on the feedback and will update the paper and rebuttal once the experiments are completed.\\n\\nThe reviewer expresses several general concerns about the use of neural networks for causal inference, focusing on attributes such as their large design space and their interpretability. We would like to underscore that this paper is intended as a step from today\\u2019s completely non-causal neural networks towards incorporating more of the abilities required for handling causality. As such, our proposed method will indeed retain most of the benefits and limitations of neural networks, but improve on them by identifying causal structures.\\n\\nQ. \\u201dHow come there is hardly any discussion of the identifiability issue beyond the few sentences in A.3. This is one of the key issues in learning SCMs and it is strange that the concept of \\\"faithfulness\\\" is not even mentioned in the paper. \\u2026.In general, there is hardly any discussion of what conditions are required for the proposed estimates to even be valid. The authors seem to be optimistically assuming that their neural network + metalearning model will somehow pick up on the correct structure, without any actual conceptual investigation of this issue.\\u201d\\n\\nBecause our task setup allows a random intervention over any variable, per (Eberhardt et al., 2012) it is at least in theory possible to identify the correct graph. The rest of the paper was mostly directed at showing that this is not only possible in theory but in practice as well.\\n\\nWe thank the reviewer for mentioning that some discussion of faithfulness would enhance the paper. There are several aspects that are relevant as listed below. \\nOur model does indeed assume faithfulness, however, this is not a limitation in practice. Because of the continuous evolution of the functional parameters for the conditional distributions MLP, we believe that occurrences of unfaithful populations will be extremely short-lived and exceedingly rare to begin with. Lastly, because our procedure invokes an outer-loop optimization procedure, gradient estimate errors induced by unfaithfulness can be compensated.\\nThe faithfulness assumption (Pearl 2009, Peters et al. 2017) implies that any d-separation in the graph corresponds to a conditional independence in the data generating random variables. Under the assumption of faithfulness and a sufficiently large sample size, the Markov blanket can consistently be recovered given the availability of an efficient feature selection algorithm [1]. Neural Networks have been shown to be able to learn good features [2, 3] and require large datasets for training, which we assumed to be given here. We agree with the reviewer that similarly to the already mentioned assumptions of Markov equivalence and causal sufficiency (see A.3 PRELIMINARIES) we will add a discussion on faithfulness and the assumptions of the availability of large datasets to the manuscript. \\n[1] J.-P. Pellet and A. Elisseeff. Using markov blankets for causal structure learning. Journal of Machine Learning Research, 2008\\n[2] Bengio, Yoshua. Learning deep architectures for AI. Foundations and trends in Machine Learning, 2009\\n[3] Bengio, Yoshua et. al, Representation learning: A review and new perspectives, arxiv 1206.5538, 2012\"}",
"{\"title\": \"Response to Official Blind Review #3 (Part 2)\", \"comment\": \"Q. \\u201cMLP-specification of the SCM also seemed a bit artificial to me\\u201d.\\n\\nBoth the groundtruth SCM and our model are parameterized by MLPs. \\nCould we ask the reviewer to clarify if there is a specific aspect of this setup using an MLP the reviewer finds artificial here? MLPs are used successfully in a large number of state-of-the-art solutions to many real-world ML problems. We think of our work as a step to bring causality to deep learning, which as Pearl would say, would be helpful to further climb the ladder of intelligence.\\n\\nWe chose to parameterize the ground truth SCM by MLPs for the ease of defining the conditional probability table (CPT), such that we do not have to exhaustively define the CPT for different variables and graphs, something very convenient as the number of variables increases (and the size of a full CPT would grow exponentially). \\nAs for our model, one of the key contributions is to parametrize a learned SCM using a neural network. It is suggested by the recent review paper on causal structural learning. The paper [1] concluded that \\\"more efficient algorithms are needed\\\". One possibility of a more efficient algorithm is one that avoids an explicit exponential search over all possible DAGs and our framework of learning a SCM parameterized by a neural network using a meta-learning approach is a step towards this goal. \\n\\nAnother contribution is that our framework/ method of MLP specification of the SCM generalizes well to the challenge of out-of-distribution interventions.\\n\\n[1]. Heinze-Deml, Christina, Marloes H. Maathuis, and Nicolai Meinshausen. \\\"Causal structure learning.\\\" Annual Review of Statistics and Its Application 5 (2018): 371-391.\\n\\nQ. \\u201c had trouble tracking terms around the paper.\\u201d\\n\\nWe thank the reviewer for pointing this out. We have double checked our use of terms and updated the paper to improve clarity in this regard.\"}",
"{\"title\": \"Response to Official Blind Review #3 (Part 1)\", \"comment\": \"We thank the reviewer for the feedback. We have conducted additional experiments based on the feedback and will update our paper once the experiments are completed.\\n\\nOur paper is related to MAML like procedures for meta-learning, but goes beyond the usual setting, making a significant contribution through developing more sophisticated algorithms that enable causal structure learning.\\nThe difficulties those changes addressed are intrinsic to causal structure learning, especially in the challenging unknown-intervention scenario that we have set ourselves. The challenges we solve are 1) how to handle unknown interventions, 2) how to avoid an exponential search over all possible DAGs, 3) how to model the effect of the intervention, and finally 4) how to model the underlying causal structure.\\n\\nQ. \\u201cNo error bars for cross-entropy are reported in the experiments.\\u201d\\nWe thank the reviewer for pointing this out. We have conducted additional experiments and will update our paper once the experiments have been completed.\\n\\nQ. \\u201cThe acyclic regularizer does not reject large length cycles than 3.\\u2018 \\nWe appreciate the reviewer\\u2019s concern. The regularizer can be extended to length-n cycles, however, this becomes more computationally demanding as n increases. However, we found that a smaller n does not affect our model empirically. As shown in Figure 2, 3 and 4, our model did not learn cycles of any length greater than 2. In fact, we have found that even completely removing this regularizer does not hurt the asymptotic performance of our model. The regularizer helps the model to converge faster, however, the model still converges reasonably fast without the regularizer, as shown in Figure 6 Right.\\n\\nQ. \\u201cThe ability to predict interventions seems to drop off sharply as the number of nodes increases.\\u201d\\nWe are aware of this limitation. It makes sense that guessing which node has been intervened becomes harder as the number of nodes increases and we find that empirically, without surprise. We agree that it is a challenge to scale to larger graphs (namely graphs with more than 20 variables), however even for the sizes of graphs we consider our paper finds greatly improved solutions and this is already a significant advance over past work. One extension we hope will help to overcome this difficulty would be to perform a soft prediction of the interventional nodes, instead of the hard decision that we have now. We also would like to highlight that the intervention prediction performs significantly better than random at all times.\", \"one_note_on_the_recent_papers_on_this_topic\": \"although ICP and non-linear ICP consider a larger number of covariates, they only aim to identify the causal parents of one variable. This task alone already has exponential cost, which would be further increased if the algorithm were applied for reconstructing the whole graph by applying it iteratively to each node. Due to the computational cost, this is infeasible for larger graphs.. Other recent papers e.g. [1] likewise only consider similar number of variables given the computational cost of the proposed algorithms. In contrast one contribution of our paper is a proposal how to avoid an exponential search over all possible DAGs.\\n\\n[1]. Ghassami, AmirEmad, Saber Salehkaleybar, Negar Kiyavash, and Kun Zhang. 
\\\"Learning causal structures using regression invariance.\\\" In Advances in Neural Information Processing Systems, pp. 3011-3021. 2017.\\n\\nQ. \\u201cThe experimental setup of uniformly sampling an intervening variable seems artificial to me.\\u201d\\n\\nWe thank the reviewer for pointing this out. We agree that in the real world, interventions rarely appear to be chosen uniformly randomly. However, given the lack of better real-world causal structures than those from the BNLearn graph repository, and the lack of a commonly-agreed intervention probability on each node, uniform sampling seemed reasonable. Doing otherwise would have required us to justify why we picked those specific intervention probabilities. However, if the reviewer has suggestions for specific non-uniform intervention probabilities, we will be happy to perform additional experiments with them.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a MAML objective to learn causal graphs from data. The data in question is randomized but the algorithm does not have access to the identity of the intervention variable. So there is an added layer of complexity of deciphering which variable was intervened on. The MAML objective, in this case, links the causal structure to the slow-moving parameter theta_slow.\\n\\nThe novelty of the paper seems to be in the application of the MAML framework to causal discovery which is interesting to me. I think a little theory about the sensitivity of the claim of ' theta slow changes relate to the causal structure ' is important. Even showing empirically which sort of graphs and functions become issues for the model would be useful.\", \"here_are_my_issues_with_the_paper\": [\"No error bars for cross-entropy are reported in the experiments.\", \"The acyclic regularizer does not reject large length cycles than 3.\", \"The ability to predict interventions seems to drop off sharply as the number of nodes increases. This suggests an inability to scale to more than 20 variables.\", \"The experimental setup of uniformly sampling an intervening variable seems artificial to me.\", \"MLP-specification of the SCM also seemed a bit artificial to me.\", \"Overall, the experiments look reasonable and the method itself seems interesting although further work is needed to show it is useful.\", \"(writing comments) The paper could use a more structured re-write. I had trouble tracking terms around the paper. For example, there seems to be a difference between P_i and P because the former uses theta_i and the latter only uses theta_slow only.\", \"---------------------------------\", \"Updated score to 6 after rebuttal.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an SCM-model based on masked neural networks to capture arbitrary conditional relationships combined with meta-learning-style adaptation to reflect the effects of various unknown interventions. Overall the paper is well written and easy to follow, but some conceptual issues remain.\\n\\n\\n- How come there is hardly any discussion of the identifiability issue beyond the few sentences in A.3. This is one of the key issues in learning SCMs and it is strange that the concept of \\\"faithfulness\\\" is not even mentioned in the paper.\\n\\nIn general, there is hardly any discussion of what conditions are required for the proposed estimates to even be valid. The authors seem to be optimistically assuming that their neural network + metalearning model will\\u00a0somehow pick up on the correct structure, without any actual conceptual investigation of this issue.\\n\\n- The massive downside of neural nets is all the various hyperparameters one has to set (eg. architecture, optimizer, activations, etc). In this setting, how do the authors propose selecting hyperparameter values? How does the reader know the authors did not simply tune their hyperparameters to best match the underlying ground truth (I assume the proposed methodology has many more\\u00a0hyperparameters and thus more degrees of freedom here).\\nI would like to see the empirical performance of different variants of your model with different hyperparameter values to assess its sensitivity to these choices. \\n\\n- Why does one even care about the graph being acylic in this setting?\\nThe mere fact that the authors require a regularizer to ensure acylicity suggests this approach is prone to mis-identifying the ground truth structure (which is always acyclic in the experiments).\\n\\n- One main reason for SCM modeling in science and policy-making is for analysts to better understand the data generating phenomena. However your use of neural networks here seems to hamper interpretability, so how do you reconcile this issue? Also is your sparsity regularizer satisfactory to confidently diagnose presence/absence of an edge (in constrast to statistical hypothesis tests, say based on conditional independence). Isn't this heavily influenced by the particular sparsity-regularizer value that happened to be selected?\\n\\n\\n- Related papers that utilize the same idea of predicting a variable conditioned on subset of other variables via neural network + masking strategy:\\n\\nIvanov et al (2019). VARIATIONAL AUTOENCODER WITH ARBITRARY CONDITIONING.\", \"https\": \"//arxiv.org/abs/1909.06319\\n\\nYoon et al. GAIN: Missing data imputation using generative adversarial nets. Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 2018. http://proceedings.mlr.press/v80/yoon18a.html\\n\\nDouglas et al. A universal marginalizer for amortized inference in generative\\nmodels. arXiv preprint arXiv:1711.00695, 2017\\n\\nFor clarity, the authors should highlight the differences of their approach from these works (beyond the causal setting).\\n\\n- Given the lack of theoretical / conceptual guarantees that the methodology will work, our faith in the proposed methodology rests entirely on the empirical experiments. 
However, I find these a bit too basic to be very convincing, and would at least like to see more methods being compared (in particular for the simulated graphs as well).\\n\\n- The authors should describe what the underlying interventions in each dataset are a bit more.\\n\\n- The Figures should be better explained (it took me a while to figure out what the dots/colors represent).\\n\\n- Why do the authors report cross-entropy loss in Table 1? To my knowledge this is not a standard metric for measuring the quality of structure estimates.\\n\\n- Instead of ICP (which is constrained to be linear and thus unrealistically simple), why don't the authors compare against nonlinear ICP (which is more flexible, like their neural networks):\\n\\nHeinze-Deml et al. (2018). Invariant Causal Prediction for Nonlinear Models. https://arxiv.org/pdf/1706.08576.pdf\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper develops a learning-based causal inference method that performs multiple tasks jointly:\\n\\ni) scalable discovery of the underlying Structured Causal Model (SCM) by modeling both structural assignments and the SCM as a continuously parameterized chain of distributions, \\n\\nii) identification of the intervened variables, which are not known to the model a-priori unlike the mainstream causal inference setups,\\n\\niii) achieving the two aforementioned goals using meta-learning driven heuristics, i.e. interventions cause distributional shifts. \\n\\nWhile the paper adopts the core design choices from recent prior art (Bengio et al., 2019), the proposed methodology (especially ii)) is sufficiently novel to be published as a main-track conference paper. The paper is very well-written, follows a concrete and easy-to-follow story line. It solves multiple ambitious problems end-to-end and justifies the methodological novelty claims by a properly conducted set of experiments. The paper also successfully employs simple and useful but forgotten old techniques such as fast/slow parameter decomposition in the proposed model pipeline.\\n\\nThe intervention prediction heuristic is splendid. It is simple, sensible, and has been proven by experiments to be very effective. I would rate this as the primary novelty presented in this paper.\", \"the_paper_can_be_improved_if_the_below_relatively_minor_concerns_are_addressed\": \"i) It would be informative if the paper had a paragraph discussing also the fundamental limitations of the approach more openly. For instance, the choice of the neural net architecture used for the structural assignment might have a huge impact on the outcome, especially because the same architecture is repetitively used for all variables of the SCM. Furthermore, treatment of each variable with a fully independent neural net could cause overparameterization as the SCM grows in number of variables.\\n\\n ii) The paper makes a strong scalability claim across the variable size thanks to independent Bernoullis assigned on the adjacency matrix entries. However, it reports results only for very small SCMs. It is understandable that given the premature stage of the causal inference research might not grant standardized data sets at a larger scale, but at least lack of this quantitative scalability test could be acknowledged and the related claims could be a little bit softened.\\n\\n iii) I do not buy the argument in the first paragraph of Sec 3.5 about why the structural assignment functions need to be independent. As the model does not pose a distribution on neural net weights, sharing some weights (i.e. conditioning on them) would only bring conditional independence across the variables. I do not see a solid reason to try to avoid this. What is wrong for multiple variables to share some functional characteristics in their structural assignment? After all, some sort of conditional independence will be inevitable in modeling. If the variables share the same architecture, this is also conditional independence, not full independence. 
Relaxing the independence assumption and allowing some weight sharing could be beneficial at least for the scalability of the model, and could even bring about improved model fit due to cross-variable knowledge transfer.\\n\\nOverall, none of the aforementioned three weaknesses is fundamental. As it stands, this is a spectacular research paper and my initial vote is an accept.\"}"
]
} |
rJg46kHYwH | Adaptive Generation of Unrestricted Adversarial Inputs | [
"Isaac Dunn",
"Hadrien Pouget",
"Tom Melham",
"Daniel Kroening"
] | Neural networks are vulnerable to adversarially-constructed perturbations of their inputs. Most research so far has considered perturbations of a fixed magnitude under some $l_p$ norm. Although studying these attacks is valuable, there has been increasing interest in the construction of—and robustness to—unrestricted attacks, which are not constrained to a small and rather artificial subset of all possible adversarial inputs. We introduce a novel algorithm for generating such unrestricted adversarial inputs which, unlike prior work, is adaptive: it is able to tune its attacks to the classifier being targeted. It also offers a 400–2,000× speedup over the existing state of the art. We demonstrate our approach by generating unrestricted adversarial inputs that fool classifiers robust to perturbation-based attacks. We also show that, by virtue of being adaptive and unrestricted, our attack is able to bypass adversarial training against it. | [
"Adversarial Examples",
"Adversarial Robustness",
"Generative Adversarial Networks",
"Image Classification"
] | Reject | https://openreview.net/pdf?id=rJg46kHYwH | https://openreview.net/forum?id=rJg46kHYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"iJ2_ZGdq1",
"HyetH8SnjH",
"SkgDv7mhoS",
"rJxomm72or",
"HJeH1uLssB",
"ryeTAIYdsr",
"SylNpBFusr",
"Syl3VrKOsB",
"HklaLEY_iS",
"rJgbx4Y_sH",
"rJxptTW6tH",
"rkgqAJxpFr",
"rJeOnu0jYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737606,
1573832256628,
1573823327167,
1573823266714,
1573771229502,
1573586645091,
1573586364296,
1573586228201,
1573586005349,
1573585896788,
1571786117025,
1571778513721,
1571707056405
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1985/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1985/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1985/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1985/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1985/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1985/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1985/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1985/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1985/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1985/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1985/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1985/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents an interesting method for creating adversarial examples using a GAN. Reviewers are concerned that ImageNet Results, while successfully evading a classifier, do not appear to be natural images. Furthermore, the attacks are demonstrated on fairly weak baseline classifiers that are known to be easily broken. They attack Resnet50 (without adv training), for which Lp-bounded attacks empirically seem to produce more convincing images. For MNIST, they attack Wong and Kolter\\u2019s \\\"certifiable\\\" defense, which is empirically much weaker than an adversarially trained network, and also weaker than more recent certifiable baselines.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"I have read the authors reply and will keep my score at 3.\", \"comment\": \"Thank you for the clarifications. After reading your response, I still feel the same way as before regarding the significance of the proposed method, but it does clarify things further. I understand the authors had technical limitations, but I am still not convinced that the authors\\u2019 findings of such unrestricted adversarial examples is very useful.\\n\\nAgain, I do appreciate the time the authors put into answering my questions.\"}",
"{\"title\": \"(Part 2)\", \"comment\": \"Adaptivity is Significant\\n\\nIf we interpret your comment correctly, you state that adaptivity is a natural consequence of generating unrestricted adversarial examples. This is not the case: it is possible to use a fixed, easy-to-mitigate attack which is not constrained to norm-balls around test points. (Note that \\u2018unrestricted\\u2019 currently means just \\u2018not restricted to an $L_p$ norm ball\\u2019.) Our experiments show that Song et al.\\u2019s method is such an easy-to-mitigate method, and that ours is not.\\n\\nAnother perspective on this is that all adversarial example algorithms so far involve a certain search procedure in image space for an example that fools the classifier, whereas our approach entails an optimisation over the weights of a generator, in effect searching for an adversarial example generation procedure which is effective against the target network.\\n\\nAlthough there is room for improvement regarding the realism of the generated images, it seems likely to us that further tuning and development of the procedure - and possibly scaling up the compute used - will remedy this. The tremendous pace of improvement in GAN image quality since 2014 is strong evidence in favour of this hypothesis.\\n\\nIn short, the significance of introducing the first adaptive method for unrestricted adversarial example generation outweighs any current minor realism limitations imposed by GAN training difficulties.\"}",
"{\"title\": \"Thank you - we believe we can address your further concerns (part 1)\", \"comment\": \"Thank you for taking the time to receptively read and reflect upon our comments - we are of course very glad that you feel able to raise your score.\\n\\nWe are also grateful that you have been so specific and clear about your reasons for not yet recommending acceptance, which once again makes it easy for us to improve our paper and respond to any points of disagreement. In particular, we believe we can allay your remaining concerns.\", \"imagenet_realism\": \"Hardware Limitations\\n\\nIn addition, the codebase we use is intended for \\u201c4-8 GPUs\\u201d [1]. However, we only have access to one 16GB GPU. This means that the 15 is the greatest minibatch size we can use without running out of memory. This causes problems, since \\u201ca small batch leads to inaccurate estimation of the batch statistics, and reducing batch normalisation\\u2019s batch size increases the model error dramatically\\u201d [2]. This has caused problems for others attempting to use the same codebase [3], and the author has warned that using a smaller batch size \\u201cwill likely negatively impact model performance\\u201d [4] and for this reason \\u201cthis is not really a model for small hardware\\u201d [5]. Unfortunately, this is true for all state-of-the-art ImageNet GANs.\\n\\nIn an attempt to clarify to what extent the unrealistic ImageNet results are a product of adversarial finetuning, rather than these external limitations, we have added images to the paper generated by the BigGAN on our machine after training (not adversarial finetuning) for 10 gradient steps. These can now be found in Appendix A. To our eyes, it appears that these images are little better than our adversarial examples, and so adversarial finetuning is not the primary cause of the unrealistic images (such as deformed dogs).\", \"mnist_realism\": \"Interpretation of Results\\n\\nYou correctly point out that MNIST is a simple dataset. However, this in fact makes our results more impressive, not less impressive. The simplicity of classification means that the adversarially-robust classifiers are much better than for any other dataset, so finding adversarial examples is more challenging. Equally importantly, the highly-structured images (black background with simple white figures) makes it relatively easy to spot deviations from the usual data distribution. To be clear, our 50% figure is not that 50% of the time, human judges do not think the image looks realistic, but rather that 50% of the time, human judges are unable to distinguish our adversarial examples from examples in the dataset. This is not a trivial achievement.\\n\\nOur key message regarding realism is that unrestricted adversarial examples are valuable not only when they are indistinguishable from real data (for us, 50% of the time on MNIST), but also when they are unambiguous inputs for which a human could give a meaningful answer (as you point out). We achieve this on MNIST, and although you are correct to identify that GAN training issues hinder this on ImageNet, we still feel that these results are a significant step in a useful direction.\\n\\n[1] https://github.com/ajbrock/BigGAN-PyTorch\\n[2] https://arxiv.org/abs/1803.08494\\n[3] https://github.com/ajbrock/BigGAN-PyTorch/issues/40\\n[4] https://github.com/ajbrock/BigGAN-PyTorch/issues/39\\n[5] https://github.com/ajbrock/BigGAN-PyTorch/issues/31\"}",
"{\"title\": \"Thank you for your detailed response. Increasing score to 3 but still not in favor of acceptance.\", \"comment\": \"Thank you for the detailed and clear response. I appreciate all the revisions the authors have made in response to my feedback, as well as the additional ablation studies and baselines that the authors ran in section 4.4. Thank you also for pointing out that GANS are notoriously tough to train, which clarifies why it is helpful to include all the modifications necessary to train the GAN. Overall, I think the paper has improved from before. Thus, I will increase my score from reject (1) to weak reject (3). However, I am still not convinced that the author's results are interesting enough for me to raise my score any further.\\n\\n\\nI will explain my reasons for my current score of 3.\\n\\n1) Regarding the realism of the generated images:\\n\\nI looked at a few BigGAN samples (which you fine-tune from) and looked at Imagenet samples from your appendix and it's fairly clear to me which images are more realistic. For example, I think very few of the Imagenet samples that you generate actually resemble anything realistic at all (e.g. your dog images are completely deformed and hardly have recognizable faces/eyes/noses/etc., while BigGAN dogs actually look fairly convincing). I am happy that you did realism tests with MTurkers on MNIST, but MNIST is a fairly simple dataset, and having 50% realism (where 90% is the best possible) on MNIST is not impressive to me either.\\n\\nMore importantly, though, I disagree with the authors on one point. The authors argue that we want neural networks to generalize to these particular unrestricted adversarial examples. For clarity, I am focusing on the Imagenet images the authors generate here. Because none of the authors' generated images really make much sense to a human (as a disclaimer, this is my personal opinion upon visual inspection of Figures 7,8,9,10 in the Appendix), why would we want or expect neural networks to produce anything sensible on these images? This is why I am stressing the realism so much.\\n\\n\\n2) Regarding the comparison to Song et. al:\\n\\nI do acknowledge that the authors have some improvements over Song et. al. Their method can be used with different GANs.\\n\\nI do not not think adaptivity is a particularly surprising or interesting result, as the authors generate unrestricted adversarial examples. I could easily do something close to an unrestricted gradient-based attack (for example, doing an L2 attack with a very very large epsilon), and this will probably generate some unrealistic image that fools a classifier. It's unreasonable (and not necessarily helpful) to require classifiers to be robust to all L2 attacks within a huge epsilon ball, just as it's unreasonable to expect classifiers to succeed on images that are frequently not realistic to humans.\"}",
"{\"title\": \"Summary of Improvements Made\", \"comment\": [\"Thank you to all three reviewers for your thoughtful and constructive comments. We have responded in detail to each point made, and have uploaded an updated version of our paper incorporating your feedback. The key changes that have been made are:\", \"Removal of out-of-date and confusing explanatory diagram identified by reviewers #1 and #2; redrafting of exposition in section 3.1 which we hope is a much clearer explanation of our method.\", \"Addition of additional adversarial training experiment (4.2) suggested by reviewer #1.\", \"Addition of further ablative experiments (4.4) for strategies outlined in section 3.3 to address concerns of reviewer #3.\", \"Addition of baseline (4.4) suggested by reviewer #3.\", \"Smaller corrections and writing improvements.\", \"We hope that this improved paper together with our responses to your individual comments will reassure you and allow you to increase your scores.\"]}",
"{\"title\": \"Response to Reviewer #3 (Part 2)\", \"comment\": \"Ablation Study for Section 3.3\\n\\nYou correctly point out that we have not made clear how necessary each training strategy is. We have therefore carried out ablative experiments demonstrating the effect of omitting each in turn, reported in section 4.4 and appendix L. In short, pretraining and use of an attack rate other than 1 are not strictly necessary, but improve performance. Use of the naive loss function of simply summing the two loss terms degrades performance so badly as to be unusable. GANs are notoriously difficult to train at the best of times; adding an extra loss term which conflicts with the ordinary loss makes this even more difficult; strategies to improve training are helpful.\\n\\nOther Baseline\\n\\nThank you for the suggestion to compare to norm-bounded perturbation attacks on images generated by a pretrained GAN. We have implemented this baseline, and found that the results are only slightly more effective than norm-bounded perturbations on the test set; this makes sense, since the pre-trained generator is supposed to have learnt this data distribution.\\n\\nComparison to Non-Finetuned GAN\\n\\nWe apologise for the confusion regarding the pretrained-only baseline - our description was unclear. To carry out the baseline, we first generate many examples with a particular intended true label, then filter these to keep only those which match the \\u2018target label\\u2019, then report the proportion of these are judged to indeed visually resemble the intended true label. We have updated the paper clarify this: please do let us know if we have still caused confusion.\\n\\nAdditional Feedback\\n\\nThe revision we have uploaded includes all of your suggestions of minor improvements to writing, referencing and formatting, for which we are grateful.\\n\\nWe believe that this response fully addresses all the concerns you have raised - we look forward to hearing from you, either to raise your score or to continue the conversation with any further concerns you have.\"}",
"{\"title\": \"Response to Reviewer #3 (Part 1)\", \"comment\": \"Thank you for your detailed and thoughtful review. Although we are disappointed about your recommendation, we are grateful for the specificity and cogency of your feedback, which makes it especially easy to either improve our paper or respond to any points of disagreement.\\n\\nAs we understand it, you have raised two specific concerns regarding the significance of our work: that unrestricted adversarial examples are significant only if they are realistic, and that our work is too incremental in comparison to the prior work of Song et al. We address these separately.\", \"significance_of_results\": \"Comparison to Prior Work\\n\\nOur work is not incremental over Song et al. since it presents an entirely new method, with several important advantages.\\n\\nThe most fundamental of these is that our method is adaptive. While $L_1$/$L_2$ attacks, rotation/translation attacks and Song et al.\\u2019s attack are all able to attack a network defended against $L_\\\\infty$ attacks, they all share a weakness: they are not adaptive. That is, their attack procedure is fixed, and does not depend on the target network. It seems likely that all such attacks can be mitigated by adversarial training, since the classifier can learn not to rely on features targeted by that particular threat. Empirical studies show that this is true for $L_p$ perturbations [2], translations/rotations [3], and Song et al. (section 4.2).\\n\\nConversely, our method is adaptive, since the generator is essentially finetuned to find a set of features to attack that the target classifier is reliant upon; this search is not constrained by an $L_p$ norm or any other requirement, and so the classifier is unable to anticipate all kinds of attack that the generator may learn next. Our preliminary attempts at defence against our attack have failed; we believe this challenge to be significant enough to be left as a fruitful direction for future work.\\n\\nAs described in section 5.1, our method is also three orders of magnitude more efficient, demonstrably scales to a dataset orders of magnitude more complex than Song et al, and allows any existing GAN to be used out-of-the-box.\\n\\n[1] https://arxiv.org/abs/1807.06732\\n[2] https://arxiv.org/abs/1908.08016\\n[3] https://arxiv.org/pdf/1905.01034\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your detailed and thoughtful review. We are especially glad to read that you find the experimental evaluation extensive and particularly enjoyed the demonstration that our method is able to find new ways of fooling standard adversarial training (which is very effective at mitigating the state of the art).\\n\\nWe are sorry to hear that some parts of our exposition - especially Figure 1, which was indeed out-of-date - caused you (and another reviewer) some confusion. We have removed Figure 1 and rewritten our exposition in light of your feedback, and hope that this revision greatly improves the clarity of this point.\\n\\nTo address your concern directly, any conditional GAN can be used with our method, with no other restrictions on the architecture. Although it is completely standard for a contemporary GAN to be class-conditional, we have clarified this condition in our revision.\\n\\nWe have also incorporated your minor writing improvements, for which we are grateful.\\n\\nWe believe that this response fully addresses all the concerns you have raised - we look forward to hearing from you, either to raise your score or to continue the conversation with any further concerns you have.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your detailed and thoughtful review. We are especially glad to read that you find the method to be well-motivated, the empirical evaluation to be comprehensive, and the writing to be lucid.\\n\\nWe are grateful for your suggestions of minor improvements to the clarity of the paper. We have uploaded a revision of the paper with these changes incorporated, including clarification of the overall objective function.\\n \\nYou are correct to point out that Song et al. require 100-500 iterations to generate an adversarial example, yet we claim a 400-2000x efficiency improvement. The extra factor of four is because our method requires only a single forward pass through the generator network, while each iteration of Song et al. requires both forward and backward passes through both the generator and classifier networks. We hope that our revised wording makes this more explicit.\\n\\nWhile it could be interesting to repeat the nearest-neighbour calculations with a larger sample size, ten handpicked images are sufficient for a sanity check that our examples really are unrestricted. We believe a sanity check is all that is required: the only case in which our generated adversarial examples would be within a typical $L_p$ norm radius is if the generator were simply memorising dataset images. As a result we have prioritised other experiments and improvements during this time-limited response period.\\n\\nYour proposal of an adversarial training procedure in which the classifier is trained simultaneously online with the GAN finetuning is very sensible. We have implemented this experiment (see section 4.2), and found that our method is again able to easily evade this adversarial training procedure; the generator is always able to find a new way of fooling the classifier, since it has no restrictions.\\n\\nWe believe that this response fully addresses all the concerns you have raised - we look forward to hearing from you, either to raise your score or to continue the conversation with any further concerns you have.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"======== update ========\\nI have read the authors' response and it has addressed most of my concerns. I am glad to see the authors' experiments on online adversarial training. \\n\\nHowever, there is one additional concern that I didn't realize previously. Currently the performance of adversarial training is measured in \\\"success rates\\\". However it seems to me this success rates were not computed using human evaluation (since the authors claim once the classifier is finished training, the attack success rate can be larger than 99%). I would have changed my score to 8 if either 1) some adversarial images from the generator were included after finishing adversarial training or 2) success rates using human evaluation is reported. Unfortunately, I only realized this after the author rebuttal period, and the authors didn't have the chance to address this.\\n\\nThat being said, I feel this paper still presents interesting contribution to the field. I am still largely in favor of the acceptance of this paper, and will remain my rating of 6 for now. If this paper gets accepted, I strongly encourage the authors to address the concern I mentioned above in their camera ready.\\n\\n\\n======== original reviews ========\\n\\nThis paper proposes a novel method on generating unrestricted adversarial examples by finetuning GANs. The authors have conducted comprehensive experiments on evaluating the advantages of their approach. They demonstrated that their attack is harder to mitigate using adversarial training, produces unrestricted adversarial examples faster than existing methods, and can generate some unrestricted adversarial examples for complex high-dimensional datasets such as ImageNet.\\n\\nI feel although the approach is straightforward, the authors have done a good job in motivating the method and have demonstrated its advantages via a good cohort of experiments. I like how the authors motivated finetuning in section 3.2, and I am glad that the authors have conducted ablative experiments to support their arguments in section 4.4. The experiments on adversarial training are especially interesting, since previous work hasn't considered this straightforward defense against unrestricted adversarial attacks. I am also glad that the authors can generate unrestricted adversarial examples for data as complicated as ImageNet images using the latent technique in GANs. Although still not perfect, some of the unrestricted adversarial examples on ImageNet are surprisingly good to the sense that they may be used as practical attacks.\\n\\nThe writing is great, and it is a pleasure to read this paper. \\n\\nI do have some suggestions and questions for further improvement of the paper, and I strongly recommend the authors to address those before publication.\\n\\n- Section 3 is lacking an explicit form of the combined objective function. Currently some loss functions such as $l_ordinary$, $l_targeted$, $l_d$ and $l_finetune$ are only defined in Figure 1 but not in the main text. It is not clear their explicit mathematical form.\\n\\n- In section 3.2, it is better to also mention the ablative study you did later in section 4.4. \\n\\n- In section 4.1, the authors showed nearest neighbors to some of the unrestricted adversarial examples they generated. 
It would be more convincing to have some quantitative results for this. For example, what is the average minimum distance to training data for a group of 10,000 unrestricted adversarial examples? In addition, what is the distance function used in computing nearest neighbors? Did you use Euclidean distance? If so, it would be better to also have results using distances computed in the feature space of a pre-trained convolutional network.\\n\\n- In section 4.2, the adversarial training was done by alternating two phases of training rounds. I am wondering whether this makes it harder for the classifier to adapt to the newly generated unrestricted adversarial examples? Can you use some procedure more similar to traditional adversarial training, i.e., where the attacker and the classifier are learned together at each step? \\n\\n- Song et al. require 100-500 iterations to generate an adversarial example, whereas your approach only needs one iteration. Why is your approach 400 to 2000x more efficient? What is the additional reason that speeds up your approach?\\n\\n- In section 4.5 line 1, the word \\\"replies\\\" was repeated twice.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a GAN architecture that generates realistic adversarial\\ninputs that fool a targeted classifier. Adversarial inputs are unrestricted:\\nthey may be any realistic images that humans will often classify as real\\nexamples of the intended class, whereas the target model misclassifies them.\\nThe novelty is that they finetune the generator itself during training, the\\nmethod can be applied to a variety of GAN architectures, and the method is fast.\\n\\nTricks used to successfully train the GAN are clearly described, and the\\nexperimental evaluation was of good scope, covering a good selection of\\nexperiments. I particularly enjoyed the short Section 4.2 and Fig 7a+b, where\\nthey show that a local defense can always be fooled somewhere else along\\nthe input manifold of that class.\\n\\nWhile the modifications to existing solutions may at first seem minor, they\\nhave significant impact in applicability, effectiveness and speed of generating\\nunrestricted adversarial images. So I think this paper can be accepted.\\n\\nI had a bit of avoidable confusion in the introductory sections. Figure 1\\ndescribing the GAN is never referred to. It includes components not exactly\\nagreeing with my naive expectations from surrounding text. Are any Fig. 1 features\\noptional? It would help to highlight the novel elements in Fig. 1. Or does\\nFig.1 correspond perhaps to the combined GAN elements in Section 4 (\\\"In our\\nexperiments, we combine three ...\\\"). My uncertainty was really relieved only\", \"by_the_time_i_got_to_related_work_and_appendix_e\": \"(\\n\\nThe main claims seemed well supported by experiments, apart from claim 3\\n(applicability to \\\"any\\\" checkpointed GAN codebase). Might the scope of their\\napproach also be clarified by clearly identifying required and optional GAN\\ncomponents in Fig. 1?\\n\\n---- misc comments ----\", \"some_sentences_were_long_and_difficult_to_parse\": [\"4.1: \\\"Our method generates..., else ....\\\" Perhaps make the else clause a second sentence.\", \"4.2: \\\"Image quality as measured ....\\\" length and references made this difficult to read. Can you\", \"rewrite as separate shorter sentences?\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes using GANs to generate unrestricted adversarial examples. They seek to generate examples that are adversarial for a specific classifier, and they do so by using class-conditional GANs and a fine-tuning loss. The fine-tuning loss consists of both the ordinary GAN loss (to fool the discriminator) as well as an adversarial loss (which rewards the GAN for generating examples misclassified by the specific classifier). The authors perform various experiments on their generated examples to check for realism and how adversarial the generated images are.\\n\\nI would reject this paper for two key reasons. First, I feel that the contributions are not significant enough (in comparison to the prior work of Song et. al). Second, I feel that some of the methods (and some of the writing) are not too principled.\\n\\nIn my opinion, unrestricted adversarial examples are significant if they can be made to be realistic. If our current deep learning models often mislabeled very realistic images, that would properly expose a big failure mode of our current models. However, if our machine learning models perform poorly on images that look fake/generated 40% of the time (which is what the authors state) and don\\u2019t look too realistic to humans, it is less worrying.\\n\\nIn comparison to Song et. al, the authors state that their methods result in very similar results in terms of realism and how adversarial their images are (arguably, Song et. al actually produces better results in terms of being adversarial). In my opinion, the authors\\u2019 claimed improvements are not significant enough, because I think realism should be the primary metric to evaluate this field. Improving speed of generation is nice, and being able to bypass a simple adversarial training procedure is interesting but not significant unless this insight is expanded upon. The results on MNIST in Fig. 5 and Fig. 6 are not too convincing, as simpler attacks that generate (arguably) more realistic images like translations and rotations [1] or L1/L2 attacks [2] (since the networks are trained for L_inf robustness) can also degrade accuracy. Finally, I can also think of another reasonable baseline that I would have liked to see the authors compare their method against. Because the authors want to attack a specific network, they could have (1) generated realistic images using a pre-trained GAN (2) used a norm-bounded attack on the specific classifier and the generated GAN images. These images could be even more realistic if the norm-bound of the attack is fairly small, and would still be able to attack specific classifiers.\\n\\nFinally, I am confused by the comparison to a not-fine-tuned GAN in Fig. 14/Fig. 15 and would appreciate a clarification so that I can understand the results. For example, what does it mean for intended true label = 9, target label = 0 to have 90% success in Fig. 15? Does this mean that when you try to generate a 9 with the GAN, the classifier misclassifies it as a 0 90% of the time? In particular, I\\u2019m struggling to understand what the target label is for the case of the not-fine-tuned GAN.\\n\\nSecondly, I feel that there are many instances in the paper where the methods used are not explained in a principled way. 
For example, one of the key parts of this work is the fine-tuning loss function. Why does the loss function involve multiplying the ordinary GAN loss (with some additional transformation applied to it which seems unnecessary) with the adversarial loss? It seems most reasonable to add the adversarial loss and the ordinary GAN loss (without the additional transformation). Is the stochastic loss selection procedure necessary? If all these peculiarities of the method are necessary, it seems that the success of this method is quite brittle.\", \"additional_feedback\": [\"In the intro, I think citing [3] in addition to Xu et al. is more appropriate.\", \"You should refer to Figure 1 somewhere in the text of your work.\", \"In section 3.2, you can use \\u201ccosine similarity\\u201d to describe what you are doing more concisely.\", \"When you talk about \\u201cglobal optima of realistic adversarial examples\\u201d and \\u201clocal optimal of unrealistic adversarial examples,\\u201d it sounds weird. I would try to reword this because I don\\u2019t think you are trying to make a precise mathematical statement, but it sounds like one when you write it this way.\", \"In Table 1, I would format the numbers better to be vertically aligned.\", \"You should provide a citation for MixTrain on page 5.\", \"[1] https://arxiv.org/abs/1712.02779\", \"[2] https://arxiv.org/abs/1905.01034\", \"[3] https://arxiv.org/abs/1802.00420\"]}"
]
} |
BJeXaJHKvB | P-BN: Towards Effective Batch Normalization in the Path Space | [
"Xufang Luo",
"Qi Meng",
"Wei Chen",
"Tie-Yan Liu"
] | Neural networks with ReLU activation functions have demonstrated their success in many applications. Recently, researchers noticed a potential issue with the optimization of ReLU networks: the ReLU activation functions are positively scale-invariant (PSI), while the weights are not. This mismatch may lead to undesirable behaviors in the optimization process. Hence, some new algorithms that conduct optimizations directly in the path space (the path space is proven to be PSI) were developed, such as Stochastic Gradient Descent (SGD) in the path space, and it was shown that SGD in the path space is superior to that in the weight space. However, it is still unknown whether other deep learning techniques beyond SGD, such as batch normalization (BN), could also have their counterparts in the path space. In this paper, we conduct a formal study on the design of BN in the path space. According to our study, the key challenge is how to ensure the forward propagation in the path space, because BN is utilized during the forward process. To tackle this challenge, we propose a novel re-parameterization of ReLU networks, with which we replace each weight in the original neural network with a new value calculated from one or several paths, while keeping the outputs of the network unchanged for any input. Then we show that BN in the path space, namely P-BN, is just a slightly modified conventional BN on the re-parameterized ReLU networks. Our experiments on two benchmark datasets, CIFAR and ImageNet, show that the proposed P-BN can significantly outperform the conventional BN in the weight space. | [
"path space",
"relu networks",
"sgd",
"relu activation functions",
"weight space",
"conventional bn",
"success"
] | Reject | https://openreview.net/pdf?id=BJeXaJHKvB | https://openreview.net/forum?id=BJeXaJHKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1tPaz9X-m_",
"rygGgK_roS",
"BJl__VdBiH",
"BygbSE_riH",
"HygQZEdrir",
"ryeSKamyoB",
"HkxkpF_j5S",
"r1leWJ7TYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737577,
1573386474063,
1573385328301,
1573385272969,
1573385210758,
1572973949241,
1572731319491,
1571790583752
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1984/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1984/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1984/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1984/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1984/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1984/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1984/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper addresses the extension of path-space-based SGD (which has some previously-acknowledged advantages over traditional weight-space SGD) to handle batch normalization. Given the success of BN in traditional settings, this is a reasonable scenario to consider. The analysis and algorithm development involved exploits a reparameterization process to transition from the weight space to the path space. Empirical tests are then conducted on CIFAR and ImageNet.\\n\\nOverall, there was a consensus among reviewers to reject this paper, and the AC did not find sufficient justification to overrule this consensus. Note that some of the negative feedback was likely due, at least in part, to unclear aspects of the paper, an issue either explicitly stated or implied by all reviewers. While obviously some revisions were made, at this point it seems that a new round of review is required to reevaluate the contribution and ensure that it is properly appreciated.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of the new version\", \"comment\": \"Thanks for all reviewers and their comments, which are helpful for us. We have uploaded a new version of our paper, and the main changes include:\\n1. We have fixed a typo in Theorem 3.1 by adding the activation g. \\n2. We have added Corollary 4.2. \\n3. We have added some more descriptions about CNN in Appendix D.1.\"}",
"{\"title\": \"To Reviewer #2\", \"comment\": \"Thanks for your comments. The following is our responses.\", \"q1\": \"\\u201cBy comparing theorem 3.2 and theorem 4.1, it seems P-BN even gives even worse upper bound of gradient norm.\\u201d\", \"a1\": \"This is not a worse upper bound. Theorem 3.2 and 4.1 provide the norm of gradient w.r.t outputs in every layer, when conventional BN and P-BN are applied to re-parameterized networks, respectively. Here, larger gradient norm means greater gradient exploding, so the larger one is the worse one. Please note that the diagonal elements of $\\\\hat{W}_s$ in Theorem 3.2 are equal to 1 and in Theorem 4.1, we have stated that $\\\\hat{W}\\u2019_s=\\\\hat{W}_s-I$. Thus, theorem 4.1 demonstrates that the gradient exploding problem will be weaken by P-BN, i.e., the gradient norm can become smaller after applying P-BN, because the identical matrix is separated from $W\\u2019_s$, and the variance term $\\\\|\\\\sigma^{s,/}\\\\|$ becomes larger. We have provided a comparison with clearer description in Corollary 4.2 in our new version.\", \"q2\": \"\\u201cwe don't actually care that much about the issue of gradient exploding since one could always do gradient clipping.\\u201d\", \"a2\": \"Gradient clipping is a trick which needs tuning hyper-parameters, and here we want to get rid of this trick and find a well-performed design for the problem. This paper aims to propose a suitable BN method in path space to ensure stable gradient propagation, which is more fundamental and have theoretical guarantee. It is unfair to criticize this paper\\u2019s significance to state that gradient exploding is not an important issue.\", \"q3\": \"\\u201cThe formulation of the P-BN seems to be closely related to ResNet \\u2026\\u201d\", \"a3\": \"We have some following differences. Frist, our motivation is quite different, as we start from the path space, while ResNet is not motivated by the new parameter space. Second, the novelty of ResNet is adding a skip-connection, but the identical connection is naturally exists in the path space, when the network is re-parameterized. Then, P-BN exclude the term related to the constant coefficient. Third, P-BN can also be used for ResNet since path space for ResNet is also established in previous work, which means that they are compatible.\", \"q4\": \"\\u201cIt would be better to describe the method in a broader sense.\\u201d\", \"a4\": \"We provided some details on CNN in Appendix D.1 in our original version. In that way, the number of hidden nodes in MLP corresponds to the number of channels in CNN, and CNN can be operated similarly with MLP. We have also added some additional details on CNN in the updated version.\", \"q5\": \"\\u201cThe assumption of diagonal elements of matrix w to be all positive is very restrictive and simply removes the effect of ReLU activations.\\u201d\", \"a5\": \"First, we lost the activation g in Theorem 3.1 and we have fixed it in our updated new version. We are sorry for this typo, and it may cause the misunderstanding that our theorem removes the effect of ReLU activations. In fact, ReLU activations still works after re-parameterization. Second, in the remark of theorem 3.1, we show that the positive constraint is not essential for proving theorem 3.1. 
Third, this constraint will not bring much influence on the model expressiveness, because the number of the constrained weights is tiny compared with the total weights, and according to experiment results in Meng et al., 2019 the practical performances are not harmed by this constrain. Actually, this constraint comes from the optimization algorithm in the path space (Meng et al., 2019), which is not introduced by our paper.\\n\\nWe sincerely hope that we have addressed your concerns and you can reconsider your ratings after reading the responses and our updated version.\"}",
"{\"title\": \"To Reviewer #3\", \"comment\": \"Thanks for your suggestions. The following is our responses.\", \"q1\": \"\\u201cwhy P-BN helps path optimization is not clear in the paper.\\u201d\", \"a1\": \"The reason that P-BN helps path optimization is described in section 3.2 and section 4. Specifically, in theorem 3.2, we demonstrate that gradients explode along network depth (layer index), because the variance term $\\\\|\\\\sigma^s\\\\|$ is less than 1 and there is an identity matrix contained in $\\\\hat{W}_s$ (please note that the diagonal elements of $\\\\hat{W}_s$ in Theorem 3.2 are equal to 1 and in Theorem 4.1, we have stated that $\\\\hat{W}\\u2019_s=\\\\hat{W}_s-I$). Therefore, we propose P-BN, which only normalize the terms related to the trained coefficients, and exclude the term related to the constant coefficient. Then, as shown in theorem 4.1, the gradient exploding problem will be weaken by P-BN, because the identical matrix (constant coefficients) is separated from W\\u2019_s, and the variance term $\\\\|\\\\sigma^{s,/}\\\\|$ becomes larger. We have provided a comparison with clearer description in Corollary 4.2 in our new version.\", \"q2\": \"\\u201cThe experimental part is not convincing.\\u201d\", \"a2\": \"We use the SGD without momentum in our experiments, because the way to utilize momentum in the path space remains unclear now. Our experiments follow the experimental settings of the work (Meng et al., 2019). We have clarified it in Appendix F.1 (Experimental Setting Details) in our updated version.\", \"q3\": \"\\u201cIt is not easy to imagine how the re-parameterization works on CNNs since the kernel is applied over the entire image (\\\"hidden activations\\\").\\u201d\", \"a3\": \"We provided some details on CNN in Appendix D.1 in our original version. In that way, the number of hidden nodes in MLP corresponds to the number of channels in CNN, and CNN can be operated similarly with MLP. We have also added some descriptions about this in Appendix D.1 in our new version.\\n\\nWe sincerely hope that we have addressed your concerns and you can reconsider your ratings.\"}",
"{\"title\": \"To Reviewer #4\", \"comment\": \"Thank you for your comments. The following is our responses.\", \"q1\": \"\\u201cLet start with Theorem 3.1: I am not sure about the statement of the theorem. Is this result for a linear net? I think for a Relu net, outputs need an additional scaling parameter that depends on all past hidden states (outputs).\\u201d\", \"a1\": \"No, all results in our paper are for non-linear neural networks with ReLU activations. We are sorry that we lost the activation function g in theorem 3.1, and we have fixed it in our new version.\", \"q2\": \"\\u201cTheorem 3.2 and 4.1 do not seem informative to me.\\u201d\", \"a2\": \"Theorem 3.2 and 4.1 provide the norm of gradient w.r.t outputs in every layer, when conventional BN and P-BN are applied to re-parameterized networks, respectively. Please note that the diagonal elements of $\\\\hat{W}_s$ in Theorem 3.2 are equal to 1 and in Theorem 4.1, we have stated that $\\\\hat{W}\\u2019_s=\\\\hat{W}_s-I$. Theorem 3.2 demonstrates that gradients explode along network depth (layer index), because the variance term $\\\\|\\\\sigma^s\\\\|$ is less than 1 and there is an identity matrix contained in $\\\\hat{W}_s$. Theorem 4.1 demonstrates that the gradient exploding problem will be weaken by P-BN, because the identical matrix is separated from $W\\u2019_s$, and the variance term $\\\\|\\\\sigma^{s,/}\\\\|$ becomes larger. We provide a comparison with clearer description in Corollary 4.2 in our new version.\", \"q3\": \"\\u201cIn fact, batch normalization naturally remedies this type of singularity since lengths of weights are trained separately from the direction of weights.\\u201d\", \"a3\": \"Batch Normalization cannot fully remedy this type of singularity. Batch normalization can keep the outputs unchanged when weights in one layer and its successive layer is multiplied and divided by a positive constant. However, the gradients w.r.t weights are changed by such rescaling operation because the Lipschitz constants w.r.t weights at different layers have changed. An intuitive explanation is that if a BN network whose weights at different layers have unbalanced magnitudes, stochastic gradient descent will suffer from such \\u201cunbalanced\\u201d scale of weights. On the other hand, gradients can still keep unchanged when the network is optimized in the path space. Thus, studying batch normalization in the path space is important.\\n\\nWe hope that we have answered your questions and addressed your concerns. We also hope that you can reconsider your ratings.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The proposal is an adapted batch normalization method for path regularization methods used in the optimization of neural networks. For neural networks with Relu activations, there exits a particular singularity structure, called positively\\nscale-invariant, which may slow optimization. In that regard, it is natural to remove these singularities by optimizing along invariant input-output paths. Yet, the paper does not motivate this type of regularization for batchnormalized nets. In fact, batch normalization naturally remedies this type of singularity since lengths of weights are trained separately from the direction of weights. Then, the authors motivate their novel batch-normalization to gradient exploding (/vanishing) which is a completely different issue. \\nI am not sure whether I understood the established theoretical results in this paper. Let start with Theorem 3.1: I am not sure about the statement of the theorem. Is this result for a linear net? I think for a Relu net, outputs need an additional scaling parameter that depends on all past hidden states (outputs). Theorem 3.2 and 4.1 do not seem informative to me. Authors are saying that if some terms in the established bound in Theorem 4.1 is small, then exploding gradient does not occur for their novel method. The same argument can be applied to the plain batchnorm result in Theorem 3.2. For me, it is not clear to see the reason why the proposed method remedies the gradient exploding (/vanishing).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Originality: The paper proposed a new Path-BatchNormalization in path space and compared the proposed method with traditional CNN with BN.\", \"quality\": \"The theoretical part is messy but intuitive. Also, why P-BN helps path optimization is not clear in the paper. The experimental part is not convincing. All CNN with BN networks have much lower accuracy than people reported, e.g. https://pytorch.org/docs/stable/torchvision/models.html for ResNet on ImageNet.\", \"clarity\": \"The written is not clear enough. It is not easy to imagine how the re-parameterization works on CNNs since the kernel is applied over the entire image (\\\"hidden activations\\\").\", \"significance\": \"See above.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper analyzes a reparametrization of the network that migrates from the weight space to the path space. It enables an easier way to understand the batch normalization (BN). Then the authors propose a variant of BN on the path space and empirically show better performance with the new proposal.\\n\\nTo study BN in the reparameterized space is well-intuited and a natural idea. Theorem 3.1 itself is interesting and has some value in understanding BN. However, the main contribution of the paper, i.e., the proposal of the P-BN, is not motivated enough. It is merely mentioned in the beginning of section 4 and it's not clear why this modification is better compared to conventional BN. This is not verified by theory either. By comparing theorem 3.2 and theorem 4.1, it seems P-BN even gives even worse upper bound of gradient norm. \\nPlus we don't actually care that much about the issue of gradient exploding since one could always do gradient clipping. The notorious gradient vanishing problem on the other hand, is not address in the theorems. \\nThe formulation of the P-BN seems to be closely related to ResNet, since it sets aside the identity mapping and only normalizes on the other part. It would be better to have some discussions. \\nAlso, the reparameterization and P-BN seems only to apply to fully connected layer from Eqn. (3-5) where they are proposed, but the experiments applies to ResNet. It would be better to describe the method in a broader sense. How would you do this P-BN in more complicated networks?\\n\\nFinally, it's very unclear to me the value of Theorem 3.1 and the proof that takes almost one page in the main context. The assumption of diagonal elements of matrix w to be all positive is very restrictive and simply removes the effect of ReLU activations. \\n\\nTherefore I think the paper has some room for improvement and is not very suitable for publication right now.\"}"
]
} |
rJg76kStwH | Efficient Probabilistic Logic Reasoning with Graph Neural Networks | [
"Yuyu Zhang",
"Xinshi Chen",
"Yuan Yang",
"Arun Ramamurthy",
"Bo Li",
"Yuan Qi",
"Le Song"
] | Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning. | [
"probabilistic logic reasoning",
"Markov Logic Networks",
"graph neural networks"
] | Accept (Poster) | https://openreview.net/pdf?id=rJg76kStwH | https://openreview.net/forum?id=rJg76kStwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vsaAdgEfxW1",
"TXUY1l-7dzu",
"_XQfYM9RPMH",
"183qWZC3oo4",
"xKZA5JcijpQ",
"WrD9fhS6x9Q",
"HwXdAlXVxUW",
"t3Sq1jijkS1",
"OrmuhnOurbr",
"mi6d0bsUkrt",
"HoVabnORN0g",
"bw9iZgJ3f4",
"IvKNzJ56b34",
"RYjLpvoys3",
"cJHZGnjZl",
"HJxrNxsciS",
"BJlgEyicsr",
"S1xWpo55or",
"HygnUyGAYH",
"SkgLKByAtS",
"rkgx3TK2tr"
],
"note_type": [
"comment",
"comment",
"comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1609744348650,
1591266420398,
1589346326842,
1589309214897,
1589011651126,
1588171444608,
1588023933768,
1587978999265,
1585736856719,
1585736646638,
1585681863783,
1585318558752,
1585285792046,
1584030729701,
1576798737549,
1573724204863,
1573723943685,
1573723064789,
1571852116313,
1571841405749,
1571753383736
],
"note_signatures": [
[
"~Xu_Li3"
],
[
"~Rainer_Gemulla1"
],
[
"~Bin_Dai1"
],
[
"ICLR.cc/2020/Conference/Paper1983/Authors"
],
[
"~Bin_Dai1"
],
[
"~Rainer_Gemulla1"
],
[
"ICLR.cc/2020/Conference/Paper1983/Authors"
],
[
"~Rainer_Gemulla1"
],
[
"~Rainer_Gemulla1"
],
[
"~Rainer_Gemulla1"
],
[
"ICLR.cc/2020/Conference/Paper1983/Authors"
],
[
"~Rainer_Gemulla1"
],
[
"ICLR.cc/2020/Conference/Paper1983/Authors"
],
[
"~Rainer_Gemulla1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1983/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1983/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1983/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1983/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1983/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1983/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Concerns about the code in github, seems like using test data in training?\", \"comment\": \"I have noticed that the main loop of your code use test data during training.\", \"in_file\": \"\", \"https\": \"//github.com/expressGNN/ExpressGNN/blob/master/main/train.py\", \"line\": \"69-70\\nthe function is called with the default argument of validation which is False\\nso that the data you use in the training loop is test_fact_ls right?\\n\\nby running your code I get the same results (mmr and hist) in your paper.\\nHowever, if I changed the data from test_fact_ls to fact_ls(which is built from the training set), the mmr and hits are far lower than the result in the paper.\\n\\nIs there any misunderstanding of the code or the data stored in test_fact_ls?\\nLooking forward to your reply.\", \"you_define_the_function\": \"get_batch_by_q with the argument validation=False\", \"in_line\": \"488-491\\nthe fact_ls is ether equals to valid_fact_ls or test_fact_ls depending on the argument validation\"}",
"{\"title\": \"Any updates?\", \"comment\": \"Are there any updates or preliminary results that you can share with us? (The GitHub page has not been updated so far.)\"}",
"{\"title\": \"Question about the express power\", \"comment\": \"Thanks a lot for your quick response. The clarification is very helpful to me. I have another question about the expressive power of the GNN and the tunable embedding. In your paper, you mentioned that the flattened embedding table proposed in Qu & Tang (2019) is not able to capture the structure knowledge encoded in the knowledge graph. I don't understand why. Here is my thought. Suppose there are $N$ entities and $D$ embedding dimensions. Denote the embedding as $x \\\\in R^{N\\\\times D}$. The optimization space of the tunable embedding is the whole $R^{N\\\\times D}$ space but that of the GNN embedding scheme is just a subset of $R^{N\\\\times D}$ (denoted as $G$). The optimal solution of the GNN embedding is $x^* = \\\\text{\\\\argmin}_{x \\\\in G} L$, where $L$ is the objective while the optimal solution of the tunable embedding is $x^{**} = \\\\text{\\\\argmin}_{x \\\\in R^{N\\\\times D}} L$. It is obvious that $x^{**}$ is at least not worse than $x^*$. The potential reason why $x^*$ will be better than $x^{**}$ could be 1) there are some mechanisms preventing the model to achieve the optimal solution $x^{**}$ in the tunable embedding case, 2) $x^{**}$ is better than $x^*$ on the training set but it generalizes poorly to the test set (if there exists the issue of generalization). Did I make some mistake here?\"}",
"{\"title\": \"Re: Question about the tunable embedding\", \"comment\": \"Thanks for your interest in our work. With regard to the embedding size of GNN / tunable embeddings, we used grid search on the validation set, and the optimal setup is indeed 127-dimensional tunable embeddings plus 1-dimensional GNN embedding, as provided in the default command line. So we also noticed that on the Freebase dataset, higher dimensional GNN is not improving the performance. We assume the reason could be that the supervised learning signal from Freebase dataset is powerful, thus the tunable embedding part is fitting the data pretty well, while the topology of graph is playing less important role in this case. This is reasonable due to the nature of dataset. On the other datasets such as Cora, higher dimensional GNN is not hurting the performance (please refer to Table 2). As discussed in Section 5.1, the inductive GNN embeddings are designed to reduce the number of parameters (more compact model) without hurting the model capacity or expressiveness too much, which is a trade-off between model compactness and expressiveness.\"}",
"{\"title\": \"Question about the tunable embedding\", \"comment\": \"Hi, thanks for your nice work and the attached code. I tried to reproduce your work using your code. I noticed that you used 127 tunable dimensions and only 1 GNN dimensions in the FB15K-237 experiments (see the last command line in the readme file). Does this setting corresponds to the results in table 3 in the paper? If so, each layer in the GNN has only 1 dimension, making the network meaningless. I tried to use more GNN dimensions by setting gcn_free_size to a smaller value, but the performance becomes worse. Did I misunderstand something here? Thanks very much.\"}",
"{\"title\": \"Revised experimental study\", \"comment\": \"Great, thanks! I like your suggestions. Just to clarify (it's probably what you intend to do):\\n\\n1) Training should not access any test data facts, i.e., it should be possible to do this with the test data file removed.\\n\\n2) Evaluation should only see individual test queries (but not test triples), as you say. An ideal solution would be to provide an interface for prediction (takes solely a trained model and a query, outputs the ranked results), and use this interface for evaluation.\"}",
"{\"title\": \"Re: Reproducing the experiments without test data leakage\", \"comment\": \"Thanks for reaching out again. We just started to work on the new experiments. Just to clarify our data setup, the original training data of FB15K-237 is randomly split into \\\"facts.txt\\\" and \\\"train.txt\\\", since we need a subset of training facts to generate first-order logic rules using NeuralLP. So we use \\\"facts.txt\\\" as the knowledge base and \\\"train.txt\\\" as the training data to generate rules.\\n\\nThe crux of our previous discussions seems to be the \\\"clean\\\" way of accessing the test data and updating the GNN parameters during the prediction (inference) phase. As clarified before, we would 1) learn the GNN parameters from supervised KG data with observed facts only; 2) perform query-by-query inference by sampling the query-related ground formulae and updating the GNN parameters in a \\\"sandbox\\\" only for this query, i.e., the update of parameters will not affect the inference of other queries. Please let us know if you have further questions about this new evaluation scheme.\\n\\nWe also noticed that a recent paper published in WWW'20 titled \\\"Probabilistic Logic Graph Attention Networks for Reasoning\\\" is very similar to our work ( https://dl.acm.org/doi/pdf/10.1145/3366424.3391265 ), which reports similar (slightly higher MRR and Hits@10) performance on FB15K-237 compared to our results. Since their paper does not cite our work, we assume they complete the work independently and this may help validate our experimental results.\"}",
"{\"title\": \"Reproducing the experiments without test data leakage\", \"comment\": \"Thanks again for your willingness to redo the experimental study!\\n\\nAre there already any updates or is new code available?\"}",
"{\"title\": \"Additional files in dataset folder\", \"comment\": \"I forgot: in your implementation, there is another file called \\\"facts.txt\\\". It's unclear to me what that file contains. Generally, the safe way is to only access \\\"train.txt\\\" and \\\"valid.txt\\\" during training (step 1), and \\\"train.txt\\\" and a query during prediction (step 2).\"}",
"{\"title\": \"Reproducing the experiments without test leakage\", \"comment\": \"Thank you!\\n\\nThe way you describe parameter updating during prediction seems to be another potential source for potential leakage, but it's not necessarily the only one. I agree with you that a suitable approach is to reproduce the study in a safe way. It's great that you are willing to do this!\", \"to_rule_out_leakage_of_test_data_for_learning_and_prediction\": \"1. During learning, the test data should not be accessed at all. I had tried to do that by \\\"clearing\\\" the test file in your implementation, but that broke training completely. \\n\\nTo me, the best approach is to update your implementation so that training can be done without accessing the test data file at all. The corresponding model / rule weights should be stored somewhere to use for prediction.\\n\\n2. Query-by-query inference is good, but not enough. Again, no access to the test set should arise before seeing a query, and each query should not influence what happens for the next query (in particular, the one for the same triple). For example, any latent variable relevant for a query should be created only once the query is seen and not retained afterwards. \\n\\nTo make sure that there is no leakage in this step, I suggest to provide an CLI that takes a trained model from (1) plus the training data plus a single query (not a triple) and returns the ranked results. Again, a test data file should not be accessed and the trained model must not be changed.\\n\\nIt this feasible to do? Also: in step 2, which variables and factors would be created by your method once seeing a query (say, query (s,p,?)).\"}",
"{\"title\": \"Clarification on test data usage and inference approach\", \"comment\": \"Thanks for the follow-up questions. We further clarify these questions as follows.\\n\\n1. We would like to clarify that the training of MLN typically refers to learning the weights of logic formulae, while the inference of MLN is to predict the query (with the current formula weights). In fact, once the MLN is defined with a set of logic formulae and a set of observed facts, the probability of any latent variable (query) is already determined. The reason why we need to update GNN parameters during inference is because the exact inference of MLN is computationally infeasible, and we employ GNN as the variational posterior to perform approximate inference. For the knowledge graph used by GNN, it only contains observed facts and neither latent variables nor test query facts exist in the graph. So when we update the GNN parameters, there are no \\u201ccreated query nodes\\u201d in the graph that may leak test data information. We optimize the GNN parameters to make it a better posterior model, so that the variational inference can better approximate the underlying true probability distribution defined by the MLN.\\n\\n2. For each query in the Freebase dataset formed as (s,p,o), we perform inference of (s,p,?) and (?,p,o) sequentially, where we treat the queries as an input stream so that we do keep updating the GNN parameters during the inference. We assume that what you described is with regard to the parameter updating scheme here. Just to confirm with your points, what if we perform the inference of each query independently, i.e., we use the GNN parameters initially learned from supervised data with observed facts only, and perform the inference query by query by updating the GNN parameters always from the initially learned ones, would this way of evaluation rule out the possibility of test data leakage? If so, we can definitely try it out and we\\u2019ll update the experimental results here.\"}",
"{\"title\": \"Test data leakage?\", \"comment\": \"Thanks a lot for your feedback so far!\\n\\nI understand that the labels of the test data are not used as labels for their corresponding variables during training. My concern is about your second and third points. As you state in your response, test data is used during training and prediction to create certain variables and factors in the MLN. I am worried that this approach leaks information from the test data.\\n\\nIn particular, the performance numbers presented in this paper are far ahead of all numbers I have seen so far (and that haven't been invalidated yet). I'd like to understand whether ExpressGNN really obtains such large improvements, and if so, why that's the case.\\n\\nA quick way to push forward this discussion would be (1) to train ExpressGNN without any access to test data, and (2) to perform evaluation query by query without further access to test data. For example, test triple (s,p,o) has two queries (s,p,?) and (?,p,o), which should be evaluated separately to ensure that there is no leakage.\\n\\nIs this possible? I'd be immediately convinced since (1) ensures that no test data is leaked into training and (2) that no test data is leaked into prediction. If it's not possible to use ExpressGNN like this, why not?\\n\\nAs for the discussion, my current understanding is that for a given test triple (s,p,o), the following variables are created:\\n\\n1. Variable (s,p,o), latent\\n2. Variables of form (s,p,o') and (s',p,o), latent or observed\\n3. Variables that occur in the body of (some) rules that have (s,p,o) in their head, latent or observed\\n\\nIs this accurate?\\n\\nIf so, both (2) and (3) would leak information.\\n\\nFor (2), consider a pathological example: just one test triple (s,p,o), no training or validation data, no rules. Now consider query (s,p,?). Due to the set of variables created in (2), there are many more variables of form (s',p,o)---i.e, with the right answer o---than of form (s',p,o') with o'!=o---i.e., with the wrong answer. Likewise for (?,p,o). One can infer (s,p,o) directly from the set of variables created in (2). In a real setup, the case may not be that pathological, but there is still leakage. That's especially true in conjunction with (3).\\n\\nFor (3), the factors introduced for the ground rules are more likely to touch the test triples (the correct answers) than other triples (in particular, the incorrect answers). Again, that's a form of leakage. I understand that \\\"partial grounding\\\" is motivated by performance considerations, but unfortunately it also leaks test data into the resulting distribution. That's also what seems to happen when the sampling strategy \\\"focus[es] on the ground formulae that contain at least one query fact\\\".\"}",
"{\"title\": \"Clarification on test data usage and inference approach\", \"comment\": \"Thanks for your interest in our paper. We appreciate your detailed comments. We are truly sorry for being late to respond. Here we clarify our test data usage and details of our inference approach. Hopefully our response clarifies your question, which may also help other readers understand our paper and code.\\n\\n1. Potential leakage of validation and test data: 1) We confirm that the truth values of the validation and test data are not used anywhere during the inference and learning process; 2) Our graph neural network (GNN) is built on the knowledge graph, rather than the ground MLN, and we construct the knowledge graph only based on the observed facts in the training data (refer to Fig. 2 for a comparison of the ground MLN and the knowledge graph). That being said, there is no nodes of test set fact in the knowledge graph, thus there is no way of \\\"looking\\\" at the set of created variables and reduce the set of potential answers when updating the tunable embeddings and other trainable parameters of the GNN; 3) For the second part of our E-step objective function in Eq. 6, which is the supervised learning objective for training the GNN, it is only using the observed facts in the training data; 4) When predicting the query married_to(JohnDoe, ?), we consider all the entities in the knowledge graph to replace \\\"?\\\" to construct the test tuples, and predict the probabilities of all the constructed tuples for ranking and computing the evaluation metrics as MRR and Hits@N.\\n\\n2. Construction of MLN: The construction of the fully ground Markov Logic Network is computationally infeasible. With the mean-field approximation, we are able to decompose the global expectation over the entire MLN into local expectations over ground formulae. We perform the inference of MLN in a stochastic fashion: we sample mini-batches of ground formulae which may contain both observed and latent variables (facts). For each ground formula in the sampled batch, we take the expectation of the corresponding potential function w.r.t. the posterior of the involved latent variables (first term in Eq. 4), and compute a local sum of entropy using the posterior of the latent variables (second term in Eq. 4). Note that we not only have the facts in training, validation and test data, but also have all the latent variables used in the ground formulae. For example, given there is a logic formula Smoke(x) \\u2227 Hypertention(x) => Cancer(x), and there's a test fact Cancer(David), then the corresponding variables Smoke(David) and Hypertention(David) in the ground formula will be used for inference and learning, no matter whether they are observed or latent. In summary, we construct MLN based on each ground formula, rather than just adding the facts in training, validation and test data.\\n\\n3. How we use test_fact_ls which contains the test set facts: During the inference process, we use test_fact_ls to guide the sampling of ground formulae. There are exponential number of all possible ground formulae, however, most of them are irrelevant to the query facts in the test data. Our sampling strategy is to focus on the ground formulae that contain at least one query fact as the latent variable. We also require that each sampled ground formula should have no truth values yet, i.e., the truth value of ground formula should depend on the truth values of the latent variables in it. 
This sampling strategy helps find relevant logic formulae that are relevant to the query, so that the inference can be more efficient. Note that there is no guarantee that the sampled formula can derive the correct test fact, since the logic formulae are auto-generated for FB15K-237 by NeuralLP and could be noisy. Our model has no access to the truth values of any latent variables in any ground formulae.\\n\\nPlease kindly let us know if you have any further questions. Thanks.\"}",
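A minimal sketch of this query-guided sampling (ours, for illustration only; the data layout and all names are assumptions, not the released code):

```python
import random

def sample_ground_formulae(candidate_groundings, query_facts, observed_facts,
                           batch_size, rng=random):
    # Sketch: each grounding is an iterable of ground atoms. A grounding is
    # kept if it contains at least one query fact that is latent, i.e. not
    # among the observed facts. (The second condition in the response -- that
    # observed atoms alone do not already fix the formula's truth value --
    # would additionally require evaluating the formula, omitted here.)
    query, observed = set(query_facts), set(observed_facts)
    pool = [g for g in candidate_groundings if (set(g) & query) - observed]
    return rng.sample(pool, min(batch_size, len(pool)))
```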
"{\"title\": \"Test data leakage?\", \"comment\": \"I had posted the comment below on the Github page accompanying this paper, but perhaps this is a better place for discussion. Quoting:\\n\\nI've had a look at your recent ICLR20 paper; the results for FB15k-237 are outright amazing! I browsed the source code in this repository to better understand what you do. I stumbled across the following lines in dataset.py:\", \"for_fact_in_query_ls\": \"self.test_fact_ls.append((fact.val, fact.pred_name, tuple(fact.const_ls)))\\n self.test_fact_dict[fact.pred_name].add((fact.val, tuple(fact.const_ls)))\\n add_ht(fact.pred_name, fact.const_ls, self.ht_dict)\\n\\nHere query_ls contains the test set facts, and add_ht registers the fact.\\n\\nIf I interpret this correctly, the MLN is constructed as follows. It first adds a variable for each fact r(e1,e2) in the training, validation, and test data. Afterwards, for each such fact, additional variables are (conceptually) added by perturbing e1 or e2: i.e., variables for all facts of form r(e1,?) and r(?,e2) are added as well.\\n\\nEach of the so-obtained variables is marked as observed (if it appears in the training data) or latent (otherwise).\\n\\nIs this understanding correct?\\n\\nThe reason I am asking is because such an approach seems to leak validation and test data into training. Why? It's true that the truth values of the validation and test data are not used during training. But: the choice of variables in the MLN already tells the MLN that r(e1,?) and r(?,e2) are sensible queries, and consequently provides information about e1 and e2. That's fine for the training data facts. For validation and test facts, however, it's problematic.\\n\\nFor example, consider a test set fact married_to(JohnDoe, JaneDoe). The mere existence of the variables married_to(JohnDoe, ?) informs the (tuneable) embedding of JohnDoe: it must be a person. Likewise for married_to(?, JaneDoe). That's the first reason for potential leakage. Another reason is that, without any inference or learning, one may \\\"look\\\" at the set of created variables and reduce the set of potential wifes for JohnDoe to the set of persons that have been seen as wifes in the validation or test data. (All facts from the training data are observed so that the corresponding wifes are ruled out.) If so, this would significantly simplify the task.\\n\\nI'd appreciate if you clarified whether the above description is accurate and, in particular, where I misunderstood the approach.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper is far more borderline than the review scores indicate. The authors certainly did themselves no favours by posting a response so close to the end of the discussion period, but there was sufficient time to consider the responses after this, and it is somewhat disappointing that the reviewers did not engage.\\n\\nReviewer 2 states that their only reason for not recommending acceptance is the lack of experiments on more than one KG. The authors point out they have experiments on more than one KG in the paper. From my reading, this is the case. I will consider R2 in favour of the paper in the absence of a response.\\n\\nReviewer 3 gives a fairly clear initial review which states the main reasons they do not recommend acceptance. While not an expert on the topic of GNNs, I have enough of a technical understanding to deem that the detailed response from the authors to each of the points does address these concerns. In the absence of a response from the reviewer, it is difficult to ascertain whether they would agree, but I will lean towards assuming they are satisfied.\\n\\nReviewer 1 gives a positive sounding review, with as main criticism \\\"Overall, the work of this paper seems technically sound but I don\\u2019t find the contributions particularly surprising or novel. Along with plogicnet, there have been many extensions and applications of Gnns, and I didn\\u2019t find that the paper expands this perspective in any surprising way.\\\" This statement is simply re-asserted after the author response. I find this style of review entirely inappropriate and unfair: it is not a the role of a good scientific publication to \\\"surprise\\\". If it is technically sound, and in an area that the reviewer admits generates interest from reviewers, vague weasel words do not a reason for rejection make.\\n\\nI recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"First of all, thank you for your valuable comments. We briefly respond to a couple of points as follows.\\n\\n\\n> Why traditional MLN is computationally inefficient? Provide the inference time complexities.\\n\\nThe computational complexity of probabilistic MLN inference is known to be #P-complete when MLN was proposed [1]. To make it feasible, there are three categories of approximate inference methods: Monte Carlo methods, loopy belief BP, and variational methods [2]. Previous methods (including MCMC, BP, lifted BP) require to fully construct the ground Markov network before performing approximate inference, and the size of the ground Markov network is O(M^d) where M is the number of entities and d is the highest arity of the logic formula. Typically, there are a large number of entities in a practical knowledge graph, making the full grounding infeasible.\\n\\nWith mean-field approximation, our stochastic inference method avoids to fully construct the grounded Markov network, which only requires local grounding of the formulae in each sampled minibatch. Our method has constant time complexity for each sampled minibatch, and the overall time complexity is O(N) where N is the number of iterations. We have compared the inference efficiency on two benchmark datasets. Experimental results reported Fig. 4 show that our method is both more efficient and scalable than traditional MLN inference methods.\\n\\n\\n> Does Lifted BP reduce the computational cost of grounding?\\n\\nLifted BP constructs the minimal lifted network via merging the nodes as the first step, and then performs belief propagation on the lifted network to save the computational cost. However, there is no guarantee that the lifted network is much smaller than the ground network. In the worst case, the lifted network can have the same size as the original ground network [2]. Moreover, the construction of the lifted network is also computationally expensive, which is even slower than the construction of the full network as reported in Table 3 of their paper [2]. In fact, our experiments demonstrate that Lifted BP is NOT efficient even on small dataset like UW-CSE and Kinship (please refer to Fig. 4 in our paper), and it certainly cannot scale up to the FB15K-237 dataset.\\n\\n\\n> Why use Neural LP to learn the rules?\\n\\nThe FB15K-237 dataset is not designed for evaluating MLN inference / learning methods, and hence, have no logic formulae provided. Our work focuses on MLN inference and learning with a set of logic formulae, thus we need to generate the rules first. Similarly, recent work [3] uses simple brute-force search to generate the rules for MLN. However, brute-force rule search can be very inefficient on large-scale data. Instead, our method employs Neural LP to efficiently generate the rules. We use the training set only for rule learning, which guarantees that there is no information leakage during the evaluation on the test set.\\n\\n\\n> Why not compare to BoostSRL?\\n\\nThe BoostSRL work uses MC-SAT as the inference method, which has been compared with our work in the experiments. According to the inference time reported in Fig. 4, our method is much more efficient and scalable than MC-SAT.\\n\\nMoreover, BoostSRL is not directly comparable to our method, since the task is completely different. 
Our method is designed for MLN inference and rule weight learning with logic rules provided, while BoostSRL was proposed for MLN structure learning, i.e., learning logic rules for MLN. We chose Neural LP instead of this method to generate the rules, since Neural LP has been demonstrated to be effective in rule induction on the Freebase dataset. In the updated paper, we have included BoostSRL as related work to supplement our literature review.\\n\\n\\n> MLN is fairly general, does GNN result in any loss of expressivity?\\n\\nWe have discussed the expressive power of GNNs in our paper in the section titled \\u201cWhy combine GNN and tunable embeddings\\u201d. To make it more clear, in the updated paper, we change the section title to: \\u201cExpressive power of GNN as inference network \\u201d. In this section, we have shown an example in Fig. 3 where GNN produces the same embedding for nodes that should be distinguished. We have also formally proved the sufficient and necessary condition to distinguish any non-isomorphic nodes in the knowledge graph. Inspired by this, we augment GNN with additional tunable embeddings to trade-off the compactness and expressiveness of the model.\\n\\n\\n> Related work should appear in the main paper.\\n\\nThanks for the suggestion. In the updated paper, we\\u2019ve added the related work section right after the introduction to provide a clear background of statistical relational learning and Markov Logic Networks.\\n\\n\\nReferences\\n\\n[1] Richardson, Matthew, and Pedro Domingos. \\u201cMarkov Logic Networks.\\u201d Machine Learning.\\n\\n[2] Singla, Parag, and Pedro M. Domingos. \\u201cLifted First-Order Belief Propagation.\\u201d AAAI.\\n\\n[3] Qu, Meng, and Jian Tang. \\u201cProbabilistic Logic Neural Networks for Reasoning.\\u201d arXiv.\"}",
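For concreteness, the decomposition behind the constant per-minibatch cost described above can be sketched as follows (the notation is ours, not the paper's): under a mean-field posterior over the latent facts, the expected log-potential of the MLN splits, by linearity, into independent local terms, one per ground formula, each estimable from a sampled minibatch.

```latex
% Mean-field sketch (notation illustrative): q(H) = \prod_i q_i(h_i) over
% latent facts h_i; ground formula f has weight w_f and a potential \phi_f
% touching only its latent atoms H_f and observed atoms O_f.
\mathbb{E}_{q(H)}\!\left[\sum_{f} w_f\,\phi_f(H_f, O_f)\right]
  \;=\; \sum_{f} w_f\,\mathbb{E}_{\prod_{i \in H_f} q_i}\!\left[\phi_f(H_f, O_f)\right]
```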
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thanks for your review comments. We briefly respond to your questions as follows.\\n\\n\\n> The proposed system should be evaluated on more KGs.\\n\\nIn fact, our method is evaluated on four benchmark datasets with four different KGs: UW-CSE, Cora, Kinship, and Freebase. These knowledge graphs are of different knowledge types and data distributions, and are widely used as benchmark datasets to evaluate MLNs and knowledge graph reasoning methods.\\n\\n\\n> Page 3. \\u201cThe equality holds\\u201d which equality are you talking about?\\n\\n\\u201cThe equality holds\\u201d points to the equality in Eq. (2). To make it more clear, we have added a reference to Eq. (2) in the updated paper.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thanks for your comments. We briefly respond to a couple of points as follows.\\n\\n\\n> The integration of variational EM and MLN has been explored in another work pLogicNet.\\n\\nWe have to clarify that our ExpressGNN work was proposed earlier than the pLogicNet. In fact, we have submitted an earlier version of our work to arXiv 15 days before the pLogicNet appeared on arXiv ( https://arxiv.org/abs/1906.08495 ). Due to the ongoing anonymous period, we could not provide the link of our arXiv submission here.\\n\\n\\n> With pLogicNet, the contributions are not surprising or novel.\\n\\n1) As claimed above, we proposed the idea of integrating stochastic variational inference and MLN before the pLogicNet work appeared. As a concurrent and later work, pLogicNet also employs variational EM for MLN inference, which should not hurt the originality and novelty of our work.\\n\\n2) Compared to pLogicNet, our work employs GNNs to capture the structure knowledge that is implicitly encoded in the knowledge graph. For example, an entity can be affected by its neighborhood entities, which is not modeled in pLogicNet but can be captured by GNNs. Our work models such implicit knowledge encoded in the graph structure to supplement the knowledge from logic formulae, while pLogicNet has no graph structure knowledge and only has a flattened embedding table for all the entities.\\n\\n3) Our method is a general framework that can trade-off the model compactness and expressiveness by tuning the dimensionality of the GNN part and the embedding part. Thus, pLogicNet can be viewed as a special case of our work with the embedding part only.\\n\\n4) We compared our method with pLogicNet in the experiments. Please refer to Table 3 for the experimental results. Our method achieves significantly better performance than pLogicNet (MRR 0.49 vs 0.33, Hits@10 60.8 vs 52.8) on the FB15K-237 dataset.\\n\\nWe have updated the paper to incorporate the discussions above.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to use graph neural networks (GNN) for inference in MLN. The main motivation seems to be that inference in traditional MLN is computationally inefficient. The paper is cryptic about precisely why this is the case. There is some allusion in the introduction as to grounding being exponential in the number of entities and the exponent being related to the number of variables in the clauses of the MLN but this should be more clearly stated (e.g., does inference being exponential in the number of entities hold for lifted BP?). In an effort to speed up inference, the authors propose to use GNN instead. Since GNN expressivity is limited, the authors propose to use entity specific embeddings to increase expressivity. The final ingredient is a mean-field approximation that helps break up the likelihood expression. Experiments are conducted on standard MLN benchmarks (UW-CSE, Kinship, Cora) and link prediction tasks. ExpressGNN achieves a 5-10X speedup compared to HL-MRF. On Cora HL-MRF seems to have run out of memory. On link prediction tasks, ExpressGNN seems to achieve better accuracy but this result is a bit difficult to appreciate since the ExpressGNN can't learn rules and the authors used NeuralLP to learn the rules followed by using ExpressGNN to learn parameters and inference.\", \"here_are_the_various_reasons_that_prevent_me_from_rating_the_paper_favorably\": [\"MLNs were proposed in 2006. Statistical relational learning is even older. This is not a paper where the related work section should be delegated to the appendix. The reader will want to know the state of inference and its computational complexity right at the very beginning. Otherwise, its very difficult to read the paper and appreciate the results.\", \"Recently, a number of papers have been tried to quantify the expressive power of GNNs. MLN is fairly general, being able to incorporate any clause in first-order logic. Does the combination with GNN result in any loss of expressivity? This question deserves an answer. If so, then the speedup isn't free and ExpressGNN would be a special case of MLN, albeit with the advantage of fast inference.\", \"Why doesn't the paper provide clear inference time complexities to help the reader appreciate the results? At the very least, the paper should provide clear time complexities for each of the baselines.\", \"There are cheaper incarnations of MLN that the authors should compare against (or provide clear reasons as to why this is not needed). Please see BoostSRL (Khot, T.; Natarajan, S.; Kersting, K.; and Shavlik, J. 2011. Learning Markov logic networks via functional gradient boosting. In ICDM)\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a framework for solving the probabilistic logic reasoning problem by integrating Markov neural networks and graph neural networks to combine their individual features into a more expressive and scalable framework. Graph neural networks are used for learning representations for Knowledge graphs and are quite scalable when it comes to probabilistic inference. But no prior rules can be incorporated and it requires significant amount of examples per target in order to converge. On the other hand, MLN are quite powerful for logical reasoning and dealing with noisy data but its inference process is computationally intensive and does not scale. Combining these two frameworks seem to result in a powerful framework which generalizes well to new knowledge graphs, does inference and is able to scale to large entities.\\n\\nRegarding its contribution, the paper seems to consider a training process which is done using the variational EM algorithm. The variational EM is used to optimize the ELBO term (motivation for this is the intractability of the computing the partition term). In the E-step, they infer the posterior distribution and in the M-step they learn the weights. The integration of variational EM algorithm and MLN has been explored in another work (pLogicNet: Probabilistic Logic Neural Networks for Reasoning), but this paper proposes a new pipeline of tools: MLN, GNN and variational EM which seem to outperform all the existing baseline methods.The paper looks technically sound to me and the evaluations results are delivered neatly, however the flow of the paper makes it a bit difficult to follow sometimes due to many topics covered in it.\\nRegarding the significance of the paper, it tries to combine logic reasoning and probabilistic inference which is of great interest among the researchers recently. ExpressGNN proves to generalise well and perform accurate inference due to the tunable embeddings added at the GNN.\\n\\nOverall, the work of this paper seems technically sound but I don\\u2019t find the contributions particularly surprising or novel. Along with plogicnet, there have been many extensions and applications of Gnns, and I didn\\u2019t find that the paper expands this perspective in any surprising way.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper the authors propose a system, called ExpressGNN, that combines MLNs and GNNs. This system is able to perform inference and learning the weights of the logic formulas.\\n\\nThe proposed approach seems valid and really intriguing. Moreover the problems it tackles, i.e. inference and learning over big knowledge graphs, are of foremost importance and are interesting for a wide community of researchers.\\nI have just one concern and it is about the experiments for the knowledge graph completion task. In fact, this task was performed only on one KG. I think the proposed system should be evaluated on more KGs.\\n\\nFor these reasons I think the paper, after an extension of the experimental results, should be accepted.\\n\\n[Minor]\\nPage 3. \\u201cThe equality holds\\u201d which equality are you talking about?\"}"
]
} |
SkxQp1StDH | Low-dimensional statistical manifold embedding of directed graphs | [
"Thorben Funke",
"Tian Guo",
"Alen Lancic",
"Nino Antulov-Fantulin"
] | We propose a novel node embedding of directed graphs to statistical manifolds, which is based on a global minimization of pairwise relative entropy and graph geodesics in a non-linear way. Each node is encoded with a probability density function over a measurable space. Furthermore, we analyze the connection of the geometrical properties of such embedding and their efficient learning procedure. Extensive experiments show that our proposed embedding is better preserving the global geodesic information of graphs, as well as outperforming existing embedding models on directed graphs in a variety of evaluation metrics, in an unsupervised setting. | [
"graph embedding",
"information geometry",
"graph representations"
] | Accept (Poster) | https://openreview.net/pdf?id=SkxQp1StDH | https://openreview.net/forum?id=SkxQp1StDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"bqQIdMDJP2",
"SkxP3rGcjr",
"BklRqyqtsH",
"HygOs6FtoB",
"HJgXmiYFsr",
"SyetJDjTYH",
"B1lxARc3KS",
"Sye7nK0ItH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737519,
1573688751050,
1573654421630,
1573653920318,
1573653274739,
1571825376926,
1571757768243,
1571379627320
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1982/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1982/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1982/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1982/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1982/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1982/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1982/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes an embedding for nodes in a directed graph, which takes into account the asymmetry. The proposed method learns an embedding of a node as an exponential distribution (e.g. Gaussian), on a statistical manifold. The authors also provide an approximation for large graphs, and show that the method performs well in empirical comparisons.\\n\\nThe authors were very responsive in the discussion phase, providing new experiments in response to the reviews. This is a nice example where a good paper is improved by several extra suggestions by reviewers. I encourage the authors to provide all the software for reproducing their work in the final version.\\n\\nOverall, this is a great paper which proposes a new graph embedding approach that is scalable and provides nice empirical results.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Acknowledging Rebuttals\", \"comment\": \"The current reviewer has read the authors' rebuttal.\\n\\nBased on my comments, the author performed additional experiments including (1) increased dimensionality of the target embedding; (2) embedding an undirected graph. These additional experiments have further strengthened this contribution.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"1. In our paper, we have used the following baselines for directed graphs: HOPE [Ou et al. in ACM SIGKDD 2016], APP [ZHOU et al. in AAAI 2017] and Graph2Gauss [BOJCHEVSKI et al. in ICLR 2018]. All of them (HOPE, APP, Graph2Gauss) have explicitly written that they consider directions. We will make sure to stress this in our paper. However, DeppWalk model [Perozzi et. al. in ACM SIGKDD 2014] was only used as a representative baseline for the un-directed class of algorithms. \\n\\nWe thank the reviewer for the suggestions for the new baselines. However, Okada-Imaizumi model [Behaviormetrika 14.21 (1987)] was under a pay-wall that was not accessible with our institution's subscription. Without any success, we have tried our best to find any publicly available material on the manuscript or the code. \\nFurthermore, as you have suggested, the work MUZELLEC et al. in NIPS 2018 [35] is available and we have included additional experiments with elliptical embedding. We observe that elliptical embedding is not outperforming our method on Political Blogs network:\\nmethod (Pearson, Spearman, avg. MI, std. MI) \\nKL (0.88, 0.89, 0.85, 0.006)\\nElliptical (-0.17, -0.14, 0.036, 0.004)\\nAPP (0.16, 0.29, 0.15, 0.006)\\nHOPE (0.45, 0.45, 0.65, 0.007)\\nGraph2Gauss (-0.17, -0.33, 0.09, 0.005)\\nDeepWalk (0.25, 0.24, 0.12, 0.005)\\n\\nThe work of Muzellec et al. studies the problem of embedding objects as elliptical probability distributions, which are the generalization of Gaussian multivariate densities. Their work is rooted in the optimal transport theory by using the Wasserstein distance. The physical interpretation of this distance in the case of optimal transport is given by the cost of moving mass from one distribution to another one. In particular, for univariate Gaussian distributions, the Wasserstein distance between embedded points becomes the two-dimensional Euclidean distance of ($\\\\mu_1,\\\\sigma_1$) and ($\\\\mu_2,\\\\sigma_2$), i.e. flat geometry.\\nIn our case, the geometry of univariate Gaussians has constant negative curvature, and distance is measured with the Fisher distance. Our work is rooted in statistical manifold theory, where the Fisher distances arise from measuring the distinguishability between different distributions.\\n\\n2. Since the down-stream application was also raised by reviewer #2 point 2, we reply in a joint manner: \\nWe agree that the majority of node embeddings are used for down-stream learning tasks. In this paper, we have focused on the unsupervised setting and finding representations that can preserve geodesic relationships between nodes in a directed graph first. We have focused on the graph representation itself, its geometrical properties, connections to existing mathematical frameworks and learning. We believe that for supervised tasks a modification to the loss function is needed. \\nAt the same time, we agree with the reviewer and thus include a down-stream learning task that is suitable for the unsupervised scenarios. In particular, for every node u with outdegree k, we retrieve the k best candidate neighbors from each embedding and compare them to the direct successors of node u. To assess the performance, precision is computed, which tells us the ratio of actual neighboring links the embedding can extract. The same is done for incoming links, using the in-degree of each node. This kind of task was previously used in several papers that study graph embeddings e.g. 
[Tsitsulin et al., WWW 2018] and [Khosla et al., ECMLPKDD 2019]. \\nHere, we report the precision for outdegree and indegree link reconstruction for Political Blogs network. \\nMethod \\t\\t (outdegree precision, indegree precision)\\nAPP \\t\\t (0.1077, 0.2624)\\nHOPE \\t\\t (0.1010, 0.1125)\\nDeepWalk \\t (0.2620, 0.1669)\\nGraph2Gauss \\t(0.0258, 0.0003)\\nElliptical \\t\\t(0.0383, 0.0250)\\nKL \\t\\t (0.2861, 0.2329)\\n\\n\\nAlthough, on average, our method (KL) is performing well, it was not designed to encode only the local neighborhood but rather the whole spectrum of finite and infinite distances. In our paper, we claim that our representation is designed for preserving the global geodesic information of directed graphs in an unsupervised setting. To excel at other supervised tasks, one would have to modify the loss function to include additional terms more suitable for that down-streaming task, which is part of our future work. \\n\\n\\nAn updated version of the paper is uploaded.\"}",
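The geometry contrast drawn in the response above (Wasserstein = flat geometry in (mu, sigma); Fisher = constant negative curvature) can be made concrete with a small numeric sketch. The W2 closed form for univariate Gaussians is standard; the Fisher-distance formula below is the usual closed form for the univariate Gaussian manifold under the (mu, sigma) parametrization, included as an illustrative assumption rather than the paper's implementation.

```python
import math

def w2_gaussian(mu1, s1, mu2, s2):
    """2-Wasserstein distance between N(mu1, s1^2) and N(mu2, s2^2):
    plain Euclidean distance in (mu, sigma) coordinates, i.e. flat geometry."""
    return math.hypot(mu1 - mu2, s1 - s2)

def fisher_gaussian(mu1, s1, mu2, s2):
    """Fisher-Rao distance on the univariate Gaussian manifold (constant
    negative curvature): sqrt(2) times the Poincare half-plane distance
    of the points (mu / sqrt(2), sigma)."""
    num = (mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (2.0 * s1 * s2))

# Same (mu, sigma) displacement, hence identical W2 distance, but the Fisher
# distance grows as the distributions become more concentrated:
print(w2_gaussian(0, 1.0, 1, 1.0), fisher_gaussian(0, 1.0, 1, 1.0))
print(w2_gaussian(0, 0.1, 1, 0.1), fisher_gaussian(0, 0.1, 1, 0.1))
```

With sigma = 0.1 the two Gaussians are far more distinguishable, so the Fisher distance is much larger even though the Euclidean/W2 distance is unchanged, which is the distinguishability property the rebuttal appeals to.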
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"* Experimental questions *\\n1. In the updated version, we correct grammatical mistakes and polish the content that was not clear. Cross-validation was not used since an unsupervised setting is used. The training time or, more specifically, the number of training epochs were selected with the convergence criteria of the loss function. We will extend the Appendix A.6, to provide more details on the experimental procedure and hyperparameters (lambda - distribution class, beta - distance scaling exponent, learning rate in ADAM optimizer and batch size). \\n\\n2. Since the down-stream application was also raised by reviewer #1 point 2, we reply in a joint manner: \\nWe agree that the majority of node embeddings are used for down-stream learning tasks. In this paper, we have focused on the unsupervised setting and finding representations that can preserve geodesic relationships between nodes in a directed graph first. We have focused on the graph representation itself, its geometrical properties, connections to existing mathematical frameworks and learning. We believe that for supervised tasks a modification to the loss function is needed. \\nAt the same time, we agree with the reviewer and thus include a down-stream learning task that is suitable for the unsupervised scenarios. In particular, for every node u with outdegree k, we retrieve the k best candidate neighbors from each embedding and compare them to the direct successors of node u. To assess the performance, precision is computed, which tells us the ratio of actual neighboring links the embedding can extract. The same is done for incoming links, using the in-degree of each node. This kind of task was previously used in several papers that study graph embeddings e.g. [Tsitsulin et al., WWW 2018] and [Khosla et al., ECMLPKDD 2019]. \\nHere, we report the precision for outdegree and indegree link reconstruction for Political Blogs network. \\nMethod \\t\\t (outdegree precision, indegree precision)\\nAPP \\t\\t (0.1077, 0.2624)\\nHOPE \\t\\t (0.1010, 0.1125)\\nDeepWalk \\t (0.2620, 0.1669)\\nGraph2Gauss \\t(0.0258, 0.0003)\\nElliptical \\t\\t(0.0383, 0.0250)\\nKL \\t\\t (0.2861, 0.2329)\\n\\nAlthough, on average, our method (KL) is performing well, it was not designed to encode only the local neighborhood but rather the whole spectrum of finite and infinite distances. In our paper, we claim that our representation is designed for preserving the global geodesic information of directed graphs in an unsupervised setting. To excel at other supervised tasks, one would have to modify the loss function to include additional terms more suitable for that down-streaming task, which is part of our future work. \\n\\n\\n* Spelling / grammar / layout *\\nWe revised all the writing issues and have rephrased the title to be more clear.\\n\\nAn updated version of the paper will be uploaded soon.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We include the results where we use 5-variate, 10-variate, and 50-variate exponential power distribution. Note that the k-variate exponential power distribution is parametrized by a mean vector (k-dimensional) and diagonal covariance matrix with k parameters.\\nIn particular, for the Political Blogs, with KL Full on Political blogs we obtain the following:\\n\\ndistribution dimensionality (Pearson, Spearman, avg. MI, std. MI) \\n2-variate 0.88, 0.89, 0.85, 0.006\\n5-variate 0.90, 0.90, 0.90, 0.007 \\n10-variate 0.91, 0.92, 0.97, 0.006 \\n50-variate 0.88, 0.90, 0.90, 0.006. \\n\\nWe observe that low dimensional embeddings (2-variate distributions) are quite efficient w.r.t. different performance measures. \\nWe have changed the phrase \\\"good proposal function\\\", and we have included more details around this in Appendix A.8. Additionally, we have corrected the wordings and fixed the typos. \\n\\nAs suggested by the reviewer, we have made additional experiments on the undirected network. \\nResults on Petster-hamster network (http://konect.uni-koblenz.de/networks/petster-hamster) \\nMethod \\t\\t(Pearson, Spearman, avg. MI, std. MI):\\nAPP \\t\\t (-0.03, 0.45, 0.29, 0.006)\\nHOPE \\t\\t (0.23, 0.37, 0.43, 0.007)\\nDeepWalk \\t (0.36, 0.37, 0.10, 0.006)\\nGraph2Gauss (-0.26, -0.76, 0.45, 0.005)\\nKL \\t\\t (0.91 , 0.89, 0.89, 0.005)\\n\\nWe observe that our representation is still capable of preserving the global geodesic information of undirected graphs. Why is that the case if undirected graphs have symmetric shortest path distances between nodes? \\nAlthough KL divergence is an asymmetric function, in special cases it can also become symmetric. E.g. in the case of two Gaussian distributions with equal standard deviations, the KL divergence is symmetric. This demonstrates the generality of representation. Note that we did not set the additional equality constraint on the standard deviation parameters in the learning phase for this experiment. \\n\\nAn updated version of the paper will be uploaded soon.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposes an unsupervised method for learning node embeddings of directed graphs into statistical manifolds. Each node in the graph is mapped to a distribution in the space of k-variate power distributions, endowed with the KL divergence as asymetric similarity. The authors propose an optimization method based on a regularized KL divergence objective. They also propose an approximation of this objective based on finite neighborhoods, with a separate treatment of infinite distances based on a topological sorting. They also introduce a natural gradient correction to the gradient descent algorithm in this setting. They validate the fitness of their approach by showing that asymmetric distances in the graph translate into correlated asymetric distances between the node embeddings for various datasets.\", \"The paper appears to bring a valuable method for directed graph embedding. However, a more thorough experimental study would help validating the improvements and hyperparameter setting of the method. Moreover, I suggest that the authors work on an improved version of the manuscript, as it contains many grammatical and spelling mistakes, some of which listed under.\", \"Experimental questions *\", \"1. The hyperparameters and training time seems to have been set using the evaluation metric on the datasets. Could the authors provide a more principled validation approach to their experiments, e.g. using cross-validation?\", \"2. While the focus on preserving asymetric similarities is understandable, it would be interesting to know how the method performs for conventional evaluation tasks of network embedding, and to show that the gain in correlation can translate into gains for the end task in practice.\", \"Spelling / grammar / layout *\", \"Title: \\u201cOn the geometry and learning low-dimensional embeddings\\u2026\\u201d does not make sense.\", \"abstract: \\u201cis better preserving the global geodesic\\u201d\", \"Fig 1: The sigma ellipse*s*\", \"2.1 Intuition.: \\u201ccolor codded\\u201d\", \"Figure 2: \\u201cwhich was reflected the highest mutual information\\u201d\", \"Figure 2 should visually identify the rows and the columns, rather than relying on the caption.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors proposed an embedding method for directed graphs.\\nEach node is represented by a normal distribution. \\nAccordingly, for each pair of nodes, the authors used the KL divergence between their distributions to fit the observed distance. \\nThe asymmetric property of the KL divergence matches well with the nature of directed graphs.\\nA scalable algorithm is designed. \\nThe property of the learned space is analyzed in detail, which verifies the rationality of using KL divergence. \\n\\nMy main concerns are about the experiments and the baselines.\\n\\n1. The baselines are not very representative. Except for APP, the remaining three methods are not motivated by directed graphs. To my knowledge, authors can further consider the following two methods as their baselines.\\n\\na) The classic method like the Okada-Imaizumi Radius Distance Model in \\u201cOkada, Akinori, and Tadashi Imaizumi. \\\"Nonmetric multidimensional scaling of asymmetric proximities.\\\" Behaviormetrika 14.21 (1987): 81-96.\\u201d This method represents each node in a directed graph as an embedding vector with a radius and proposed a Hausdorff-like distance.\\n\\nb) The recent work in [35]. This method also embeds nodes by elliptical distributions, but the distance is measured in the Wasserstein space. \\n\\nThis work will be stronger if the authors discuss the advantages of the proposed model compared with these two methods and add more comparison experiments.\\n\\n\\n2. In practice, node embeddings are always used in down-stream learning tasks. Besides the classic statistical measurements, I would like to see a down-stream application of the proposed method, e.g., node classification/clustering. Adding such an experiment will make the advantage of the proposed method more convincing.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposed another graph embedding method. It focuses on directed graphs, and it embedded the graph nodes into exponential power distributions, which include the Gaussian distribution as a special case. The method is implemented by optimizing with respect to the free distributions on a statistical manifold so as to achieve the minimum distortion between the input/output distances. The method is tested on several directed graph datasets and showed superior performance based on several metrics.\\n\\nOverall, the submission forms a complete and novel contribution in the area of graph embeddings. A key novelty is that the authors used the asymmetry of KL divergences to model the asymmetry of the distances in directed graphs, and they use the fact that KL is unbounded to model the infinite distances in undirected graphs. The proposed method has three main hyperparameters, \\\\lambda in eq.(1), \\\\beta in eq.(2), and the dimensionality of the target embedding. The author showed that \\\\lambda and \\\\beta are not sensitive and can be set to the default values, and 2-dimensional distributions already give much better results as compared to alternative embeddings. Moreover, the author proposed a scalable implementation based on sampling. Furthermore, the authored justified their choice of the target embedding space through some minor theoretical analysis given in section 3.\\n\\nThe writing quality and clarity are good (well above average).\\n\\nTo further improve this paper (e.g., in the final version), the authors are suggested to incorporate the following comments:\\n\\nIn the experimental evaluation, it should include some cases when the dimensionality of the target embedding has a large value (e.g., 50). This will make the evaluation more complete.\\n\\nThere are some typos and unusual expressions. For example, page 3, what is \\\"a good proposal function\\\"?\\n\\nAfter eq.(1), mention \\\\lambda is a hyperparameter (that is not to be learned).\\n\\nTheorem 1 (2), mention the Fisher information matrix is wrt the coordinate system (\\\\sigma^1, \\\\cdots,\\\\sigma^k, \\\\mu^1, \\\\cdots, \\\\mu^k)\\n\\nIdeally, the experiments can include an undirected graph and show for example that the advantages of the proposed method become smaller in this case.\"}"
]
} |
BylfTySYvB | GATO: Gates Are Not the Only Option | [
"Mark Goldstein*",
"Xintian Han*",
"Rajesh Ranganath"
] | Recurrent Neural Networks (RNNs) facilitate prediction and generation of structured temporal data such as text and sound. However, training RNNs is hard. Vanishing gradients cause difficulties for learning long-range dependencies. Hidden states can explode for long sequences and send unbounded gradients to model parameters, even when hidden-to-hidden Jacobians are bounded. Models like the LSTM and GRU use gates to bound their hidden state, but most choices of gating functions lead to saturating gradients that contribute to, instead of alleviate, vanishing gradients. Moreover, performance of these models is not robust across random initializations. In this work, we specify desiderata for sequence models. We develop one model that satisfies them and that is capable of learning long-term dependencies, called GATO. GATO is constructed so that part of its hidden state does not have vanishing gradients, regardless of sequence length. We study GATO on copying and arithmetic tasks with long dependencies and on modeling intensive care unit and language data. Training GATO is more stable across random seeds and learning rates than GRUs and LSTMs. GATO solves these tasks using an order of magnitude fewer parameters. | [
"Sequence Models",
"Vanishing Gradients",
"Recurrent neural networks",
"Long-term dependence"
] | Reject | https://openreview.net/pdf?id=BylfTySYvB | https://openreview.net/forum?id=BylfTySYvB | ICLR.cc/2020/Conference | 2020 | {
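To make the abstract's saturation claim concrete: a sigmoid gate's derivative sigma'(x) = sigma(x)(1 - sigma(x)) is at most 1/4 and decays exponentially in |x|, so a backpropagation path through T such gates shrinks geometrically, whereas a periodic decoder such as cos stays bounded without its derivative decaying. A minimal illustration, not code from the paper:

```python
import math

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
dsigmoid = lambda x: sigmoid(x) * (1.0 - sigmoid(x))

for x in (0.0, 2.0, 5.0):
    for T in (10, 100):
        print(f"x={x}, T={T}: product of gate derivatives = {dsigmoid(x) ** T:.3e}")

# By contrast, the derivative of a cosine decoder is |sin(x)| <= 1 and does not
# decay for large |x|: bounded output without saturating gradients, which is the
# non-saturating property the abstract argues most gating functions lack.
print(abs(math.sin(100.0)))
```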
"note_id": [
"SRTv8_N6-D",
"SJxnjAcnoB",
"Syxg9BPojH",
"B1lAmSwojH",
"BJezyHPiir",
"Syxn8EDojB",
"BJxeizHRtS",
"SJxmtISTFS",
"SygdaoLEtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737487,
1573854883843,
1573774728167,
1573774629810,
1573774553910,
1573774420516,
1571865239864,
1571800699436,
1571216319868
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1981/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1981/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1981/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1981/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1981/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1981/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1981/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1981/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a modification of RNN that does not suffer from vanishing and exploding gradient problems. The proposed model, GATO partitions the RNN hidden state into two channels, and both are updated by the previous state. This model ensures that the state in one of the parts is time-independent by using residual connections.\\n\\nThe reviews are mixed for this paper, but the general consensus was that the experiments could be better (baseline comparisons could have been fairer). The reviewers have low confidence in the revised/updated results. Moreover, it remains unclear what the critical components are that make things work. It would be great to read a paper and understand why something works and not that something works.\", \"overall\": \"Nice idea, but the paper is not quite ready yet.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Plots in PDF Rendering Slowly in Browser\", \"comment\": \"Hello Reviewers and AC,\\n\\nWe have noticed that our submission's plots render slowly when viewing the PDF in some browsers. For us, it views OK in Mac Preview, but is slow on Chrome browser.\\n\\nApologies for the trouble. Thank you!\"}",
"{\"title\": \"Thank You For Your Feedback\", \"comment\": \"> [Difference between GATO, SRU/FRU, and RHN]\\n\\nThank you for these references.\\n\\nThe difference between SRU with alpha=1 and GATO with respect to the identity Jacobian is subtle. In SRU, if alpha^j = 1, then the hidden state mu^j for that alpha has an identity Jacobian [d mu^j_t / d mu^j_t-1]. But in this case, mu^j stays constant for all t and cannot be used to capture long-term dependencies. When alpha^j does not equal 1, the Jacobian is not an identity matrix. In GATO, the hidden state is broken in h=[r,s]. Though [ds_t / ds_t-1] is an identity matrix, s is updated based on r and x and can be used to capture long term dependencies. Therefore GATO is not a special case of SRU. We have added this discussion to new Appendix Section I.\\n \\nFRU is a follow-up work to SRU. Each hidden state h_t in FRU depends on h_{t-1} so it does not have the identity matrix Jacobian.\\n\\nGATO is also different from Recurrent Highway Network. The s_t in GATO does use a skip-connection/residual structure. But the residual part depends only on r_{t-1} and x_t, not on s_{t-1}. This novel residual update renders the identity matrix Jacobian. The highway network does not have this special residual update. \\n\\n>[Compare with other models that address vanishing gradients such as URNN and SRU]\\n\\nThanks for the suggestion. We have included comparisons against other recently proposed RNNs that address the vanishing gradient issue: RHN, EURNN (Jing, 2017), and SRU. RHN and EURNN perform poorly and NAN because they have unbounded forward propagation. SRU performs well on MIMIC but not on other tasks. \\n\\nWe fix RHN\\u2019s performance with one principle from our paper. See the Penn TreeBank result.\"}",
"{\"title\": \"Thank You For Your Feedback\", \"comment\": \">[Explore other periodic functions for decoder]\\n\\nIn new Appendix B, we observe that sin has similar performance. We believe our criteria are necessary, but not necessarily sufficient. We believe finite sums of cos/sin may work well too.\\n\\n>[Experiment with different proportion of \\u201cpassive\\u201d variables in GATO]\\n\\nIn revised Appendix A, we explore using no passive variables and 1/4 instead of 1/2 of the state. Our findings are that Add and Penn TreeBank suffer with less passive variables. This suggests that having a larger fraction of the state devoted to long-term gradient propagation is important.\\n\\nExploring fully-interacting GATO is a worthwhile direction. We believe it is not necessary for the Adding and Copying tasks. For the real data tasks, it would be interesting to study a more powerful model (with interaction) combined with regularization (like recurrent dropout).\\n\\n>[MIMIC is a seemingly private database of vitals]\\n\\nSorry for the incomplete information. MIMIC-III is a publicly accessible critical care database.\\n\\n>[Results do not go beyond \\u201cours is better\\u201d. Look more closely at what makes the difference]\\n\\nIn addition to better accuracy and perplexity, we emphasize stability across learning rates and seeds. As mentioned, we added new experiments on varying GATO\\u2019s non-linearities and proportion of \\u201cpassive\\u201d variables.\\n\\n>[Compare against GRU/LSTM with diagonal weight matrices]\\n\\nThank you for this suggestion. As mentioned, we have added diagonal LSTM and GRU, in one case matching hidden size and in another matching number of parameters. GATO outperforms all GRU and LSTM variants on our experiments.\\n\\n\\n>[Unfair claims about fewer parameters, compare against models with similar parameter counts]\\n\\nThis is a great point. As mentioned, we included comparisons against GRU/LSTM where we balanced the parameter counts. We have also removed excess emphasis on fewer parameters on the tasks where held-out metrics are reported.\\n\\nTheory suggests that optimization is easier in the overparameterized setting. See Arora et al. 2018.\\n\\nArora et al. On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. 2018\\n\\n>[ Multi-layer RNNs are not compared or mentioned in this work ]\\n\\nWe follow other recent RNN papers (SRU, FRU, EURNN) by focusing on fundamental design choices and basic tasks rather than explore the full range of deep variants or regularization.\\n\\nWe have added details that some LSTM papers we cited use multiple layers, and mentioned deep variants in our conclusion.\"}",
"{\"title\": \"Thanks For Your Feedback\", \"comment\": \">[Large LSTM/GRU with no regularization may have overfit]\\n\\nThank you for your comments.\\n\\nOur goal for this work is not to investigate generalization but rather to design sequence models that capture long-term dependencies.\\n\\nOn Copy and Add, the models are trained on new samples from the data distribution at each batch. There is no notion of overfitting on these experiments.\\n\\nFor MIMIC and Penn, we report held-out accuracy and perplexity.\\n\\nWe have added comparisons against GRUs/LSTMs with similar parameter counts as GATO. GATO still outperforms these models on generalization in this non-regularized setting.\\n\\nYears of research has been devoted to regularizing GRUs/LSTMs. A future direction is to see whether the same techniques apply to alternate models such as GATO and RHN.\\n\\n>[Clarify why the lowest perplexity score in this paper is higher than those in other recent works]\\n\\nThe best held-out perplexity score on Penn TreeBank of 112.85 is for unregularized models with no dropout. [4] and [6] achieve 65.4 and 52.8 using many training and regularization techniques.\\n\\nRNN Regularization (Zaremba, 2015) reports 114.5 test perplexity for this task for an unregularized LSTM.\", \"we_have_included_comparisons_against_other_recently_proposed_rnns_that_address_the_vanishing_gradient_issue\": \"RHN [4], EURNN (Jing, 2017), and SRU (Oliva, 2017). RHN and EURNN perform poorly and NAN because they have unbounded forward propagation. SRU performs well on MIMIC but not on other tasks.\\n\\nWe fix RHN\\u2019s performance with one principle from our paper. See the Penn TreeBank result.\\n\\n>[GATO used 1 or 2 layers. How does GATO change with >2 layers?]\\n\\nOur \\u201ctwo layer\\u201d variant of GATO was not named clearly. \\u201cTwo layer\\u201d for GATO referred to increasing the depth of the function used to compute the recurrence. This is different from using N stacked identical RNNs for N layers. The \\u201ctwo layers\\u201d in GATO is more similar to how the GRU takes several functions of the input state.\\n\\nWe follow other recent RNN papers (SRU, FRU, EURNN) by focusing on fundamental design choices and basic tasks rather than explore the full range of deep variants or regularization.\\n\\n>[Other non-linear functions?]\\n\\nIn new section Appendix B, we replace the sigmoid in the GATO update with tanh, and we replace the decoder cos with sin. We observe little to no variation.\\n\\n>[typo r -> r_t in Eq. 6]\\n\\nWe fixed this typo. Thank you.\"}",
"{\"title\": \"Updated Paper and Response to Reviews\", \"comment\": \"We thank all of the reviewers and the AC for their time and feedback. The reviewers\\u2019 responses are positive in general.\\n\\nThe main contribution of this work is a list of criteria necessary for sequence models to capture long-term dependencies when trained with gradient-based optimization, along with one instantiation of a model that meets these criteria.\\n\\nReviewer #2 finds \\u201cboth the design and the practical performance of the proposed GATO unit very interesting, and potentially valuable for the ICLR crowd.\\u201d and mentions \\u201cThis is one of the more convincing RNN papers I have recently read.\\u201d\\n\\nAll reviewers suggested additional experiments. Our added experiments include 3 recent alternative RNN models and LSTM/GRU variants.\\n\\nThe reviewers asked for ablations on the partition of GATO\\u2019s hidden state and choice of non-linear functions. We have added this to Appendices A and B.\", \"two_points_of_clarification\": \"1. We follow other recent RNN papers (SRU, FRU, EURNN) by focusing on fundamental design choices and basic tasks rather than explore the full range of deep variants or regularization. A future direction is to see whether the GRU/LSTM regularization techniques apply to our model or its deep variants.\\n\\n2.GATO is not a special case of the SRU as mentioned by Reviewer #3. Additional Discussion in Appendix I\\n\\nWe uploaded a new version of the paper that addresses all of the reviewers' concerns. We cite the papers they mentioned. \\n\\nOur model competes with or outperforms GRU, LSTM, and other recent RNNs. Our model exhibits stability in cases where the alternatives are unstable. Ablations do not indicate sensitivity of GATO to small choices of non-linearities.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a modification of RNN that does not suffer from vanishing and exploding gradient problems. The proposed model, GATO partitions the RNN hidden state into two channels, and both are updated by the previous state. This model ensures that the state in one of the parts is time-independent by using residual connections. The experiments on the long copy and adding tasks, as well as language modeling on the Penn TreeBank dataset show the performance improvement against the basic LSTM and GRU.\\n\\nThe paper tackles an interesting and challenging problem with a novel approach in sequence modeling. The idea is clear and the paper is well-written. The mathematical insights are well reasoned. \\n\\nThe proposed method outperforms LSTM and RNN with much fewer number of parameters. However, there is no regularization is used for such big LSTM/GRU models. There is a chance that such a big LSTM/GRU model increased the chance of overfitting, and therefore the performance is low. I would like to see the comparison after adding any common regularization that prevents overfitting across the recurrent connections. \\n\\nThere are many advanced RNN/LSTMs proposed in recent years [1-5] addressing the vanishing gradient problem. It is hard to judge the quality of the proposed method due to the lack of evaluation/comparisons. This paper needs more intensive evaluations with recent RNN-based methods. For instance based on [6], AWD-LSTM [6] and RHN [4] achieved 52.8 and 65.4 test perplexity scores on the Penn TreeBank dataset respectively. The the best score in this paper is 112.85. \\n\\n[1] \\\"Phased LSTM: Accelerating recurrent network training for long or event-based sequences.\\\" 2016.\\n[2] \\\"Fast-slow recurrent neural networks.\\\" 2017.\\n[3] \\\"Skip RNN: Learning to skip state updates in recurrent neural networks.\\\" 2017.\\n[4] \\\"Recurrent highway networks.\\\" 2017.\\n[5] \\\"Dilated recurrent neural networks.\\\" 2017.\\n[6] \\\"Regularizing and optimizing LSTM language models.\\\" 2017\\n\\nAll experiments are performed with 1 or 2 layers. Hierarchical RNN/LSTM performs much better in sequence learning. Is there any reason authors only showed 1 or 2 layers? How does GATO change with more than 2 layers?\\n\\nHow do other choices of non-linear functions affect the performance in practice? \\n\\n\\nTypo\\nr -> r_t in Eq. 6\\n\\n---\", \"after_rebuttal\": \"One of my main concerns, weak baselines and unfair comparisons, was partially answered in the updated paper. I am not fully convinced by their new comparisons.\\nFor instance, authors mentioned that 'RHN and EURNN performed poorly because they have unbounded forward propagation'. To overcome this, they introduced 'Bounded RHN' in Append G and it performs similarly to GATO and GRU. However, this 'Bounded RHN' is the one used in the original RHN paper. Overall, it is hard to trust their additional comparisons. \\nAlthough, I believe that this paper is well-structured and justified. Also it has high potential for the community. However, the paper itself is not ready to be published.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors propose a novel recurrent architecture called GATO.\\nSpecifically, the authors focused on sequence modeling tasks and developed criteria for RNN models on such tasks. \\nTha GATO model can resolve the vanishing/exploding gradient issue and is robust to initializations. \\nEmpirical results show GATO can outperform LSTM and RNN in both synthetic datasets and real datasets.\\n\\nThe key insight of the proposed model is that only part of the hidden states is recurrently updated. \\nThe GATO achieves this by adding the skip connection channel (or residual connection) along the temporal dimension.\\nGATO summarizes the hidden states r_t by recurrently adding (transformed) r_t to s_t.\\nThis idea also appears in many previous RNN models, such as highway RNN/LSTM, Statistical/Fourier Recurrent Units [1][2].\\nSpecifically, the proposed GATO is a special case of SRU ( alpha=1). This limits the novelty of the paper and thus make the contribution marginal.\\n\\nAs for the experimental studies, the authors only provide comparisons with LSTM and GRU. There are a lot of advanced RNN architectures to address vanishing/exploding gradient issues, such as uRNN[3], oRNN[4], Spectral-RNN[5] and SRU/FRU [1][2]. It would be more convincing if the \\nauthors could include these models into comparison.\\n\\nOverall I think this paper should be further improved before being accepted. \\n\\n\\n\\n[1] Oliva, J.B., P\\u00f3czos, B. and Schneider, J., The statistical recurrent unit. \\nIn ICML 2017 (pp. 2671-2680).\\n\\n[2] Zhang, J., Lin, Y., Song, Z. and Dhillon, I., Learning Long Term Dependencies via Fourier Recurrent Units. \\nIn ICML 2018 (pp. 5810-5818).\\n\\n[3] Arjovsky, M., Shah, A. and Bengio, Y., Unitary evolution recurrent neural networks. \\nIn ICML 2016 (pp. 1120-1128).\\n\\n[4] Mhammedi, Z., Hellicar, A., Rahman, A. and Bailey, J., Efficient orthogonal parametrisation of recurrent neural networks using householder reflections. \\nIn ICML 2017 (pp. 2401-2409).\\n\\n[5] Zhang, J., Lei, Q. and Dhillon, I., Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization. \\nIn ICML 2018 (pp. 5801-5809).\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a new RNN architecture designed to overcome vanishing/exploding gradient problems and to improve long-term memory for sequence modelling. The main ideas are (i) to split the hidden state into two parts, one of which does not influence the recurrence relation, and can therefore not blow up or contract by self-feedback; and (ii) to use periodic functions, in particular the cosine, as non-linearity in the decoder, so that the output is bounded but does not saturate.\\n\\nThe paper puts forward a fairly systematic analysis of the gradients in RNNs. The analysis appears correct, and is in fact quite similar to considerations in earlier RNN work (which is correctly cited), and forms the basis for the proposed GATO unit. There are two loose ends in this part:\\n1) the cosine non-linearity results from a purely negative selection - the function should be bounded, but not saturating. The paper does not even ask the question which periodic function might be a good choice.\\n2) While the method is presented as a grand theory, with the only constraint that a part of the hidden state does not influence the recurrence function; the actual implementation and experiments are limited to the narrow special case of \\\"non-interacting\\\" GATO, where the \\\"passive\\\" variables make up exactly half of the hidden vector, and the update of each individual hidden variable is influenced only by a single variable from the previous state. So there are in fact no empirical results, not even on toy data, for the general case that the paper claims to introduce.\\n\\nIn the experiments, there are two artificial problems (copying, adding) for sequences of symbols. These are illustrative and sensible to verify and analyse the behaviour of GATO in a controlled setting, but rather far from most real sequence modelling tasks. In a third experiment the task is to classify whether or not patients will stay in intensive care for >1 week, based on a (seemingly private) database of time series of vital parameters. Unfortunately, the results for that experiment do not go beyond the usual \\\"ours is better\\\". While the numbers clearly support GATO, it would have been nice to look a bit closer and pinpoint what makes the difference. Also, the comparison is a bit loose. It would have been better to add additional baselines where also LSTM and GRU are restricted to \\\"non-interacting\\\", element-wise recurrence (as far as technically feasible). As it stands, it is unfair to claim \\\"we can do it with much fewer parameters\\\" - perhaps LSTM / GRU could, too. In fact, it could even be that the task is just simple, so that more restricted model with fewer parameters generally perform better - I do not claim this is the case, but the experiments do not rule it out and, hence, do not confirm that the clever GATO recurrence makes the difference.\\n\\nA small gap is also that even the \\\"real\\\" experiment might not be completely realistic. Nowadays it is a popular strategy to use deep LSTMS / GRUs, i.e., stack multiple levels of recurrence, possibly with temporal sub-sampling, to better capture long-term relations. 
While this is potentially even more brittle, because of the additional gradient flow across layers, it does seem to work. But deep RNNs are not tested (in fact, not even mentioned) in the paper.\\n\\nOverall, in spite of a few loose ends, I find both the design and the practical performance of the proposed GATO unit very interesting, and potentially valuable for the ICLR crowd. This is one of the more convincing RNN papers I have recently read.\"}"
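The reviewer's parameter-fairness point can be quantified with one common LSTM parameter-count convention (4 gates, full versus element-wise recurrent weights, one bias vector per gate; other variants will give different numbers):

```python
def lstm_params(nx, nh, diagonal=False):
    # 4 gates; recurrent weights are nh x nh (full) or nh (element-wise/diagonal)
    rec = nh if diagonal else nh * nh
    return 4 * (nx * nh + rec + nh)

nx = 64
print(lstm_params(nx, 128))                 # full LSTM
print(lstm_params(nx, 128, diagonal=True))  # diagonal LSTM, same hidden size

# find a diagonal hidden size whose parameter count roughly matches the full model
target = lstm_params(nx, 128)
nh = next(h for h in range(128, 4096) if lstm_params(nx, h, diagonal=True) >= target)
print(nh, lstm_params(nx, nh, diagonal=True))
```

This is the kind of accounting behind the two baselines requested here (matched hidden size versus matched parameter count), which the authors report adding in their response above.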
]
} |
S1ef6JBtPr | Probabilistic View of Multi-agent Reinforcement Learning: A Unified Approach | [
"Shubham Gupta",
"Ambedkar Dukkipati"
] | Formulating the reinforcement learning (RL) problem in the framework of probabilistic inference not only offers a new perspective about RL, but also yields practical algorithms that are more robust and easier to train. While this connection between RL and probabilistic inference has been extensively studied in the single-agent setting, it has not yet been fully understood in the multi-agent setup. In this paper, we pose the problem of multi-agent reinforcement learning as the problem of performing inference in a particular graphical model. We model the environment, as seen by each of the agents, using separate but related Markov decision processes. We derive a practical off-policy maximum-entropy actor-critic algorithm that we call Multi-agent Soft Actor-Critic (MA-SAC) for performing approximate inference in the proposed model using variational inference. MA-SAC can be employed in both cooperative and competitive settings. Through experiments, we demonstrate that MA-SAC outperforms a strong baseline on several multi-agent scenarios. While MA-SAC is one resultant multi-agent RL algorithm that can be derived from the proposed probabilistic framework, our work provides a unified view of maximum-entropy algorithms in the multi-agent setting. | [
"multi-agent reinforcement learning",
"maximum entropy reinforcement learning"
] | Reject | https://openreview.net/pdf?id=S1ef6JBtPr | https://openreview.net/forum?id=S1ef6JBtPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"c8I8ONyEoc",
"BJlf6OhIsr",
"BJeG9_38jS",
"H1lxwu3LiS",
"S1lHqUjRKB",
"r1etri0TtS",
"BkgSsm3TKS",
"Skxdcm4zdB",
"SyghKkD5PB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798737450,
1573468346312,
1573468297687,
1573468247547,
1571890828619,
1571838785405,
1571828636947,
1570026384455,
1569513348390
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1980/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1980/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1980/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1980/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1980/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1980/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1980/Authors"
],
[
"~Yaodong_Yang1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper takes the perspective of \\\"reinforcement learning as inference\\\", extends it to the multi-agent setting and derives a multi-agent RL algorithm that extends Soft Actor Critic. Several reviewer questions were addressed in the rebuttal phase, including key design choices. A common concern was the limited empirical comparison, including comparisons to existing approaches.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer's comments\", \"comment\": \"We thank the reviewer for their comments on the manuscript. These are truly valuable to us and we will surely incorporate the suggestions in the next version of the manuscript.\\n\\nREGARDING PARTIAL OBSERVABILITY\\nFrom a practical point of view, one can simply train the policy networks of agents to use only the local observations made by them. The challenge is that complete state information is required for training the critics. A more careful derivation involving partial observability presents an interesting research problem and we will surely pursue it in future.\\n\\nREGARDING MISSING DETAILS\\nWe will add intermediate steps leading up to equation 5 in order to make the paper self contained as suggested by the reviewer. We will also add details about the process that we used for selecting various hyper-parameters to the supplementary material. The parameter alpha_i is relevant only for MA-SAC and hence it was optimized only for MA-SAC.\\n\\nREGARDING EXPERIMENTS\\nWe are in the process of executing more experiments, some with continuous action spaces, some with more complicated environments like StarCraft-II and so on. Also, all reviewers have pointed out some other experiments that may be added to the manuscript to strengthen it. We already have the results for environments with continuous action spaces and we will add these and many such experiments to the next version of the manuscript.\\n\\nWe will also add experiments comparing our approach with Probabilistic Recursive Reasoning [1]. We believe that MA-SAC achieves significant performance improvement over MADDPG on predator-prey because of added stochasticity which is useful in competitive tasks. This information will be added to the paper. \\n\\n\\n[1] Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning. Y Wen, Y Yang, R Luo, J Wang, W Pan. ICLR 2019.\"}",
"{\"title\": \"Response to reviewer's comments\", \"comment\": \"We thank the reviewer for their valuable suggestions. We will modify the manuscript to incorporate the proposed changes.\\n\\nREGARDING NOVELTY\\nOur major contribution lies in providing the multi-MDP view of the environment where we model the environment as seen by each agent using a separate but related MDP. Using this model, we show that the multi-agent variant of soft actor-critic can be derived by applying simple techniques. As we have noted in the paper, even though we only derive an actor-critic based algorithm, one can also use the model that we have described in the paper to derive multi-agent variants of other algorithms like Q-learning. MA-SAC is simply a representative example.\\n\\nREGARDING USAGE OF VALUE FUNCTION\\nWe did experiment with the setting where value function is also used as done in the original soft actor-critic paper. However, on the tasks that we experimented with, we did not see any noticeable change in the performance of the algorithm or the training characteristics. Thus, in order to reduce the number of parameters being trained, we did not use the value function in our final implementation.\\n\\nREGARDING EXPERIMENTAL EVALUATION\\nWe are in the process of executing more experiments, some with continuous action spaces, some with more complicated environments like StarCraft-II and so on. Also, all reviewers have pointed out some other experiments that may be added to the manuscript to strengthen it. We already have the results for environments with continuous action spaces and we will add these and many such experiments to the next version of the manuscript.\"}",
"{\"title\": \"Response to reviewer's comments\", \"comment\": \"We thank the reviewer for their valuable feedback. These suggestions would surely help us in improving the quality of the manuscript.\\n\\nHOW CAN AGENTS COORDINATE DESPITE THE INDEPENDENCE ASSUMPTION IN EQUATION 4?\\nThe policy for each agent i is trained by optimizing equation 9. Note that equation 9 involves computation of Q_i which in turn requires sampling actions of all agents given the current environment state. Assume for a moment that the policies being followed by all agents except agent i were fixed. In this case, agent i will tend to choose an action that is the best response to the policies being followed by other agents. Thus, because of the use of centralized training, the agent is implicitly learning to consider the possible actions that can be taken by other agents and act accordingly. Conditioning on actions of other agents and then hallucinating their actions during testing (as done in [1]) is useful when agents are trained in a decentralized fashion. We will further clarify the argument and add a comparison with [1] in the next version of the manuscript.\\n\\nREGARDING EXPERIMENTAL EVALUATION\\nWe are in the process of executing more experiments, some with continuous action spaces, some with more complicated environments like StarCraft-II and so on. The reviewer has rightly pointed out that exploring different properties of MA-SAC, understanding the role played by different assumptions and comparing with stronger baselines would further strengthen the paper. We already have the results for environments with continuous action spaces and we will add these and many such experiments to the next version of the manuscript.\\n\\nREGARDING NOVELTY\\nOur major contribution lies in providing the multi-MDP view of the environment where we model the environment as seen by each agent using a separate but related MDP. Using this model, we show that the multi-agent variant of soft actor-critic can be derived by applying simple techniques. As we have noted in the paper, even though we only derive an actor-critic based algorithm, one can also use the model that we have described in the paper to derive multi-agent variants of other algorithms like Q-learning. MA-SAC is simply a representative example.\\n\\nREGARDING DERIVATION OF ACTUAL SOFT Q-FUNCTION ALONG THE LINES OF LEVINE (2018)\\nWe have a derivation for computing the analogous soft Q-function for multi-agent setting using the forward-backward algorithm as done in Levine (2018). We will add these details in the supplementary material. \\n\\n\\n[1] Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning. Y Wen, Y Yang, R Luo, J Wang, W Pan. ICLR 2019.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper extends soft actor-critic (SAC) to Markov games, or in other words multi-agent reinforcement learning setting. The paper is very nicely written, derives MA-SAC in a fairly general way, and introduces a variational approximation of the distribution over optimal trajectories which enables centralized training and decentralized execution. While I like the paper, I find the novelty aspect of it quite limited, since it's quite a straightforward combination of centralized training and decentralized execution idea with an algebraic extension of SAC to Markov games. The paper would have been much stronger if it had a much more thorough evaluation of the properties and limitations of MA-SAC as well as better comparison with the related work.\\n\\n\\nQuestions/comments:\\n\\n1. One of the key points of the paper is equation 4 that proposes the variational approximation of the distribution over trajectories. The authors assume that agents take actions independently which enables decentralized execution. However, it looks like this construction neglects the fact that optimal policies *must* take into account the other agents. It seems that with q structured this way, dependencies between agent actions are not taken into account even when training is centralized (all equations 5-7 fully factorize, neglecting all dependencies). In other words, given the proposed q, what is the benefit of centralized training?\\n\\n2. Following up on the previous question, from Levine (2018) we know that Eq. (3) results in a particular soft-Q function that can be computed using the forward-backward algorithm (assuming the knowledge of the dynamics), which would account for dependencies between agent policies/actions. On the other hand, it's unclear whether/how the Q function obtained through centralized training (Eq. 8) approximates the optimal soft-Q function. Can the authors comment on that?\\n\\n3. As mentioned, the proposed MA-SAC is really a fairly straightforward extension of SAC to Markov games. What could make the paper interesting in my opinion is a much more detailed (experimental) analysis of the approximations the authors had to make in order to enable decentralized execution and the corresponding advantages and limitations. The current evaluation falls short on that front as it just shows that the proposed algorithm works better than MADDPG in a few standard multi-agent environments.\\n\\n4. Although the authors position this paper as the first that introduces a probabilistic perspective on RL in multi-agent systems, there is other recent work (https://arxiv.org/abs/1901.09207) that already does that and, in fact, takes one more step and enables decentralized training with (probabilistic) reasoning about other agents. Discussion of advantages/disadvantages and comparison with the previous work I see as necessary.\\n\\n----\\n\\nI acknowledge that I have read the author's response. My assessment of the paper stays the same.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper proposes a new algorithm named Multi-Agent Soft Actor-Critic (MA-SAC) based on the off-policy maximum-entropy actor critic algorithm Soft Actor-Critic (SAC). Based on variational inference framework, the authors derive the objectives for multi-agent reinforcement learning. In experiments section, the authors compare the proposed algorithm with the previous algorithm called Multi-Agent Deep Deterministic Policy Gradient (MADDPG) on several multi-agent domain.\", \"comments\": [\"Based on inference, the authors derive the objectives as the equation (8) and (9). However, the proposed objectives are almost identical to SAC. First, the objectives for Q functions are just replacing \\\\hat{Q} in the equation (7) in SAC by \\\\bar{Q}, which has a very similar meaning. Also, the objectives for policy \\\\pi are exactly the same as in the SAC with only the added index. Thus, the proposed algorithm seems to be a naive extension of SAC into multi-agent cases. To avoid such questions, authors need to emphasize the difference from simple extension.\", \"Is there a reason not to use additional neural networks to estimate value function like SAC even the proposed algorithm is based on SAC?\", \"Additional experimental results are needed to ensure the algorithm since there is no theoretical guarantee.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"21st November Update: Thank you for your detailed response. I agree with the suggested future work and revisions to the paper. However, as no updates were made to the submitted paper, I will not be raising my score as the revisions represent significant changes that I cannot support acceptance of without further peer review. I encourage the authors to carefully consider all reviewers advice in updates to the paper for a future submission elsewhere, which I think will significantly improve the paper and its potential impact on the community.\\n\\n--\\nThis paper contributes a probabilistic framework for multi-agent RL and demonstrates a derivation of multi-agent SAC using it. The framework could be more broadly applicable if it included partial observability (as is often a requirement of multi-agent systems). The derivation could be improved by showing the full working for equation 5, as this would make the work self contained instead of assuming prior knowledge of ELBO by the reader.\\n\\nThe derived algorithm (previously published by Iqbal and Sha [ICML 2019] as noted in the paper) is then evaluated on 4 existing benchmark tasks against the baseline algorithm originally proposed with the environments - MADDPG. The environments represent a good range of multi-agent scenarios of suitable complexity to test modern deep RL algorithms. However, the empirical evaluation and methodology have issues that reduce the significance of their contribution.\\n\\nOn Page 8, it is noted that \\\"the value of alpha_i is empirically tuned for each environment\\\" but for which algorithm was it optimized? It is then noted that the values for alpha were \\\"found using grid search\\\" but details of the range of the search are not included nor details of how any other hyperparameters were set. Were other parameters tuned? If so please report all values searched for both algorithms and how they were set.\\n\\nAt the end of page 8, the caption of Figure 2 concludes \\\"it can be seen that MA-SAC controlled agents outperform MADDPG controlled agents on majority of tasks.\\\" This statement is not supported by the graphs in this figure. I suspect Figures 2a and b show no significant difference as the confidence intervals overlap and that Figure 2d is not significantly different throughout training but may be with a small effect size at the current end of training. Figure 2d also looks like training for longer may be beneficial. Therefore, Figure 2c is the only environment that shows a significant improvement. Please provide further evidence that MA-SAC outperforms MADDPG or weaken this conclusion. It would also be interesting to investigate deeper, why MA-SAC shows such higher performance than MADDPG in the Predator-Prey domain.\\n\\nOn Page 9, the conclusion is reiterated and claimed to be in comparison to a state-of-the-art algorithm. However, the benefits of SAC over DDPG have been previously shown both in single agent [Haarnoja et al, ICML 2018] and multi agent domains [Iqbal and Sha, ICML 2019]. 
A stronger baseline to compare against would improve the significance of any resultant improvements.\\n\\nThe research direction is interesting but the earlier publication of the derived algorithm (Iqbal and Sha, ICML 2019) and the issues discussed above with the experimental results lead me to conclude that the contribution is not yet sufficient to warrant publication. With further work I believe this line of work could lead to a high impact publication, but feel the paper requires more changes than are feasible within the time frame of the ICLR rebuttal period.\", \"minor_comments\": [\"Page 4, \\\"the transition function of underlying Markov game\\\" -> the transition function of the underlying Markov game\", \"Page 9, \\\"in Figure 2c, the red curve corresponds to\\\" -> dark blue curve\", \"Page 9, \\\"MA-SAC performs at least at par with MADDPG\\\" -> at least on par with\", \"Page 9, \\\"outperforms it on majority of the tasks\\\" -> outperforms it on the majority of tasks\"]}",
"{\"comment\": \"Thank you for pointing out these related works. We have gone through these papers carefully and we will cite them in our paper.\", \"we_will_revise_the_manuscript_to\": \"1. Add the following paragraph to the Related Works section:\\n\\\"[1] integrates probabilistic recursive reasoning while training several maximum entropy RL agents in a decentralized fashion. [2] proposes a probabilistic model where, conditioned on the optimality of other agents, each cooperative agent aims at maximizing its own probability of being optimal. In [4], the objective is to train a sub-optimal policy for agents in two-player video games. This policy must be close to a reference policy in Kullback-Leibler divergence. While [2] and [4] are not general enough to be used across all our experiments, we compare MA-SAC with [1] in Section 5.\\\"\\n\\n2. Add experiments comparing MA-SAC with PR2 on both cooperative and competitive tasks\\n\\nOur comments on these related works are as follows. \\n\\nWe would like to emphasize that the way we model each agent using a separate but related MDP is one of our major contributions. It provides a different perspective on the problem and, as we have also noted in our paper, it allows one to derive a variety of efficient MARL algorithms of which MA-SAC is an example.\\n\\nIn [1], it has been shown that PR2 does not perform well against MADDPG on competitive tasks from the multiagent particle environment while our experiments show that MA-SAC outperforms MADDPG. We will add comparison against PR2 on multiple cooperative and competitive tasks to the revised version of our manuscript. Note that even MA-SAC can be potentially trained in a decentralized fashion by using an opponent modelling trick in the same way as it was used in MADDPG. We will explore this issue in our experiments.\\n\\nApproaches [2] and [3] are restricted to cooperative settings only whereas our formulation allows cooperative as well as competitive agents. Approach [3] has already been cited in the paper.\\n\\nApproach [4] has been developed for two agents whereas our framework supports more than two agents as well.\\n\\nWe will only add experiments with PR2 because approaches proposed in [2-4] are either not suitable for competitive scenarios or do not support an arbitrary number of agents.\\n\\nWhen we say our proposed framework presents a unified view, what we mean is that many different algorithms (Q-learning based, policy gradient based and actor-critic based) can be derived using our framework.\\n\\nThank you! \\n\\n[1] Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning. Y Wen, Y Yang, R Luo, J Wang, W Pan. ICLR 2019.\\n\\n[2] A Regularized Opponent Model with Maximum Entropy Objective. Z Tian, Y Wen, Z Gong, F Punakkath, S Zou, J Wang. IJCAI 2019.\\n\\n[3] Multiagent soft q-learning. Wei et al. AAAI Spring Symposium Series 2018.\\n\\n[4] Balancing two-player stochastic games with soft q-learning. J Grau-Moya, F Leibfried, H Bou-Ammar. AAAI 2018.\", \"title\": \"Thanks for pointing out interesting related works\"}",
"{\"comment\": \"Hello:\\n\\nThanks for presenting this work.\\n\\nHowever, we have serious concerns about the novelty of this work. The effort of the mapping the multi-agent learning question into the probabilistic inference on the graphical model, i.e. the multi-agent soft learning, has actually been done by multiple previous work, however, the author has cited none of them.\\n\\n1. Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning\\n Y Wen, Y Yang, R Luo, J Wang, W Pan\\n ICLR 2019\\n\\n2. A Regularized Opponent Model with Maximum Entropy Objective\\n Z Tian, Y Wen, Z Gong, F Punakkath, S Zou, J Wang\\n IJCAI 2019\\n\\n3. Wei, Ermo, et al. \\\"Multiagent soft q-learning.\\\" 2018 AAAI Spring Symposium Series. 2018.\\n\\n4. Balancing two-player stochastic games with soft q-learning\\n J Grau-Moya, F Leibfried, H Bou-Ammar\\n AAAI 2018\\t\\n\\nMore importantly, the author claim this work to be \\\"a unified view\\\", however, it turns out if you define your optimality variable in the graphical model solely based on mapping the single-agent case to the multiagent case, i.e, P(o=1 | s1, a1, a2) \\\\propotional exp(R(s,a1,a2)), then this framework could NOT even solve the simple zero-sum setting in multi-agent learning. \\n\\n\\nEND.\", \"title\": \"multi-agent soft-actor-critic has been developed by multiple previous work, do PLEASE cite them\"}"
]
} |
HJx-akSKPS | Neural Subgraph Isomorphism Counting | [
"Xin Liu",
"Haojie Pan",
"Mutian He",
"Yangqiu Song",
"Xin Jiang"
] | In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms. Although the learning-based approach is inexact, we are able to generalize to count large patterns and data graphs in polynomial time, compared to the exponential time of the original NP-complete problem. Different from other traditional graph learning problems such as node classification and link prediction, subgraph isomorphism counting requires more global inference to oversee the whole graph. To tackle this problem, we propose a dynamic intermedium attention memory network (DIAMNet) which augments different representation learning architectures and iteratively attends pattern and target data graphs to memorize different subgraph isomorphisms for the global counting. We develop both a small-graph dataset (<= 1,024 subgraph isomorphisms per graph) and a large-graph dataset (<= 4,096 subgraph isomorphisms per graph) to evaluate different models. Experimental results show that learning-based subgraph isomorphism counting can help reduce the time complexity with acceptable accuracy. Our DIAMNet can further improve existing representation learning models for this more global problem. | [
"subgraph isomorphism",
"graph neural networks"
] | Reject | https://openreview.net/pdf?id=HJx-akSKPS | https://openreview.net/forum?id=HJx-akSKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0HpAtHFe03",
"Syec_BjYsr",
"r1xZ5ViKiB",
"SJgrUVjFjS",
"HJe_WNjFiS",
"HJg6JmstoH",
"B1lIjMJPjr",
"Hkeg_NnxoB",
"SylbC532YB",
"r1x1gwAcYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737419,
1573660018409,
1573659784769,
1573659724737,
1573659648007,
1573659365157,
1573479070256,
1573074023767,
1571764937340,
1571641062579
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1979/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1979/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1979/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1979/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1979/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1979/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1979/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1979/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1979/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method called Dynamic Intermedium Attention Memory Network (DIAMNet) to learn the subgraph isomorphism counting for a given pattern graph P and target graph G. However, the reviewers think the experimental comparisons are insufficient. Furthermore, the evaluation is only for synthetic dataset for which generating process is designed by the authors. If possible, evaluation on benchmark graph datasets would be convincing though creating the ground truth might be difficult for larger graphs.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thanks for your questions.\\n\\n1) Please refer to the general response of \\u201cwhy counting\\u201d.\\n\\n2) The reason we didn't compare with TurboISO and VF3 is that our graphs are generated by Algorithm 2, where the idea comes from TurboISO and VF3. When generating a graph, we do not need to run traditional algorithms to get the count but to add pattern isomorphisms. Random edges are added following the rule that breaks necessary conditions (Line 20 in Alg 2). These necessary conditions are used in TurboISO and VF3 to find candidate subregions. If we use TurboISO and VF3 as baseline algorithms, we believe the two methods will terminate in a short time. VF2 is considered one of the most representative algorithms and we use it to demonstrate the time of the magnitude of traditional methods.\\n\\nAs for other approximation methods, [Q1] is designed for graph isomorphism rather than subgraph isomorphism; [Q2] is still not suitable due to the exponential space requirement (19 vertices requires 1.2 Mbyte of disk space, shown Page 23). [Q2] only compares their method with Ullman\\u2019s algorithm on graphs with 19 vertices and achieves 16 times speedup. However, VF2 is 1,000 times faster than Ullman\\u2019s algorithm when |V| > 200. We do not think [Q2] can be applied to our two datasets. Isomorphism and subgraph isomorphism problem cannot be solved by sampling as [Q3], and that\\u2019s the reason why RGCN is worse than RGCN-SUM. \\n\\n[Q1] A Neural Graph Isomorphism Algorithm Based on Local Invariants, ESANN'2003\\n[Q2] Subgraph Isomorphism in Polynomial Time\\n[Q3] FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling\\n\\n3) The output of DIAMNet is the memory itself, where it has M blocks and each block is a d-dimensional vector shown in Section 4.3. \\n\\n4) Graphlets are small connected non-isomorphic induced subgraphs (usually 3-5 nodes) of a large network. We want to use neural models to approximately solve a general pattern counting problem. The pattern can be sparse or dense, homogeneous or heterogeneous. As Table 4 shows, we have many diverse structures of both patterns and graphs. This generalization requires a much more powerful ability of inference.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thanks for your questions.\\n\\n1) Thanks for correcting the Zero baseline. Corrected precision, recall, and F1 scores have been updated.\\n\\n2) The constant prediction, e.g., the average count of training data, is also added in the latest version as well as follows. \\n\\nSmall\\n | RMSE | MAE | F1_0 | F1_nonzero\\nZero | 67.195 | 13.716 | 0.761 | 0.0\\nAvg | 65.780 | 21.986 | 0.0 | 0.557\\n\\nLarge\\n | RMSE | MAE | F1_0 | F1_nonzero\\nZero | 237.904 | 35.445 | 0.769 | 0.0\\nAvg | 235.253 | 60.260 | 0.0 | 0.545\\n\\nThe average of training data is very close to that of test data. But using the average of training data is fairer than using the average of test data. This baseline (Avg) is worse than Zero. \\n\\n3) \\u201c75% zero counting graphs\\u201d. This setting is designed in purpose to evaluate neural models. As traditional algorithms do not have problems when there is no subgraph isomorphism detected in a data graph, neural models would fit the training data. However, in practice, there will be a lot of applications that zero counting exists for most of the cases. Therefore, we also add a lot of zero counting data. In addition, we can also get some sense of the performance when evaluating the F1_0 and F1_nonzero to compare different models, although we are not building a binary classifier. It may be possible to set different percentages of zero counting data. However, the results will be similar in terms of relative performance. We also agree that current MSE/MAE values might be underestimated compared with when all the test points have non-zero countings, and this also means that we are challenging ourselves and the community with a more difficult problem.\\n\\n4) Because this problem is NP-complete, there is no suitable dataset for both traditional algorithms and neural algorithms. Traditional algorithms are hard to scale to large graphs while neural models require plenty of data. But we agree with you that benchmark datasets are more convincing. We will release our generation code and learning code as one benchmark for future research.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thanks for your questions.\\n\\nQ.1 \\nQ.1.1. Besides what we explained in the \\u201cgeneral response of why counting\\u201d, we would like to emphasize that simply using a binary classifier for subgraph isomorphism and graph isomorphism would be less useful than counting in knowledge discovery and \\\"how many\\\" based KBQA, although the same representation and ways of representation learning could be applied. \\n\\nQ.1.2. \\u201cNote that there is some existing research on GNNs targeted 'graph matching' and 'graph similarity'.\\u201d\\nYes, we have cited some of these existing works in the related work section. However, for subgraph isomorphism counting, we need different types of graph encoding, which is shown in sections 4.1.1 and 4.2.1.\\n\\nQ.1.3. \\u201cthe used datasets intentionally restrict the possible values for the number of subgraph isomorphisms, but the counts would be exponentially large if we consider practical (dense) graphs.\\u201d\\nIt is true for homogeneous patterns querying dense graphs, but in practice, heterogeneous patterns with node and edge types are more useful, e.g., to query knowledge graphs with node and edge types. \\n\\n2) The objectives based on (R)MSE or (R)MAE correspond to a regression loss. Although the labels seem to be skewed, given the power of deep representation learning, it will be able to map the graph and pattern pair in a semantic space that can better perform regression. Log errors are not good because final models cannot handle complex cases (whose counts are large). Errors between log predictions and log counts are also not suitable because predictions at the early training steps can be negative. If we simply use ReLU (in prediction) when computing losses, models are easy to get stuck in a local optimum to predict zero all the time. We have tried all the above options but training processes did not even converge. \\n\\nWe constrain counts for the computational time of traditional algorithms and the interpretability of errors. Traditional algorithms will spend much more time on complex graphs. Errors are easy to be disturbed by those cases. In our datasets, we limit the count <= 1024 when |E| <= 256 and the count <= 4096 when |E| <= 2048. \\n\\n3) We use Zero as one of our baselines because it is a local optimum. We have added the average count of training data as our baseline in the latest submission as well as follows. The average count is worse than Zero prediction.\\n\\nSmall\\n | RMSE | MAE | F1_0 | F1_nonzero\\nZero | 67.195 | 13.716 | 0.761 | 0.0\\nAvg | 65.780 | 21.986 | 0.0 | 0.557\\n\\nLarge\\n | RMSE | MAE | F1_0 | F1_nonzero\\nZero | 237.904 | 35.445 | 0.769 | 0.0\\nAvg | 235.253 | 60.260 | 0.0 | 0.545\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for your suggestions.\\n\\n1) The hardware information and software information has been added to the latest version. Training and evaluating were finished on one single NVIDIA GTX 1080 Ti GPU under the PyTorch framework.\\n\\n2) When the edge size of a graph increases to 256, it is already hard for neural models to do self-attention for graphs. Transformer-XL [1] is proposed to solve the computational cost problem. Generally, a 6-layer Transformer-XL should be better than a 3-layer GRU, but results in Table 2 and Table 3 show that Transformer is worse instead. Subgraph isomorphism counting requires the whole pattern information and the whole graph information. \\n\\nWe can try to implement a model with self-attention and source attention, but it can be only trained in rather small batch sizes and applied to toy data. We think this model cannot be helpful to solve the subgraph isomorphism counting problem.\\n\\n[1] Z. Dai, Z. Yang, Y. Yang, J. G. Carbonell, Q. V. Le, and R. Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. In ACL, pp. 2978\\u20132988, 2019.\\n\\n3) We have provided 24 more figures and further discussions in Appendix F. Those figures can provide more information about the behaviors of different models with different data.\"}",
"{\"title\": \"General response of \\u201cwhy counting\\u201d\", \"comment\": \"Although solving subgraph matching/enumeration can solve subgraph counting but not the other way round, counting itself is still very useful. Counting the number of isomorphic copies has been proven to be useful for bioinformatics [1], [2], chemoinformatics [3], and online social network analysis [4]. Especially, when counting a new structure that a professional may query (e.g., a gene structure, a protein structure, or a social network structure), a first step may be a rough estimation instead of exact finding. Then a fast algorithm to estimate may save a lot of time for such kind of knowledge discovery. Our counting task is also related to graphlet counting in database and data mining fields. However, our framework can count patterns with much more heterogeneous nodes and edges rather than 3-5 nodes in graphlets.\\n\\nCounting is also an important task especially for knowledge-based question answering (KBQA). More importantly, nowadays, most modern knowledge graphs are stored in RDF graph databases. The schema of such databases are more complex and counting based on the graph is preferred, for example:\\nEx (used in our introduction): \\u201chow many languages are there in Africa speaking by people living near the banks of the Nile River?\\u201d\\nAfter semantic parsing, such questions should be mapped to be a subgraph counting problem, where the subgraphs should follow some types of nodes and relations. Therefore, automatically counting can solve a particular KBQA problem in the future. However, to our best knowledge, we haven\\u2019t found any existing large-scale KBQA dataset that is specifically developed for the subgraph counting problem. This is why we developed our training and test datasets, which can serve as a pre-training step for higher-order \\u201chow many\\u201d KBQA problems.\\n\\n[1] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, \\u201cNetwork motifs: Simple building blocks of complex networks,\\u201d Science, vol. 298, no. 5594, pp. 824\\u2013827, 2002.\\n[2] N. Alon, P. Dao, I. Hajirasouliha, F. Hormozdiari, and S. C. Sahinalp, Biomolecular network motif counting and discovery by color coding, Bioinformatics, vol. 24, no. 13, pp. i241\\u2013i249, 2008.\\n[3] J. Huan, W. Wang, and J. Prins, Efficient mining of frequent subgraphs in the presence of isomorphism, ICDM, 2003, p. 549.\\n[4] M. Kuramochi and G. Karypis, Frequent subgraph discovery, ICDM, 2001, pp. 313\\u2013320.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a dynamic inter-medium attention memory network and model the sub-graph isomorphism counting problem as a learning problem with both polynomial training and prediction time complexities.\\nSince the testing time is reported in this paper, and the time complexity is one of the main contribution of this paper. The hardware and software used to run the algorithm should be reported in the main article.\\n\\nThe author argues that if we use neural networks to learn distributed representations for V_G and V_p or \\\\xi_G and \\\\xi_P without self-attention, the computational cost will acceptable for large graphs, but the missing of self-attention will hurt the performance. It\\u2019s encouraged to do corresponding experiments to compare it with the proposed method and better support the algorithm.\\n\\nOne of the main advantages of this paper is that the proposed method can efficiently deal with large graph tasks, so the model behaviors of different models in large dataset similar to Figure 5 is encouraged to be given.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studied how to leverage the power of graph neural networks for counting subgraph isomorphism. The motivation is that the current subgraph isomorphism detection is NP-complete problem and a proposed approach based on GNN could approximately solve the counting problem in polynomial time. Then they relaxed original subgraph isomorphism (which is equivalent to the exact subgraph matching problem) and proposed the problem of doing subgraph isomorphism counting task. The GNN and sequence modeling methods are discussed for solving this problem. The experimental results confirmed the effectiveness of these methods.\\n\\nAlthough I found the subgraph isomorphism counting problem is an interesting problem, I did not know how much practical usefulness of this task. More practical use case would be search for the matched subgraphs given the sub-graph query using subgraph isomorphism detection. \\n\\nAlso, although authors mentioned some approximation systems/methods in graph database community such as TurboISO (Han et al., 2013), VF3 (Carletti et al., 2018), and other approximation techniques [1][2], authors did not consider them as baselines to compare. These methods may also have limitations to deal with real-large graph but for the graph size that this paper studied I think they are fine to deal with. A parallel issue is that GNN also has scalability issues as well when dealing with large graphs [3]. Without comparing these existing fast (approximation) methods, it is really unfair to compare with only non-DL baseline VF2, which seems served as ground-truth as well. \\n\\n[1] A Neural Graph Isomorphism Algorithm Based on Local Invariants, ESANN'2003\\n[2] Subgraph Isomorphism in Polynomial Time\\n[3] FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling\\n\\nIn terms of technical contributions, they leverage some existing sequence models (CNN, RNN and so on) and graph models (RGNN) and the whole framework is similar to doing a graph matching networks (without considering node alignment) for a regression task. The DYNAMIC INTERMEDIUM ATTENTION MEMORY NETWORK is interesting yet simple. I am not entirely clear what's the output of this interactional module. The figure 4 shows the overall architecture of subgraph isomorphism counting model, which needs better descriptions to understand exact input and output for each module. In general, the novelty of this part is incremental. \\n\\nFinally, this subgraph isomorphism counting problem is closely related to graphlet counting problem. In the paper, the subgraph pattern considered seems like almost identical to graphlets the previous research extensively studied. I did not see any discussion about the connection of these two tasks either.\", \"minor_comments\": \"|V_G| is is the number of pattern nodes -> |V_p| is is the number of pattern nodes\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a method called Dynamic Intermedium Attention Memory Network (DIAMNet) to learn the subgraph isomorphism counting for a given pattern graph P and target graph G. This requires global information unlike usual GNN cases such as node classification, link prediction, community detection. First, input graphs P and G are converted embedding vectors through sequence models (CNN, RNN, Transformer-XL) or graph models (RGCN), and fed into their DIAMNet that uses an external memory as an intermedium to attend both the pattern and the graph. The external memory is updated based on multi-head attention as in Transformer. The output of DIAMNet is passed to FC that outputs 'count' directly. The training is based on minimizing MSE loss as a regression problem. Extensive experimental evaluations report that DIAMNet showed superior performance over competing methods and baselines.\\n\\nThis paper targets subgraph isomorphism counting as a learning problem for the first time I guess, and the proposed method combined with both graph- and sequence-based encoding is technically interesting. However, there are still two major issues of 1) why counting? 2) the RMSE loss for regression on counts 3) baseline of 'Zero'.\\n\\n1) the most unclear point is 'why counting?'. If I understand it, this method can be applied to subgraph isomorphism (NP-hard) or graph isomorphism (unknown complexity) as binary classification, and experimental evaluations can use the datasets used in evaluating VF2 or Naughty. It would be better to start this fundamental problem that would have many clear applications. Compared to subgraph isomorphism or graph isomorphism, the need for knowing accurate 'counts' of subgraph isomorphisms is unconvincing (given that we cannot explicitly obtains all subgraph matchings). Note that there is some existing research on GNNs targeted 'graph matching' and 'graph similarity'. \\n\\nAlso, the used datasets intentionally restrict the possible values for the number of subgraph isomorphisms, but the counts would be exponentially large if we consider practical (dense) graphs. \\n\\n2) the method fits the model using (R)MSE loss, but minimizing log errors ((R)MSLE) would be better considering distributions of response values (counts) of the used datasets in Figure 6. Fitting the MSE loss is not good for such highly skewed cases, and for example, might focus only on the few instances having very large count values. Or, if such instances are very small, training ignores all such extreme instances. Either way would be questionable when we consider learning 'subgraph isomorphism counting' in general. \\n\\nAlso, the error of counts by MSE or MAE would be less informative and it would be unclear how much errors are tolerant in practical use cases of this method. \\n\\n3) To interpret the RMSE and MAE values, Table 2 has the value for 'Zero'. This is for a constant predictor always returning zeros for any inputs. However, given that the loss is MSE, constant prediction values should be the average counts in the training data, not zero.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposed NN based subgraph counting. By using synthetically generated graphs, NN learns the number of occurences of a given queried graph called 'pattern'. The author proposes a specific architecture for learning the count based on the multi-head attention method. The authors empirically evaluated the performance on the synthetic dataset.\\n\\nThe problem setting would be interesting. Applying NN to counting a subgraph is novel as far as I know. My current concerns are mainly on the appropriateness of the experimental evaluation. \\n\\nIn the tables, the trivial baseline 'Zero' is shown as F1_zero = 0, but is this correct? I think this should be non-zero. If zero is 'positive' in F1_zero, recall is 1 and precision is 0.75 (because the author set 75% of data as zero). F-score is harmonic mean of them, which is 0.86.\\n\\nRMSE and MAE of the Zero prediction is shown, but the more standard baseline of the error would be a constant prediction (e.g., the average of test points is often used, which can evaluate how much variance can be explained by the model).\\n\\nWhy were 75 percent of countings set as 0 in the evaluation dataset? This rate is seemingly a bit large for the evaluation purpose. I guess that when this percentage is much more smaller, MSE would increase. In other words, current MSE/MAE values might be underestiamted compared with when all the test points have non-zero countings.\\n\\nThe evaluation is only for synthetic dataset for which generating process is designed by the authors. If possible, evaluation on benchmark graph datasets would be convincing though creating the ground truth might be difficult for larger graphs.\", \"minor_comment\": \"At the third line of Sec 3.2: '|V_G| is the number of pattern nodes' should be |V_P|.\"}"
]
} |
rkg-TJBFPB | RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments | [
"Roberta Raileanu",
"Tim Rocktäschel"
] | Exploration in sparse reward environments remains one of the key challenges of model-free reinforcement learning. Instead of solely relying on extrinsic rewards provided by the environment, many state-of-the-art methods use intrinsic rewards to encourage exploration. However, we show that existing methods fall short in procedurally-generated environments where an agent is unlikely to visit a state more than once. We propose a novel type of intrinsic reward which encourages the agent to take actions that lead to significant changes in its learned state representation. We evaluate our method on multiple challenging procedurally-generated tasks in MiniGrid, as well as on tasks with high-dimensional observations used in prior work. Our experiments demonstrate that this approach is more sample efficient than existing exploration methods, particularly for procedurally-generated MiniGrid environments. Furthermore, we analyze the learned behavior as well as the intrinsic reward received by our agent. In contrast to previous approaches, our intrinsic reward does not diminish during the course of training and it rewards the agent substantially more for interacting with objects that it can control. | [
"reinforcement learning",
"exploration",
"curiosity"
] | Accept (Poster) | https://openreview.net/pdf?id=rkg-TJBFPB | https://openreview.net/forum?id=rkg-TJBFPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"v32WGpka0",
"SkxCUr2QsH",
"Bkg-fH3QiH",
"rkgST4hQsS",
"SyxiSE2XiB",
"S1xw9yhZcH",
"HkxHBiPAKr",
"r1l45lqTKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737390,
1573270869644,
1573270793430,
1573270716552,
1573270594982,
1572089742766,
1571875645370,
1571819659523
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1978/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1978/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1978/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1978/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1978/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1978/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1978/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper tackles the problem of exploration in deep reinforcement learning in procedurally-generated environments, where the same state is rarely encountered twice. The authors show that existing methods do not perform well in these settings and propose an approach based on intrinsic reward bonus to address this problem. More specifically, they combine two existing ideas for training RL policies: 1) using implicit reward based on latent state representations (Pathak et al. 2017) and 2) using implicit rewards based on difference between subsequent states (Marino et al. 2019).\\n\\nMost concerns of the reviewers have been addressed in the rebuttals. Given that it builds so closely on existing ideas, the main weakness of this work seems to be the novelty. The strength of this paper resides in the extensive experiments and analysis that highlight the shortcomings of current techniques and provide insight into the behaviour of trained agents, in addition to proposing a strategy which improves upon existing methods.\\n\\nThe reviewers all agree that the paper should be accepted. I therefore recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for the detailed and thoughtful comments. We appreciate they consider our work to be a \\u201cworthwhile contribution\\u201d, our experimental section \\u201cvery thorough\\u201d and our visualizations \\u201cinsightful\\u201d.\\n\\n\\n\\u201cThe motivation for augmenting the RIDE reward with an episodic count term is that the IDE loss alone would cause an agent to loop between two maximally different states.\\nIt would be interesting to know whether this suspected behavior actually occurs in practice, and how much the episodic count term changes this behavior.\\u201d\\n\\nWe thank the reviewer for suggesting to investigate this question in more detail. We carried out additional analyses and updated the draft. We have found that this behavior does occur in practice and can be observed by visualizing the agents\\u2019 trajectories. After training on the MultiRoom-N12-S10 task, the NoEpisodicCounts ablation visits two of the states a large number of times going back and forth between them, while RIDE visits each state once on its path to the goal. \\n \\nFigure 10 in the Appendix further supports this claim by showing the number of different states the agent visits within an episode. While the NoEpisodicCounts ablation always visits a low number of different states (~< 10) each episode, RIDE visits an increasing number of states throughout training (converging to ~100 for an optimal policy). From this, we can infer that NoEpisodicCounts revisits some of the states. \\n\\n\\n\\u201cIt is surprising that in the ablation in section A.5, removing the state count term does not lead to the expected behavior of looping between two states, but instead the agent converges to the same behavior as without the state count term.\\u201d\\n\\nThe agent is also encouraged to explore different (state, action) pairs via the entropy regularization term in the IMPALA loss, which can help avoid local optimum in certain cases. During training, the NoEpisodicCounts agent exhibits the behavior of looping between two states, but due to entropy regularization, it can get unstuck once it finds some extrinsic reward. This can explain why the NoEpisodicCounts ablation takes longer to converge than RIDE, which is less prone to this cycling behavior. Note that in the more challenging MultiRoom-N12-S10 environment, NoEpisodicCounts does not learn a useful policy, likely because the extrinsic reward is too sparse, so the agent remains stuck in a cycle. \\n\\nTo further support the above hypothesis, we have added experiments with the NoEpisodicCounts model without entropy regularization. The results show that without the entropy loss term, NoEpisodicCounts is more likely to completely fail or converge to a suboptimal policy. \\n\\n\\n\\u201cAlso, in Figure 9, was the OnlyEpisodicCounts ablation model subjected to the same grid search described in A.2, or was it trained with the same intrinsic reward coefficient as the other models?\\u201d\\n\\nWe ran the same grid search for the OnlyEpisodicCounts ablation.\\n\\n\\n\\u201cBased on the values in Table 4, it seems like replacing the L2 term with 1 without changing the reward coefficient would multiply the intrinsic reward by a large value.\\u201d\\n\\nWe are unsure of what you mean by \\u201creplacing the L2 term with 1\\u201d. Could you kindly clarify the question?\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for the positive feedback and are happy to hear they found our work \\u201cimportantly novel and valuable\\u201d, containing \\u201cdetailed experiments\\u201d and a \\u201cthorough discussion of how the technique addresses shortcomings of past methods\\u201d.\\n\\n\\n\\u201cIn partially observable environments that require agents to wait for something, should a RIDE-motivated agent consider changes in its own internal clocks (part of the recurrent state) impactful moves?...\\u201d\\n\\nThese questions open up exciting avenues for future work. We have recently began to explore similar ideas in which the embeddings are learned using recurrent networks (instead of feed-forward ones), but we do not have conclusive answers to the above questions yet. We believe these would be better addressed as a separate contribution.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for their time and feedback.\\n\\n\\u201cIn reinforcement learning, the agent should explore the experiment due to uncertainty. If everything in the environment is certain to the agent, then it does not have to explore and just exploiting the past experience would be the best. My major concern about the paper is 'impact-driven' reward bonus may not account for the uncertainty. \\u201d\\n\\nWhile we agree that uncertainty estimation has been useful for developing exploration methods in the past, one of our main findings is that such methods can in fact be ineffective in certain settings. It seems that existing methods estimate the uncertainty poorly in sparse-reward partially-observed procedurally-generated environments. In such environments, the dynamics can be learned early in training without being helpful in guiding exploration towards extrinsic rewards in the environment. For example, in Fig 7 we demonstrate that the intrinsic reward of the ICM, RND, and Count methods diminishes very fast during training, suggesting that the agent has a good model of the transition dynamics (ICM) or has seen similar states before (RND, Count) so its uncertainty about the world is low, yet it fails to solve the task because it hasn\\u2019t found extrinsic reward. We believe the MiniGrid environments used in our work present a more challenging and realistic setting than previously used environments that are fully observable or do not change across episodes.\\n\\n\\n\\u201cConstantly encouraging the states that have a high impact would not always good, and it may interfere to converge to an optimal policy...\\u201d\\n\\u201cIt seems that RIDE assumes that 'high-impact' states are always good, thus rewarded...\\u201d\\n\\nThank you for your question. While we agree that there exist settings in which certain \\u201chigh-impact\\u201d actions may not help the agent to solve a task, we believe there are a few ways in which this issue is already addressed in our current formulation. First, the agent also learns from extrinsic reward, so if that action is negatively correlated with the extrinsic reward, the agent can learn, in principle, to avoid that action. Second, the agent also explores via entropy regularization, which can help to avoid getting stuck in a local optimum. For example, the MultiRoom-NoisyTV environment contains a high-impact action that is not useful for solving the task and it isn\\u2019t penalized by negative extrinsic reward. Even in this more challenging setting, RIDE learns an optimal policy.\\n\\n\\n\\u201cSimilarly, in the problems where high-impact states have to be avoided, can RIDE still work effectively? For example, how about 'Dynamic-Obstacles' domains implemented in MiniGrid?...\\u201d\\n\\nWe have updated the paper with experiments on Dynamic-Obstacles. RIDE learns to avoid the obstacles and reach the goal. \\n\\n\\n\\u201cIn MiniGrid problems, if the colors of walls and the goal are changed at every episode, does RIDE work well?\\u201d\\n\\nWe added experiments for answering this question in the revised draft. RIDE learns to solve this task and can even generalize to unseen colors at test time without any further fine-tuning. \\n\\n\\n\\u201cIn Figure 4, why the intrinsic reward heatmaps are drawn only on the straight paths?\\u201d\\n\\nFigure 4 shows the trajectories of fully-trained models on MultiRoom-N7-S4. 
On this task, all agents learn optimal policies, so their behavior follows a shortest path to the goal.\"}",
"{\"title\": \"Paper Update\", \"comment\": \"We thank all the reviewers for their constructive feedback.\", \"we_have_updated_the_paper_with_the_following\": \"1. Experiments on 4 settings with varying degrees of difficulty in the Dynamic-Obstacles environment (see Appendix A.7 and Figure 14).\\n 2. Experiments on a modified version of MultiRoom-N7-S4 in which the colors of the walls and the goal change at every episode. We also evaluate the models on a held-out set of colors (see Appendix A.8, Figure 15, and Table 5). \\n 3. Extra qualitative and quantitative analysis on the effects of augmenting the intrinsic reward with the episodic count term, comparing RIDE with the NoEpisodicCounts ablation (see Appendix A.5 and Figure 10).\\n 4. Experiments with NoEpisodicCounts without entropy regularization to better understand the effect of the entropy loss term on avoiding local optima (see Appendix A.5 and Figure 9).\\n\\nWe also made minor corrections to the text taking into account reviewers\\u2019 suggestions.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary\\nThis paper proposes a Rewarding Impact-Driven Exploration (RIDE), which is an intrinsic exploration bonus for procedurally-generated environments. RIDE is built upon the ICM architecture (Pathak et al. 2017), which learns a state feature representation by minimizing the L2 distance between the actual next state feature and the predicted next state feature while minimizing the cross-entropy loss between the true action and the estimated action from the consecutive state features. Finally, RIDE's intrinsic reward bonus is computed by L2 norm of the difference between the current state feature and the next state feature, divided by the square root of the visitation count of the next state within the episode. Experimental results show that RIDE outperforms the existing exploration methods in the procedurally-generated environments (MiniGrd), and is competitive in singleton environments.\", \"comments_and_questions\": [\"In reinforcement learning, the agent should explore the experiment due to uncertainty. If everything in the environment is certain to the agent, then it does not have to explore and just exploiting the past experience would be the best. My major concern about the paper is 'impact-driven' reward bonus may not account for the uncertainty. Constantly encouraging the states that have a high impact would not always good, and it may interfere to converge to an optimal policy.\", \"It seems that RIDE assumes that 'high-impact' states are always good, thus rewarded. It could be true on the conducted MiniGrid domains, but this assumption may not hold in general. Could 'impact-driven' exploration be realistic and be applied to more general problems?\", \"Similarly, in the problems where high-impact states have to be avoided, can RIDE still work effectively? For example, how about 'Dynamic-Obstacles' domains implemented in MiniGrid? In this task, RIDE may promote to chase obstacles that have to be avoided, interfering with learning optimal policy. It would be great to show the effectiveness of RIDE in such environments.\", \"In MiniGrid problems, if the colors of walls and the goal are changed at every episode, does RIDE work well?\", \"In Figure 4, why the intrinsic reward heatmaps are drawn only on the straight paths?\", \"Minor: In the last sentence of Section 3, \\\"the current state and the next state predicted by the forward model\\\" -> \\\"the actual next state and the next state predicted by the forward model\\\"\", \"---\"], \"after_rebuttal\": \"Thank the authors for clarifying my questions and concerns. Most of my concerns are addressed, and I raise my score accordingly.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new intrinsic reward method for model-free reinforcement learning agents in environments with sparse reward. The method, Impact-Driven Exploration, learns a state representation of the environment separate from the agent to be trained, based on a combied forward and inverse dynamics loss. The agent is then separately trained with a reward encouraging sequences of actions that maximally change the learned state.\\n\\nLike other latent state transition models (Pathak et al. 2017), RIDE learns a state representation based on a combined forward and inverse dynamics loss. However, Pathak et al. rewards the agent for taking actions that lead to large difference between the actual next state and the predicted next state. RIDE instead rewards the agent for taking actions that lead to a large difference between the actual next state and the current state. However, because rewarding one-step state differences may cause an agent to loop between two maximally-different states, the RIDE loss term is augmented with a state visitation count term, which decreases intrinsic reward for a state based on the number of times that state has been visited in the current episode.\\n\\nThe experiments compare RIDE to a selection of other intrinsic reward methods in the MiniGrid, Mario, and VizDoom environments. RIDE provides improved performance on a number of tasks, and solves challenging versions of the MiniGrid tasks that are not solved by other algorithms.\", \"decision\": \"Weak Accept.\\n\\nThe main weakness of the paper seems to be a limitation in novelty.\\nPrevious papers such as (Pathak et al. 2017) have trained RL policies using an implicit reward based on learned latent states. Previous papers such as (Marino et al. 2019) have used difference between subsequent states as an implicit reward for training an RL policy. It is not a large leap to combine these two ideas by training with difference between subsequent learned states. 
However, this paper seems to be the first to do so.\", \"strengths\": \"The experiments section is very thorough, and the visualizations of state counts and intrinsic reward returns are insightful.\\nThe results appear to be state of the art for RL agents on the larger MiniGridWorld tasks.\\nThe paper is clearly-written and easy to follow.\\nThe Mario environment result discussed in section 6.2 is interesting in its own right, and provides some insight into previous work.\\n\\nDespite the limited novelty of the IDE reward term, the experiments and analysis provide insight into the behavior of trained agents and the results seem to improve on existing methods.\\nOverall, the paper seems like a worthwhile contribution.\", \"notes\": \"In section 2 paragraph 4, \\\"sintrinsic\\\" should be \\\"intrinsic\\\".\\nIn section 3, at \\\"minimizes its discounted expected return,\\\" seems like it should be \\\"maximizes\\\".\\nThe explanation of IMPALA (Espeholt et al., 2018) should occur before the references to IMPALA on page 5.\\nLabels for the axes in figures 4 and 6 would be helpful for readability.\\n\\nThe motivation for augmenting the RIDE reward with an episodic count term is that the IDE loss alone would cause an agent to loop between two maximally different states.\\nIt would be interesting to know whether this suspected behavior actually occurs in practice, and how much the episodic count term changes this behavior.\\nIt is surprising that in the ablation in section A.5, removing the state count term does not lead to the expected behavior of looping between two states, but instead the agent converges to the same behavior as without the state count term.\\n\\nAlso, in Figure 9, was the OnlyEpisodicCounts ablation model subjected to the same grid search described in A.2, or was it trained with the same intrinsic reward coefficient as the other models?\\nBased on the values in Table 4, it seems like replacing the L2 term with 1 without changing the reward coefficient would multiply the intrinsic reward by a large value.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper addresses the problem of intrinsically motivating in DRL. In particular, it focuses on exploration of procedurally generated environments where many states are novel compared to training experiences. It offers an intrinsic reward based on large movement in a state embedding space where this state embedding representation is co-trained on the same data already collected for learning. The paper claims to overcome shortcomings of specific past approaches (e.g. count-based / curiosity).\\n\\nThe need for intrinsic motivation in exploration is well motivated, and the approach for training a state embedding is anchored in multiple past works. The use of movement in this state embedding as an intrinsic reward is importantly novel and valuable. The problematic propensity for RL researchers to train on the test environments or design agents that are confused by proverbial noisy TVs and/or sacrifice extrinsic rewards in favor of intrinsic rewards is satisfyingly discussed and addressed through detailed experiments.\\n\\nThis reviewer moves to accept the paper for its contributions to intrinsically motivated exploration with thorough discussion of how the technique addresses shortcomings of past methods. This reviewer is thankful that the authors do not overinterpret the MiniGrid results and that they provide intuition for why the state embedding functions capture what we want them to capture. The fact that this approach makes joint use of the whole (s,a,r,s') tuple feels significant, as does the fact that this approach does not require any changes to the policy network (e.g. presuming that features useful for computing intrinsic rewards are also going to be useful for directly acting to optimize extrinsic rewards).\", \"question\": [\"In partially observable environments that require agents to wait for something, should a RIDE-motivated agent consider changes in its own internal clocks (part of the recurrent state) impactful moves? If an environment might require a recurrent / history-aware action policy, should RIDE also be made history aware? Might a history-aware RIDE reward sufficiently motivate a stateless/reactive policy?\"]}"
]
} |
SJlgTJHKwB | Continual Learning with Delayed Feedback | [
"THEIVENDIRAM PRANAVAN",
"TERENCE SIM"
] | Most artificial neural networks rely on labeled datasets, whereas learning in the human brain is often unsupervised. The feedback or label for a given input or sensory stimulus is often not available instantly. When the brain receives the feedback some time later, it updates its knowledge; that is how the brain learns. Moreover, there is no separate training or testing phase: humans learn continually. This work proposes a model-agnostic continual learning framework which can be used with neural networks as well as decision trees to incorporate continual learning. Specifically, this work investigates how delayed feedback can be handled. In addition, a way to update machine learning models with unlabeled data is proposed. Promising results are obtained from the experiments done on neural networks and decision trees. | [
"feedback",
"continual learning",
"brain",
"work",
"neural networks",
"decision trees",
"artificial neural networks",
"benefit",
"labeled datasets"
] | Reject | https://openreview.net/pdf?id=SJlgTJHKwB | https://openreview.net/forum?id=SJlgTJHKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"WGvdHqD34S",
"BklCS-XNqS",
"SklhMPn0Fr",
"Hyx_QzKsFH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737362,
1572249925824,
1571895060005,
1571684896056
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1977/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1977/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1977/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper claims to present a model-agnostic continual learning framework which uses a queue to work with delayed feedback. All reviewers agree that the paper is difficult to follow. I also have a difficult time reading the paper.\\n\\nIn addition, all reviewers mentioned there is no baseline in the experiments, which makes it difficult to empirically analyze the strengths and weaknesses of the proposed model. R2 and R3 also have some concerns regarding the motivation and claim made in the paper, especially in relation to previous work in this area.\\n\\nThe authors did not respond to any of the concerns raised by the reviewers. It is very clear that the paper is not ready for publication at a venue such as ICLR at the current state, so I recommend rejecting the paper.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"General\\n\\nThe paper is quite hard to follow. The figures are very coarse and the problem definition is not given very well. The paper claims that it is about continual learning, but it does not give ANY experimental results on the continual learning benchmarks. The paper seems to be dealing with an online learning with unlabeled data. \\n\\nCon & Questions:\\n\\n- Fig 3 shows the update rule, but there is no explanation on I_ref or X_1. \\n- The paper says it generates random one-hot vector when queue is full, but what does it have to do with delayed feedback?\\nThe algorithm requires a completely trained model and a queue that needs to store large amount of data. Then, what is the good nature of this method?\\nThere is no baseline in the experimental results. \\nThe T-CNN result on Cifar-10 is too low. This makes the result dubious.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper claims to tackle a semi-supervised continual learning problem where the feedback or the labeled data is delayed and is provided based on the model performance. Authors do not provide standard benchmarks for comparison and no baseline is considered.\\n\\nThe idea of using unlabeled data for continual learning is interesting and to the best of my knowledge this is the first work that suggests using delayed feedback for continual learning but unfortunately they do not consider measuring forgetting and this work seems an online learning method using delayed feedback.\\n\\nI vote for rejecting this paper due to the following reasons. I have listed the issues in chronological order and not their importance.\\n\\n1- I start with the writing. The paper, in its current form, needs to be thoroughly proofread and reorganized. The text does not read well and is vague in most parts (for example section 3 and 4). The text is informal in some parts (ex. in Figure 1). There are also grammar errors and typos for which I have found passing my writing through the free version of Grammarly very helpful in getting rid of most such errors. \\n\\n2- As one of the main motivations for the paper, authors claim humans learn continuously in an unsupervised fashion (paragraph one). I disagree with this statement because we all have been constantly learning from the feedback we have been receiving the environment throughout our lives. For example we all have learned how to walk by falling on the ground multiple times and using the pain signal in our muscles as a negative feedback to correct our movements. Getting corrected while speaking or question answering in our dialogues are examples of receiving feedback from the environment letting our learning behavior receiving lots of supervision.\\n\\n3- The related work section misses significant number of prior work on continual learning (I have provided a short list at the end [4,5,6] but authors are strongly encouraged to read more on this literature). However, my biggest concern is that I this work should not be introduced as a continual learning algorithm. The proposed method is an online learning method with delayed feedback which has been extensively studied before. Authors should consider citing the pioneering work in this field such as Weinberger & Ordentlich (2002) [2] or Joulani et al from ICML 2013 [3]. Providing comparison to [3] is strongly encouraged. Also note that the citation for \\u201ccatastrophic forgetting\\u201d is wrong and should be corrected to McCloskey & Cohen (1989). \\n\\n4- The figures and tables do not meet the conventional scientific standards and have to be significantly improved.\\n\\n5- Authors use softmax probabilities as a confidence score which are known to be uncalibrated by large as deep models are usually overconfident about their predictions. (see [1] for example). Was this investigated at all? Using a calibration technique might be able to help with this [1].\\n\\n6- On page 4, paragraph 5, the authors claim that \\u201cin continual learning instant update is not done\\u201d. 
This is vague to me and I think it is not true because there are plenty of supervised continual learning approaches where the labeled data is available when a task is learned (for example [4,5,6])\\n\\n7- The experimental setting is not well designed and does not use a standard continual learning setting and there is no baseline included which are very important reasons for rejecting this paper. Authors can benefit from applying their method on standard benchmark datasets commonly used in the literature to provide a fair comparison. Most importantly authors should evaluate their method against prior work. \\n\\n8- Exploring continual learning for decision trees is completely vague not justified in the paper.\\n\\n[1] Guo, Chuan, et al. \\\"On calibration of modern neural networks.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\\n[2] Joulani, Pooria, Andras Gyorgy, and Csaba Szepesv\\u00e1ri. \\\"Online learning under delayed feedback.\\\" International Conference on Machine Learning. 2013.\\n[3] Weinberger, Marcelo J., and Erik Ordentlich. \\\"On delayed prediction of individual sequences.\\\" IEEE Transactions on Information Theory 48.7 (2002): 1959-1976.\\n[4] Kirkpatrick, James, et al. \\\"Overcoming catastrophic forgetting in neural networks.\\\" Proceedings of the national academy of sciences 114.13 (2017): 3521-3526.\\n[5]Lopez-Paz, David, and Marc'Aurelio Ranzato. \\\"Gradient episodic memory for continual learning.\\\" Advances in Neural Information Processing Systems. 2017.\\n[6] Serr\\u00e0, J., Sur\\u00eds, D., Miron, M. & Karatzoglou, A.. (2018). Overcoming Catastrophic Forgetting with Hard Attention to the Task. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:4548-4557\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper describes a method that draws inspiration from neuroscience and aims to handle delayed feedback in continual learning (ie. When labels are provided for images after a phase of unsupervised learning on the same classes). It is an interesting idea, and worth exploring.\\n\\nI found the paper quite hard to follow at times, and I suggest the authors go through the paper in detail to address some of the issues with grammar and clarity of explanation.\\n\\nIn addition, I think the paper is lacking some grounding and context in terms of what problem is being solved and what previous work exists.\\nFor example, based on the experimental setup section it seems like the problem being addressed is that of unsupervised learning on CIFAR images followed by supervised learning on images with labels (IE. Delayed feedback) - is this the case? If so, this is much more a semi-supervised learning or fine-tuning problem than a continual learning problem (which typically looks at a single class at a time or some other non-stationary sequence of tasks). Either way, the recent literature in semi-supervised learning and continual learning should be referenced - see the citations below as a few examples, and consider referencing and more closely perusing some of the examples in the cited review paper by Parisi et al (2019).\\n\\nLastly, the experiments show the performance as a function of queue length for different features and for CNNs versus decision trees, but there is no comparison to existing methods and very simple models are used - this means that again, it's difficult to gauge the efficacy of the approach and place this in the context of prior art.\\n\\nUnfortunately, I think the paper in its current state does not meet the bar for ICLR - I suggest the authors consult the vast literature in semi-supervised and continual learning, and try to place their work in this context, along with external comparisons.\\n\\n\\nNguyen, Cuong V., et al. \\\"Variational continual learning.\\\"\\u00a0arXiv preprint arXiv:1710.10628\\u00a0(2017).\\n\\nLopez-Paz, David, and Marc'Aurelio Ranzato. \\\"Gradient episodic memory for continual learning.\\\"\\u00a0Advances in Neural Information Processing Systems. 2017.\\n\\nMiyato, Takeru, et al. \\\"Virtual adversarial training: a regularization method for supervised and semi-supervised learning.\\\"\\u00a0IEEE transactions on pattern analysis and machine intelligence\\u00a041.8 (2018): 1979-1993.\"}"
]
} |
SklgTkBKDr | Neural Non-additive Utility Aggregation | [
"Markus Zopf"
] | Neural architectures for set regression problems aim at learning representations such that good predictions can be made based on the learned representations. This strategy, however, ignores the fact that meaningful intermediate results might be helpful to perform well. We study two new architectures that explicitly model latent intermediate utilities and use non-additive utility aggregation to estimate the set utility based on the latent utilities. We evaluate the new architectures with visual and textual datasets, which have non-additive set utilities due to redundancy and synergy effects. We find that the new architectures perform substantially better in this setup. | [
"new architectures",
"neural",
"utility aggregation neural",
"set regression problems",
"representations",
"good predictions",
"learned representations",
"strategy",
"fact"
] | Reject | https://openreview.net/pdf?id=SklgTkBKDr | https://openreview.net/forum?id=SklgTkBKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9pnD3ESmy",
"ByguiZo2iS",
"Hkex-DF3oS",
"SJgNfb_tor",
"B1eKyZOtoH",
"ryeOhg_YiS",
"BJeididLqS",
"ryxs4ypg9r",
"BJl-jwU6tB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737331,
1573855647737,
1573848823787,
1573646603655,
1573646560934,
1573646511866,
1572404083466,
1572028211438,
1571805080667
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1976/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1976/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1976/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1976/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1976/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1976/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1976/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1976/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents two new architectures that model latent intermediate utilities and use non-additive utility aggregation to estimate the set utility based on the computed latent utilities. These two extensions are easy to understand and seem like a simple extension to the existing RNN model architectures, so that they can be implemented easily. However, the connection to Choquet integral is not clear and no theory has been provided to make that connection. Hence, it is hard for the reader to understand why the integral is useful here. The reviewers have also raised objection about the evaluation which does not seem to be fair to existing methods. These comments can be incorporated to make the paper more accessible and the results more appreciable.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: baselines\", \"comment\": \"Dear Reviewer 2, thank you very much for your reply, we appreciate the effort.\\n\\nWe agree that it would be great to test our work on the same datasets/tasks previous approaches such as the DeepSets approach have been tested on. Unfortunately, the problems in the DeepSets paper do not have the properties we are mainly interested in, namely problems that have redundancy and synergy effects. The reason for this situation is the generality of DeepSets: DeepSets is a general architecture that models permutation invariant sets of objects for arbitrary problems. We do not claim that our architectures are as general as DeepSets or that our architecture can compete with DeepSets on a wide range of datasets, because, as stated above, we are mainly interested in a specific type of problems that have said redundancy and/or synergy effects. Hence, using our architectures on the datasets used in the DeepSets paper is not what we want to focus on in this work.\\n\\nFurthermore, please note that we did not spend a lot of effort to fine-tune the results of our work. All hyper-parameters such as learning rate, optimizer, early stopping, random weight initialization, etc. are simple default choices that have not been optimized in a sophisticated way, and more importantly, the same for all models and all experiments. We report this information in Section 3.3. We will add more experimental results for different hyper-parameter settings to the appendix to show that our approaches perform consistently better than the reference systems.\"}",
"{\"title\": \"baselines\", \"comment\": \"Standard practice in machine learning is to compare against existing baselines when possible. Even with the best intentions, it is rare for authors to spend nearly as much effort carefully tuning competing methods as they do their own approach, which is why it's important to compare against previously reported performances for competing approaches.\\n\\nFor instance, your work compares itself extensively against DeepSets, and the performance of DeepSets is reported on several tasks in the DeepSets paper. It might make sense to apply your method to those tasks.\"}",
"{\"title\": \"Reply to Official Blind Review #3\", \"comment\": \"Thanks a lot for your review and for pointing us to the reference, we will add and discuss this work in our paper. The referenced paper mimics the Choquet integral to fuse different neural networks such as CaffeNet, GoogLeNet, and ResNet50 that have been pre-trained for classification problems and can be viewed as ensemble method for multiple noisy classifiers. Contrary, we are interested in regression problems that have inherent non-additive effect such as automatic summarization. Furthermore, the referenced paper is much closer to the Choquet as we intent to be. As we describe in the paper, the proposed architectures are only inspired by the Choquet integral. This idea can be found in both of our architectures. In Figure 1c, u_i and in Figure 1d g_i * u_i model these meaningful intermediate values. We do not claim that we obtain any theoretical guarantees or properties of the Choquet integral.\\n\\n\\\"How do you guarantee that the representation learned by the neural network still obeys the property of Choquet integral?\\\"\\nAs described above, the proposed approaches are inspired by the way Choquet integrals handle non-additive utility aggregations. We do not claim that we obtain any theoretical guarantees or properties of the Choquet integral. Furthermore, the main idea of this work is to not learn a representation. Instead, we propose to predict many meaningful intermediate values that can simply be summed to obtain a set utility.\\n\\n\\\"What is your loss or your algorithm?\\\"\\nWe describe in Section 3.3 that we use mean squared error (MSE) and mean absolute error (MAE) in our experiments. We use MSE because it is usually used in regression problems. We were also interested in the mean absolute error because minimizing this loss might be more appropriate in a task such as automatic summarization, in which we don't want to punish a model strong if it makes a few severe mistakes compared to making many small mistakes. We also describe in Section 3.3. that we use Adam as optimizer.\\n\\n\\\"According to the illustration, it seems that you first obtain \\u201cfeatures/representations\\u201d. Then the representations are fed to the four architectures you listed in figure one.\\\"\\nThis is correct.\\n\\n\\\"RNN-based approaches are with better \\u201ccomplexity\\u201d comparing to your sum baseline and \\u201cDeepset\\u201d approach.\\\"\\nWe also compare against an RNN-based approach (abbreviated with \\\"RNN\\\" in the paper). The RCN approach is the smallest modification one can make to implement our idea into a standard RNN. Hence, we think that the comparison is fair and meaningful. Furthermore, we demonstrate in the extrapolation experiments that standard RNNs tend to overfit. The simple sum baselines and deepsets perform better in this experiments. Hence, a \\\"better\\\" complexity turns out to be prone to overfitting, which shows that larger models are not necessarily better.\"}",
"{\"title\": \"Reply to Official Blind Review #2\", \"comment\": \"Thanks a lot for your review. We address your remarks below:\\n\\n\\\"RNN with an accumulator / too minor a contribution \\\"\", \"we_want_to_emphasis_that_the_accumulator_implemented_in_the_newly_proposed_architectures_has_an_inherently_different_nature_than_accumulators_used_so_far\": \"While accumulators such as LSTM cells accumulate knowledge about the state of a sequence, our architectures produce meaningful intermediate results, which can simply summed up to estimate the final set utility. Producing such intermediate results, which model the nature of the problem much better than previous approaches, is the key idea presented in this paper and a major benefit of the proposed architectures. This also follows the idea of the Choquet integral.\\n\\n\\\"Additionally, I believe the experimental tasks are new, and as a result all implementations of competing techniques are by the paper authors. This makes it difficult to have confidence in the higher reported performance of the proposed techniques.\\\"\\nTo improve reproducibility, we published the data and the code. We implemented in our work the most basic version of our idea as well as the most basic version of each reference model. Hence, the code of the implemented architectures only consists of few lines and can be checked easily. We think that it is not fair to simply mistrust our results since we made our work fully transparent.\"}",
"{\"title\": \"Reply to Official Blind Review #1\", \"comment\": \"We are happy that you found the paper clearly written and easily understandable. We would like to address your remarks below:\\n\\n\\\"the authors propose two RNN-based models\\\"\\nWhile the proposed architectures have a recurrent structure, they are fundamentally different to RNNs that used to day. RNNs such as basic RNNs, LSTMs, and GRUs learn one representation of the input on which the final prediction (in our case: the set utility) is based on. No meaningful intermediate results are generated. Contrary, the proposed architectures produce meaningful intermediate results in every step and model the task at hand much better. We illustrate this fundamental difference in Figure 1. Our experiments validate that the proposed architectures perform better than standard RNNs.\\n\\n\\\"The proposed models seem very basic [...]\\\"\\nWe focus in this paper on the most basic versions of our idea to communicate our idea as clear as possible. We describe that multiple extensions of the idea can be investigated in future work. However, all further extensions such as more complex memory cells, which can enhance the proposed architectures, would distract from our core idea: to generate and aggregate meaningful intermediate results for set utility estimation in a non-additive way. Hence, we think the simplicity of the presented architectures is a desirable property in this work.\\n\\n\\\"The proposed models [...] do not have much novelty.\\\"\\nWe disagree with this statement. The idea implemented in the proposed architectures has never been proposed before. We discuss [1] in our work, which is the work that is most similar to ours. Even though [1] is the most similar work, it is substantially different from ours since [1] does not use neural networks but only shallow learning. Our paper is the first that demonstrates that the idea in [1] can successfully be used with deep learning. Furthermore, the potential impact of our work is large. The problem setting we address in this paper appears in many situations. For example, a recent work [2] in automatic summarization uses the prior strategy, which we use as reference in our work. This work can potentially be improved by using non-additive utility aggregation as proposed by our work.\\n\\n\\\"The tasks in the experimental study seem overly simple.\\\"\\nThe purpose of the experiments is to demonstrate the differences between the proposed approaches based on non-additive utility aggregation and prior ideas such as deep sets and RNNs. We show that our approaches perform substantially better than the reference approaches in computer vision and a natural language processing problems. The computer vision problem has also been used in well-known recent works [3,4]. Estimating redundancy and synergy effects of multiple sentences for automatic summarization is a well-known, hard, and unsolved problem. Hence, we think the problems used in our experiments are not overly simple but actually very hard.\\n\\n\\\"The authors might want to consider other tasks, for example, Point Cloud Classification in [1].\\\"\\nPoint Cloud Classification is a classification problem. However, we focus on regression problems in our work. More importantly, Point Cloud Classification is not a problem in which synergy or redundancy effects are important. Dealing with these effects is the main motivation of our work. 
Hence, we think that performing experiments on Point Cloud Classification is not appropriate for this work.\\n\\n\\\"For RCN and DCR, how to decide the ordering of phi_i, given that they are the objects of an unordered set?\\\"\\nTraining, validation, and testing examples are generated randomly and do not have a specific order. We feed all input elements, i.e. all phi_is, in the order in which they have been generated during the randomized data generation process. Hence, we use for all models the very same order. No model includes a re-ordering step. We will clarify this in the paper.\\n\\n\\\"It would be helpful it the authors can also provide the number of parameters of the baseline models in Tables 1, 2, and 3.\\\"\\nThank you for the recommendation, we will add the number of model parameters to the paper. The complexity of the models can already be inspected in the published code.\\n\\n[1] Tehrani, Ali Fallah, et al. \\\"Learning monotone nonlinear models using the Choquet integral\\\" Machine Learning 89.1-2 (2012): 183-211.\\n[2] Zhou, Qingyu, et al. \\\"Neural Document Summarization by Jointly Learning to Score and Select Sentences.\\\" Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018.\\n[3] Zaheer, Manzil, et al. \\\"Deep sets.\\\" Advances in neural information processing systems. 2017.\\n[4] Ilse, Maximilian, Jakub Tomczak, and Max Welling. \\\"Attention-based Deep Multiple Instance Learning.\\\" International Conference on Machine Learning. 2018.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors propose two RNN-based models to learn non-additive utility functions for sets of objects. The two architectures are inspired by the discrete Choquet integral. The proposed models are evaluated on visual and textual data against an MLP baseline and deep sets.\\n\\nOverall, the paper is clearly written and easily understandable. However, the novelty of the paper is limited and the empirical support of the proposed models is insufficient. The motivation of using \\\"Choquet integral\\\" seems obscure to me. The author might want to provide a short introduction to Choquet integral and elaborate on the connection with the proposed models. The proposed models seem very basic and do not have much novelty. The tasks in the experimental study seem overly simple. The authors might want to consider other tasks, for example, Point Cloud Classification in [1].\", \"questions\": [\"For RCN and DCR, how to decide the ordering of phi_i, given that they are the objects of an unordered set?\", \"It would be helpful it the authors can also provide the number of parameters of the baseline models in Tables 1, 2, and 3.\", \"To model the interaction among objects in a set, GNN might be a better choice than RNN.\", \"[1] Zaheer, Manzil, et al. \\\"Deep sets.\\\" Advances in neural information processing systems. 2017.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes two new architectures for processing set-structured data: An RNN with an accumulator on its output, and an RNN with gating followed by an accumulator on its output. While sensible, this seems to me to be too minor a contribution to stand alone as a paper.\\n\\nAdditionally, I believe the experimental tasks are new, and as a result all implementations of competing techniques are by the paper authors. This makes it difficult to have confidence in the higher reported performance of the proposed techniques.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies non-additive utility aggregation for sets. The problem is very interesting. Choquet Integral is used to deal with set input. The authors propose two architectures. The two architectures, though not novel enough, are towards representing \\u201cnon-additive utility\\u201d.\\nHowever, the experimental comparison is not fair, the description of the model (e.g. how Choquet is integrated into the model and help to learn \\u201cintermediate meaningful results\\u201d) is not clear, some claims are not true.\\n\\nFirst, the authors claim that they are the first to combine Choquet integral with deep learning. However, there are a few, though not many, works in the literature trying to combine Choquet integral with deep learning. For example, \\u201cFuzzy Choquet Integration of Deep Convolutional Neural Networks for Remote Sensing\\u201d by Derek T. Anderson et al. \\n\\nSecond, the authors claim they are using/motivated by Choquet integral, but do not have any (appendix) sections to explain how this mathematical tool is really integrated into their models. How do you guarantee that the representation learned by the neural network still obeys the property of Choquet integral? What is your loss or your algorithm? These need to be further clarified.\\n\\nThird, the comparison to baseline and \\u201cDeepSet\\u201d is not fair. According to the illustration, it seems that you first obtain \\u201cfeatures/representations\\u201d. Then the representations are fed to the four architectures you listed in figure one. RNN-based approaches are with better \\u201ccomplexity\\u201d comparing to your sum baseline and \\u201cDeepset\\u201d approach. So, I have some doubts about the experimental results.\"}"
]
} |
ByggpyrFPS | Bayesian Variational Autoencoders for Unsupervised Out-of-Distribution Detection | [
"Erik Daxberger",
"José Miguel Hernández-Lobato"
] | Despite their successes, deep neural networks still make unreliable predictions when faced with test data drawn from a distribution different to that of the training data, constituting a major problem for AI safety. While this motivated a recent surge in interest in developing methods to detect such out-of-distribution (OoD) inputs, a robust solution is still lacking. We propose a new probabilistic, unsupervised approach to this problem based on a Bayesian variational autoencoder model, which estimates a full posterior distribution over the decoder parameters using stochastic gradient Markov chain Monte Carlo, instead of fitting a point estimate. We describe how information-theoretic measures based on this posterior can then be used to detect OoD data both in input space as well as in the model’s latent space. The effectiveness of our approach is empirically demonstrated. | [
"variational autoencoders",
"out-of-distribution detection",
"stochastic gradient MCMC"
] | Reject | https://openreview.net/pdf?id=ByggpyrFPS | https://openreview.net/forum?id=ByggpyrFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"RyOltrjCHx",
"ryx7HOcniS",
"rJe_GY83sB",
"HJeMtmghjH",
"S1lvLqFYor",
"SyxKnFtKsB",
"BJlSh3OFiH",
"H1lxgQ5-9B",
"Syg9CadRFr",
"H1gQjvYpFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737297,
1573853243332,
1573837072476,
1573811065788,
1573653070717,
1573652913310,
1573649581154,
1572082408107,
1571880402103,
1571817370616
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1975/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1975/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1975/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1975/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1975/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1975/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1975/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1975/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1975/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper tackles the problem of detection out-of-distribution (OoD) samples. The proposed solution is based on a Bayesian variational autoencoder. The authors show that information-theoretic measures applied on the posterior distribution over the decoder parameters can be used to detect OoD samples. The resulting approach is shown to outperform baselines in experiments conducted on three benchmarks (CIFAR-10 vs SVNH and two based on FashionMNIST).\\n\\nFollowing the rebuttal, major concerns remained regarding the justification of the approach. The reason why relying on active learning principles should allow for OoD detection would need to be clarified, and the use of the effective sample size (ESS) would require a stronger motivation. Overall, although a theoretically-informed OoD strategy is indeed interesting and relevant, reviewers were not convinced by the provided theoretical justifications. I therefore recommend to reject this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you very much for your quick response and for facilitating this discussion!\\n\\nWe see the point you are trying to make.\\nThe thing is that formalizing the OoD detection problem to allow for a principled solution is somewhat difficult.\\n\\nOoD detection is often \\\"formalized\\\" by assuming that we are given a set of samples from some distribution p, and need to decide if a single, previously unseen datum x was sampled from p or from a different distribution q.\\nUnfortunately, this problem is inherently ill-posed: How \\\"different\\\" are p and q? \\nAlso, can we even draw meaningful conclusions about the distribution of x by only observing a single sample?\\nClearly, for arbitrary distributions p and q, there cannot exist a classifier that can perfectly distinguish if x was drawn from p or q (e.g. if their supports overlap).\\nIf we had access to the density of p, then the natural solution would be to classify x based on the probability p(x) (which could be viewed as a principled solution), since OoD data drawn from q should have lower probability under p than the training data drawn from p.\\n\\nIf we do not have access to the density of p, then there exist the following alternatives:\\n1) As you pointed out, we could have access to an infinite number of samples from p, which would allow us to perfectly characterize p (e.g. we could perfectly fit a Gaussian, as you described).\\nIn practice, we will of course never have an infinite amount of data, but let us consider this case for the sake of argument.\\nYou are right that in that case, the information gain would be zero for any point.\\nHowever, in that case, we could simply rely on our characterization of p to detect OoD inputs by looking at p(x).\\nIn particular, if the supports of p and q do not overlap, then OoD data drawn from q will have zero probability under p.\\nIn contrast, as soon as we have a lack of information (i.e., only a finite amount of data), then we will not be able to fit a perfect model for p, in which case the information gain will be a useful measure to tell us which data points are likely to be outliers.\\n\\n2) If we do not have infinite data, then we could try to estimate p(x) based on the given samples of p.\\nHowever, estimating complex high-dimensional probability distributions from samples is an open problem, and as recent research has shown (Nalisnick et al, 2019, Choi et al, 2018), the deep generative models we typically use fail to provide reliable estimates of p(x), which somewhat invalidates this approach for detecting OoD data.\\nThis motivated some recent work (described in our paper) that tries to correct the likelihood estimate p(x); however, these methods would probably not qualify as being principled under your definition.\\nApart from that, there are many other OoD detection methods which do not try to estimate p(x) using a generative model, but instead use other ad-hoc heuristics to tweak a supervised classifier to tackle this problem (which we call supervised/discrmininative OoD detection methods in our paper).\\nMost (perhaps all) of these methods would probably also not qualify as being principled.\\nThis motivates our work of trying to find an alternative, practical measure to detect OoD inputs which does not rely on the likelihood, but is still as principled as possible.\\nAs there seems to be a discrepancy between how principled a method is vs. 
how well it works in practice, we believe we found a good trade-off between the two (e.g., our experiments demonstrate while the log-likelihood score might be more principled, it may perform much worse than our approach).\\n \\nAlso in our view, active learning and OoD detection are very much related, as both problems are concerned with identifying data points which are \\\"different\\\" to all data points we have seen so far (i.e., during training), where this difference can be quantified using information-theoretic measures.\\nThe main difference might be that in active learning, we typically assume that all possible points that we can pick come from the training data distribution, such that the most informative point will help us fit our model; in contrast, in OoD detection, we more generally assume that data might come from a distribution different to the training data distribution, such that the most informative point will likely be OoD. Due to these inherent connections, we argue that principled active learning techniques can also be used for OoD detection\\n \\nIn conclusion, you are right in that our method might not be principled in your sense. However, as most previous work has focused on devising heuristics, mostly with little/no theoretical justification, we view our work as being at least more principled than previous approaches, by having an information-theoretic justification and strong connections to the very much related active learning problem.\\nIn any case, if you think it might be an overstatement, we are happy to remove the predicate \\\"principled\\\" from the paper!\"}",
"{\"title\": \"Paper revision with new experimental results\", \"comment\": \"We uploaded a revised version of our paper with a new set of experiments on a higher-dimensional benchmark, involving the SVHN and CIFAR10 datasets (as requested by reviewer #1); see Appendix A.\\nPlease also note that we had previously uploaded a revision including additional experimental results on out-of-distribution detection in latent space (as requested by all three reviewers); see Section 5.2.\\nWe would again like to thank all reviewers for their helpful suggestions to improve our paper!\"}",
"{\"title\": \"reply\", \"comment\": \"Thanks for the reply.\\n\\nIMHO, principled means that a problem is formally posed and the solution is then derived from it. In that sense, information gain is principled for active learning, but it is not for out of distribution\\u2013at least I don't see the argument yet.\\n\\nE.g., if we have a single Gaussian, infer its posterior given an infinite data set, then my intuition tells me that the information gain for *all* possible data points will be 0. Hence, the property of OoD detection collapses, because all samples are treated equally. I don't feel convinced.\"}",
"{\"title\": \"Author Response\", \"comment\": \"We thank you for your insightful and constructive feedback, which we will take into account when revising our paper. We address all points you raised below.\\n\\n1. \\\"What is the relationship of information gain to the marginal likelihood of the data? Since both can be expressed in entropies, I can see a very strong relationship, but would enjoy the authors opinion here\\u2013what exactly is it that gives the edge?\\\"\\nWhile the two quantities are indeed related, they measure two very different things (we will add a discussion of this to the paper -- thanks for pointing this out):\\nThe marginal likelihood measures the probability that the model gives to the observed data.\\nIn contrast, the information gain measures the reduction in entropy about the posterior over model parameters after having observed a datum x; viewed differently, the information gain (or the ESS that we use) quantifies the >variation< of the marginal likelihoods of different models under the posterior, which we argue is more indicative for OoD detection than the marginal likelihoods themselves.\\n\\n2. \\\"The experiments report results on the likelihood based score. Were these results taken from previous publications or obtained from exactly the same pipeline?\\\"\\nFor comparability, all results reported in the paper were obtained from our experimental pipeline -- we did not report any results from previous publications.\\n\\n3. \\\"Why is the \\\"outlier in latent space\\\" section included even though it is not experimentally verified?\\\"\\nWe agree that such an evaluation is desirable, even in absence of an established experimental protocol or strong baselines for comparison.\\nWe thus now designed an experimental protocol and added initial results for out-of-distribution detection in latent space to the paper (see Section 5.2), and plan to add further results (e.g., on more datasets).\\nIn future work, we plan to apply our method to more complex settings (beyond the scope of this paper), such as molecular design and other applications relying on optimization in latent space of a VAE (as described in the introduction).\\n\\n4. \\\"Is the method really principled? Where is the connection from the assumption that the score should be high for out of distribution and low for in distribution? If a method is called \\\"principled\\\" I want to see a rigorous derivation of how a method derives from what principles exactly and how it is approximated.\\\"\\nWe call our method principled for two reasons (which we will clarify in the paper):\\n1) As you correctly mentioned, the ESS score measures the information gain, i.e., the reduction in entropy about the model parameters after having observed an input x.\\nSuch information-theoretic metrics to quantify the novelty of data points are widely and successfully used in information-theoretic active learning; see e.g. 
(MacKay, 1992; see our paper for references) or BALD (Houlsby et al, 2011) for motivations for such metrics.\\nWe argue that the notion of novelty of a datum x captured by the information gain is exactly the notion of novelty required to effectively detect outliers, revealing a fundamental connection between active learning and OoD detection.\\nIn particular, we argue that such established measures rooted in information theory are more principled than many of the previous OoD detection methods, which are often ad-hoc heuristics.\\n2) We use more principled approximate inference techniques to estimate the posterior over model parameters than previous work such as (Choi et al, 2018), which simply use an ensemble of independently trained models as a proxy for posterior samples.\\n\\n5. \\\"Since only the 10 most recent samples are kept to represent the posterior, I am worried about their diversity. I think the authors should back up that this is sufficient to represent the posterior.\\\"\\nPreliminary experiments (not included in the paper) showed that using more samples and/or a larger thinning interval does not significantly improve performance. We will add a systematic evaluation of this matter to the paper; thank you for the suggestion!\\n\\n6. \\\"What happens in the non-parametric limit, where the posterior will collapse to a point? Does the method not rely on an insufficiently inferred model?\\\"\\nAre you referring to the case in which the inference network has very large capacity and can approximate the true posterior over latent variables z arbitrarily well?\\nIn that case, there will indeed not be any (epistemic) uncertainty left in the inference network, such that estimating a posterior over encoder parameters will not help for OoD detection.\\nHowever, there will in general still be parametric uncertainty in the generative network, which can be captured by estimating a posterior distribution over the decoder parameters.\\nEven if we can infer the true latent code z corresponding to an input x, the decoder posterior samples will still agree in their likelihood estimates for in-distribution data, and disagree for OoD data, such that our method should still work in this setting.\"}",
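The information gain referenced in point 1 of the response above can be stated explicitly. Using theta for the (decoder) parameters and D for the training data, a standard MacKay-style formulation, given here as our paraphrase rather than a formula copied from the paper, is

```latex
\mathrm{IG}(x^{*}) \;=\; H\!\left[\,p(\theta \mid \mathcal{D})\,\right]
\;-\; H\!\left[\,p(\theta \mid \mathcal{D} \cup \{x^{*}\})\,\right],
```

i.e., the reduction in posterior entropy over the parameters after observing x*. An in-distribution x* barely moves the posterior (IG close to 0), whereas an informative, likely-OoD x* reduces the entropy substantially.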
"{\"title\": \"Author Response\", \"comment\": \"We want to thank you for your helpful feedback, which will help us to improve our paper. Please see below for our clarifications to the concerns youraised.\\n\\n1. \\\"First, I\\u2019m not sure this is true that all samples from the posterior should explain the data equally well even if it is in-distribution. Second, if the data is OoD, it is likely that all samples explain the data equally bad which also resultsin high ESS.\\\"\\nThis becomes clearer when looking at the analogous (but more intuitive) supervised regression setting: Consider a Bayesian regression model p(y|x), which captures epistemic uncertainty via a posterior over the model parameters, inducing uncertainty in the predictions y. This model will have >low< predictive uncertainty for an in-distribution input x, meaning that posterior parameter samples explain x equally well and thus >agree< on their predictions y. Conversely, for an OoD input x, the predictive uncertainty (i.e., >variation< in the predictions) will be >high<, i.e., posterior samples >disagree< on their predictions y. Importantly, not all samples will be equally bad at explaining x, but some will (by chance) be better than others.In our setting, we can view the likelihood estimate p(x|theta) as a (implicit) function from inputs x to outputs y = p(x|theta) which is fully specified by the inference and generative networks, such that a Bayesian treatment of the parameters yields a similar effect as in the regression setting. I.e., for an in-distribution input x, the likelihood \\\"predictions\\\" y = p(x|theta) will be similar for different posterior samples (i.e., the samples explain the input equally well). While for an OoD input x, the \\\"predictions\\\" y = p(x|theta) will likely all be bad, the important thing is that their >variation< will be high, i.e., the samples will not explain x equally bad, but some samples will (by chance) explain x better than others (see also our answer to point 2. below).\\nThus, since the ESS measures the variation across likelihoods, we expect it to be a good metric for OoD detection.\\nWe will add an explanation to the paper.\\n\\n2. \\\"In practice, it is very likely that p(x*|theta) are low for all the theta when x* is OoD.\\\"\\nUnfortunately, it was shown that this is not generally true, as deep generative models might assign higher likelihood to OoD samples than to in-distribution samples (Nalisnick et al, 2019; Choi et al, 2018; see our paper for references).\\nAs a result, the likelihood cannot be used as a robust measure for OoD detection, which is one of the motivations for our work.\\n\\n3. \\\"How to determine whether a data is out-of-distribution or not based on ESS? Is the threshold of ESS a hyperparameter to tune?\\\"\\nYes, in practice, one needs to define a threshold for the ESS score.\\nNote, however, that this is not a limitation of our method, as all other scores proposed in the literature also require a threshold.\\nThere exist some proposals in the literature for defining such a threshold; we will add a discussion of this to the paper.\\n\\n4. \\\"For the experiments, I wonder why the authors put Gamma hyper priors for BVAE which was not used in the previous work that use SGHMC. Is there any reason for doing this?\\\"\\nIn fact, as mentioned in the paper, previous work does propose to use Gamma hyper priors, including the paper introducing SGHMC (Chen et al. 2014; see their Section 4.2 and Appendix H.1).\\n\\n5. 
\\\"Again, it is unclear to me how the authors decide whether a data is out-of-distribution or not based on ESS.\\\"\\nAll metrics we report in our experiments (i.e., AUPRC, AUROC, FPR80) are threshold independent, as they take into account the performance across all possible thresholds.\\nThis is common practice in the OoD detection literature, to avoid having to specify thresholds for each method.\\n\\n6. \\\"It seems like simply applying SGHMC for the decoder parameters is sufficient, as the other treatments only improve the results incrementally but adding large computational and storage cost.\\\"\\nWhile using SGHMC over only the decoder parameters might be sufficient, using SGHMC also over the encoder parameters induces only neglible additional computational cost (SG-MCMC methods are as expensive as stochastic optimizers such as SGD), and only double the memory cost.\\n\\n7. \\\"In the experiments, BVAE only keeps the most recent 10 samples. Aren\\u2019t the samples very similar? Since the thinning interval is only 1 epoch.\\\"\\nPreliminary experiments (not included in the paper) showed that using more samples and/or a larger thinning interval does not significantly improve performance. We will add a systematic evaluation of this matter to the paper; thank you for the suggestion!\\n\\n8. \\\"It would make the paper stronger if the authors are able to demonstrate the usefulness of detecting OoD in latent space through experiments.\\\"\\nThank you for this suggestion! We now added initial results for out-of-distribution detection in latent space to the paper (see Section 5.2) and plan to add further results (e.g., on more datasets).\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your helpful comments and suggestions!\\n\\n1. \\\"The experiment section is too limited. The authors should at least use one more dataset such as CIFAR10.\\\"\\nThank you for this suggestion. We will add additional experimental results on CIFAR10 to the paper (before the end of the rebuttal period on Friday).\\n\\n2. \\\"It would strengthen the paper if the authors could show at least initial result about how the model performs to detect out of distribution in the latent space, given that it is considered as part of the contribution\\\".\\nYes, this is a fair point, thank you for the suggestion. We added initial results for out-of-distribution detection in latent space to the paper (see Section 5.2) and plan to add further results (e.g., on more datasets).\\n\\n3. \\\"The paper lacks some references such as: Predictive uncertainty estimation via prior networks, NEURIPS 2018.\\\"\\nThank you for pointing out this work. We added it to the related work section of our paper, along with a few other works that do out-of-distribution detection by estimating predictive uncertainties.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the problem of out-of-distribution data detection, which is an important problem in machine learning. The authors propose to use Bayesian variational autoencoder which applies SGHMC to get samples of the weights of the encoder and the decoder. The proposed method is tested on two benchmarks to demonstrate effectiveness.\\n\\nThe proposed Bayesian variational autoencoder appears to be technically sound. When applying it to OoD detection, effective sample size is used to quantify how much the posterior changes given the new data. The authors claim that ESS will be large when the data is in-distribution since all samples explain the data equally well. First, I\\u2019m not sure this is true that all samples from the posterior should explain the data equally well even if it is in-distribution. Second, if the data is out of distribution, it is likely that all samples explain the data equally bad which also results in high ESS. In practice, it is very likely that p(x*|theta) are low for all the theta when x* is out-of-distribution. Am I missing something here?\\n\\nHow to determine whether a data is out-of-distribution or not based on ESS? Is the threshold of ESS a hyperparameter to tune?\\n\\nFor the experiments, I wonder why the authors put Gamma hyper priors for BVAE which was not used in the previous work that use SGHMC. Is there any reason for doing this? Again, it is unclear to me how the authors decide whether a data is out-of-distribution or not based on ESS.\\n\\nIt seems like simply applying SGHMC for the decoder parameters is sufficient, as the other treatments only improve the results incrementally but adding large computational and storage cost. I\\u2019m not familiar with the literature enough to tell whether the results of previous methods are reasonable or not. By looking at the table, it seems that the proposed method achieves some gain over the previous methods. \\n\\nIn the experiments, BVAE only keeps the most recent 10 samples. Aren\\u2019t the samples very similar? Since the thinning interval is only 1 epoch.\\n\\nIt would make the paper stronger if the authors are able to demonstrate the usefulness of detecting OoD in latent space through experiments.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"After reading all the reviews, the comments, and the additional work done by the Authors, I have decided to confirm my rating.\\n\\n==================\\n\\nThis paper leverage probabilistic inference techniques to maintain a posterior distribution over the parameters of a variational autoencoder (VAE). This results in a Bayesian VAE (BVAE) model, where instead of fitting a point estimate of the decoder parameters via maximum likelihood, they estimate their posterior distribution using samples generated via stochastic gradient Markov chain Monte Carlo (MCMC).\\nThe informativeness of an unobserved input x* / latent z* is then quantified by measuring the (expected) change in the posterior over model parameters after having observed x* / z*. The motivation is clear, when considered inputs which are uninformative about the model parameters, they are likely similar to the data points already in the training set. In contrast, inputs which are very informative about the model parameters are likely different from everything in the training data.\", \"the_contributions_are\": \"- A Bayesian VAE model which uses state-of-the-art Bayesian inference techniques to estimate a posterior distribution over the decoder parameters.\\n- A description of how this model can be used to detect outliers both in input space and in the model\\u2019s latent space.\\n- Results showing that this approach outperforms state-of-the-art outlier detection methods.\\n\\nThe paper is well written, and the proposed ideas are well motivated.\\nHowever, the experiment section is too limited. The authors should at least use one more dataset such as CIFAR10. They just use FashionMNIST vs MNIST FashionMNIST (held-out).\\nIn addition, it would strengthen the paper if the authors could show at least initial result about how the model performs to detect out of distribution in the latent space, given that it is considered as part of the contribution.\", \"the_paper_lacks_some_references_such_as\": [\"Predictive uncertainty estimation via prior networks, NEURIPS 2018.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper advocates to use information gain to detect whether a sample is out of distribution. To that end, a Bayesian VAE is introduced for which that quantity is tractable. The experiments show a solid improvement over previous methods.\\n\\nI like the paper, but I have a few questions. I am more than willing to increase my score from weak reject to weak or strong accept if these are addressed properly.\\n\\nWhat is the relationship of information gain to the marginal likelihood of the data? Since both can be expressed in entropies, I can see a very strong relationship, but would enjoy the authors opinion here\\u2013what exactly is it that gives the edge?\\n\\nThe experiments report results on the likelihood based score. Were these results taken from previous publications or obtained from exactly the same pipeline?\\n\\nWhy is the \\\"outlier in latent space\\\" section included even though it is not experimentally verified? I think it should go, as conducting experiments is cheap in ML. On the other hand, if we cannot come up with an experiment to conduct, then what is hypothesis is tested? I think the section needs to be removed and be revisited in future work.\\n\\nIs the method really principled? Where is the connection from the assumption that the score should be high for out of distribution and low for in distribution? If a method is called \\\"principled\\\" I want to see a rigorous derivation of how a method derives from what principles exactly and how it is approximated.\\n\\nSince only the 10 most recent samples are kept to represent the posterior, I am worried about their diversity. I think the authors should back up that this is sufficient to represent the posterior.\\n\\nWhat happens in the non-parametric limit, where the posterior will collapse to a point? Does the method not rely on an insufficiently inferred model?\"}"
]
} |
S1lk61BtvB | "Best-of-Many-Samples" Distribution Matching | [
"Apratim Bhattacharyya",
"Mario Fritz",
"Bernt Schiele"
] | Generative Adversarial Networks (GANs) can achieve state-of-the-art sample quality in generative modelling tasks but suffer from the mode collapse problem. Variational Autoencoders (VAEs), on the other hand, explicitly maximize a reconstruction-based data log-likelihood forcing them to cover all modes, but suffer from poorer sample quality. Recent works have proposed hybrid VAE-GAN frameworks which integrate a GAN-based synthetic likelihood into the VAE objective to address both the mode collapse and sample quality issues, with limited success. This is because the VAE objective forces a trade-off between the data log-likelihood and divergence to the latent prior. The synthetic likelihood ratio term also shows instability during training. We propose a novel objective with a "Best-of-Many-Samples" reconstruction cost and a stable direct estimate of the synthetic likelihood. This enables our hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time and shows significant improvement over both hybrid VAE-GANs and plain GANs in mode coverage and quality. | [
"Distribution Matching",
"Generative Adversarial Networks",
"Variational Autoencoders"
] | Reject | https://openreview.net/pdf?id=S1lk61BtvB | https://openreview.net/forum?id=S1lk61BtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"B-wu7JbbUM",
"r1ejzBAX5r",
"B1lD7rTsFB",
"SkxNbzojYB"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737266,
1572230418904,
1571702046791,
1571693052226
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1974/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1974/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1974/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposed an improvement on VAE-GAN which draws multiple samples from the reparameterized latent distribution for each inferred q(z|x), and only backpropagates reconstruction error for the resulting G(z) which has the lowest reconstruction. While the idea is interesting, the novelty is not high compared with existing similar works, and the improvement is not significant.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a new objective function for hybrid VAE-GANs. To overcome a number of known issues with VAE-GANs, this work uses multiple samples from the generator network to achieve a high data log-likelihood and low divergence to the latent prior.\\nIn the experimental section, the ``\\\"Best-of-Many-Samples\\\" approach is shown to outperform other state-of-the-art methods on CIFAR-10 and a synthetic dataset.\\n\\nThanks for submitting code with your submission!\", \"caveat\": \"I'm not an expert in this domain and did my best to review this paper.\", \"questions\": [\"Considering the smaller gap between \\u03b1-GAN+SN and BMS-VAE-GAN, I was wondering how much of the improvement is due to spectral normalization vs using multiple samples. Did you do an ablation study of BMS-VAE-GAN without SN?\", \"I noticed some minor typos in the text. Please fix (3.2 \\\"constrains\\\" -> \\\"constraints\\\", 3.3 \\\"traiend\\\", 3.3 \\\"unsure\\\" -> \\\"ensure\\\").\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"1. Using Discriminator to estimate the likelihood ratio is a commonly used approach, which was first proposed in [1]. This is also generalized as a reversed KL based GAN in [2] [3]. The authors failed to discuss this with these previous works in Section 3.3 and in Related works.\\n\\n2. How is the best of many comparing with importance sampling method? I think using importance sampling is the most intuitive baseline.\\n\\n3. this paper is not well written. L_1/L_2 has never explained throughout this paper, also has typos such as \\\"taiend\\\".\\n\\n\\n[1] Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks\\n[2] Variational Annealing of GANs: A Langevin Perspective\\n[3] Symmetric variational autoencoder and connections to adversarial learning\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"\\u201cBest of Many Samples\\u201d Distribution matching\", \"summary\": \"This paper proposes to a novel VAE-GAN hybrid which, during training, draws multiple samples from the reparameterized latent distribution for each inferred q(z|x), and only backpropagates reconstruction error for the resulting G(z) which has the lowest reconstruction. The authors appear to use the AAE method instead of analytically matching KL(p||q) for enforcing that the latents q(z|x) match the prior. The authors present results on MoG toy datasets, CIFAR-10, and CelebA, and compare against several other models.\", \"my_take\": \"The idea in this paper is moderately interesting, well-founded, has plenty of precedent in the literature (while still being reasonably novel), but the results present only a minimal improvement (a 5% relative improvement in FID over the baseline model from Rosca et al on CIFAR, especially when including SN [which is not a contribution of this paper]) and come at a substantial compute cost, requiring up to 30 extra samples per batch in order to attain this minimal increase. While I think the idea is interesting, the change in results over Rosca et. al does not seem to justify its increased computational expense (which is also not characterized in sufficient thoroughness). I am pretty borderline on this paper ( about a 5/10) but under the 1-3-6-8 scoring constraint I tend to lean reject because while I like the idea, I do not think the results are significant enough to support its adoption; I think the relative compute and implementation cost limit this method\\u2019s potential impact. I am keen to discuss this paper with the other reviewers.\", \"notes\": \"-The results on the 2D MoG toy datasets are good but are also suspect\\u2014the authors state that they use a 32-dimensional latent space, but the original code provided for VEEGAN uses a 2-dimensional latent space. The authors should re-run the experiment for BMS-VAE-GAN using a 2D latent space (this should be very easy and take less than an hour on a GPU to get several runs in).\\n\\n-\\u201cagain outperforming by a significant margin (21.8 vs 22.9 FID)\\u201d This is not a significant margin; this is less than a 5% margin and, at those FID scores, represents an imperceptible change in sample quality.\\n\\n-The authors seem to suggest that applying spectral norm to the GAN of Rosca et. al. is somehow a contribution (e.g. having \\u201cours\\u201d next to this model in the tables); I would advise against even appearing to suggest this as it is clearly not a contribution.\\n\\n-Characterize the increase in compute cost. \\u201c. We use as many samples during training as would fit in GPU memory so that we make the same number of forward/backward passes as other approaches and minimize the computational overhead of sampling multiple samples\\u201d is a qualitative description; I would like to see this quantitatively described. How do the runtimes differ between your baseline and the T=10 and T=30 runs? If they don\\u2019t differ, why? Are the authors e.g. 
reducing the batch size by a factor of 10 or 30 to make this computationally tractable?\\n\\n-The latent space discriminator D_L should be referred to in section 3; its formal introduction is deferred to later in the paper, hampering the presentation and flow. \\n\\n-CelebA is not multimodal; it is in fact, highly constrained, and primarily only has textural variation (virtually no pose variation).\\n\\n-ALI and BiGAN are listed under Hybrid VAE-GANs. These models are not VAE-GAN hybrids. Additionally, this section states that BiGAN builds upon ALI. This is not true, these papers are in fact proposing the same thing and were released at nearly the exact same time. Do not mischaracterize or incorrectly summarize papers. Please re-read both papers and refer to them correctly.\\n\\n-Mode collapse (when many points in z map to an unexpectedly small region in G(z)) is a different phenomenon from mode dropping (when many points in x are not represented in G(z), i.e. no point in z maps to a cluster of x\\u2019s, as is the case if e.g. a celebA model generates frowning and neutral faces but no smiling faces). While these phenomena often co-occur (especially during complete training collapse), they are not the same thing, and this paper conflates them in several places.\", \"minor\": \"Section 3, paragraph 2: \\u201cThe GAN (G\\u03b8,DI\\u2026\\u201d There\\u2019s a close parenthesis missing here. \\n\\nSection 3.3: \\u201cThe network is traiend\\u2026\\u201d \\n\\nPlease thoroughly proofread your paper for typos and grammatical mistakes.\"}"
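To make the mechanism summarized in the review above concrete, here is a minimal sketch of a "best-of-many-samples" reconstruction term in PyTorch: draw T latents from the reparameterized q(z|x), decode each, and keep only the lowest per-example reconstruction error, so gradients flow only through the best sample. The encoder/decoder interfaces and the default T are placeholders for illustration, not the authors' implementation.

```python
import torch

def best_of_many_recon_loss(x, encoder, decoder, T=10):
    # encoder(x) -> (mu, std) of q(z|x); decoder(z) -> reconstruction of x
    mu, std = encoder(x)
    errs = []
    for _ in range(T):
        z = mu + std * torch.randn_like(std)  # reparameterized sample
        x_hat = decoder(z)
        errs.append(((x_hat - x) ** 2).flatten(1).mean(dim=1))  # per-example MSE
    # min over the T samples; autograd routes gradients to the argmin only
    return torch.stack(errs, dim=0).min(dim=0).values.mean()
```

This directly realizes the behavior the reviewer describes: each training example is charged only for its best of the T decoded samples, rather than for the average.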
]
} |
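The review above hinges on one mechanism: draw T reparameterized samples per input and backpropagate only through the lowest-error reconstruction. A minimal sketch (editor's illustration, not the authors' implementation; `encoder`, `generator`, and the squared-error reconstruction term are assumptions):

```python
import torch

def best_of_many_recon_loss(x, encoder, generator, T=10):
    """Keep only the best of T reconstructions per input (sketch)."""
    mu, logvar = encoder(x)                      # q(z|x) parameters, shape (B, D)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn(T, *mu.shape, device=x.device)   # (T, B, D)
    x_hat = generator(z.reshape(T * x.shape[0], -1)).reshape(T, *x.shape)
    err = ((x_hat - x.unsqueeze(0)) ** 2).flatten(2).sum(-1)    # (T, B)
    # min() routes gradients only through the arg-min sample per input.
    return err.min(dim=0).values.mean()
```

The reviewer's compute-cost question is visible here: every training step runs the generator T times per input, so T=10 or T=30 multiplies generator FLOPs accordingly unless the batch size is reduced.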
rkl03ySYDH | SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition | [
"Zhixuan Lin",
"Yi-Fu Wu",
"Skand Vishwanath Peri",
"Weihao Sun",
"Gautam Singh",
"Fei Deng",
"Jindong Jiang",
"Sungjin Ahn"
] | The ability to decompose complex multi-object scenes into meaningful abstractions like objects is fundamental to achieve higher-level cognition. Previous approaches for unsupervised object-oriented scene representation learning are either based on spatial-attention or scene-mixture approaches and limited in scalability which is a main obstacle towards modeling real-world scenes. In this paper, we propose a generative latent variable model, called SPACE, that provides a unified probabilistic modeling framework that combines the best of spatial-attention and scene-mixture approaches. SPACE can explicitly provide factorized object representations for foreground objects while also decomposing background segments of complex morphology. Previous models are good at either of these, but not both. SPACE also resolves the scalability problems of previous methods by incorporating parallel spatial-attention and thus is applicable to scenes with a large number of objects without performance degradations. We show through experiments on Atari and 3D-Rooms that SPACE achieves the above properties consistently in comparison to SPAIR, IODINE, and GENESIS. Results of our experiments can be found on our project website: https://sites.google.com/view/space-project-page | [
"Generative models",
"Unsupervised scene representation",
"Object-oriented representation",
"spatial attention"
] | Accept (Poster) | https://openreview.net/pdf?id=rkl03ySYDH | https://openreview.net/forum?id=rkl03ySYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BFDsTBwtn1",
"HkePhKL3or",
"BJliMFUnjS",
"HJx8RO8hoS",
"HyejZnvPjS",
"r1x9ajDwiS",
"ByxTGjPvsB",
"Hkl0ycvvsH",
"HkxPUtvwsr",
"SkglMXumjr",
"S1xhornvcH",
"HJep1tq-qr",
"r1g_bxtAYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737236,
1573837230830,
1573837074635,
1573837005888,
1573514243480,
1573514178403,
1573514005177,
1573513701530,
1573513551271,
1573253896399,
1572484516285,
1572083940651,
1571880959886
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1972/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1972/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1972/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1972/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1972/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1972/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1972/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1972/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1972/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1972/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1972/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1972/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper makes a reasonable contribution to generative modeling for unsupervised scene decomposition. The revision and rebuttal addressed the primary criticisms concerning the qualitative comparison and clarity, which caused some of the reviewers to increase their rating. I think the authors have adequately addressed the reviewer concerns. The final version of the paper should still strive to improve clarity, and strengthen the evaluation and ablation studies.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Blind Review #2 Followup\", \"comment\": \"Update on the hand labeling experiment:\\n\\nFollowing the reviewer\\u2019s suggestion, we hand-labeled 100 images for SpaceInvaders. However, because of the small size of the objects, the IoU metric we use is very sensitive to small deviations -- just a 2 to 3 pixel difference in height and width between the predicted and ground truth bounding boxes caused the IoU to drop below 0.5. This makes the value of the metric extremely sensitive to the tightness of the boxes as well as the ground truth boxes that we labeled, and thus not a good indicator of the actual bounding box quality.\\n\\nAs an example, we have uploaded an anonymous image here: https://ibb.co/2MgSzs3. The green boxes are SPACE predicted bounding boxes and the red boxes are ground truth. As can be seen from this image, the predicted bounding boxes are generally larger than the ground truth bounding boxes. Because of the sensitivity of the IoU metric, the [email protected] for this image will be low. However, from a qualitative standpoint, one could not objectively say that the ground truth bounding boxes are better than the predicted bounding boxes (with the small bullet being the exception). \\n\\nWhile we did not decide to include these results in the paper at this time, this may be something we can reconsider for a camera-ready version of this paper when we have more time to investigate the results.\"}",
"{\"title\": \"Response to Blind Review #1 (2/2)\", \"comment\": \"\\u201cAs this is a generative model, reviewer would like to know the applicability to other tasks such as pure generation, denoising and inpainting. For example, how does the pre-trained model perform with noisy input (e.g., white noise added to the image)? Also, what\\u2019s the pure generation results following the chain rules given by Equation (1), (3) & (4)..\\u201d\\n\\nFirst, we want to note that the focus of this work is not generation. Rather, similar to other generative models (AIR and SPAIR) for object representation learning, we focus on inference, in which we decompose the scene into meaningful components defined in the generation process and produce a good representation for each of them. While it is possible for our model to have unconditional generation by sampling from the priors, due to the independence assumption in our model, each object would be generated and placed independently of other objects in the scene, resulting in unrealistic generation. For example, if we generate in Atari scenes, an object can be placed anywhere, and hence, would not be coherent with the actual game. For the same reason, we did not investigate applications like denoising or inpainting. Similarly, the related works AIR, SPAIR, and IODINE do not provide generation results in their papers. That being said, we believe investigating structured scene models whose main purpose is generation would be very interesting in future research.\"}",
"{\"title\": \"Response to Blind Review #1 (1/2)\", \"comment\": \"\\u201cThe qualitative improvements over the baseline method [Crawford & Pineau 2019] seem not very impressive (Figure 1: only works a bit better with cluttered scenes).\\u201d\\n\\n-> This is true. However, as noted in responses to other reviewers, we would like to emphasize that our aim is not necessarily to produce better bounding boxes than SPAIR. Rather, we want to show that our method can produce similar quality bounding boxes while achieving the following important properties: 1) having orders of magnitude faster inference and gradient step time, 2) learning more quickly than other methods, 3) scaling to a large number of objects without significant performance degradation, and 4) providing complex background segmentation. In our updated version of the paper, we have emphasized the above in the quantitative evaluation section to make this more clear.\\n\\n\\u201cHow does the proposed method perform in real world datasets (Outdoor: KITTI, CItyscape; Indoor: ADE20K, MS-COCO)?\\u201d\\n\\n-> This is a good question. First, we want to note that compared to previous models like AIR, SPAIR, IODINE, and GENESIS which use relatively simple datasets (few objects with a simple background), our dataset and task are far more complex, consisting of many objects (>20 in 3D-Room Large) with complex and dynamic backgrounds (Atari). This level of tasks has not been tested before in the related works. Thus, we believe our task is a fairly challenging one from the perspective of unsupervised generative representation learning of object representations. However, despite the above improvement over previous models, we believe that the proposed model and more generally unsupervised approaches to decomposed object representation learning, still require significant innovations to make it applicable to \\u201cin-the-wild datasets.\\u201d We believe that achieving this ability would require more unsupervised or self-supervised learning signals like interacting with objects and temporal observations.\\n\\n\\u201cSecond, the generalization to unseen scenarios are mentioned in the introduction but not really carefully studied or evaluated in the experiments. For example, one experiment would be to train the framework on the current 3D-Rooms dataset but then test on new environments (e.g., other room layout) or new objects (e.g. other shapes such as shapenet objects).\\u201d\\n\\nThis is an interesting question. Again, we want to emphasize that SPACE is an unsupervised generative model, which means that it is actually modeling the distribution that generates the dataset. Thus, we only expect generalization within this distribution. So, as we demonstrate in our experiments, it can generalize to unseen 3D Room scenes where the number, color and shape combinations, and placements of the objects are never seen in training, but it is difficult, in theory, to generalize to unseen scenes that are completely different from the ones we saw in the training set.\\n\\n\\u201cEquation (4) does not seem to be natural in practice: basically, the background latents depends on the foreground object latents. Alternatively, you can assume them to be independent with each other. It\\u2019s better to clarify this point in the rebuttal.\\u201d \\n\\n-> We agree that, although our experimental results show that it works quite well in practice, this may be a bit unnatural. 
Also, we agree that assuming them to be independent, or conditioning foreground latents on background latents are also reasonable choices, especially for generation. In the paper, however, our main focus is learning object representation, not generation. While the proposed modeling also has the potential to generate, like other works (AIR, SPAIR), we do not optimize the model toward that direction to focus on the main contribution. (as discussed in more detail below).\"}",
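For readers following the exchange about Equation (4): the two modeling choices being contrasted can be written side by side. The notation below is the editor's reconstruction from the discussion (z^{fg} for foreground latents, z^{bg} for background latents), not a quote of the paper's equations:

```latex
% SPACE-style: background latents conditioned on foreground latents (Eq. 4)
p(x) = \int p\big(x \mid z^{fg}, z^{bg}\big)\, p\big(z^{bg} \mid z^{fg}\big)\, p\big(z^{fg}\big)\, dz^{fg}\, dz^{bg}

% Alternative raised by the reviewer: independent foreground and background
p(x) = \int p\big(x \mid z^{fg}, z^{bg}\big)\, p\big(z^{bg}\big)\, p\big(z^{fg}\big)\, dz^{fg}\, dz^{bg}
```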
"{\"title\": \"Response to Blind Review #2 (2/2)\", \"comment\": \"\\u201c(d) For SPAIR in Table 1, it's not clear whether it's the slow SPAIR that was mentioned previously or the fast one (e.g., the predicted boxes are described as the same quality as SPAIR -- but is it the slow SPAIR or fast one?). I think the paper would benefit from being a bit clearer about this. I get that the parallel decomposition, in some sense, may be necessary to get any results. But I wish the paper were a bit more explicit.\\u201d\\n\\n-> Thank you for pointing this out. These results were run with the faster version of SPAIR. We have updated our paper to make this more explicit. Separately, we\\u2019ve also denoted the patch-based version of SPAIR as SPAIR-P to distinguish it from the version of SPAIR that trains on the entire image.\\n\\n\\u201c-There are some reasonable results. I realize that there isn't existing ground truth on atari and other games, but why not label a few hundred frames manually?\\u201d\\n\\n-> This is a good suggestion. We are in the process of hand-annotating images for Space Invaders (the game for which both SPAIR and SPACE perform well). We will update our paper with the results once we complete our experiment on this data.\\n\\n\\u201c-It would have been nice to have an ablation of some of the components, including the boundary loss. Unfortunately, there's a complex multi-part system and it's not clear how to break off components apart for reuse elsewhere.\\u201d\\n\\n-> We would like to point out that our experiments varying the background components of SPACE from K=1 to K=5 is actually an ablation study of our background module. Additionally the comparison of SPACE with K=1 with SPAIR can be seen as an ablation study of parallel vs sequential inference of the cell latents. We agree that an ablation study on the boundary loss is also interesting. We have conducted this experiment and included the results in the \\u201cAverage Precision and Error Rate\\u201d section.\\n\\n\\u201cI would be a little wary of making a big deal out of the discovery of the Montezuma's Revenge key. I realize this is indeed important, but I don't see why something like the slic superpixel objective, or felzenswalb-huttenlocher wouldn't find it either. I think it's great that there's a setting in terms of network capacity (for fg/bg networks) that yields this result, but this seems to depend heavily on the particular networks used for each of the parts of the method, and not on the general method. Also, it seems largely a function of the fact that they're a small region with a different color.\\u201d\\n\\n-> This is a good point. We agree that the key can be detected by using a segmentation method where the objective function is explicitly designed solely for segmentation. We think the detection of the key in SPACE is interesting because the detection emerges using unsupervised neural end-to-end learning where a segmentation algorithm is not explicitly implemented but the goal is to generate (reconstruct). In this setting, from our extensive experiments on SPAIR with background, we found that it is very difficult to detect the key even if it is a small region with a different color. This is because the key always appears in the same position with the same appearance, and thus the background module tends to model it very easily. (This is also observed in Fig 2, SPAIR 16x16 on SpaceInvaders. Here, we see the red obstacle is detected as background in SPAIR while SPACE detects it as a foreground.) 
Other dynamic objects were detected rather easily by the foreground module. The new knowledge we found from the SPACE experiment is that the background decomposition seems to allow the key to be detected as a foreground object. We think that this is because each of the background components is a weak module. One may ask why not use a weak background module in SPAIR, but since SPAIR only has a single background component, it would not be able to model complex backgrounds. SPACE, on the other hand, would still be able to model complex backgrounds with the cooperation of multiple background components. We nevertheless totally agree that we need more investigation about this interesting phenomenon.\"}",
"{\"title\": \"Response to Blind Review #2 (1/2)\", \"comment\": \"\\u201cWhy does Figure 1 show results from different systems on different images? This makes comparison impossible. Paired samples are always more informative.\\u201d\\n\\n-> We agree that we should present the results from the different methods using the same set of images. In our updated version of the paper, we have made this change.\\n\\n\\u201c(a) It's never listed how K for genesis was picked -- this should presumably be tuned somewhere to optimize performance. The paper mentions in the 4.2 that it was impossible to run the experiments for GENESIS for more than 256 components -- but the GENESIS paper has numbers more like K=9. If there are an overabundance of components, this might explain some of the object splitting observed in the paper.\\u201d\\n\\n-> For qualitative results in Section 4.1, we picked the best K via hyperparameter search for all methods. In this experiment, to focus on the best decomposition quality, we use the knowledge of the number of objects in the scene which is in general not given at test time. For GENESIS specifically, the best values of K we found range from 12 to 65 depending on the environment.\\n\\n-> For quantitative results in Section 4.2, the goal is to see the performance on decomposition capacity (we\\u2019ve updated our paper to denote this as C). The higher decomposition capacity is better. Because we do not know how many objects will be given at test time, we want a single model to have high decomposition capacity rather than having multiple models of different capacities. For example, if we set the capacity to C=256, it means that the model should be able to deal with scenes with the number of objects from 0 to 256. SPACE can flexibly deal with such a broad range of scenes with capacity C=256. In a scene with a single object, SPACE should not split it into multiple objects because of the spatially parallel local detection. In a scene with a large number of objects, say 240, SPACE should reasonably be able to deal with it due to the spatially parallel & local detection mechanism. SPAIR should work similarly but just much more slowly because it is also spatially local but sequential. What if GENESIS or IODINE is given a scene with about 240 objects? To deal with this, it should have capacity like C=256. What if it is then given a scene with a single object? (Again, it is clear that we do not want to use another model trained for e.g., C=10) As claimed in the GENESIS or IODINE paper, in the ideal case, it should learn to only use one component to capture the object while suppressing all other components, instead of splitting a single object into many (256) small parts. This seems pretty difficult due to its sequential nature ignoring spatial locality. Showing this is the goal of the experiment.\\n\\n\\u201c(b) Unless I'm missing something, in Figure 5, for 4x4 and 8x8, it doesn't appear that IODOINE or GENESIS have converged at all. Does the validation MSE just then flatten (or go up) there? This is also wall-clock, so I'm not sure why things would stop there. This seems to conflate training speed with performance (although also note that the wall clock times being discussed are pretty small -- the rightmost side of the graph is ~14 hours -- hardly a burdensome experiment).\\n(c) Similarly, for 16x16 cells, SPAIR seems to be improving consistently. 
Is it being cut off?\\u201d\\n\\n-> Our original intent for these charts was to show that SPACE converges more quickly than the other methods for a given decomposition capacity. However, this is a good point that we can additionally show both the actual time to convergence for each of the methods as well as the MSE to which each method converges. We have updated these charts so they are no longer cut off in our updated version of the paper.\\n\\n\\u201cFigure 5 -- The caption for the figure things appears to not make sense: GENESIS is listed as having K = HxW+5 components and SPACE has K=5 listed. Neither make sense to me. Are they out of order?\\u201d\\n\\n-> We hope this is clarified with our response to (a). We have also updated the paper to clarify these configurations.\"}",
"{\"title\": \"Response to Blind Review #3\", \"comment\": \"\\u201cHowever, the fact that the timings are only reported for the gradient step and not more comprehensively for entire training and inference step, is unsatisfying.\\u201d\\n\\n-> With regards to the entire training time, we have shown that our method is clearly faster than the others via the right-most three plots in Figure 5. In these plots, the x-axis is wall-clock time and thus shows the overall convergence rate during training. For the inference step, the first plot in Figure 5 shows that SPACE is also faster than the other methods. To clarify, when we mention the gradient step, we are actually referring to the time for both forward and backward propagation. Therefore, the inference step (which is part of the forward pass) is included in our definition of the gradient step. Also, even without this experiment, from the design of the methods, it should be clear that the comparing baseline methods should be slower than ours. Specifically, our method is parallel except for a small number of background components while other methods are all fully sequential. Thus, it should be clear to see other methods become slower as C (decomposition capacity) increases. The experiments just reaffirm this point to provide actual numbers. In our updated version of the paper, we made this point more clear.\\n\\n\\u201cI found the qualitative comparisons to be confusing as they were mostly for different input frames, making it hard to have a direct comparison of the quality between the proposed method and baselines.\\u201d\\n\\n-> We agree that we should present the results from the different methods using the same set of images. In our updated version of the paper, we have made this change.\\n\\n\\u201cMoreover, the quantitative results reporting bounding box precision are confusing. Why report precision at exactly IoU = 0.5 and IoU in [0.5, 0.95] instead of the more standard precision at IoU >= 0.5 (and higher threshold values such as 0.95)?\\u201d\\n\\n-> The result in the IoU = 0.5 column uses 0.5 as a threshold rather than an exact value. This is the standard metric used in the Pascal VOC challenge. The [0.5, 0.95] result is the mean average precision over different IoU thresholds from 0.5 to 0.95. More specifically, we take 0.05 increments, so we have average values for the following IoU thresholds: (0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95). This is also the same metric used in the MS COCO object detection challenge. In our updated version of the paper, we make this more clear.\\n\\n\\u201cThe differences in the reported results seem relatively small and in my opinion, not conclusive given the above unclear points\\u201d\\nOur aim is not to show that we can produce better bounding boxes than SPAIR. Rather, we want to show that our method can produce similar quality bounding boxes while 1) having orders of magnitude faster inference and gradient step time, 2) converging more quickly than other methods, 3) scaling to a large number of objects without significant performance degradations, and 4) providing complex background segmentation. In our updated version of the paper, we have emphasized the above in the quantitative evaluation section to make this more clear.\"}",
"{\"title\": \"Response to Blind Review #4\", \"comment\": \"\\u201c1, The organization of the paper should be improved. For example, the introduction to the generative model is too succinct: the spatial attention model did not be introduced in main text. Why this model is call 'spatial attention' is not clear to me.\\u201d\\n\\n-> We agree that we can be more clear about our generative model and how spatial attention is used. In our updated version of the paper, we clarified the description and included a diagram (see Figure 1) that better illustrates our model and depicts how spatial attention is used. We also provide a more thorough discussion of parallel spatial attention in the \\u201cParallel Inference of Cell Latents\\u201d section.\\n\\n\\u201cThe boundary loss seems an important component, however, it is never explicitly presented.\\u201d\\n\\n-> We present the boundary loss in the \\u201cPreventing Box-Splitting\\u201d subsection and provide implementation details in Appendix C. We\\u2019ve also included an ablation study that removes boundary loss in our Average Precision and Error Rate experiments. \\n\\n\\u201c2, The parallel inference is due the mean-field approximation, in which the posterior is approximated with factorized model, therefore, the flexibility is restricted. This is a trade-off between flexibility and parallel inference. The drawback of such parametrization should be explicitly discussed. I was wondering is there any negative effect of such the approximated posterior with fully factorized model comparing to the SPAIR?\\u201d\\n\\n-> We agree that the independence assumption for mean-field may provide a poor approximation in many cases. In our problem, one potential negative effect of this assumption is that it may result in duplicate detections due to objects not referring to each other. This independence assumption, however, is not always a poor approximation. It is actually a good choice if the underlying system actually shows weak dependency between the factors, which is actually the case in our problem. We observe that -- contrary to what the SPAIR authors conclude by saying that sequential processing is crucial for performance -- the independence assumption should not affect the performance much. This is because we observe that (1) due to the bottom-up encoding conditioning on the input image, each object latent should know what\\u2019s happening around its nearby area without communicating with each other, and that (2) in (physical) spatial space, two objects cannot exist at the same position. Thus, the relation and interference from other objects should not be severe. Based on this reasoning, questioning what the SPAIR authors concluded, we implement the parallelization and show empirically that our insight and reasoning are actually correct by showing comparable performance to SPAIR. Importantly, via a recent personal communication, the SPAIR authors also confirmed that they also recently realized that the independence assumption is correct (and thus parallelizable) even if they didn\\u2019t know it when they had published SPAIR. \\n\\nWe believe that this can also be considered a contribution because it corrects the previous state-of-the-art knowledge that was against the possibility of parallel processing.\\n\\n\\n\\u201c3, The empirical evaluation is not convincing. The quality illustration in Fig.1, 2 and 3 uses different examples for different methods. 
This cannot demonstrate the advantages of the proposed model.\\u201d\\n\\n-> We agree that we should present the results from the different methods using the same set of images. In our updated version of the paper, we have made this change.\\n\\n\\u201cThe quantitative evaluation only shows one baseline, SPAIR, in Table 1, and other baselines (IODINE and GENESIS) are missing. With such empirical results, the performances of the proposed method are not convincing.\\u201d\\n\\n-> For our quantitative evaluation, we compare gradient step latency and time to convergence for all methods. However, since IODINE and GENESIS do not produce bounding boxes, we cannot compare Average Precision and Error Rate for those methods. We would also like to emphasize that our aim is not to necessarily produce better bounding boxes than SPAIR. Rather, we want to show that our method can produce similar quality bounding boxes while 1) having orders of magnitude faster inference and gradient step time, 2) converging quicker than other methods, 3) scaling to a large number of objects without significant performance degradation, and 4) providing complex background segmentation. In our updated version of the paper, we have emphasized the above in the quantitative evaluation section to make this more clear.\"}",
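The trade-off discussed above is between two posterior factorizations over the per-cell latents. A schematic contrast (editor's notation, assuming N cell latents z_1, ..., z_N; not copied from either paper):

```latex
% Sequential, SPAIR-style posterior: each latent conditions on earlier ones
q(z \mid x) = \prod_{i=1}^{N} q\big(z_i \mid z_{<i},\, x\big)

% Fully factorized (mean-field) posterior, enabling parallel inference
q(z \mid x) = \prod_{i=1}^{N} q\big(z_i \mid x\big)
```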
"{\"title\": \"For All Reviewers\", \"comment\": \"We want to thank all the reviewers for taking the time to read our paper and provide insightful feedback. We have prepared responses for the first three reviewers (Reviewers #2, #3 and #4) and we will address Reviewer #1\\u2019s feedback shortly. We have uploaded a new version of the paper, which addresses the questions and concerns that were raised. Specifically, we have updated the following:\\n\\n1) Updated our qualitative experiments to use the same set of images across all different methods.\\n2) Included a diagram (Figure 1), that better illustrates our model and depicts how spatial attention is used.\\n3) Clarified a few points in the Quantitative Comparison section with regards to decomposition capacity (C) and how we chose C for the baselines. Also further emphasized that our goal is not to necessarily produce better bounding boxes than SPAIR, but rather to show that we can still produce similar quality bounding boxes while taking advantage of a parallel architecture and providing complex background decomposition.\\n4) Updated the convergence charts so no methods are cut off.\\n5) Updated our Average Precision and Error Rate experiments to include an ablation study of boundary loss. For table 1, we also further optimized the hyperparameters for SPACE and SPAIR. Specifically, we found that the scale prior (which controls the tightness of the boxes) and sigma (for computing likelihood) can significantly affect the results, so we tuned both models to make both average precision and error rates as good as possible.\\n6) Minor edits for typos, formatting, and clarity, including those pointed out by the reviewers. Additionally, we moved a few of the qualitative images to the appendix in the interest of space.\", \"we_have_also_created_a_website_with_additional_qualitative_examples_and_video_of_space\": \"https://sites.google.com/view/space-project-page/home\\n\\nWe will respond to each reviewer\\u2019s points in detail in the comments below. We believe we have addressed each reviewer\\u2019s concerns and look forward to hearing feedback about the updated version of our paper. We hope the reviewers can take our responses and revisions into consideration when evaluating our final score.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the problem of unsupervised scene decomposition with a foreground-background probabilistic modeling framework. Building upon the idea from the previous work on probabilistic scene decomposition [Crawford & Pineau 2019], this paper further decomposes the scene background into a sequence of background segments. In addition, with the proposed framework, scene foreground-background interactions are decoupled into foreground objects and background segments using chain rules. Experimental evaluations have been conducted on several synthetic datasets including the Atari environments and 3D-Rooms. Results demonstrate that the proposed method is superior to the existing baseline methods in both decomposing objects and background segments.\\n\\nOverall, this paper studies an interesting problem in deep representation learning applied to scene decomposition. Experimental results demonstrated incremental improvements over the baseline method [Crawford & Pineau 2019] in terms of object detection. However, reviewer has a few questions regarding the intuition behind the foreground-background formulation and the generalization ability to unseen combinations or noisy inputs.\\n\\n== Qualitative results & generalization ==\\nThe qualitative improvements over the baseline method [Crawford & Pineau 2019] seem not very impressive (Figure 1: only works a bit better with cluttered scenes). First, how does the proposed method perform in real world datasets (Outdoor: KITTI, CItyscape; Indoor: ADE20K, MS-COCO)? Second, the generalization to unseen scenarios are mentioned in the introduction but not really carefully studied or evaluated in the experiments. For example, one experiment would be to train the framework on the current 3D-Rooms dataset but then test on new environments (e.g., other room layout) or new objects (e.g. other shapes such as shapenet objects). \\n\\n\\n== Application beyond object detection ==\\nEquation (4) does not seem to be natural in practice: basically, the background latents depends on the foreground object latents. Alternatively, you can assume them to be independent with each other. It\\u2019s better to clarify this point in the rebuttal. As this is a generative model, reviewer would like to know the applicability to other tasks such as pure generation, denoising and inpainting. For example, how does the pre-trained model perform with noisy input (e.g., white noise added to the image)? Also, what\\u2019s the pure generation results following the chain rules given by Equation (1), (3) & (4).\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper, the authors propose a generative latent variable model, which is named as SPACE, for unsupervised scene decomposition. The proposed model is built on a hierarchical mixture model: one component for generating foreground and the other one for generating the background, while the model for generating background is also a mixture model. The model is trained by standard ELBO with Gumbel-Softmax relaxation of the binary latent variable. To avoid the bounding box separation, the authors propose the boundary loss, which will be combined with the ELBO for training. The authors evaluated the proposed on 3D-room dataset and Atari.\", \"there_are_several_issues_need_to_be_addressed\": \"1, The organization of the paper should be improved. For example, the introduction to the generative model is too succinct: the spatial attention model did not be introduced in main text. Why this model is call 'spatial attention' is not clear to me. The boundary loss seems an important component, however, it is never explicitly presented. \\n\\n2, The parallel inference is due the mean-field approximation, in which the posterior is approximated with factorized model, therefore, the flexibility is restricted. This is a trade-off between flexibility and parallel inference. The drawback of such parametrization should be explicitly discussed. I was wondering is there any negative effect of such the approximated posterior with fully factorized model comparing to the SPAIR?\\n\\n3, The empirical evaluation is not convincing. The quality illustration in Fig.1, 2 and 3 uses different examples for different methods. This cannot demonstrate the advantages of the proposed model. The quantitative evaluation only shows one baseline, SPAIR, in Table 1, and other baselines (IODINE and GENESIS) are missing. With such empirical results, the performances of the proposed method are not convincing. \\n\\nIn sum, I think this paper is not ready to be published. \\n\\n====================================================================\\n\\nI have read the authors' reply and the updated version. I will raise my score to 6. \\n\\nAlthough the mean-field inference is standard, the model in the paper looks still interesting and the performances are promising. \\n\\nI expect the boundary loss should be specified formally in the final version.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes SPACE: a generative latent variable models for scene decomposition (foreground / background separation and object bounding box prediction). The authors state the following contributions relative to prior work in this space: 1) ability to simultaneously perform foreground/background segmentation and decompose the foreground into distinct object bounding box predictions, 2) a parallel spatial attention mechanism that improves the speed of the architecture relative to the closest prior work (SPAIR), 3) a demonstration through qualitative results that the approach can segment into foreground objects elements that remain static across observations (e.g. the key in Montezuma's Revenge).\", \"the_proposed_model_is_evaluated_on_two_sets_of_datasets\": \"recorded episodes from subsets of the Atari games, and \\\"objects in a 3D room\\\" datasets generated by random placement of colored primitive shapes in a room using MuJoCo. Qualitative results demonstrate the ability of the proposed model to separate foreground from background in both datasets, as well as predict bounding boxes for foreground objects. The qualitative results show comparisons against SPAIR, as well as two mixture-based generative models (IODINE and GENESIS), though mostly not for direct comparisons on the same input. Quantitative results compare the proposed model against the baselines in terms of: gradient step timings, and convergence plots of RMSE of reconstruction against wall clock time, and finally on object bounding box precision and object count error in the 3D room dataset.\\n\\nThe key novelty of the proposed model is that it decomposes the foreground latent variable into a set of latents (one for each detected object), and attends to these in parallel. This leads to improved speed compared to SPAIR, as demonstrated by the gradient step timings. I am convinced that the proposed model is asymptotically faster than SPAIR. However, the fact that the timings are only reported for the gradient step and not more comprehensively for entire training and inference step, is unsatisfying. I found the qualitative comparisons to be confusing as they were mostly for different input frames, making it hard to have a direct comparison of the quality between the proposed method and baselines. Moreover, the quantitative results reporting bounding box precision are confusing. Why report precision at exactly IoU = 0.5 and IoU in [0.5, 0.95] instead of the more standard precision at IoU >= 0.5 (and higher threshold values such as 0.95)? The differences in the reported results seem relatively small and in my opinion, not conclusive given the above unclear points.\\n\\nDue to the above weaknesses in the evaluation, I am not fully convinced that the claimed contributions are substantiated empirically. Thus I lean towards rejection. However, since I am not intimately familiar with the research area, I am open to being convinced by other reviewers and the authors about the conceptual contributions of the model. As it stands, I don't think this contribution is strong enough to merit acceptance.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Positives:\\n+The system makes sense and is explained well\\n+The factoring of scenes into objects and multiple background components is good\\n+I think overall the experiments are reasonable, although I have a number of questions about whether aspects of them are apples-to-apples\", \"negatives\": \"-Some of the experiments do not appear apples-to-apples\\n-There are a large number\\u00a0of changes, and there aren't any ablations. It's a little hard to follow and verify that the gains are credited properly. \\n\\nOverall, I'm favorably inclined towards accepting this paper so long as the experiments are more clearly made apples to apples. Right now, since I'm forced to give a binary decision and I'm not positive about comparisons, I have to lean towards rejection -- I'd peg my actual rating as 4.5.\", \"method\": \"+The method is well-explained and straight-forward (in a good way).\\u00a0\\n+The factoring of scenes into objects and multiple background components is good\\n+The parallelization is good, and the fact that it works far faster than SPAIR with similar results is quite nice\", \"experiments\": \"+Overall the experiments are pretty good and compare against\\u00a0the baselines I would expect, and have both qualitative and quantitative results.\\n+The method appears to do a good job of segmenting the objects, and if Figure 1 is representative, this is quite impressive.\\u00a0\\n-Why does Figure 1 show results from different systems on different images? This makes comparison impossible. Paired samples are always more informative.\\n\\n-It's not clear to me that fair comparisons were done, especially to GENESIS.\\u00a0\\n(a) It's never listed how K for genesis was picked -- this should presumably be tuned somewhere to optimize performance. The paper mentions in the 4.2 that it was impossible to run the experiments for GENESIS for more than 256 components -- but the GENESIS paper has numbers more like K=9. If there are an overabundance of components, this might explain some of the object splitting observed in the paper.\\n(b) Unless I'm missing something, in Figure 5, for 4x4 and 8x8, it doesn't appear that IODOINE\\u00a0or GENESIS have converged at all. Does the validation MSE just then flatten (or go up) there? This is also wall-clock, so I'm not sure why things\\u00a0would stop there. This seems to conflate training speed with performance (although also note that the wall clock times being discussed are pretty small -- the rightmost side of the graph is ~14 hours -- hardly a burdensome experiment).\\n(c)\\u00a0Similarly, for 16x16 cells, SPAIR seems to be improving consistently. Is it being cut off?-Figure 5 -- The caption for the figure things appears to not make sense: GENESIS is listed as having K = HxW+5 components and SPACE has K=5 listed. Neither make sense to me. Are they out of order?\\n(d) For SPAIR in Table 1, it's not clear whether it's the slow SPAIR that was mentioned previously or the fast one (e.g., the predicted boxes are described as the same quality as SPAIR -- but is it the slow SPAIR or fast one?). I think the paper would benefit from being a bit clearer about this. I get that the parallel decomposition, in some sense, may be necessary to get any results. 
But I wish the paper were a bit more explicit.\\n\\n\\n-There are some reasonable results. I realize that there isn't existing ground truth on atari and other games, but why not label a few hundred frames manually?\\u00a0\\n-It would have been nice to have an ablation of some of the components, including the boundary loss. Unfortunately, there's a complex multi-part system and it's not clear how to break off components apart for reuse elsewhere.\\n\\nSmall stuff that doesn't affect my review:\\n1) Figure 5 -- the figure text size is tiny and should be fixed.\\u00a0\\n2) Eqn 3, subscript of the product \\\"i\\\" -> \\\"i=1\\\"\\n3) Table 1 -- captions on tables go on top\\n4) Now that the systems work like this, I'd encourage the authors to go and try stuff on more realistic data.\\n5) I would be a little wary of making a big deal out of the discovery of the Montezuma's Revenge key. I realize this is indeed important, but I don't see why something like the slic superpixel objective, or felzenswalb-huttenlocher wouldn't find it either. I think it's great that there's a setting in terms of network capacity (for fg/bg networks) that yields this result, but this seems to depend heavily on the particular networks used for each of the parts of the method, and not on the general method. Also, it seems largely a function of the fact that they're a small region with a different color.\\n\\n-----------------------------------------\", \"post_rebuttal_update\": \"I have read the authors' response and I am happy to increase my rating to 6.\"}"
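The classical baselines the reviewer names are available off the shelf, so the "would a superpixel method also find the key?" question is cheap to test. A minimal sketch (editor's illustration; the filename and all parameter values are placeholder guesses, not tuned settings):

```python
from skimage import io
from skimage.segmentation import felzenszwalb, slic

frame = io.imread("montezuma_frame.png")   # hypothetical Atari frame
fh_labels = felzenszwalb(frame, scale=100, sigma=0.5, min_size=20)
slic_labels = slic(frame, n_segments=50, compactness=10.0)
# Inspect whether any resulting segment isolates the key region.
```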
]
} |
HyeCnkHtwH | Efficient generation of structured objects with Constrained Adversarial Networks | [
"Jacopo Gobbi",
"Luca Di Liello",
"Pierfrancesco Ardino",
"Paolo Morettin",
"Stefano Teso",
"Andrea Passerini"
] | Despite their success, generative adversarial networks (GANs) cannot easily generate structured objects like molecules or game maps. The issue is that such objects must satisfy structural requirements (e.g., molecules must be chemically valid, game maps must guarantee reachability of the end goal) that are difficult to capture with examples alone. As a remedy, we propose constrained adversarial networks (CANs), which embed the constraints into the model during training by penalizing the generator whenever it outputs invalid structures. As in unconstrained GANs, new objects can be sampled straightforwardly from the generator, but in addition they satisfy the constraints with high probability. Our approach handles arbitrary logical constraints and leverages knowledge compilation techniques to efficiently evaluate the expected disagreement between the model and the constraints. This setup is further extended to hybrid logical-neural constraints for capturing complex requirements like graph reachability. An extensive empirical analysis on constrained images, molecules, and video game levels shows that CANs efficiently generate valid structures that are both high-quality and novel. | [
"deep generative models",
"generative adversarial networks",
"constraints"
] | Reject | https://openreview.net/pdf?id=HyeCnkHtwH | https://openreview.net/forum?id=HyeCnkHtwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"sB_v08TlTO",
"r1xvGsaZsr",
"Hyx9h93-oH",
"SJgSmV5bsr",
"SygPGB_3YB",
"rkeHQhM3Kr",
"BJgmAroiFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737204,
1573145359469,
1573141170088,
1573131292792,
1571747087438,
1571724317113,
1571694027294
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1971/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1971/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1971/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1971/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1971/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1971/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper develops ideas for enabling the data generation with GANs in the presence of structured constraints on the data manifold. This problem is interesting and quite relevant to the ICLR community. The reviewers raised concerns about the similarity to prior work (Xu et al '17), and missing comparisons to previous approaches that study this problem (e.g. Hu et al '18) that make it difficult to judge the significance of the work. Overall, the paper is slightly below the bar for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer to Reviewer #1\", \"comment\": \"Thank you for your detailed review.\", \"clarity\": \"We are happy to rewrite any parts of the paper that may be unclear.\\n\\nEq. 3: Theta is the distribution output by the stochastic generator, as explained at the end of the section on GANs. We will make it more clear.\", \"no_details_on_knowledge_compilation\": \"All of the relevant details can be found in Xu et al. We will update the short introduction to KC in the methods section to be more self-contained.\\n\\nIs KC used in the experiments? All constraints used in the experiments required us to apply KC. We compiled the constraints into SDDs using the pysdd library.\\n\\nSome constraints result in extremely large circuits that cannot fit in memory. We show how to deal with these overly complex constraints by using simpler constraints on a projected space. We remark that this is not exclusive of our approach but, to the best of the authors\\u2019 knowledge, this is the first work that applies these ideas to effectively approximate the exact SL. We mentioned in Sections 1,3 that this approach is used in the level generation task to approximate with a propositional formula the reachability (cf. pages 5-6). Although showcased in the level generation setting only, the approach is general. We will clarify these aspects in the revised version.\", \"reproducibility\": \"Reproducibility is crucial for us. We will shortly share an archive with the anonymized version of the code with the reviewers. We will also add a link to the code to the paper, as well as details on the architectures and hyperparameters in an Appendix. An updated manuscript will be uploaded soon.\", \"significance\": \"The focus of the molecules generation experiment is not on comparing the SL with other forms of supervision nor on comparing every existing approach in molecule generation, but rather on showing how the SL can be used in conjunction with reinforcement-based approaches (as used in MolGAN, ORGANs, etc.) to mitigate the mode collapse and foster diversity. This is achieved by applying different constraints on different subregions of the latent space.\\n\\nOur experiments show that the SL improves the quality of the generated structures when combined with constrained baselines.\\n\\nNotice that in some cases the baseline already generates mostly valid structures, as in the molecule generation experiment. In this case, there is little gain in trying to improve the validity further. For this reason, we use the SL to improve the other quality measures. The results show that indeed the SL improves uniqueness and diversity, see Table 2.\", \"baselines\": \"Thank you for the additional references.\", \"technical_contributions\": \"We stress that our contribution does not equate to combining GANs and the SL. Our contributions are as follows:\\n\\nWe prove that GANs by design allocate non-zero mass to invalid structures whenever the dataset is noisy.\\n\\nWe fix this issue by pairing the generator with the SL and showing that the resulting architecture provably produces valid structures only (in the limit of $\\\\lambda \\\\to \\\\infty$). 
Compared to alternative architectures, the resulting model enjoys exact probabilistic semantics and can natively handle any arbitrary discrete constraint without special-purpose architectural modifications.\\n\\nWe show that the SL can be successfully supplemented with a neural component in practice when the constraints are beyond the reach of model counting technology.\\n\\nWe show that constraints can be turned on and off at test time despite being \\\"baked\\\" into the generator at training time.\"}",
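To make the role of knowledge compilation concrete: the semantic loss is the negative log-probability that the generator's (assumed independent) output bits satisfy the constraint. A brute-force sketch follows (editor's illustration; real constraints are compiled into SDD circuits, e.g. with the pysdd library the authors mention, precisely to avoid this exponential enumeration):

```python
from itertools import product
import math

def semantic_loss(probs, constraint):
    """-log P(constraint holds) for independent Bernoulli bits (brute force)."""
    p_valid = 0.0
    for bits in product([0, 1], repeat=len(probs)):
        if constraint(bits):
            w = 1.0
            for p, b in zip(probs, bits):
                w *= p if b else (1.0 - p)
            p_valid += w
    return -math.log(p_valid)

# Example: an exactly-one constraint over three output probabilities.
print(semantic_loss([0.7, 0.2, 0.4], lambda b: sum(b) == 1))  # ~0.759
```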
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"Thank you for your careful review.\", \"naming\": \"We only coined the term CAN for recognizability. We are open to changing the\\nname of our approach, if necessary.\", \"missing_technical_details_and_discussion\": \"We will provide any missing material directly in an updated version of the manuscript very soon.\", \"technical_contributions\": \"Our paper does not equate to GAN + SL. Our technical contributions are:\\n\\nShowing that GANs by design allocate non-zero mass to invalid structures whenever the dataset is noisy.\\n\\nFixing this issue by pairing the generator with the SL and showing that the resulting architecture provably produces valid structures only (in the limit of $\\\\lambda \\\\to \\\\infty$). Compared to alternative architectures, the resulting model enjoys exact probabilistic semantics and can natively handle any arbitrary discrete constraint without special-purpose architectural modifications.\\n\\nWe show that the SL can be successfully supplemented with a neural component in practice when the constraints are beyond the reach of model counting technology.\\n\\nWe show that constraints can be turned on and off at test time despite being \\\"baked\\\" into the generator at training time.\", \"using_alternatives_like_constraint_solvers\": \"State-of-the-art approaches to discrete (weighted) sampling under constraints are either solver-based [a,b] or knowledge-compilation-based [c]. Approaches in the first group are approximate and rely on invoking a (usually NP-hard) oracle. Also, these approaches tackle sampling, not learning. Approaches in the second group, like PSDDs [c], make use of the same knowledge compilation techniques that underlie the Semantic Loss. If the constraints are very complex, KC may output very large circuits (polynomials) that in turn seriously affect inference runtime and space requirements. CANs on the other end only need the circuit during training (which can be handled on larger machines and is only performed once). Further, the complexity of inference in CANs does not depend on the complexity of the constraints, while in PSDDs it does. Finally, PSDDs can be learned from data, but just like GANs, they cannot acquire and apply constraints from a handful of potentially noisy examples.\\n\\n[a] Chakraborty et al. \\u201cDistribution-Aware Sampling and Weighted Model Counting for SAT\\u201d, 2014.\\n[b] Ermon et al. \\u201cEmbed and Project:Discrete Sampling with Universal Hashing\\u201d, 2013.\\n[c] Kisa et al. \\u201cProbabilistic sentential decision diagrams\\u201d, 2014.\\n\\nWe will make sure to discuss discrete sampling technology more in detail in the related work.\", \"cesagan\": \"The main differences with CANs are as follows:\\n\\n(1) It is unclear if CESAGANs (and specifically count vectors plus an embedding layer) can be extended to deal with arbitrary logical constraints, like CANs do.\\n\\n(2) In CESAGANS, the count vector is given as input to the discriminator, not directly to the generator, which introduces one layer of indirection; in CANs this is not necessary.\\n\\n(3) The relationship between the count vector and the decision of the discriminator must be learned, which is non-trival without extra supervision and, again, more indirect than imposing the SL loss term in CANs. \\n\\n(4) As in MarioGANs, the supervision on the playability is given by an A* agent, resulting in a much computationally expensive training. 
In our experiments the agent is only used for performance evaluation.\\n\\n(5) CESAGANs focus on level generation and were not tested on other generative tasks, while we applied CANs to multiple applications.\\n\\nAn empirical comparison could be interesting, but [d] is not peer-reviewed and the code is not available.\\n\\nFinally, the pre-print [d] was uploaded to ArXiV after the ICLR \\u201820 deadline.\\n\\n[d] https://arxiv.org/abs/1910.01603\", \"missing_details_and_missing_discussion\": \"Thank you for pointing out this deficiency of our paper. We will upload an updated version soon.\", \"training_on_a_single_level\": \"We can definitely use more than one level during the training. In order to compare with MarioGAN, we trained the generation on a single level. In the MarioGAN paper, the authors use 1-1. We choose 1-3 and 3-3 as they are more challenging with respect to the playability. This was mentioned in the caption of Table 1; we will make sure to make this more prominent.\"}",
"{\"title\": \"Replies to Reviewer #2\", \"comment\": \"Thank you for your thoughtful review.\\n\\n\\nIncrementality wrt [1]: Our contributions go beyond applying the SL to GANs:\\n\\nWe show that GANs by design generate invalid structures if the data is noisy. To the best of our knowledge, previous papers on deep generative models for structured outputs do not look into this at all.\\n\\nCANs generalize beyond existing ad-hoc architectures, as thanks to the SL they can natively handle any arbitrary discrete constraint.\\n\\nWe discuss one case where the SL *cannot* be used as-is (i.e., for the level-wide reachability constraint in the mario experiment) and show that in practice it is possible to replace parts of the SL using a neural network.\\n\\nWe show that the constraints, although \\\"baked\\\" into the generator at training time, can be turned on and off using an InfoGAN-like approach (cf. the molecules generation experiment). This technique can also be used to sample valid objects from different modes, thus also mitigating mode collapse.\\n\\nWe agree that these contributions were not made clear enough, and we will definitely amend the paper to this effect.\\n\\n\\nComparison with [2]: We had initially considered using posterior regularization for CANs, but a major issue it is that rewriting the constraint (e.g. applying De Morgan) is not guaranteed to preserve the semantics and thus may change the loss function. This issue does not affect the SL. Moreover, since the SL can be evaluated efficiently in the GAN case, we have no need to fit a variational distribution $q$. We agree that an empirical comparison with [2] is in order, however their code is not publicly available.\\n\\n\\nPlease expect an updated manuscript shortly.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors describe a method to improve the performance of generative adversarial networks in the task of generating structured objectives that have to satisfy complicated constraints. The proposed solution involves using an additional term in the GAN objective that penalizes the generation of invalid samples. This term, called the semantic loss, is given by a multiple of the log probability of the model generating valid samples.\", \"clarity\": \"The paper is not very well written and several parts need to be clarified. In particular, in equation 3. What is theta in this equation? how is it obtained? The authors mention briefly how their method could be used to deal with intractable constraints, but they're almost no specific details or examples of how this is done in practice. The proposed approach relays on the knowledge compilation method, but they're very few details of it in the document. Is it used at all in the experiments?\\n\\nI am concern about the lack of reproducibility of the paper. I believe, from the paper as it is, it will be impossible to reproduce the results. There are no details about public code release, hyper-parameters settings, etc. For example,\\nin section 4.3 the authors mention that they condition the constraint on 5 latent dimensions without giving details about which dimensions exactly.\", \"significance\": \"It is hard to quantify the significance of the contribution. The constrained images problem is very toy and simple and the experiments with molecules do not include any baseline (only the GAN model without the constraint). There have been\\nmany recent contributions improving the validity of generative models for molecules and the authors do not compare with any of them.\\n\\nThe authors also fail to cite relevant work such as\\n\\nJaques, Natasha, et al. \\\"Sequence tutor: Conservative fine-tuning of sequence\\ngeneration models with kl-control.\\\" Proceedings of the 34th International\\nConference on Machine Learning-Volume 70. JMLR. org, 2017.\\n\\nSeff, Ari, et al. \\\"Discrete Object Generation with Reversible Inductive\\nConstruction.\\\" arXiv preprint arXiv:1907.08268 (2019).\", \"novelty\": \"The proposed approach is rather incremental and lacks novelty. It consists in just applying the semantic loss approach of Xu et al. 2018 to GAN training, with very limited new methodological or algorithm contributions.\", \"quality\": \"The experiments performed are not strong enough to validate the proposed method. The authors do not consider strong baselines in their evaluations.\", \"summary\": \"I find that the problem addressed by the authors is highly relevant and the proposed approach has the potential to be useful in practice. However, the paper needs to be improved regarding its clarity, reproducibility and strength of experiments before it can be accepted for publication.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed Constrained Adversarial Networks (CAN), which incorporates structural constraints by augmenting a penalty term in the training object. The penalty term is formulated as the semantic loss proposed in [1] which can handle any logical constraints. Experiments are demonstrated to show the advantage of CAN over standard GAN in terms of whether the generated samples satisfy the hard constraints, and whether they are novel and unique.\\n\\nFirst, I'd like to thank the authors for making this paper easy to follow. I like the idea of encouraging constraints for generative models, which is useful and interesting. However, given the published paper [1], this work seems to be a bit incremental.\\n\\nThe semantic loss for incorporating constraints and the knowledge compilation techniques for efficient evaluation are both introduced and discussed in [1]. The novelty of this paper is to apply these techniques to generative models, which seem to be a bit straightforward. A similar idea is proposed in [2], where the authors also discussed logical constraints and generative models, but they call the augmented penalty as 'posterior regularization'. I will be interested in a comparison to their method in terms of both methodology level and experiment level.\\n\\nOverall the contribution of this paper does not seem to be strong enough. I would personally vote for weak rejection.\\n\\n[1] Xu, Jingyi, et al. \\\"A semantic loss function for deep learning with symbolic knowledge.\\\" arXiv preprint arXiv:1711.11157 (2017).\\n[2] Hu, Zhiting, et al. \\\"Deep generative models with learnable knowledge constraints.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper the authors present a Generative Adversarial Neural Networks with Xu et al.\\u2019s semantic loss applied to the generator. They call this GAN a Constrained Adversarial Network or (CAN) and identify it as a new class of GAN. The authors present three different problem domains for their experiments focused on the generation of constrained images, chunks of Super Mario Bros.-style levels, and molecules. For each domain they include particular constraints for the semantic loss, which biases the generator towards creating valid content according to these constraints.\\n\\nThe paper at present has a number of issues holding it back. First, I am not convinced by the author\\u2019s claims that the application of an existing loss function to the generator is sufficient to identify a new class of GAN. Second, there is a lack of technical detail in the experiments necessary to replicate them. Third, there is a lack of discussion of the experimental results to place them in context for readers. Finally following from the earlier points, there seems to be a lack of technical contributions in the paper. \\n\\nI certainly agree with the authors about the inability of GANs to learn structural constraints with insufficient training data, as this has been demonstrated in many examples of prior work. I also agree that particular problem domains, as identified by the authors, have stronger structural requirements. However, it is unclear to me why in these instances one would use GANs and not some alternative approach such as constraint-based solvers. Or even if one wanted to employ GANs, what the benefit of adapting the constraints into a loss function is compared to say constraining generated output in a post-hoc process.\\n\\nThe descriptions of the two of the three experiments do not include any discussion of the GAN architectures or hyperparameters. While this is not strictly necessary in the paper text some discussion in an appendix or a citation to a prior application of the architecture(s) would be appropriate. Without this, it is impossible for future researchers to replicate these results. Further, it is difficult for readers to place the results in context. For individual experiments, such as the Super Mario Bros. experiment, it is unclear why certain choices were made. For example, why train a GAN on just level 1-3 or 3-3, and not train a single model on multiple levels as is common in the field of procedural content generation via machine learning. \\n\\nThere is a lack of discussion in the paper on the results of each experiment. For example, the output of the GANs for all the experiments seems quite low, and the differences in terms of the results between the GAN and the CAN across the experiments do not seem to be substantial. Some discussion to put this into context for readers would be helpful.\", \"as_far_as_i_can_understand_the_primary_technical_contributions_of_the_paper_are\": \"(1) the application of Xu et al.\\u2019s semantic loss to GANS, (2) the constraints developed for the three experiments, and (3) the results of the three experiments. 
I am unconvinced of the utility of these contributions to a general machine learning audience.\\n\\n---\\n\\nUpdated my review as the authors included extra detail regarding the experiments in a new draft, which helped with the reproducibility issue. However, I am still unconvinced in the contributions of the paper outside of what I previously listed. While I am also unfamiliar with any prior example demonstrating that GANs produce invalid structure, this is not a surprising result. Especially as validity can be defined in an arbitrary, domain-specific manner.\"}"
]
} |
Hkxp3JHtPr | Deep Variational Semi-Supervised Novelty Detection | [
"Tal Daniel",
"Thanard Kurutach",
"Aviv Tamar"
] | In anomaly detection (AD), one seeks to identify whether a test sample is abnormal, given a data set of normal samples. A recent and promising approach to AD relies on deep generative models, such as variational autoencoders (VAEs), for unsupervised learning of the normal data distribution. In semi-supervised AD (SSAD), the data also includes a small sample of labeled anomalies. In this work, we propose two variational methods for training VAEs for SSAD. The intuitive idea in both methods is to train the encoder to ‘separate’ between latent vectors for normal and outlier data. We show that this idea can be derived from principled probabilistic formulations of the problem, and propose simple and effective algorithms. Our methods can be applied to various data types, as we demonstrate on SSAD datasets ranging from natural images to astronomy and medicine, and can be combined with any VAE model architecture. When comparing to state-of-the-art SSAD methods that are not specific to particular data types, we obtain marked improvement in outlier detection. | [
"anomaly detection",
"semi-supervised anomaly detection",
"variational autoencoder"
] | Reject | https://openreview.net/pdf?id=Hkxp3JHtPr | https://openreview.net/forum?id=Hkxp3JHtPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QBmNQo6QO4",
"BylzJCALjS",
"H1eZ96CLir",
"B1xga208oH",
"SkgaN20Usr",
"H1x83bU7cS",
"BJgu6JrjtS",
"BkgL_iMqKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737175,
1573477850471,
1573477768640,
1573477559775,
1573477429268,
1572196781683,
1571667904247,
1571593070363
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1970/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1970/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1970/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1970/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1970/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1970/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1970/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents two novel VAE-based methods for semi-supervised anomaly detection (SSAD) where one has also access to a small set of labeled anomalous samples. The reviewers had several concerns about the paper, in particular completely addressing reviewer #3's comments would strengthen the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General Statement\", \"comment\": \"We thank the reviewers for their thoughtful feedback.\", \"we_want_to_emphasize_the_motivation_for_our_ssad_study\": \"as our experiments demonstrate, even a small fraction of labeled data can significantly improve AD performance. Furthermore, our methods allow to improve general VAEs with negative samples, which, as we show, can have interesting applications beyond AD.\\n\\nTwo reviewers questioned how much the ensembles play a role in our results. We have added experiments to address that, and detailed our findings in our answers below. In short, ensembles are important for our method, but do not help the competing Deep SAD method. Thus, our proposed variational method (+ensembles) is, to our knowledge, the best performing SSAD approach.\\n\\nWe hope that our answers below address the other concerns, and we kindly ask the reviewers to reconsider their scores, given that our method is novel, general, well-motivated, and exhibits state-of-the-art performance.\"}",
"{\"title\": \"Addressing the reviewer concerns\", \"comment\": \"Dear reviewer, thank you for the encouraging review. We hope that the following will alleviate your concerns:\\n\\n1. The reconstruction term in the VAE is a direct consequence of optimizing the variational lower bound of the data log-likelihood. According to this formulation, our model is trained to output the (approximate) likelihood of data samples, making for a principled approach to novelty detection: a test sample is detected as normal if its likelihood under our model is high.\\nThis is starkly different from discriminative supervised methods, which do not model the data distribution. The generative unsupervised approach is fundamentally sound *even without any labeled outlier data*, and our work shows that just a few labeled samples can significantly improve results.\\n\\n2. We share similar views on improving anomaly detection with other auxiliary tasks. For example, [1], the SOTA in AD for images uses self-supervision with geometric transformations. It would be interesting to combine such approaches with our work, and we are actively working on this direction.\\n\\n3. We emphasize that our method works well even with much less than 10% of anomalies (e.g., 1% can lead to dramatic improvement in our results). \\nWhile there are tricks for mitigating class imbalance for supervised methods, we emphasize that our experimental setting, which builds on [2], is very different: it measures novelty detection of *unknown classes*. That is, during training, the classifier sees a large proportion of data from the normal class (e.g., the 0 digit in MNIST), and a small proportion from *only one* of the other anomaly classes (e.g., the digit 3). At test time, the anomalies are presented *from all of the classes* (digits 1-9). Thus, any class imbalance trick applied to supervised learning will not help, and we are not aware of standard tricks to make a discriminatively trained network correctly generalize to classes it has never seen before. This is validated in our results: the supervised classifiers failed to generalize to the unseen anomaly classes.\\n\\n[1] Golan, Izhak, and Ran El-Yaniv. \\\"Deep anomaly detection using geometric transformations.\\\" Advances in Neural Information Processing Systems. 2018.\\n[2] Ruff, Lukas, et al. \\\"Deep Semi-Supervised Anomaly Detection.\\\" arXiv preprint arXiv:1906.02694 (2019).\"}",
"{\"title\": \"Addressing the reviewer concerns\", \"comment\": \"Dear reviewer, thank you for your extensive review. We greatly appreciate the effort to improve our paper.\\n\\n1. While DP-VAE builds on a latent distribution for anomalies, this does not imply that anomalies are similar! \\nFor example, VAEs are known to generate very expressive distributions from just a single Gaussian prior. \\nFurthermore, note that even a standard VAE, which approximates the data distribution, is already a capable novelty detector just by training on normal data samples. \\nThe DP-VAE acts to *refine* the anomaly detection, by pushing anomalies outside the normal-data prior, thereby increasing their KL.\\nThe fact that in DP-VAE the KL increase has a direction (towards another Gaussian), while in MML-VAE it is not directed towards a specific point, does not mean that the method requires outliers to be clustered. This is because, as described in Section 4.3, *we only use the normal data prior for novelty detection*!\\nThis is also clearly demonstrated in our experiments. We train on data from only one class of anomalies, and test on different classes. If training on the class of, say, dogs, with anomalies from class, say, airplane, improves anomaly detection of images of horses, this clearly indicates that our method *does not overfit to the training anomaly class*.\\n\\n2. You are correct. Due to limited computational resources, we were not able to optimize MML-VAE in time for the deadline. During the review period, we were able to tune the model better, and the results are reported in the revised paper. MML-VAE is on par with DP-VAE. We have independently experimented with all of the methods you suggested, including gradient clipping, thresholding and learning rate scheduling. Learning rate scheduling combined with gradient clipping led to the improved results.\\n\\n3. We have extended our ablation study to CIFAR-10. Hyperparmeter tuning for the image datasets was done by measuring the AUROC on a validation set taken from training data, i.e., the normal class and the currently-trained-on outlier class (only one class).\", \"ensemble_ablation_study\": \"per our answer to reviewer #2, we agree, this is a valid concern, and we have expanded our experiments to address it. In our study, ensembles can improve our method\\u2019s result by approximately 2-4% on average, which makes sense as VAE training is a stochastic process. However, we found that ensembles *do not improve the results for Deep-SAD, the previous state-of-the-art*. This can be explained as follows. In Deep-SAD, confidence is measured according to distance to an arbitrary point C in the vector space. Thus, the scores from different networks in the ensemble are not necessarily calibrated (they have different C points). In our VAE approach, on the other hand, the score of each network is derived from the same principled variational bound, and therefore the scores are calibrated, giving rise to the benefit of the ensemble.\\nTo summarize, the use of ensembles is a specific feature of our approach, and using it does not improve the previous state-of-the-art Deep-SAD method, further supporting our claims in the paper.\\n\\n4. Thank you for the ideas for improvement. We agree that combining Deep-SAD with the CUBO may prove to be an exciting direction. We are happy to include additional anomaly detection experiments as the reviewer suggested in the final version. 
The hyperparameters are dependent on the type of data, and the range is similar within the types. We have detailed the hyper-parameters for each dataset in the appendix.\\n\\n5. Minor comments section: thank you for noticing the small errors in the text. As for the rest:\\na. Autoencoders and Variational Autoencoders are very different in theory and in practice, and are known to generate different behaviour, even for similar architectures.\\n\\nb. The CUBO was derived and analyzed in [1]. It is not limited to normal distributions.\\n\\nOnce again, we are grateful for the efforts you made in your review and hope that we have addressed your concerns. \\n\\n[1] Dieng, Adji Bousso, et al. \\\"Variational Inference via $\\\\chi $ Upper Bound Minimization.\\\" Advances in Neural Information Processing Systems. 2017.\"}",
"{\"title\": \"Addressing the reviewer concerns\", \"comment\": \"Dear reviewer, thank you for taking the time to review our paper. Your comments are much appreciated! Regarding your concerns:\\n\\n1. All the results are now reported in the revised paper. The missing experiments in the submission were unfortunately due to limited resources - some of the MML-VAE experiments did not finish before the ICLR deadline.\\n\\n2. DS-EBM results are reported for the cats-vs-dogs experiment, and described in Appendix A.2. Thank you for pointing us to Song et al.\\u2019s recent work. These results are impressive, and we have duly added them to the text. \\nThat said, we emphasize that *our work focuses on semi-supervised anomaly detection, which has not been handled by said EBM approaches*. Specifically, while for unsupervised AD, Inclusive-NRF\\u2019s results of 70.2% on CIFAR are much better than the 52.7% score of the vanilla VAE, even just 1% of labelled data is enough to get 74.5% with our DP-VAE.\\nThus, for the SSAD setting, *our results are state-of-the-art*, and further demonstrate the importance of the SSAD setting.\\nWe also mention that the Inclusive-NRF work used much more expressive ResNet architectures, while we opted for much simpler architectures, for a fair comparison with the Deep-SAD work (otherwise one could have argued that our improvement is only due to better architecture search).\\nWe agree that studying SSAD with EBMs is a very promising direction. In any such study, though, our current results should be a baseline for comparison.\\nFinally, the promising results of EBMs *should not waive off the study of VAE-based methods*. VAEs are widely used in many applications, and our experiments on motion planning demonstrate the benefit of extending VAEs to SSAD. Since our work is also SOTA in SSAD, we kindly ask the reviewer to reconsider the evaluation of our work\\u2019s motivation.\\n\\n3. We agree - as reported, our use of ensembles raises a valid concern, and we have expanded our experiments to address it.\\nIn our study, ensembles can improve our method\\u2019s result by approximately 2-4% on average, which makes sense as VAE training is a stochastic process. However, we found that ensembles *do not improve the results for Deep-SAD, the previous state-of-the-art*. This can be explained as follows. In Deep-SAD, confidence is measured according to distance to an arbitrary point C in the vector space. Thus, the scores from different networks in the ensemble are not necessarily calibrated (they have different C points). In our VAE approach, on the other hand, the score of each network is derived from the same principled variational bound, and therefore the scores are calibrated, giving rise to the benefit of the ensemble.\\nTo summarize, the use of ensembles is a specific feature of our approach, and using it does not improve the previous state-of-the-art Deep-SAD method, further supporting our claims in the paper. \\n\\nWe thank you again for your comments and hope you will reconsider your evaluation of our paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes two variational methods for training VAEs for SSAD (Semi-supervised Anomaly Detection). Experiments on benchmarking datasets show improvements over state-of-the-art SSAD methods.\\n\\nIn generally, the paper is well written. But I have some concerns.\\n\\n1. Some of the results have not yet been obtained.\\n\\n2. Missing some relevant references.\\nIn addition to VAEs, there is another class of deep generative models - random fields (a.k.a. energy-based models, EBMs), which have been applied to anomaly detection (AD) recently. Particularly, the unsupervised AD results on MNIST and CIFAR-10 from [2] are much better than the proposed methods (MML-VAE, DP-VAE).\\nThough semi-supervised AD is interesting, good performances on unsupervised AD can be a baseline indicator of the effectiveness of the AD models. The authors should add comments and comparisons.\\n\\n[1] S. Zhai, Y. Cheng, W. Lu, and Z. Zhang, \\u201cDeep structured energy based models for anomaly detection,\\u201d ICML, 2016.\\n[2] Y. Song, Z. Ou. \\\"Learning Neural Random Fields with Inclusive Auxiliary Generators,\\\" arxiv 1806.00271, 2018.\\n\\n3. \\u201cFor all of the experiments, our methods use an ensemble of size K = 5.\\u201d\\nAre other methods also tested by using an ensemble?\\n\\n--------update after reading the response-----------\\nThe updated paper has been improved to address my concerns.\\n\\nI partly agree with the authors that their results demonstrate the importance of the semi-supervised AD setting (a 1% fraction of labelled anomalies can improve over the state-of-the-art AD scores of deep energy based models). However, I think, the proposed methods in this paper will not be as competitive as semi-supervised deep energy based models.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents two novel VAE-based methods for the (more general) semi-supervised anomaly detection (SSAD) setting where one has also access to some labeled anomalous samples in addition to mostly normal data. The first method, Max-Min Likelihood VAE (MML-VAE), extends the standard VAE objective that maximizes the log-likelihood for normal data by an additional term that in contrast minimizes the log-likelihood for labeled anomalies. To optimize the MML objective, the paper proposes to minimize the sum of the standard (negative) ELBO for normal samples and the so-called CUBO, which is a variational upper bound on the data log-likelihood, for anomalous samples. The second method, Dual Prior VAE (DP-VAE), modifies the standard VAE by introducing a second separate prior for the anomalous data, which is also Gaussian but has different mean. The DP-VAE objective then is defined as the sum of the two respective ELBOs which is optimized over shared encoder and decoder networks (with the adjustment that the outlier ELBO only updates the encoder). The anomaly score for both models then is defined as the (negative) ELBO of a test sample. Finally, the paper presents quite extensive experimental results on the benchmarks from Ruff et al. [2], CatsVsDogs, and an application of robotic motion planning which indicate a slight advantage of the proposed methods.\\n\\nI am quite familiar with the recent Deep SAD paper [2] this work builds upon and very much agree that the (more general) SSAD setting is an important problem with high practical relevance for which there exists little prior work. Overall this paper is well structured/written and well placed in the literature, but I think it is not yet ready for acceptance due to the following key reasons: \\n(i) I think DP-VAE, the currently better performing method, is ill-posed for SSAD since it makes the assumption that anomalies are generated from one common latent prior and thus must be similar; \\n(ii) I think the worse performance of MML-VAE, which I find theoretically sound for SSAD, is mainly due to optimization issues that should be investigated; \\n(iii) The experiments do not show for the bulk of experiments how much of the improvement is due to meta-algorithms (ensemble and hyperparameter selection on a validation set with some labels).\\n\\n(i) DP-VAE models anomalies to be generated from one common latent distribution (modeled as Gaussian here) which imposes the assumption that anomalies are similar, the so-called cluster assumption [2]. This assumption, however, generally does not hold for anomalies which are defined to be just different from the normal class but anomalies do not have to be similar to each other. Methodologically, DP-VAE is rather a semi-supervised classification method (essentially a VAE with Gaussian mixture prior having two components) which the paper itself points out is ill-posed for SSAD: \\u201c... the labeled information on anomalous samples is too limited to represent the variation of anomalies ... 
.\\u201d I suspect the slight advantage of DP-VAE might be mainly due to using meta-algorithms (ensemble, hyperparameter selection) and due to the rather structured/clustered nature of anomalies in the MNIST, F-MNIST, and CIFAR-10 benchmarks.\\n\\n(ii) I find MML-VAE, unfortunately the worse performing method, to be a conceptually sound approach to SSAD following the intuitive idea that normal samples should concentrate under the normal prior whereas the latent embeddings of anomalies should have low likelihood under this prior. This approach correctly does not make any assumption on the latent structure of anomalies as DP-VAE does. I believe MML-VAE in its current formulation leads to worse results mainly to optimization issues that I suspect can be resolved and should be further investigated. I guess the major issue of the MML-VAE loss is that the log-likelihood for outlier samples has steep curvature and is unbounded from below. Deep networks might easily exploit this without learning meaningful representations as the paper also hints towards. This also results in unstable optimization. I think removing the reconstruction term for outliers, as the paper suggests, also helps for this particular reason but this is rather heuristic. These optimization flaws should be investigated and the loss adjusted if needed. Maybe simple thresholding (adding an epsilon to lower bound the loss), gradient clipping, or robust reformulations of the loss could improve optimization already?\\n\\n(iii) To infer the statistical significance of the results and to assess the effect of meta-algorithms (ensemble, hyperparameter tuning) an ablation study as in Table 4 (at least on the effect of ensembling) should be included also for the major, more complex datasets. Which score is used for hyperparameter selection (ELBO, log-likelihood, AUC)? How would the competitors perform under similar tuning?\\n\\n\\n####################\\n*Additional Feedback*\\n\\n*Positive Highlights*\\n1. Both proposed methods can be used with general data types and VAE network architectures (the existing Deep SAD state-of-the-art method employs restricted architectures).\\n2. The paper is well placed in the literature and all major and very recent relevant work that I am aware of are included.\\n3. This is an interesting use of the CUBO bound which I did not know before reading this work. This might be interesting for the general variational inference community to derive novel optimization schemes.\\n4. I found the robotic motion planning application quite cool. This also suggests that negative sampling is useful beyond the AD task.\\n5. I appreciate that the authors included the CatsVsDogs experiment although DADGT performs better as it demonstrates the potential of SSAD. I very much agree that employing similar self-supervised learning ideas and augmentation is a promising direction for future research.\\n\\n*Ideas for Improvement*\\n6. Extend the semi-supervised setting to unlabeled (mostly normal), labeled normal, and labeled anomalous training data. The text currently formulates a setting with only labeled normal and labeled anomalous samples. A simple general formulation could just assign different weights to the unlabeled and labeled normal data terms.\\n7. There might be an interesting connection between MML-VAE and Deep SAD in the sense that MML-VAE is a probabilistic version of the latter. The $\\\\chi_n$ distance of the CUBO loss has terms similar to the inverse squared norm penalty of Deep SAD.\\n8. 
Report the range from which hyperparameters are selected.\\n9. Add the recently introduced MVTec AD benchmark dataset to your experimental evaluation [1].\\n10. Run experiments on the full test suite of Ruff et al. [2]. At the moment only one of three scenarios are evaluated.\\n\\n*Minor comments*\\n11. Inconsistent notation for the expected value ($\\\\mathbb{E}$ vs $\\\\mathbf{E}$)\\n12. In Section 3, the parameterization of the variational approximate $q(z | x)$ is inconsistently denoted by $\\\\phi$ and $\\\\theta$ (which beforehand parameterizes the decoder).\\n13. In Section 3.2, the current formulation first says that MC produces a biased, then an unbiased estimate of the gradients.\\n14. First sentence in Section 4: I would not use \\u201cclassify\\u201d but rather \\u201cdetect\\u201d etc. for anomaly/novelty detection since the task differs from classification.\\n15. In Section 4.2, there should be a minus in front of the KL-divergence terms of the $ELBO_{normal}$ and $ELBO_{outlier}$ equations.\\n16. In the fully unsupervised setting on CIFAR-10 (Table 5), why is the VAE performance essentially at random (~50) in comparison to CAE and Deep SVDD although they use the same network architecture?\\n17. Is the CUBO indeed a strictly valid bound if one considers the non-normal data-generating distribution?\\n18. Are there any results on the tightness of the CUBO?\\n\\n\\n####################\\n*References*\\n[1] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger. Mvtec ad\\u2013a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9592\\u20139600, 2019.\\n[2] L. Ruff, R. A. Vandermeulen, N. Go\\u0308rnitz, A. Binder, E. Mu\\u0308ller, K.-R. Mu\\u0308ller, and M. Kloft. Deep semi-supervised anomaly detection. arXiv preprint arXiv:1906.02694, 2019.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The papers proposes to use VAE-like approaches for semi-supervised novelty detection. Two methods are describes:\\n(1) the MML-VAE fits a standard VAE to the normal samples and add a repulsive term for the outliers -- this term encourages the encoder to map the outliers far from the latent-space prior distribution.\\n(2) the DP-VAE fits a VAE with a mixture of Gaussian prior on the latent space -- one mixture component for normal samples, and one mixture component for outlier samples.\\n\\nThe described methods are simple, natural, and appear to work relatively well -- for this simple reason, I think that the text could be accepted.\\n\\nThere are several things that are still not entirely clear.\\n(1) without the reconstruction term, the methods are small variations of supervised methods. Consequently, I feel that the authors should try to explain much more carefully why the introduction of a reconstruction term (which could be thought as an auxiliary task) helps.\\n(2) given point (1), one could think of many auxiliary task (eg. usual colorisation, or rotation prediction, etc..) Would it lead to worse results?\\n(3) proportion > 10% of anomaly is relatively standard for supervised-methods + few standard tricks to work very well. Although I understand that only one small subset of anomalies is presented during training, I think that it would still be worth describing in more details the efforts that have been spent to try to make standard supervised methods work.\"}"
]
} |
HJeT3yrtDr | Cross-Lingual Ability of Multilingual BERT: An Empirical Study | [
"Karthikeyan K",
"Zihan Wang",
"Stephen Mayhew",
"Dan Roth"
] | Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data. In this work, we provide a comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability. We study the impact of linguistic properties of the languages, the architecture of the model, and the learning objectives. The experimental study is done in the context of three typologically different languages -- Spanish, Hindi, and Russian -- and using two conceptually different NLP tasks, textual entailment and named entity recognition. Among our key conclusions is the fact that the lexical overlap between languages plays a negligible role in the cross-lingual success, while the depth of the network is an integral part of it. All our models and implementations can be found on our project page: http://cogcomp.org/page/publication_view/900. | [
"Cross-Lingual Learning",
"Multilingual BERT"
] | Accept (Poster) | https://openreview.net/pdf?id=HJeT3yrtDr | https://openreview.net/forum?id=HJeT3yrtDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"UeVfJPlYf",
"rylalmYhiH",
"SJlML-Y2iB",
"BJg2ogFnir",
"rkl0klY2sS",
"ryekGZ3ijB",
"rJlUTFWI9B",
"Sklz-Z8JcH",
"B1xvDADRYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737147,
1573847797409,
1573847370200,
1573847203789,
1573847013789,
1573794054881,
1572374973753,
1571934457873,
1571876447139
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1969/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1969/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1969/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1969/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1969/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1969/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1969/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1969/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper introduces a set of new analysis methods to try to better understand the reasons that multilingual BERT succeeds. The findings substantially bolster the hypothesis behind the original multilingual BERT work: that this kind of model discovers and uses substantial structural and semantic correspondences between languages in a fully unsupervised setting. This is a remarkable result with serious implications for representation learning work more broadly.\\n\\nAll three reviewers saw ways in which the paper could be expanded or improved, and one reviewer argued that the novelty and scope of the paper are below the standard for ICLR. However, I am inclined to side with the two more confident reviewers and argue for acceptance. I don't see any substantive reasons to reject the paper, the methods are novel and appropriate (even in light of the prior work that exists on this question), and the results are surprising and relevant a high-profile ongoing discussion in the literature on representation learning for language.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General Comment (for all 3 reviewers) -- Structural Similarity and Architecture\", \"comment\": \"We have updated our paper with additional experiments that strengthen our contributions. Please refer to the appendix.\", \"structural_similarity\": \"We found that reviewers and others are slightly concerned about the structural similarity, mainly due to its abstract nature. To illustrate the necessity of structural similarity and make it lucid, we have added an analysis of 2 sub-components of structural similarity: \\n\\n(1) Effect of Word-ordering\\n (a) Words are ordered differently between languages. For example, English has a Subject-Verb-Object order, while Hindi has a Subject-Object-Verb order. We analyze this component of structural similarity. \\n (b) We destroyed the word-ordering -- one component of structural similarity -- by shuffling some percentage of the words in sentences during pretraining. We shuffle both the source (Fake-English) and the target language (shuffling any one of them would also be sufficient). This way, the word ordering component of the structure is hidden from B-BERT. We shuffle random 25%, 50% and 100% of the words in the sentence while keeping others in their respective positions. When the sentence is 100% shuffled, each sentence can be treated as a Bag of Words.\\n (c) Note that during fine-tuning we don\\u2019t permute -- as cross-lingual ability arises from the pretraining, and not from fine-tuning. \\n (d) Our conclusion is that word ordering is crucial, but cross-linguality still preserves even when the whole sentence is shuffled. Please refer to A.1.1 WORD-ORDERING SIMILARITY for more details.\\n\\n\\n(2) Effect of word-frequencies (frequency distribution) \\n (a) It is possible that good cross-lingual representations benefit from similar words in languages having a similar frequency. In the perfect similar language (English-Fake), the same words have exactly the same frequency. \\n (b) Here, we study whether only word frequency (unigram frequency) allows for good cross-lingual representations\\n (c) We collect the frequency of words in the target language and generate a new monolingual corpus where a sentence is a set of words sampled from the same unigram frequency distribution as the original target language.\\n (d) Our conclusion is that when BERT is only given the frequency of words of the target language, the cross-lingual ability is very poor, but surprisingly not trivial. Please refer to A.1.2 WORD-FREQUENCY SIMILARITY for more details.\", \"architecture\": \"As reviewer 3 asked for some concrete threshold of number of parameters, we added a few more results on the number of parameters experiments, and we think the trend is clear now (There is a drastic drop in performance when the number of parameters is changed from 11.83M to 7.23M, this is kind of threshold, at least for 12 layer and 12 attention settings).\\n\\nPlease refer to appendix section \\u201cFURTHER DISCUSSIONS ON ARCHITECTURE\\u201d\"}",
"{\"title\": \"Significance of our work\", \"comment\": \"We sincerely thank the reviewer for reviewing our paper.\\n> **Lack of generality, originality or depth**\\n\\n(1) First, we would like to point out that this paper is the first to propose an experimental design that proves that word-pieces overlap do not contribute to the transferability of M-BERT. This was done by inventing the notion of Fake-English, with a distinct word-piece space. Moreover, we believe that the methodology we propose in this paper is general enough to support additional insights, and will be followed up by other authors, and therefore this in itself is a significant contribution. \\n\\n(2) Second, while the design of our architectural experiments may not be sophisticated, we are the first to perform this set of experiments systematically, and identify the aspects of the architecture that are important for transferability, as well as those that are not. We believe that this, too, is an important contribution to understanding M-BERT. Further, we have added a few more results on the number of parameters to understand the threshold, and also showed that we can get comparable performance with only a small number of parameters and attention heads, even in multilingual case (four language BERT). Please refer to appendix section \\u201cFURTHER DISCUSSIONS ON ARCHITECTURE\\u201d\\n\\n(3) Third, our results are the first to show clearly that the transferability of M-BERT depend on some aspect of structural similarity between the languages, and has nothing to do with lexical similarity. While we have not isolated yet which aspects of structural similarity contribute to transferability, and how much, this is already an important contribution, please refer to the appendix section \\u201cFURTHER DISCUSSIONS ON STRUCTURAL SIMILARITY\\u201d, for some of our initial experiments that break this down a bit more.\\n\\n(4) Our final observation of the drastic drop in performance when the premise and hypothesis are in different languages (Table 8) might suggest that BERT is simply learning the word matching instead of learning the actual entailment. This observation definitely needs special attention to understand what BERT learns from entailment supervision.\", \"general_comment\": \"Also, please take a look at the general comments for more on structural similarity and the number of parameters experiment.\"}",
"{\"title\": \"More insights on Architecture and Multilingual settings\", \"comment\": \"We sincerely thank the reviewer for reviewing our paper.\\n\\n>**The architecture experiments did not reach concrete suggestions on a minimum number of parameters or a minimum depth.**\\n\\n-- We agree that we didn\\u2019t show the concrete minimum number of parameters, but now we added few more results for number of parameters, and we think the trend is clear now (There is a drastic drop in performance when the number of parameters is changed from 11.83M to 7.23M, this is kind of threshold, at least for 12 layer and 12 attention settings). Please refer to appendix section \\u201cFURTHER DISCUSSIONS ON ARCHITECTURE\\u201d\\n\\n-- We think the trend with depth is mostly clear\", \"for_english\": \"The performance is almost saturated after a depth of 6\\nFor Russian (for cross-lingual): It\\u2019s almost saturated from 12 (still the performance increases slightly)\\nAlso, the performance of English drops slightly when we go from 18 to 24 layers. So, around this range is quite good for cross-lingual transfer. \\n\\n>** How these conclusions will change if they were repeated with the 100+ language version.**\\n\\n-- We didn\\u2019t do the 100+ language version but we currently added in the appendix the 4 language version (we hope the results follows similarly even for 100+ version)\\n-- Our results further show that we can get comparable performance even with as little as 15% of parameters, and single attention, given that the depth is good enough.\", \"general_comment\": \"Also, please take a look at the general comments for more on structural similarity and the number of parameters experiment.\"}",
"{\"title\": \"Clarification: Pires et al. and significant difference\", \"comment\": \"We sincerely thank the reviewer for valuable comments.\\n\\n> **Wrong interpretation of Pires et al. (2019)**\\n\\n-- We agree that Pires et al. also showed that M-BERT transfers between languages written in different scripts. However, they reason that this cross-lingual ability comes from the small number of word-piece overlap beyond lexical, such as numbers and URLs (Section 6 in [1]). \\n\\nIndeed an excerpt from Pires et al.: \\u201cAs to why M-BERT generalizes across languages, we hypothesize that having word pieces used in all languages (numbers, URLs, etc) which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space\\u201d\\n\\nA key contribution of the paper is that we are the first to suggest a solid experimental design that proves that word-piece overlap is not the reason for the transferability supported by M-BERT.\\n\\n\\n\\n> **What kind of difference in the numbers is considered significant by the authors ?** \\n\\n-- In the case of the number of attention heads, significance means \\\"what is its importance for cross-lingual transfer as a whole (or what is its importance in comparison to other components of architecture)\\\". By saying insignificant, we are not suggesting to use single attention, but arguing that it does not affect cross-lingually much. Similarly, in the case of word-piece overlap, we just say that it is not a major factor for cross-lingual transferability, but in terms of absolute performance, we still lose about 1%.\\n-- Whereas, in Next Sentence Prediction, significance means the difference comes beyond randomness, such that it is advisable to remove the NSP objective (we already know that with NSP it is cross-lingual)/\", \"general_comment\": \"-- Also, please take a look at the general comments for more on structural similarity. \\n\\n[1] Pires, Telmo, Eva Schlinger, and Dan Garrette. \\\"How multilingual is Multilingual BERT?.\\\" arXiv preprint arXiv:1906.01502 (2019).\"}",
"{\"title\": \"AnonReviewer2 Response\", \"comment\": \"Having read the other reviews, I feel even more strongly in the summary of the reasons given for the rating in my original review. I am tempted to lower my score, but have decided to keep it at 3: Weak Reject.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"What is the task?\\nComprehensive study of the contribution of different components in Multilingual BERT to its cross-lingual ability.\\n\\nWhat has been done before?\\n(Wu & Dredze, 2019) and (Pires et al., 2019) identified the cross-lingual success of the model and tried to understand it. However, both works treated M-BERT as a black box and compared M-BERT\\u2019s performance on different languages. This work, on the other hand, examines how B-BERT performs cross-lingually by probing its components, along multiple aspects. Some of the architectural conclusions have been observed earlier, if not investigated, in other contexts.\\n\\nAuthors claim that \\u201cPires et al. (2019) hypothesizes that the cross-lingual ability of M-BERT arises because of the shared word-pieces between source and target languages.\\u201d is not entirely correct. Pires et al. (2019) did show M-BERT\\u2019s ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap. There were results (e.g. Figure 1) showing that while performance using EN-BERT depends directly on word piece overlap, M-BERT\\u2019s performance is largely independent of overlap, indicating that it learns multilingual representations deeper than simple vocabulary memorization.\\n\\nPires et al. (2019) also showed that structural similarity is crucial for cross-lingual transfer\\n\\nWhat are the main contributions of the paper?\\nFirst comprehensive study of the contribution of different components in Multilingual BERT to its cross-lingual ability. Novel findings about the effect of network architecture, input representation and learning objective on cross lingual ability of M-BERT\\nMethodology that facilitates the analysis of similarities between languages and their impact on cross-lingual models by mapping English to a Fake-English language, that is identical in all aspects to English but shares no word-pieces with any target language.\\n\\nWhat are the key dimensions studied/analyzed?\\nDifferent components/aspects of Multilingual BERT investigated:\\n(i) Linguistics properties and similarities of target and source languages (has been studied in prior work)\\n(ii) Network Architecture (novel)\\n(iii) Input and Learning Objective (moderately novel)\\n\\nWhat are the main results? Are they significant?\\nLexical overlap between languages plays a negligible role in the cross-lingual success, while the depth of the network is an important part of it. \\n\\nStrengths \\nNovel findings about the effect of network architecture, input representation and learning objective on cross lingual ability of M-BERT\\nWeaknesses\\nPires et al. (2019) work has been misrepresented. (see above for more details)\\nPires et al. (2019) did study linguistics properties and similarities of target and source languages for Multilingual BERT and had similar findings as this work.\\n\\n\\nQuestions\\nWhat kind of difference in the numbers is considered significant by the authors ? For example, according to them, an increase of more than 2 points in accuracy in Table 3 (e.g. 1 head vs. 6 heads) is considered insignificant. But a decrease of a point in accuracy in Table 5 (e.g. enfake-ru NSP vs. No-NSP) is significant.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper evaluates the cross-lingual effectiveness of Multilingual BERT along three dimensions:\\n- Linguistics\\n- Architecture\\n- Input and learning objective\\n\\nIn each of these three dimensions, the authors run experiments to test why BERT is effective at cross-lingual transfer.\\nFor simplicity, they run their experiments on B-BERT which is trained on two languages. The fine-tuning is done on language, and the zero-shot performance is tested on the other one.\", \"pros\": \"I found the en-fake experiments enlightening. The authors find that wordpiece overlap is not as important for cross-lingual transfer as was suggested by previous papers (Wu & Pires).\\nThe idea of creating a unicode shifted version of English and use it for testing is a first of its kind and quite interesting.\\nMost experiments were well motivated and the authors draw good conclusions about the need for more depth, that only a few attention heads are sufficient.\\nThey end with an experiment that shows that the cross-lingual effectiveness drops significantly when the premise and hypothesis are in different languages. This is a good motivating experiment to end the paper on.\", \"cons\": [\"The architecture experiments were not that insightful and they authors did not reach concrete suggestions on a minimum number of parameters or a minimum depth.\", \"While the two language setting is easier to experiment with, I wonder how these conclusions will change if they were repeated with the 100+ language version.\", \"For structural similarity, it would have been more concrete if the authors were able to visualize and show that the newly created en-fake still aligned with corresponding similar words in the other languages. This would have proven that despite no wordpiece overlap, similar words still align.\"], \"minor_comments\": [\"typo: \\\"training also training also\\\"\", \"bad grammar - last para in page 1\", \"I found the introduction was filled with grammar and bad English. Please fix.\", \"Why are the numbers in Table 4 in a different format?\", \"3.4.2 It's not clear if this is a clear trend. The authors claim that lang-id helps.\", \"Please explicitly state the sentencepiece or wordpiece setup in a central piece. I found the detail hidden in section 3.1.2\", \"With the word vs char vs wordpiece experiments, I think more care should be taken to make sure that the number of parameters remains the same across all three setups. e.g. with only a few chars as vocab, the model has far fewer parameters. This should be compensated for.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"Weak reject\\n\\nREASONS FOR RATING (SUMMARY). Aside from the invention of Fake-English, which as far as I know is original and a clever approach to assessing the importance of token overlap in cross-language transfer, the other contributions are reporting results of mechanical changes. The paper\\u2019s contributions are useful, but do not reach a level of generality, originality, or depth justifying presentation at ICLR.\\n\\nAlthough it did not factor into my rating, I would like to point out that saying \\u2018structural similarity is important\\u2019 and saying \\u2018word-piece overlap is not important\\u2019 is saying exactly the same thing twice, since the gain not attributable to word-piece overlap, by their definition, equals the gain due to \\u2018structural similarity\\u2019, which is a concept otherwise undefined and unmeasurable.\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"\", \"contributions\": \"C1. Cross-linguistic token overlap. Fake-English: English, but with the Unicode codes of English characters all shifted by a large constant so that there is no overlap between Fake-English characters and those of actual languages, but the language-internal structure remains that of English. \\n\\nC2. A bilingually-trained BERT, pretrained on languages L and L\\u2019, is then trained on a downstream task in L then tested on that task in L\\u2019. The task is Cross-Lingual NLI (XNLI) or Cross-Lingual NER. L\\u2019 is Spanish, Hindi, or Russian. L is English or Fake-English. Comparing the success at test when L = English vs. when L = Fake-English, it is shown that eliminating all token overlap between L and L\\u2019 has a small effect (less than 1.5% on XNLI, less than about 3.5% on NER). (Table 1)\\n\\nC3. Several architectural parameters of BERT are varied holding the others roughly constant (same tasks as C2, with L = Fake-English). This shows that depth (Table 2) and level of tokenization matter (Table 7), while little effect results from varying the number of attention heads (Table 3), number of parameters (Table 4), whether the next sentence prediction task is used for training (Table 5), or whether the language of an input is explicitly given (Table 6).\\n\\nC4. Testing cross-language entailment on XNLI by B-BERT shows that there is a large reduction in performance when the hypothesis and premises sentences are from different languages.\"}"
]
} |
Hkeh21BKPH | Towards Finding Longer Proofs | [
"Zsolt Zombori",
"Adrián Csiszárik",
"Henryk Michalewski",
"Cezary Kaliszyk",
"Josef Urban"
] | We present a reinforcement learning (RL) based guidance system for automated theorem proving geared towards Finding Longer Proofs (FLoP). FLoP focuses on generalizing from short proofs to longer ones of similar structure. To achieve that, FLoP uses state-of-the-art RL approaches that were previously not applied in theorem proving. In particular, we show that curriculum learning significantly outperforms previous learning-based proof guidance on a synthetic dataset of increasingly difficult arithmetic problems. | [
"automated theorem proving",
"reinforcement learning",
"curriculum learning",
"internal guidance"
] | Reject | https://openreview.net/pdf?id=Hkeh21BKPH | https://openreview.net/forum?id=Hkeh21BKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SMltj4yOvN",
"H1ec98wcsH",
"Hkx7iVuYiB",
"rkxeRXOKoH",
"SylpAGdFjB",
"HJxLhLwAtS",
"SJeNm88iKr",
"SJeKpyOtur"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737117,
1573709457849,
1573647514770,
1573647304396,
1573647061098,
1571874477893,
1571673628213,
1570500545180
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1967/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1967/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1967/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1967/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1967/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1967/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1967/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a curriculum-based reinforcement learning approach to improve theorem proving towards longer proofs. While the authors are tackling an important problem, and their method appears to work on the environment it was tested in, the reviewers found the experimental section too narrow and not convincing enough. In particular, the authors are encouraged to apply their methods to more complex domains beyond Robinson arithmetic. It would also be helpful to get a more in depth analysis of the role of the curriculum. The discussion period did not lead to improvements in the reviewers\\u2019 scores, hence I recommend that this paper is rejected at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Your response was very helpful. Thanks!\"}",
"{\"title\": \"Answer to Reviewer #1\", \"comment\": \"Dear Reviewer,\\nThank you for your comments.\\n\\nYou are right that from a strictly RL perspective, our system brings no methodological novelty. However, this is a new method in automatic theorem proving and we argue that it is a good approach to address the sparse reward problem of theorem proving, a problem that hasn't been tackled before.\\n\\nWith respect to your concern about the simplicity of the Robinson Arithmetic dataset, please note that the dataset is already hard for state of the art theorem provers, as shown in Figure 4. In order to solve these simple problems, you have to manually calibrate a human heuristic, otherwise known proof methods fail as the proofs to be found get longer. Furthermore, this dataset allowed us to identify some important failure modes that are not particular to Robinson Arithmetic or to FLoP. We claim that our datasets are of the very few that not only challenge theorem provers, but also allow for understanding their current limits. Please see more about the dataset in the answer given to Reviewer #3.\"}",
"{\"title\": \"Answer to Reviewer #2\", \"comment\": \"Dear Reviewer,\\nThank you for your comments.\\n\\nIt is a challenging task to provide a proper introduction to the connection calculus in such a short paper. However, we did our best to provide illustration on the project webpage http://bit.ly/site_atpcurr . Here you can find some screencasts and logs that cover few selected problems and include all details of the reasoning. \\n\\nThe primary role of our curriculum is to provide more training signal to the learner, as well as to allow a priori knowledge to the system through proofs (please see the answer given to Rewiever #3). Figure 5 in Appendix B illustrates that curriculum learning indeed yields more reward during training. The upside is that this makes training faster and more stable. However, the downside of better is training is the risk of overfitting. We can see our curriculum as a tradeoff between faster training and higher chance of overfitting. As described in Appendix A, Failure Modes, Stage 3 is very sensitive to overfitting as different proofs of the same problem can yield very different generalization, this is why curriculum can be detrimental.\", \"rl_methods_are_oftenly_biased_towards_shorter_solutions\": \"even if no reward discounting is applied, exploration is more likely to find shorter solutions than longer ones. This is true with and without curriculum. What curriculum learning does is to speed up learning once a proof was found. So it is beneficial if used on some \\\"good\\\" proofs and detrimental if used on \\\"bad\\\" proofs. Table 3 shows an example of each scenario (Stage 2 and 3). Assessing the quality of proofs (with respect to generalization) is, we believe, an important research question.\\n\\nWhile our project targets to learn to solve hard problems that require long proofs, given a particular problem, we have no incentive to prefer longer proofs of that problem. Our only concern during training is to learn how to generalize to other problems.\\n\\nThe advancement of curriculum is meant to ensure that the system is continuously faced with a \\\"reasonably\\\" hard problem. Once it becomes easy to finish the proof from a particular state, we make things harder by moving the starting state backwards. It is only after we have gone through the full curriculum that the system has been exposed to all the states of the proof.\\n\\nThe reason why we believe our setting is useful for learning long proofs is that we are capable of performing very long rollouts during training time, without facing an exponentially diminishing chance of getting some reward. The system thus trains on long sequences of proof steps.\"}",
"{\"title\": \"Answer to Reviewer #3\", \"comment\": \"Dear reviewer,\\nThank you for your comments.\\n\\nYou raised concerns about the usefulness of our arithmetic datasets. Please keep in mind, that despite the simplicity, these problems are already challenging for state of the art automated theorem provers, as shown in Table 4. Something that works here might not necessarily work directly on more complex datasets, but we argue that the problems that we observe when trying to solve these simple arithmetic equations are equally present elsewere, even if not so easily observable. Nevertheless, we agree that the way forward is to\\ntarget more and more complex datasets. \\n\\nWe were able to identify the dominant failure modes, that are mostly related to overfitting to the training problems and more specifically to some particular proofs of the training problems. We believe that we are the first to raise the issue that valid proofs of the same problem might differ greatly in terms of generalizability. We haven't yet found a satisfactory solution to this problem, but we still think this is an important message to the community. These failure modes are not particular to Robinson Arithmetic or FLoP.\\n\\nIn order to appreciate the experimental results, it is useful to consider the lengths of proofs found by the various provers. Table 5 and Figure 4 together show that FLoP tends to find shorter proofs of the same problem than rlCoP,\\nstill the former finds much more longer proofs, suggesting that it solves more of the harder problems. We can also look at lengths of proofs found by the other (unguided) provers. Even if proofs of the same problem cannot be\\ncompared directly, due to the different underlying calculus, it is still an important question how well one prover can solve problems that require long inference chains. If we omit proofs in Stage 3 that are less than 100 steps long, then the number of proofs found by the provers is: Vampire (107), E (0 - including variants with auto and autoschedule mode, as well as the one where equation is replaced with the eq symbol), rlCoP (340), FLoP (481). This\\nsuggests that the advantage that FLoP obtained over its peers is in the range of harder problems, requiring several hundred proof steps. Also note that leanCoP is a rather simple and weak prover (as shown in Table 4), still guidance has allowed it to obtain competitive results.\\n\\nCurriculum learning has not yet previously been applied to theorem proving and we don't claim that our work is the only/best way to do it. We argue, however, that this will be an important component of provers that approach human level competence. Our main design choice here was to do curriculum on the length of a proof (whether given from outside or found by the prover). Another natural option for curriculum is to order problems according to their difficulty and first learn from the easier ones. While this is easy to implement in simple theories like Robinson Arithmetic, assessing the difficulty of a problem is a hard task in general.\", \"our_approach_to_curriculum_has_two_important_benefits\": [\"We alleviate the sparse reward problem of theorem proving. More reward signal results in faster and more stable training.\", \"We provide a comfortable mechanism to incorporate prior knowledge into the system in the form of proofs. As Table 7 shows, our best results were obtained when the system was given a couple longer proofs, so it didn't have to bootstrap from scratch. 
There can be theories where this is the only option to start the system because even the easiest problems are too hard to find through pure exploration. In FLoP, it is now a design choice of the user what kind of proofs to start with.\", \"This curriculum is independent from PPO, we could equally use any other popular RL method. We just make it easier for an episode to terminate in a reward yielding state.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": [\"In the paper, the authors present a new algorithm for training neural networks used in an automated theorem prover using theorems with or without proofs as training data. The algorithm casts this training task as a reinforcement learning problem, and employs curriculum learning and the Proximal Policy Optimization algorithm to find appropriate neural network parameters, in particular, those that make the prover good at finding long proofs. The authors also propose a new dataset for theorems and proofs for a simple equational theory of arithmetic, which is again suitable for improving (via learning) and testing the ability of the prover for finding long proofs. The proposed prover is tested against existing theorem provers, and for the authors' dataset, it outperforms those provers.\", \"I found it difficult to make up my mind on this paper. On the one hand, the paper tackles an interesting problem of improving an automated theorem prover via learning, in particular, its ability for finding long nontrivial proofs. Also, I liked a qualitative analysis of the failure of the curriculum learning for tackling hard tasks in the paper. On the other hand, I couldn't quite make me excited with the dataset used to test the prover in the paper. The dataset seems to consist of easy variable-free equational formulas about arithmetic that can be proved by evaluation. Of course, I may be completely wrong about the value of the dataset. Also, if the dataset includes variables and other propositional logic formulas, such as disjunction, negation and conjunction, so that the prover can be applied to any formulas from Peano arithmetic via Skolemization, I would be much more supportive for the paper. Another thing that demotivated me is that I couldn't find the discussion about the subtleties in using curriculum learning and PPO for the theorem-proving task in the paper. What are the possible design choices? Why does the authors' choice work better than others?\", \"I added a few minor comments below.\", \"abstract, p1: \\\"significantly outperforms previous learning-based\\\". When I read the experimental result section, I couldn't quite get this sense of huge improvement of the proposed approach over the existing provers. Specifically, from Table 4, I can see FLoP performs better than rlCoP, but I wasn't sure that the improvement was that significant (especially because rlCoP might not have given a chance to be tuned to the type of questions used to train FLoP -- I may be wrong here). I suggest you to add some further explanation so that a reader can share your sentiment and excitement on the improvement brought by your technique.\", \"p2: The related work section is great. I learned a lot by reading it. Thanks.\", \"p4: I think that you used the latex citation command incorrectly in \\\"learning Resnick ... Chen (2018)\\\"\", \"and \\\"features Kaliszyk ... Kaliszyk et al. (2015a; 2018)\\\".\", \"p6: discount factor) parameters related ===> discount factor), parameters related\", \"p8: a a well ==> a well\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper uses reinforcement learning for automated theorem proving. The proposed method aims to generalize the short proofs to longer proofs with similar structure. Experiments were run to compare the performance of curriculum learning with the ones without curriculum.\\n\\nOverall the paper attempts to explain clearly the original contribution of the proposed approach, which is using curriculum learning in RL based proof guidance. However, I am not convinced about the how compelling the results are in support of the claim. The main arguments to bolster my decision are as follows.\\n\\nI am familiar with RL and curriculum learning but not so much with connection tableau calculus. The description given in the paper seems to be insufficient and confusing for readers with limited knowledge in this area. A step-by-step explanation with a toy example might have done the job nicely. Without such a clear understanding of the calculus, it gets hard to appreciate the merits of the results.\\n\\nSome claims of the paper are not clearly validated by the reported experimental results. For example:\\n- in Table 8, curriculum learning is worse in some cases and better in others. What to conclude from such a report?\\n- in experiment 3, curriculum learning tends to find shorter proofs. Isn't that contrary to the focus of the paper? \\n- in Table 3, curriculum learning performs lot worse than the other method. What is to be inferred from such a report?\\n\\nIt would be nice if there was a clear explanation of the role of curriculum in the learning algorithm. For example, in Algorithm 1, how is Line 8 helping in overall objective of learning longer proofs? If one advances curriculum, one takes lesser number of proof steps according to stored proofs. How does that help in the learning? Does it imply 'less memorizing' with advancement of curriculum?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"OVERALL:\\n\\nI don't work on ATP and am not particularly well suited to review this paper, but I am\\nslightly inclined to accept for the following reasons.\\n\\n1. It's an important problem and not much work is done on it.\\n2. Creating new environments is hard, valuable work that is (IMO) insufficiently incentivized in our community,\\nand I think accepting papers that do this work is a good policy.\\n3. The experiments, to my super-inexperienced eye, seem well designed and like they address\\nobvious questions readers would have.\\n\\nHowever, I have no idea if e.g. the baselines used in this paper are reasonable, so\\nI would appreciate someone with more experience on that topic weighing in.\", \"some_limitations_of_the_paper\": \"1. I'm not actually convinced there's much that's methodologically new here. \\nIt seems like mostly an application of existing RL techniques to an ATP environment.\\n\\n2. The writing is not particularly clear, and could use substantial editing (but this\\nis something that could be fixed during the discussion period).\\n\\n3. The focus on Robinson arithmetic seems kind of limiting.\\nThough I am sympathetic to the reasoning given in the paper, it's unclear to me (again, as a non-expert),\\nthat the techniques that work in this context will actually work in the context considered by e.g. [1]\", \"detailed_comments\": \"> In the training set, all numbers are 0 and 1 and this approach works more often.\\nYou have this sentence twice in different places.\\n\\n> it is insightful to compare...\\nNot a very idiomatic use of insightful?\\n\\n\\n[1] Deep network guided proof search.\"}"
]
} |
Bylh2krYPr | Probing Emergent Semantics in Predictive Agents via Question Answering | [
"Abhishek Das",
"Federico Carnevale",
"Hamza Merzic",
"Laura Rimell",
"Rosalia Schneider",
"Alden Hung",
"Josh Abramson",
"Arun Ahuja",
"Stephen Clark",
"Greg Wayne",
"Felix Hill"
] | Recent work has demonstrated how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments. We propose question-answering as a general paradigm to decode and understand the representations that such agents develop, applying our method to two recent approaches to predictive modeling – action-conditional CPC (Guo et al., 2018) and SimCore (Gregor et al., 2019). After training agents with these predictive objectives in a visually-rich, 3D environment with an assortment of objects, colors, shapes, and spatial configurations, we probe their internal state representations with a host of synthetic (English) questions, without backpropagating gradients from the question-answering decoder into the agent. The performance of different agents when probed in this way reveals that they learn to encode detailed, and seemingly compositional, information about objects, properties and spatial relations from their physical environment. Our approach is intuitive, i.e. humans can easily interpret the responses of the model as opposed to inspecting continuous vectors, and model-agnostic, i.e. applicable to any modeling approach. By revealing the implicit knowledge of objects, quantities, properties and relations acquired by agents as they learn, question-conditional agent probing can stimulate the design and development of stronger predictive learning objectives. | [
"question-answering",
"predictive models"
] | Reject | https://openreview.net/pdf?id=Bylh2krYPr | https://openreview.net/forum?id=Bylh2krYPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"OvKG2N3mS",
"B1eOCAMhsH",
"B1gI0H9oiH",
"rJeYs91jor",
"HJg4tUz5jB",
"H1ltX8ltiS",
"BygryLlKjB",
"HJemQxbMoH",
"HylfJebMsB",
"BylkT1WMoS",
"r1ep1oO0YS",
"rklyq2NAFS",
"rJx8VFf0Kr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737076,
1573822159734,
1573787085546,
1573743265144,
1573688955732,
1573615137379,
1573158938926,
1573158874308,
1573158839405,
1571879652782,
1571863687479,
1571854637611
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1966/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1966/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1966/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1966/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1966/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1966/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1966/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1966/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1966/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1966/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1966/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes question-answering as a general paradigm to decode and understand the representations that agents develop, with application to two recent approaches to predictive modeling. During rebuttal, some critical issues still exist, e.g., as Reviewer#3 pointed out, the submission in its current form lacks experimental analysis of the proposed conditional probes, especially the trade-offs on the reliability of the representation analysis when performed with a conditional probe as well as a clear motivation for the need of a language interface. The authors are encouraged to incorporate the refined motivation and add more comprehensive experimental evaluation for a possible resubmission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"We can't see how the proposed remedies would resolve your principal concern\", \"comment\": \"Thank you for your response. We really appreciate your continued engagement, which has been instrumental in the improvements to the paper that we have made so\\u00a0far.\\n\\nSo that we can understand exactly how to address your worries, we just want to dig a little deeper into your principal concern, namely, that our methods cannot definitively determine if information exists in representations or whether it's just being obscured because of the weakness of the QA decoder ('[our method] does not provide sufficient evidence that the QA score is indicative of the information captured in the representation'). Is it not the case that this flaw is inherent to any probing approach, including all of the prior work that you refer to in your comments? Any failure of a probing analysis to identify information may be due to insufficient power or expressivity of the decoder. We acknowledge this in the paper, which is why we explicitly measure\\u00a0accuracy as a function of decoder capacity. Your position seems to be that this flaw is terminal for a QA conditional decoder, but acceptable for a non-conditional task-specific probe.\\u00a0\\n\\nYou suggest several ways that our paper might be improved (thank you), but it's really unclear to us how taking these steps would resolve your concern. First, you recommend comparing our decoder to an unconditional probe. As explained above, neither unconditional nor conditional probes are guaranteed to uncover *all* the information in a representation. Employing unconditional probes in a way that spanned the same range of propositional knowledge as we consider in the paper would\\u00a0require us to train, by our estimates, 7660 independent probing networks (what is the colour of the table? What is the colour of the chair? Etc etc). This would involve a vast effort, but having done so, we would still not know definitively what information is in the agent's representations. At some point, the practicality of executing the method must be a relevant factor in assessing its merit. Given that probing methods cannot (whether conditional or unconditional) reveal the extent of knowledge in a representation, we find the more pragmatic question of how likely are they to be used to uncover or reveal the knowledge of an agent to be more compelling.\\u00a0\\n\\nWe believe there is a very real chance that, without much additional effort, researchers might choose to bolt on a conditional (QA-style) decoder in order to monitor the aggregation of knowledge in their agents. Doing so with thousands of independent unconditional decoders seems highly impractical. We understand that you are strongly\\u00a0opposed to this position, but to help us move forward with this work, what would you suggest is a more likely, or less flawed, way that users will keep track of the propositional knowledge acquired by agents?\\n\\nYou have also requested that we run our method on more than two environments. Again, before embarking on this we face the same question. How would this resolve the fundamental criticisms of the method that are leading you to recommend rejection of the paper here? It would certainly be indicative of how the method is easy and quick to apply (although we think that our quick and dirty experiment with DM-lab and no tuning serves this purpose), but ultimately, exactly the same criticisms could easily be leveled at the work. 
Why would three environments suddenly be sufficient to overcome these issues?\\u00a0\\n\\nUnfortunately, we have the same uncertainty around your request for us to \\\"draw comparisons to prior methods for analyzing representations\\\" (without a 'right' answer to the question of what information *should* exist in an agent, how exactly might we compare two methods?), and for \\\"an analysis of the tradeoffs of input-conditioned probing interfaces\\\" (the 'tradeoffs' seem to involve the pragmatic and philosophical issues that we have discussed in this review process - how might we add that satisfactorily to the paper?).\\n\\nThanks again for your hard work and thoughtful assessment of our paper. We are genuinely grateful for it.\"}",
"{\"title\": \"Further analysis required to convince that this is a useful advance on prior work\", \"comment\": \"Thanks again for the quick response!\\nI do not think that it is necessary to include the focus on propositional knowledge in the intro, I think it is clear what kind of information both probing networks as well as the QA approach are trying to analyze a representation for.\\n\\nIn their response the authors claim that \\\"the method [they] propose in this work provides far greater scope for probing the propositional knowledge of agents than existing approaches\\\". However, I repeat the statement from my last response: the current submission lacks experimental evidence for such claims. While QA might provide a simple interface for training one model to elicit information about a multitude of attributes, it is unclear whether that is actually desirable and what tradeoffs go along with that. In my last response I highlighted the problem that the model can tradeoff performance between tasks in such a multi-task setup, so that we don't know whether the representation actually does not capture the information or whether it is too hard of a question for the QA system to answer in the first place / whether the QA model decides to focus on other questions. In Tab.2 of the submission the authors report \\\"oracle\\\" QA performance of as low as 41% on some question types (with representations optimized for that task), indicating that the QA task itself is hard and that the concern about the noise introduced by the interface is warranted. Experiments that compare to non-conditional baselines across multiple environments are needed to support the author's claims.\\n\\nIn conclusion, the submission proposes the interesting idea to investigate representations via question answering and shows that it is possible to train a QA system on an unsupervised representation. It does not draw comparisons to prior methods for analyzing representations, it does not analyze the tradeoffs of input-conditioned probing interfaces and it does not provide sufficient evidence that the QA score is indicative of the information captured in the representation. I therefore do not recommend acceptance of the submission in its current form, but we detailed both sides' arguments in this thread and it is now upon the AC to make a decision.\\n\\n\\nP.S.: Regarding the scoring scale: please note that this year's ICLR scoring scale only contains four score options [1, 3, 6, 8]. In last year's scoring system I would see my score rather like a 3, indicating that I don't think that the submission in its current form is ready for publication, but that there is an idea that can lead to a successful publication. In that sense I encourage the authors to resubmit a revised version of the manuscript to one of the upcoming conferences!\"}",
"{\"title\": \"Uncovering an agent's propositional knowledge is a useful advance on prior work\", \"comment\": \"Thanks again for detailed response. We really appreciate it. It's clear you feel strongly that this work does not make a meaningful contribution to the literature. We will try one more attempt to convince you that it is in fact a meaningful advance on prior work and therefore a valuable contribution. Assessing the scale of a contribution is necessarily somewhat subjective, but we feel that a score of 1/10 is entirely incongruous with the amount of progress already represented in this work\\n\\nWhy this work is meaningfully different from prior work\\n\\nCenturies of research in epistemology are testament to the fact that, as humans, we perceive some distinction between \\\"knowing how\\\" (procedural knowledge) and \\\"knowing what\\\" (propositional knowledge). See e.g. IEP [ https://www.iep.utm.edu/epistemo/ ] for a nice definition of propositional knowledge. It is uncontroversial that deep RL agents can develop a degree of procedural knowledge as they learn to play games, solve tasks etc. However, the extent to which they can develop propositional knowledge is much less clear. Propositional knowledge is important here, firstly because it may ultimately support procedural knowledge in carrying out truly complex tasks, but also because there seems (to philosophers at least) to be something essentially human about having propositional knowledge. If one of the goals of the AI effort is to build learning machines that can engage with, and exhibit convincing intelligence to, human users (.e.g such that humans understand and trust them), then some need for measuring and demonstrating propositional knowledge will arise. \\n\\nIt seems clear, to us at least, that the method we propose in this work provides far greater scope for probing the propositional knowledge of agents than existing approaches. There may be ways to ascertain narrow propositional knowledge from unconditional probes described in prior work, but this would very quickly become contrived, and it would be impossible to measure the breadth or diversity of propositional knowledge encoded by an agent, particularly in a way that would immediately convince human users/observers. \\n\\nThis focus on propositional knowledge is implicit in the current version of the paper, but we shied away from making it explicit for fear of descending unnecessary into the philosophical weeds. However, if you recommend, we would be happy to re-introduce this justification into the paper. \\n\\nWe agree that your suggestion of direct calibration between QA probes to individual property-specific probes (conditional or unconditional) would make an interesting analysis, and also that expansion of the method beyond the two environments that we consider here is an important goal. However, we would contend that not all of these challenges need to be met immediately, and that the introduction and proof of concept of the probing method in this paper is a sufficient contribution on its own and paves the way for additional work in this direction.\"}",
"{\"title\": \"Motivation improved, but not enough evidence to support claims\", \"comment\": \"Thanks for the detailed answer to my review!\\n\\nGiven their reply, the authors seem to agree that the current way of probing the knowledge captured in a network's representations is via training probing networks to infer the information of interest. From what I understand the submission is proposing Question Answering as an alternative tool for investigation. \\nIn their reply the authors provide a somewhat different motivation from the one that was given in the introduction of the submission. Instead of talking about a \\\"more intuitive investigation tool for humans\\\" they now underline the fact that their network is input-conditioned while usual probing networks are not. I think that the paper will benefit from incorporating this clear contrast to probing networks into the introduction; in the current form the introduction does not mention probing networks at all, even though they are the current natural choice for investigating representations.\", \"i_am_however_still_not_convinced_by_the_motivation_in_the_light_of_the_experimental_evidence_that_is_provided\": \"one good reason to have one probing network per task and not a global one that is input conditioned is, that in such a \\\"multi-task probing network\\\" it would not be clear whether some piece of information is actually not captured in the representation under investigation or whether the probing network traded off performance on this task vs performance on some other task. For question answering systems the problem would be the same: did the network trade off performance for answering questions about the shape for performance on questions about the color or does the representation actually capture shape worse than color?\\n\\nFurther, even if we assume that training an input conditioned probing network is desirable, it is not clear to me why we would need to train a language interface model vs one that operates for example on symbolic inputs (e.g. one 1-hot vector representing the object type, another 1-hot vector representing the attribute to be inferred). Such a network could still show the generalization capabilities the authors mention but would not require language input.\\n\\nIn the current form the submission merely shows that it is possible to use QA to infer properties of a representation but does not provide any experimental comparison to prior methods for investigating representations. If, however, the motivation of the work is to propose an alternative investigation tool such comparisons are crucial. The authors should validate that the multi-task structure they introduce does not induce any additional noise in the investigation of the representation's properties (e.g. by training individual probes for each property and showing that the multi-task probe agrees with the single-task ones on which representation is good/bad). Finally, the authors need to provide evidence for why a language input is better suited than a symbolic version.\\n\\nTo claim that QA is a general tool for testing representations the authors should additionally provide evidence that above-mentioned experimental findings hold across sufficiently different environments. The DMLab experiments the authors provided as part of the rebuttal does not provide such insights as it basically replicates one experiment from the paper with different objects in a new simulator. 
Instead the authors should provide results on standard environments that are sufficiently different, like AI2-THOR (that I mentioned in my original review) or DMLab for that matter vs Atari (like in [5]). Here the tasks would actually vary (questions about object shape/color in AI2-THOR/DMLab vs questions about score/lives/inventory in Atari).\\n\\nFinally, I don't naturally buy the idea of crowdsourcing Q/A pairs for investigating representations that the authors mention in their rebuttal. It is not clear whether such a process would provide sufficient Q/A pairs of a particular type to successfully train a QA model on it or how much the resulting performance deficiencies of the QA model would weaken the statements that can be made about the learned representation. Therefore, if the authors want to make this claim, they need to provide experimental evidence for it.\\n\\n###############\\nIn summary, I appreciate the refined motivation that the authors provide, but I don't see sufficient experimental evidence to support the claims of introducing a better analysis tool, as the manuscript does not compare to any alternative method for investigating learned representations. As such I don't see grounds to increase my score.\\n\\n\\n[5] Unsupervised State Representation Learning in Atari, Anand et al., NeurIPS 2019\"}",
"{\"title\": \"Additional results in the DeepMind Lab environment\", \"comment\": \"We set up the \\u201ccolor\\u201d task in the DeepMind Lab [1] environment. The environment consists of a rectangular room that is populated with a random selection of objects of different shapes and colors in each episode. There are 6 distinct objects in each room, selected from a pool of 20 objects and 9 different colors. We use a similar exploration reward structure as in the experiments in the main paper to train the agents to navigate and observe all objects. In each episode, we introduce a question of the form `What is the color of the <shape>?' where <shape> is replaced by the name of an object present in the room.\\n\\nConsistent with trends in the main paper, internal representations of the SimCore agent lead to the highest question-answering accuracy, while CPC|A and the vanilla LSTM agent perform worse and similar to each other. Crucially, for running these experiments, we did not change any hyperparameters from the experimental setup in the main paper. This demonstrates that our results are not specific to a single environment and that our approach can be readily applied in a variety of settings. Please see Section A.5 for a plot of question-answering accuracy during training and videos of SimCore agent trajectories here (anonymized): https://drive.google.com/drive/folders/1itmNlZgDhy6YAwlQxT6LMgr3RDU4ULTh?usp=sharing\\n\\n[1]: DeepMind Lab, Beattie et al., 2016\"}",
"{\"title\": \"Additional results in the DeepMind Lab environment\", \"comment\": \"We set up the \\u201ccolor\\u201d task in the DeepMind Lab [1] environment. The environment consists of a rectangular room that is populated with a random selection of objects of different shapes and colors in each episode. There are 6 distinct objects in each room, selected from a pool of 20 objects and 9 different colors. We use a similar exploration reward structure as in the experiments in the main paper to train the agents to navigate and observe all objects. In each episode, we introduce a question of the form `What is the color of the <shape>?' where <shape> is replaced by the name of an object present in the room.\\n\\nConsistent with trends in the main paper, internal representations of the SimCore agent lead to the highest question-answering accuracy, while CPC|A and the vanilla LSTM agent perform worse and similar to each other. Crucially, for running these experiments, we did not change any hyperparameters from the experimental setup in the main paper. This demonstrates that our results are not specific to a single environment and that our approach can be readily applied in a variety of settings. Please see Section A.5 for a plot of question-answering accuracy during training and videos of SimCore agent trajectories here (anonymized): https://drive.google.com/drive/folders/1itmNlZgDhy6YAwlQxT6LMgr3RDU4ULTh?usp=sharing\\n\\n[1]: DeepMind Lab, Beattie et al., 2016\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for their time and feedback. We are encouraged to hear that you found the paper interesting and well done, and our idea of using question-answering to probe an agent\\u2019s internal representations generally applicable to future agent analyses!\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for their feedback. We are happy to hear that you found our paper well-written, the experiments thorough, and the experimental settings clearly explained to aid reproducibility. We respond to specific comments below.\\n\\n1. Experiments on multiple environments\\n\\nWe agree that experiments across multiple environments would provide stronger empirical evidence. We are setting up a parallel task and experiments in DM-Lab [1] \\u2014 wherein we train agents with the same exploration reward and evaluate the representations learnt (by an LSTM agent, a CPC|A agent, and a SimCore agent) using a QA decoder on the \\u201ccolor\\u201d task. The vocabulary of objects, colors, and visual inputs differ from the environment we reported results on in our submission.\\n\\nWe will follow up with an update as soon as possible once we have these results.\\n\\n2. Study is quite narrow because only three models are compared\\n\\nTo our knowledge, CPC|A [2] and SimCore [3] (published in NeurIPS 2019) are the current state-of-the-art in auxiliary predictive objectives; so along with a vanilla LSTM agent, they seemed to be a solid suite of approaches to compare. Having said that, we are happy and curious to analyze other competitive approaches / baselines we may have missed. Please let us know!\\n\\n3. Straightforward methodological contribution / brief take home\\n\\nOur primary contribution is a task-agnostic linguistic decoder to analyze internal representations developed by predictive agents. Prior work has focused on non-linguistic probing networks trained independently for every property \\u2014 e.g. MLPs for position and orientation, ConvNets for top-down map as in [2,3]. As noted by R2, language provides an intuitive interface and allows for arbitrary levels of complexity. While the decoder itself is operationalized using common architectural primitives (e.g. language-conditioned LSTM), our higher-level idea is novel and we see the architectural simplicity as a positive, low barrier to entry. \\n\\n[1]: https://github.com/deepmind/lab\\n[2]: Neural Predictive Belief Representations, Guo et al., 2018\\n[3]: Shaping Belief States with Generative Environment Models for Reinforcement Learning, Gregor et al., NeurIPS 2019\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for the detailed review! We are encouraged that you found the discussion of related prior work thorough, the analysis of the agent\\u2019s representations as it changes over the course of a trajectory interesting (more qualitative videos here (anonymized): https://drive.google.com/drive/folders/1z1oQc-f8IsbsptMqCzYdGeI8qS33zbP2 ), and the writing easy to follow. We understand your concerns and respond to specific comments below.\\n\\n1. Motivation for using a language interface to probe emergent semantics\", \"our_motivation_for_this_work_is_twofold\": \"a. How can we understand the emergent knowledge in neural net-based agents as they learn and explore their world?\\nb. Is (or when is) an agent\\u2019s internal representation sufficient to support propositional knowledge about the environment, compositionality and (eventually) language understanding and use?\\n\\nWith respect to (a), we agree that our work builds on [1,2] (as also noted in Related Work). However, when considered as a general-purpose method for agent analysis, our technique is substantially different from prior work in that our decoder is conditioned by an external input (the question). We use a single, general-purpose network for all question types (e.g. shape, color, etc.), and the question we condition on is provided externally (and has multiple instantiations per question type being processed by the same network, e.g. \\u201cWhat shape is the _blue_ object?\\u201d, \\u201cWhat shape is the _red_ object?\\u201d for the \\u201cshape\\u201d question type). This is in contrast to prior work, where 1) there is typically no external input to the probe, and 2) probes have property-specific inductive biases \\u2014 e.g. MLPs for position and orientation, ConvNets for top-down map as in [1,2]. \\n\\nAn additional advantage of having a probe conditioned on external input is that it enables testing for generalization of an agent\\u2019s internal representation to perturbations of questions it is trained with. We do this in Sec 5.2, where we hold out some combinations of external input-output pairs (QA pairs) from the training set, e.g. \\\"Q: what shape is the blue object?, A: table\\u201d is excluded from the training set of the QA decoder, but \\u201cQ: what shape is the blue object?, A: car\\u201d and \\u201cQ: What shape is the green object?, A: table\\u201d are part of the training set (but not the test set).\\n\\nWith respect to (b), we would ultimately like to build an agent that we can interact with in open-ended natural language. While still far from that goal, our work is a step in that direction in that it provides a general-purpose language interface to check if an agent represents facts about the world (e.g. \\u2018the sofa is red\\u2019, or \\u2018there are four pencils on the floor\\u2019). In fact, property-specific probing networks are inherently constrained to the limited set of properties we can enumerate upfront. In future work, we would like to crowdsource open-ended natural language question-answer pairs from humans. In that setting, there is no scalable way to exhaustively enumerate all properties to decode from the agent. And so building independent probes might not even be scalable, while the architecture we propose in this work can be trained as is on that data.\\n\\nPlease let us know if this addresses your concerns. We will include this discussion in the paper.\\n\\n2. 
Results on a different environment / task combination\\n\\nWe agree that results across diverse environments would provide stronger empirical evidence. Unfortunately, no other environment readily provides a set of questions and semantic annotations out-of-the-box; and so we had to set up our own. We are in the process of setting up a parallel task and experiments in DM-Lab [3] \\u2014 wherein we train agents with the same exploration reward and evaluate the representations learnt (by an LSTM agent, a CPC|A agent, and a SimCore agent) using a QA decoder on the \\u201ccolor\\u201d task. The vocabulary of objects, colors, and visual inputs differ from the environment we reported results on in our submission.\\n\\nWe will follow up with an update as soon as possible once we have these results.\\n\\n3. CPC with a different negative sampling approach\\n\\nWe experimented with multiple sampling strategies for CPC (whether or not negatives are sampled from the same trajectory, the number of prediction steps, the number of negative examples) and reported the best in the main paper. We have added a more complete discussion in Sec A.1.5.\\n\\n[1]: Neural Predictive Belief Representations, Guo et al., 2018\\n[2]: Shaping Belief States with Generative Environment Models for Reinforcement Learning, Gregor et al., NeurIPS 2019\\n[3]: https://github.com/deepmind/lab\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"########## Post-Rebuttal Summary ###########\\nThe authors engaged actively in the rebuttal discussion and in the process we were able to concretize the motivation of the submission (as a result in increased my score). However, I think that the submission in its current form lacks experimental analysis of the proposed conditional probes, especially the trade-offs on the reliability of the representation analysis when performed with a conditional probe as well as a clear motivation for the need of a language interface. I can therefore not recommend the submission in its current form for acceptance.\\n(for detailed discussions + suggested experiments please see rebuttal discussions)\\n#######################################\\n\\n\\n## Paper Summary\\nThe paper tackles the problem of analyzing the information captured in the learned representation of a deep neural network. It proposes to replace the commonly used \\\"probing networks\\\" that try to directly infer the information from the learned representation, with a language interface in the form of a QA model which is trained post-hoc without propagating gradients into the learned representation. The authors argue that such a language interface provides a more natural interface for probing the information in a learned representation. The paper shows representation analysis results for the internal representations of agents trained on an exploratory task in a simple, simulated household environment similar to DM Lab. The authors conduct additional analysis on representations learned with auxiliary generative and contrastive objectives.\\n\\n## Strengths\\n- the language of the paper is clear and easy to follow\\n- the paper covers the related work well\\n- the provided explanations help understand the content of the paper\\n- the analysis of how the information captured in the agent's representation changes over the course of a trajectory is interesting (more such visualizations in the appendix would be nice!)\\n\\n## Weaknesses\\n(1) the motivation for the proposed problem does not convince: why do I need to train a Q/A system to infer which components of the true state are captured by the learned representation? For each of the properties of the environment I could train a probing network (as is done e.g. in [1,2]) and would get much more precise answers; ground truth labels need to be available whether I train QA or probing networks. The authors provide two motivations for the proposed approach which both do not convince me:\\n\\t(a) QA \\\"provides an intuitive investigation tool for humans\\\": I cannot imagine a workflow in which researchers would prefer to ask questions to their model over a plot showing explicit regression accuracies aggregated across many data samples (which the conventional probing networks provide).\\n\\t(b) \\\"the space of questions is open-ended, [...] enabling comprehensive analysis of [an agent's] internal state\\\": for each new question type we need to provide a sufficiently large number of question-answer-pairs to train the QA system for this question type. we could therefore also train a probing network using the same labels and would get a better overview of whether the state information is captured in the representation. 
I fail to see how substituting probing networks with a language interface helps the investigation of the representation's properties.\\n\\t\\n(2) the paper only provides minor novelty: the main proposal is to replace the probing networks, which were extensively used in prior work, with a natural language interface; a proposal that does not seem to provide a clear advantage (see above). The paper does not provide any further technical novelty.\\n\\t\\n(3) it is possible that the analysis of the contrastive model could improve substantially with a different sampling scheme of the negative examples: maybe sampling from different sequences makes the task of discriminating too easy so that the model is not encouraged to learn a rich representation. It would be interesting to see whether the representation captures more information if a more standard contrastive objective is chosen that discriminates between future frames with different offset in the same sequence (see for example the objective in [3]).\\n\\n(4) it seems that both environment and chosen task will have high influence on the representation the agent can learn from the collected data. Therefore the fact that the authors evaluate their approach only on a single environment / task combination, both of which they introduce themselves, weakens any conclusions the authors draw from their experiments. It would help strengthen the message of the paper to apply their methodology to previously proposed environment / task combinations, for example in the AI2-THOR environment [4]\\n\\n\\n[Novelty]: minor\\n[technical novelty]: minor\\n[Experimental Design]: not comprehensive\\n[potential impact]: low\\n\\n\\n#########################\\n[overall recommendation]: Reject - In its current form the paper does not provide a convincing argument for why learning a language interface for probing a representation is better than learning the usual probing networks. Further there are some doubts on the setup of the contrastive objective and the paper lacks comprehensive evaluation on standard environments. Therefore I cannot recommend acceptance in its current form.\\n[Confidence]: High\\n\\n\\n[1] Neural Predictive Belief Representations, Guo et al., 2018\\n[2] Shaping Belief States with Generative Environment Models for Reinforcement Learning, Gregor et al., 2019\\n[3] Representation Learning with Contrastive Predictive Coding, Van den Oord et al., 2018\\n[4] AI2-THOR: An Interactive 3D Environment for Visual AI, Kolve et al., 2017\\n\\n\\n\\n#### Final Rebuttal comment (to make it visible to the authors) ####\\nI understand the authors point that every method for probing representations can potentially be flawed in that the probing mechanism might not be expressive enough to extract the information that is indeed present in the representation. The point I was trying to make in my previous response is, that in the case of conditional probes that try to solve multiple such \\\"probing tasks\\\" in parallel, such uncertainties accumulate because the probing mechanism might trade off performance on one task for performance on another (if solving all of them at once is too challenging). 
If the submission's key contribution is to make probes conditional this seems like a trade-off that should be experimentally investigated, as it is vital to practitioners how much they can trust their probing method to extract the relevant information if it is actually in the representation.\\n\\nOne possible experiment would be to train an unconditional probe per attribute (maybe on a subset of all possible attribute-object combinations) and then a conditional probe across all of them to show that the conditional probe's assessments of the representation agree with the ensemble of unconditional probes. In my previous response I additionally raised the point that even if it can be shown that conditional probes are reliable, the authors still need to provide arguments why that conditioning should work via a language interface, not for example a symbolic/one-hot interface. If the claim is that this allows for generalization, it should again be shown that this generalization does not increase the error of the probe substantially, such that meaningful conclusions about the representation are still possible.\", \"regarding_testing_on_more_diverse_environments\": \"the questions these experiments are supposed to answer (i.e. how reliable are conditional probes) are inherently empirical, so verifying across diverse environments will make the analysis more conclusive.\\n\\nI agree with the author's point that training 7k unconditional probes would not be practical, probably the current approach would be to train an image decoder that then reconstructs a top-down view of the whole scene. This, however, would have the same problems as a conditional probe, i.e. the probing decoder could trade-off performance between reconstructing different parts of the image. Therefore, I agree with the authors that investigating conditional probes is an interesting direction, but the submission does not provide a comprehensive analysis of this question.\\n\\nOn a final note, I appreciate the continued, factual discussion with the authors and think that the refinement of the focus towards conditional vs unconditional probes is a step in the right direction. To acknowledge that I am raising my score from \\\"reject\\\" to \\\"weak reject\\\". Yet, I think that the work in its current form lacks experimental analysis of the proposed conditional probes. During the rebuttal discussion I highlighted concerns and proposed possible experiments. I cannot recommend acceptance of the submission in its current form, but I encourage the authors to incorporate the refined motivation and add more comprehensive experimental evaluation for a possible resubmission.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"The authors propose a framework to assess to which extent representations built by predictive models such as action-conditional CPC or SimCore contain sufficient information to answer questions about the environment they are trained/test on. The main idea is to train an independent LSTM (a Question-answer decoder) so that given the hidden state of the predictive model and a question about the environment, it is able to answer the question.\", \"The authors give empirical evidence that the representations created by SimCore contain sufficient information for the LSTM to answer questions quite accurately while the representations created by CPC (or a vanilla LSTM) do not contain sufficient information. Based on the experimental results, the authors argue that the information encoded by SimCore contains detailed and compositional information about objects, properties and spatial relations from the physical environment.\", \"The idea is clearly explained and seems sensible, the paper is well written, the execution is competent and the authors provide a sufficient amount of details so that reproducibility should be possible. As a result, I am positive, however, I think it would be best accepted as a workshop paper given that:\", \"The experiment are only carried out on a single environment, however, their claims are rather general. To support such general claims, experiments on additional environments seem necessary.\", \"While the idea is sensible, the study is quite narrow because it only compares three models.\", \"While sensible, the methodological contribution is rather straightforward.\", \"The take home is quite brief.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose question answering (QA) as a tool to investigate what agents learn about the world, i.e., how much about the world is encoded in their internal states. The authors argue that this is an intuitive method for humans and allows for arbitrary complexity.\\nConcretely, they train agents on exploration of a 3D environment using reinforcement learning and then ask them a set of non-trivial questions. This includes unseen combinations of seen attributes (\\\"zero-shot\\\"), showing that, what the agents learn, is to some degree compositional. Importantly, agents are not trained to answer questions explicitly.\\n\\nThe authors investigate multiple agents and find that LSTM and CPC|A representations are no better than chance, SimCore's representations seem to be the best for the QA task, and there is still a big performance difference between SimCore and the upper bound \\\"No SG\\\".\\n\\nI think this paper is interesting and well done. I agree with the authors that QA is an intuitive probing tool, which can be used for similar agent analyses in the future.\"}"
]
} |
rkeqn1rtDH | Hierarchical Graph Matching Networks for Deep Graph Similarity Learning | [
"Xiang Ling",
"Lingfei Wu",
"Saizhuo Wang",
"Tengfei Ma",
"Fangli Xu",
"Chunming Wu",
"Shouling Ji"
] | While the celebrated graph neural networks yields effective representations for individual nodes of a graph, there has been relatively less success in extending to deep graph similarity learning.
Recent work has considered either global-level graph-graph interactions or low-level node-node interactions, ignoring the rich cross-level interactions between parts of a graph and a whole graph.
In this paper, we propose a Hierarchical Graph Matching Network (HGMN) for computing the graph similarity between any pair of graph-structured objects. Our model jointly learns graph representations and a graph matching metric function for computing graph similarity in an end-to-end fashion. The proposed HGMN model consists of a multi-perspective node-graph matching network for effectively learning cross-level interactions between parts of a graph and a whole graph, and a siamese graph neural network for learning global-level interactions between two graphs. Our comprehensive experiments demonstrate that our proposed HGMN consistently outperforms state-of-the-art graph matching network baselines for both classification and regression tasks. | [
"Graph Neural Network",
"Graph Matching Network",
"Graph Similarity Learning"
] | Reject | https://openreview.net/pdf?id=rkeqn1rtDH | https://openreview.net/forum?id=rkeqn1rtDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0OXaFIh1Q3",
"rklNbW8_oS",
"HklRSgLOiB",
"SylL1lIOoH",
"ryxXCRrujS",
"BJgjovGVqB",
"rJe9pCD6tH",
"S1lZD04oYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737037,
1573572859958,
1573572678031,
1573572573906,
1573572299041,
1572247458973,
1571811010019,
1571667544841
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1963/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1963/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1963/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1963/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1963/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1963/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1963/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission proposes an architecture to learn a similarity metric for graph matching. The architecture uses node-graph information in order to learn a more expressive, multi-level similarity score. The hierarchical approach is empirically validated on a limited set of graphs for which pairwise matching information is available and is shown to outperform other methods for classification and regression tasks.\\n\\nThe reviewers were divided in their scores for this paper, but all noted that the approach was somewhat incremental and empirically motivated, without adequate analysis, theoretical justification, or extensive benchmark validation. \\n\\nAlthough the approach has value, more work is needed to support the method fully. Recommendation is to reject at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Responses for Official Blind Review #1\", \"comment\": \"Author response:\\n\\nFirst of all, we want to thank the reviewer for their thorough reading and valuable comments! However, there are some points of misunderstanding that we address in this rebuttal.\", \"below_we_address_the_concerns_mentioned_in_the_review\": \"1) The paper proposes a network learning similarity between graphs. In particular, the proposed method focuses on node-graph cross-level interaction which has not been considered by other neural net studies. The performance is evaluated by two datasets on classification and regression tasks respectively. Overall, the basic idea would be reasonable, and the architecture is clearly described.\\n\\nWe are very grateful to the reviewer for this accurate summary, and for the kind recognition of our key contributions. \\n\\n2) The network considers node-graph interaction, but in general, subgraph-graph interaction can be considered. Rationale that only focusing on node-graph interaction is not mentioned.\\n\\nYes, subgraph-graph interaction could be exploited as well, which we will leave it as one of the future works. However, we are not sure if the subgraph-graph interaction provides additional information beyond graph-graph interactions (global-level features) and node-graph interactions (low-level features). In this work, we are still focusing on how to develop a more effective way to capture the low-level features to augment the global-level features. This is why we mainly focused on node-graph interaction in this paper.\\n\\n3) The authors repeatedly mention that the proposed method jointly learns representation and similarity. Is the learned representation can be used for any purpose? Since it depends on a counterpart of the input pair, the representation is seemingly difficult to use.\\n\\nThis is a great question. We don\\u2019t think these learned representations for each graph are best for performing other tasks such as node classification or graph classification compared to the dedicated models for these tasks. What we tried to emphasize is that our model will learn the representations of each graph (this is always true for any DL-based data driven models) that are more suitable for computing a good similarity metric between two graphs in an end-to-end fashion. \\n\\n4) The experiments show superior performance of the proposed methods, but the datasets are only two for each tasks. In particular, since graph classification is a popular task, evaluation on a variety of benchmarks would be more convincing. \\n\\nIt seems like there are some misunderstandings regarding the datasets for graph matching. Indeed, we would like to have more datasets in order to avoid any cherry-picking results. However, it is very hard to get graph matching benchmarks mainly because it is quite hard to get the ground truth (labels). Although there are many benchmarks for graph classification task, the inputs of graph matching are different and not directly available. For instance, for graph matching, the input sample is a pair of graphs (G1, G2) and the corresponding label (Y) which essentially computes the similarity between them. For all graph classification benchmarks, they only assign each graph with a label, and do not provide any similarity information between any two graphs. One cannot simply say two graphs with the same labels can be treated as \\u201csimilar\\u201d. 
\\n\\nCurrently, we only have two datasets created by (Bai et al., WSDM 2019) that use graph edit distance to compute ground-truth for graph-graph regression task that are publicly available. One of the main contributions of our work is to release another two datasets (as well as these sub-datasets) that are created from the binary functions compiled from the source codes for graph-graph classification task (see more details in A.1). We hope these four datasets together can serve as good benchmarks for promoting the research in developing graph match models. \\n\\n5) A baseline with some graph kernel can be informative.\\n\\nThis is a good suggestion, although some of the datasets are quite large for kernel methods. Note that in (Li et al., ICML 2019) they compared their GMN model against the WL kernel and showed significantly better performance. Our model consistently outperforms GMN model, which may be an indirect comparison to the WL kernel. \\n\\n6) Showing an example of graph pairs, in which cross-level interaction is indispensable to appropriately evaluate similarity, would be convincing.\\n\\nThis is a great suggestion. In practice, it is not hard to find an example of graph pairs to show only cross-level interaction is able to correctly predict the label right. However, it is pretty hard to show what features (in hidden high-dimensional space) the cross-level interaction captures so that they are able to perform correctly. \\n\\n7) A related paper being missed would be 'Yoshida, et al. Learning Interpretable Metric between Graphs: Convex Formulation and Computation with Graph Mining, SIGKDD 2019'.\\n\\nThanks for pointing it out. We have fixed it.\"}",
"{\"title\": \"Responses for Official Blind Review #3 (Continued)\", \"comment\": \"3) Why different similarity score functions are adopted for the classification task and the regression task?\\n\\nThis is simply because two different tasks have different requirements. For the classification task, we chose cosine similarity (between -1 and 1) because we want to calculate the similarity between two graphs. For a pair of inputs, it is common to choose cosine similarity for performing classification task. For the regression task, we cannot directly use cosine similarity because the regression score range should be between 0 and 1. Therefore, we chose sigmoid as the activation function to enforce the final score range between 0 and 1. \\n\\n4) For the classification task, the mean squared error loss is adopted. Why not using other more commonly used loss for classification task? \\n\\nThe reason we chose the mean squared loss instead of other commonly used loss like cross-entropy loss is because what we really care about is still the calculation of the similarity between two graphs, instead of performing binary classification. For instance, we can train our model for classification, but we used it for graph retrieval task where we can use similarity score for ranking. Note that we used AUC score rather than accuracy as our metric. \\n\\n5) It would be better if the authors could empirically show the effectiveness of the Bi-LSTM aggregator.\\n\\nPlease see our responses above. \\n\\n6) It would be helpful if the authors could conduct some investigation on how the number of perspectives affect the performance of the model.\\n\\nThis is a great suggestion. Based on the reviewer\\u2019s advice, we have performed additional experiments to show the effect of the number of perspectives on the model performance. As shown in the following table, when the graph size is [3, 200] and [20, 200] (more training samples), our model performance is not sensitive to the number of perspectives (from 50 to 150). When the graph size is [50,200] (fewer training samples), the variance of the model becomes relatively larger than these on [3, 200] and [20, 200]. However, when we used more perspectives (like 150), the variance of the model reduced significantly. \\n\\nModels\\t\\t | FFmpeg[3, 200] | FFmpeg [20, 200] | FFmpeg [50, 200] | OpenSSL[3, 200] | OpenSSL[20, 200] | OpenSSL [50, 200] \\nMPNGMN (50) | 98.11+/-0.14 | 97.76+/-0.14 | 96.93+/-0.52 | 97.38+/-0.11 | 97.03+/-0.84 | 93.38+/-3.03 \\nMPNGMN (75) | 97.99+/-0.09 | 97.94+/-0.14 | 97.41+/-0.05 | 97.09+/-0.25 | 98.66+/-0.11 | 92.10+/-4.37 \\nMPNGMN (100) | 97.73+/-0.11 | 98.29+/-0.21 | 96.81+/-0.96 | 96.56+/-0.12 | 97.60+/-0.29 | 92.89+/-1.31 \\nMPNGMN (125) | 98.10+/-0.03 | 98.06+/-0.08 | 97.26+/-0.36 | 96.73+/-0.33 | 98.67+/-0.11 | 96.03+/-2.08 \\nMPNGMN (150) | 98.32+/-0.05 | 98.11+/-0.07 | 97.92+/-0.09 | 96.50+/-0.31 | 98.04+/-0.03 | 97.13+/-0.36\", \"note\": \"The number in the bracket after MPNGMN represents the number of perspectives.\"}",
"{\"title\": \"Responses for Official Blind Review #3\", \"comment\": \"Author response:\\n\\nFirst of all, we want to thank the reviewer for their thorough reading and valuable comments! However, there are some points of misunderstanding that we address in this rebuttal.\", \"below_we_address_the_concerns_mentioned_in_the_review\": \"1) The novelty of the paper is incremental. The major contribution of the paper lies in the propose of multi-perspective matching function , which is somewhat similar to the Neural Tensor Networks proposed in [1] and utilized in [2] ...\\n\\nThis work has two main contributions. First of all, we proposed a new type of interactions - cross-level node-graph interactions in order to more effectively exploit different-level granularity features between a pair of input graphs, where previous works only considered graph-graph and node-node interactions. Secondly, we have provided systematic studies on different factors on the performance of all graph matching models such as the impact of different tasks (classification and regression) and the sizes of input graphs. These tasks and factors the previous model failed to fully consider them are most important aspects to validate a graph matching model is indeed better or not. \\n\\nIt seems like there are some misunderstandings between our multi-perspective matching function and neural tensor networks. Given two input vectors v1, and v2 \\\\in R{d}, the neural tensor networks essentially performs the following calculations: \\n h = f( v1, v2 ) = f( v1 * T^[1,..,k] * v2), where T^[1, \\u2026,k] \\\\in R^{d, d, k} is a tensor\", \"while_our_multi_perspective_matching_function_performs_this_operation\": \"h = f( v1, v2 ) = f( v1 .* w_i, v2 .* w_i), where w_i \\\\in R^{d} is a weight vector, i =1, \\u2026,k\\nAnd the operator .* is the element-wise multiplication.\\nTherefore, we can clearly see that these two functions are very different. Conceptually, our multi-perspective matching function belongs to the class of multi-head attention function in (Vaswani et al., 2017) although it is also different. We also performed the comparisons between these two attention functions in Sec 4.4 and showed that our multi-perspective matching function is much more effective. \\n\\n2a) In Eq. (7), attentive graph-level embeddings are calculated using weighted average of its node embeddings. However, it is not clear which node from the other graph should be used to calculate the weights (\\\\alpha_{I,j}, \\\\beta_{i,j}). Furthermore, it is also not clear why the attention score should solely base on a single node from the other graph rather than the information of the entire graph. \\n\\nThe weights are defined and calculated in Equation (6), where we calculate the cross-graph attention coefficients between the node v_i from graph G_1 or G_2 and all other nodes v_j from other graph. In other words, each node v_i in G_1 will compute the attention coefficients (\\\\alpha_{i,j}) between itself and all other nodes in another graph G_2. Similarly, each node v_i in G_2 will also compute the attention coefficients (\\\\beta_{i,j}) between itself and all other nodes in another graph G_1. We have updated our manuscript to make it more clear. \\n\\n2b) In Eq. (10), it would be better if the authors could provide more motivations about using Bi-LSTM aggregator. Especially, the embeddings to be aggregated are unordered. What are the two directions in Bi-LSTM in this case? 
What is the benefit of using Bi-LSTM as aggregator compared with LSTM aggregator or other aggregators? \\n\\nThis is a great question. To aggregate the cross-level interaction features from the node-graph matching layer, we employ a BiLSTM over the unordered feature embeddings \\widetilde{\\vec{h}}_j. You are absolutely right that the Bi-LSTM is a biased (order-sensitive) aggregator, and we simply used an arbitrary order to perform this operation. The reason we chose this option is that the BiLSTM aggregator achieved consistently (slightly) better performance than other aggregators, as shown in our experiments. We have also observed similar choices in previous works (Hamilton et al., NIPS 2017; Zhang et al., KDD 2019).\\n\\nModels\\t | FFmpeg [20, 200]| FFmpeg [50, 200] | OpenSSL[3, 200]| OpenSSL[20, 200]| OpenSSL [50, 200] \\nMPNGMN (Max) \\t| 73.85+/-1.76 | 77.72+/-2.07 | 67.14+/-2.70 | 63.31+/-3.29 | 63.02+/-2.77 \\nMPNGMN (FCMax) | 96.61+/-0.17 | 96.65+/-0.30 | 95.37+/-0.19 | 96.08+/-0.48 | 95.90+/-0.73 \\nMPNGMN (Avg) \\t| 83.29+/-4.49 | 85.52+/-1.42 | 80.10+/-4.59 | 70.81+/-3.41 | 66.94+/-4.33 \\nMPNGMN (FCAvg) | 73.90+/-0.70 | 94.22+/-0.06 | 93.38+/-0.80 | 94.52+/-1.16 | 94.71+/-0.86 \\nMPNGMN (LSTM) \\t| 97.02+/-0.99 | 84.65+/-6.73 | 96.30+/-0.69 | 97.51+/-0.82 | 89.41+/-8.40\\nMPNGMN (BiLSTM) | 98.29+/-0.21 | 96.81+/-0.96 | 96.56+/-0.12 | 97.60+/-0.29 | 92.89+/-1.31\\n\\nIn order to justify this choice, we have added these additional experimental results in Tables 9 and 10 in the appendix of the updated manuscript.\"}",
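[Editor's illustration] A side-by-side sketch of the two formulas contrasted in the rebuttal above may help. The shapes, the einsum form of the NTN bilinear term, and the use of cosine similarity as the comparison function f_m are our assumptions for illustration only:

```python
import torch
import torch.nn.functional as F

def ntn_match(v1, v2, T):
    # Neural Tensor Network bilinear form: h_i = v1^T T_i v2 for each of the
    # k slices T_i of the tensor T: (k, d, d). One scalar per slice.
    return torch.einsum("i,kij,j->k", v1, T, v2)  # (k,)

def multi_perspective_match(v1, v2, W):
    # Multi-perspective matching: each row w_i of W: (k, d) reweights both
    # input vectors element-wise (v .* w_i) before they are compared,
    # producing one similarity score per perspective.
    return F.cosine_similarity(W * v1, W * v2, dim=1)  # (k,)
```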
"{\"title\": \"Responses for Official Blind Review #2\", \"comment\": \"Author response:\\n\\nFirst of all, we want to thank the reviewer for providing valuable comments! Below we address the concerns mentioned in the review:\\n\\n1) The evidence for preferring this architecture over existing one is entirely empirical -- based on experiments on four datasets. \\n\\nOur model is well motivated by the drawbacks of existing works that they either only consider graph-graph level interactions or node-node level interactions when computing the graph similarity for graph matching. As we clearly illustrated in the Introduction Section, we would like to ask how a DL model can address the following challenges in order to overcome the above limitations: : i) how to learn different-level granularity (global level and local level) of interactions between a pair of graphs; ii) how to effectively learn more rich cross-level interactions between parts of a graph and a whole graph. To effectively cope with these challenges, we propose our Hierarchical Graph Matching Network(HGMN) for computing the graph similarity between any pair of graph-structured objects. The proposed HGMN model consists of a novel multi-perspective node-graph matching network for effectively learning cross-level interactions between parts of a graph and a whole graph, and a siamese graph neural network for learning global-level interactions between two graphs. \\n\\nWe would like to point it out that existing relevant works (Bai et al., WSDM 2019) and (Li et al., ICML 2019) considered either graph-graph regressions ( AIDS700 and LINUX1000) or graph-graph classification (FFmpeg). We are the first one to systematically investigate different factors on the performance of all graph matching models such as the impact of different tasks (classification and regression on four datasets) and the sizes of input graphs (which is ignored by the existing works). \\n\\n2) The properties of the graphs used in the experiments are not clear: edge, degree distribution.. How much graph structure contribute to learning? What happens if one just trained on the features of nodes?\\n\\nWe have listed various properties of four datasets in Table 1. Based on reviewer\\u2019s comments, we have also added the characteristics of edges in the Appendix of the updated manuscript. We copied some of them below. \\n\\nDatasets \\t\\t | # of Edges (Min/Max/AVG)\\t| AVG # of Degrees (Min/Max/AVG) \\nFFmpeg [3, 200]\\t | (2/332/27.02) \\t\\t | (1.25/4.33/2.59)\\nFFmpeg [20, 200]\\t | (20/352/75.88) \\t\\t | (1.90/4.33/2.94) \\nFFmpeg [50, 200]\\t | (52/352/136.83) \\t\\t | (2.00/4.33/3.00)\\nOpenSSL [3, 200]\\t | (1/376/21.97) \\t\\t | (0.12/3.95/2.44)\\nOpenSSL [20, 200] | (2/376/67.15)\\t\\t | (0.12/3.95/2.95)\\nOpenSSL [50, 200] | (52/376/127.75)\\t\\t | (2.00/3.95/3.04)\\nAIDS700\\t\\t | (1/14/8.80) \\t\\t\\t | (1.00/2.80/1.96)\\nLINUX1000\\t\\t | (3/13/6.94) \\t\\t\\t | (1.50/2.60/1.81)\\n-----------------------------------------------------------------------------------------------\\n\\nGNN takes both graph structure and node features as inputs. As we can see in Table 1, datasets AIDS700, FFmpeg, and OpenSSL have rich node features while dataset LINUX1000 only has one node feature. In our extensive experiments, our model is able to consistently outperform other state-of-the-art baselines on these datasets with quite different characteristics, highlighting that our model is not sensitive to the graph structure and the node features. 
\\n\\nWe are not sure what the reviewer mean \\u201cWhat happens if one just trained on the features of nodes?\\u201d. All GNN models must have graph structure (graph adjacency matrix) as inputs and cannot be trained only on node features. \\n\\n(3) The lack of any theoretical reasons ... or insights as to when and on what types of graphs this architecture performs well ...\\n\\nThis is a good suggestion and we will leave it as one of our future works. However, all previous works (Bai et al., WSDM 2019) and (Li et al., ICML 2019) did not provide any theories to support their models either. \\n\\nFor graph matching task, our model and two previous models (Bai et al., WSDM 2019) and (Li et al., ICML 2019) did not assume the types of graphs to consider. However, it is important to have a model that considers both global-level interactions and low-level interactions between two graphs. This is exactly our main contribution in this work that we proposed a novel type of interactions - cross-level node-graph interactions in order to more effectively exploit different-level granularity features between a pair of input graphs. In addition, we have provided systematic studies on different factors on the performance of all graph matching models such as the impact of different tasks (classification and regression) and the sizes of input graphs. These tasks and factors the previous model failed to fully consider them are most important aspects to validate a graph matching model is indeed better or not.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an architecture for (supervised) learning a similarity score between graphs through a series of layers for node embedding, node-graph matching, aggregated graph embedding, and finally prediction.\\n\\nThe evidence for preferring this architecture over existing one is entirely empirical -- based on experiments on four datasets. The properties of the graphs used in the experiments are not clear: How many edges do they have on average?What is the exponent of their degree distribution on average? How many triangles do they have? How much does the graph structure contribute to learning? What happens if one just trained on the features of nodes? \\n\\nThe lack of any theoretical reasons for using this architecture over others (perhaps by linking it to the cut norm of the graphs) or insights as to when and on what types of graphs this architecture performs well reduces the significant of this paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, a hierarchical graph matching network, which considers both graph-graph interaction and node-graph interaction, is proposed. Specifically, the graph-graph interaction is modeled through graph level embeddings learned by GCN with pooling layers. While node-graph interaction is modeled using node embedding learned by GCN and attentive graph embedding aggregated from node embedding.\", \"some_concerns_about_the_paper_are_as_follows\": \"1)\\tThe novelty of the paper is incremental. The major contribution of the paper lies in the propose of multi-perspective matching function $f_m()$, which is somewhat similar to the Neural Tensor Networks proposed in [1] and utilized in [2] Although, in [2], the Neural Tensor Network is used to measure the similarity between graph-level embeddings.\\n2)\\tSome of the technical details of the paper is not clearly presented or well explained. \\na.\\tIn Eq. (7), attentive graph-level embeddings are calculated using weighted average of its node embeddings. However, it is not clear which node $i$ from the other graph should be used to calculate the weights (\\\\alpha_{I,j}, \\\\beta_{i,j}). Furthermore, it is also not clear why the attention score should solely base on a single node from the other graph rather than the information of the entire graph. \\nb.\\tIn Eq. (10), it would be better if the authors could provide more motivations about using Bi-LSTM aggregator. Especially, the embeddings to be aggregated are unordered. What are the two directions in Bi-LSTM in this case? What is the benefit of using Bi-LSTM as aggregator compared with LSTM aggregator or other aggregators?\", \"some_other_questions_to_be_clarified\": \"1)\\tWhy different similarity score functions are adopted for the classification task and the regression task? \\n2)\\tFor the classification task, the mean squared error loss is adopted. Why not using other more commonly used loss for classification task?\", \"suggestions\": \"It would be better if the authors could empirically show the effectiveness of the Bi-LSTM aggregator.\\n\\nIt would be helpful if the authors could conduct some investigation on how the number of perspectives affect the performance of the model.\\n\\n[1] Reasoning With Neural Tensor Networks for Knowledge Base Completion\\n[2] SimGNN: A Neural Network Approach to Fast Graph Similarity Computation\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a network learning similarity between graphs. In particular, the proposed method focuses on node-graph cross-level interaction which has not been considered by other neural net studies. The performance is evaluated by two datasets on classification and regression tasks respectively.\\n\\nOverall, the basic idea would be reasonable, and the architecture is clearly described. Any of my comments below are not critical concerns.\\n\\nThe network considers node-graph interaction, but in general, subgraph-graph interaction can be considered. Rationale that only focusing on node-graph interaction is not mentioned.\\n\\nThe authors repeatedly mention that the proposed method jointly learns representation and similarity. Is the learned representation can be used for any purpose? Since it depends on a counterpart of the input pair, the representation is seemingly difficult to use.\\n\\nThe experiments show superior performance of the proposed methods, but the datasets are only two for each tasks. In particular, since graph classification is a popular task, evaluation on a variety of benchmarks would be more convincing. \\n\\nA baseline with some graph kernel can be informative.\\n\\nShowing an example of graph pairs, in which cross-level interaction is indispensable to appropriately evaluate similarity, would be convincing.\\n\\nA related paper being missed would be \\n'Yoshida, et al. Learning Interpretable Metric between Graphs: Convex Formulation and Computation with Graph Mining, SIGKDD 2019'.\"}"
]
} |
rJxq3kHKPH | A Simple Approach to the Noisy Label Problem Through the Gambler's Loss | [
"Liu Ziyin",
"Ru Wang",
"Paul Pu Liang",
"Ruslan Salakhutdinov",
"Louis-Philippe Morency",
"Masahito Ueda"
] | Learning in the presence of label noise is a challenging yet important task. It is crucial to design models that are robust to noisy labels. In this paper, we discover that a new class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption. Training with this modified loss function reduces memorization of data points with noisy labels and is a simple yet effective method to improve robustness and generalization. Moreover, using this loss function allows us to derive an analytical early stopping criterion that accurately estimates when memorization of noisy labels begins to occur. Our overall approach achieves strong results, outperforming existing baselines. | [
"noisy labels",
"robust learning",
"early stopping",
"generalization"
] | Reject | https://openreview.net/pdf?id=rJxq3kHKPH | https://openreview.net/forum?id=rJxq3kHKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"myLM_Km5oA",
"Skl4y3ytoB",
"rkxyVdJFjH",
"B1eD-EkKjr",
"ByluCM5MsB",
"BklPOzqGor",
"HkghJo9-jS",
"ryxCkugxjr",
"BkxvUxMA5H",
"SyxBYBr6Fr",
"rJepBpantB",
"HJleZsO2KB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798737008,
1573612507741,
1573611559350,
1573610494661,
1573196496252,
1573196399354,
1573133028408,
1573025765803,
1572900942871,
1571800445102,
1571769669163,
1571748599971
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1962/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1962/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1962/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1962/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1962/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1962/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1962/Authors"
],
[
"~Yilun_Xu1"
],
[
"ICLR.cc/2020/Conference/Paper1962/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1962/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1962/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper focuses on mitigating the effect of label noise. They provide a new class of loss functions along with a new stopping criteria for this problem. The authors claim that these new losses improves the test accuracy in the presence of label corruption and helps avoid memorization. The reviewers raised concerns about (1) lack of proper comparison with many baselines (2) subpar literature review and (3) state that parts of the paper is vague. The authors partially addressed these concerns and have significantly updated the paper including comparison with some of the baselines. However, the reviewers were not fully satisfied with the new updates. I mostly agree with the reviewers. I think the paper has potential but requires a bit more work to be ready for publication and can not recommend acceptance at this time. I have to say that the authors really put a lot of effort in their response and significantly improved their submission during the discussion period. I recommend the authors follow the reviewers' suggestions to further improve the paper (e.g. comparing with other baselines) for future submissions\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Experiment Updates\", \"comment\": \"Hi! We just updated the experiment sections as well. The updated experiments were suggested by the other reviewers and we think helped with making the paper more solid and demonstrated the effectiveness of the proposed method. Please let me summarize the update to the experiments as follows:\\n\\n(1) we updated Table 1 in section 4.2 to study our method on the IMDB dataset, which is a standard NLP task for sentiment analysis. The model used is a standard LSTM. We have updated the experiment section of our paper to include experiment on the IMDB dataset, see table 1 (r=0.1-0.3) and also see appendix section M (r=0.02-0.08), where the gambler\\u2019s loss is shown to outperform the baseline and the AES criterion is shown to outperform validation early stopping method significantly. \\n\\n(2) section J: we include two more demonstrations of the three stage phenomenon. One is on IMDB, using LSTM and GloVe word embedding, and the other is on openimage which is a noisy dataset whose noise rate is hard to estimate), for this example, we also show how the proposed criterion might help to estimate the noise level in the dataset\\n\\n(3) section K: we include some experiment to demonstrate that the proposed method might also be robust to asymmetric noise; the improvement over the baseline is consistent and significant; however, given the time and computational resource we had, the experiments are small scale; moreover, we actually plan to deal with the asymmetric label noise in a future work \\n\\n(4) section L: we compare our method to the \\u201cupper bound\\u201d, i.e. training without the corrupted data (no early stopping). We notice that for some range (r=0.2-0.5), analytical early stopping perform at least as good as the upper bound, we hypothesize that this is because our method also effective prevents overfiting by stopping early\\n\\n(5) Besides, we removed the part that combine our method with CT (co-gambling), because, as argued in the original version, the two methods do not seem very compatible. In its place, we added a simple schedule to improve our method consistently, called LAES, which starts the training by five epochs of warmup (o=m) and then switch to smaller o.\\n\\n(6) we also updated the CIFAR10 experiments in table 2 to include a comparison with simple training the nll loss, and we thinks this further shows the effectiveness of our method. While the proposed method does not achieve SOTA results in the range (r=0.2-0.6), it still achieves significant improvement over simply training on the nll loss (by 10-30% absolute accuracy, see the updated table 2), and we think this deserves some merit. In comparison with methods such as CT, we note that CT outperforms the proposed method only marginally by about 2-5% accuracy in the rage (r=0.2-0.6) while requiring twice as many parameters and training time, and so we think another merit of the proposed method is its simplicity.\\n\\n\\nWhile the theory part bases on assumptions that seem quite strong, we think the above experiments further verified these assumptions, and the fact that the proposed method is shown to be effective on these datasets and tasks further suggest the correctness and wide applicability of the theory.\"}",
"{\"title\": \"Experiment Updates\", \"comment\": \"Hi! We have updated our experiment section to answer your questions! Indeed, we think that adding in these experiments make the current paper more solid. The update to experiments include:\\n\\n(1) section J: we include two more demonstrations of the three stage phenomenon. One is on IMDB, using LSTM and GloVe word embedding, and the other is on openimage which is a noisy dataset whose noise rate is hard to estimate), for this example, we also show how the proposed criterion might help to estimate the noise level in the dataset\\n\\n(2) section K: we include some experiment to demonstrate that the proposed method might also be robust to asymmetric noise; the improvement over the baseline is consistent and significant; however, given the time and computational resource we had, the experiments are small scale; moreover, we actually plan to deal with the asymmetric label noise in a future work \\n\\n(3) section L: we compare our method to the \\u201cupper bound\\u201d, i.e. training without the corrupted data (no early stopping). We notice that for some range (r=0.2-0.5), analytical early stopping perform at least as good as the upper bound, we hypothesize that this is because our method also effective prevents overfiting by stopping early\\n\\n(4) moreover, we also updated Table 1 in section 4.2 to study our method on the IMDB dataset, which is a standard NLP task for sentiment analysis. The model used is a standard LSTM. We have updated the experiment section of our paper to include experiment on the IMDB dataset, see table 1 (r=0.1-0.3) and also see appendix section M (r=0.02-0.08), where the gambler\\u2019s loss is shown to outperform the baseline and the AES criterion is shown to outperform validation early stopping method significantly. \\n\\n(5) Besides, we removed the part that combine our method with CT (co-gambling), because, as argued in the original version, the two methods do not seem very compatible. In its place, we added a simple schedule to improve our method consistently, called LAES, which starts the training by five epochs of warmup (o=m) and then switch to smaller o.\", \"let_us_also_provide_an_answer_the_the_following_question\": \"A. 'It seems that gamblers loss best shines when the corruption rate is as high as 80% . That is 80 percent of the data is corrupted. Does this mean that if I trained with only 20% of the non-corrupt data I would still get a 99% accuracy on MNIST (even without gamblers loss)? A comparison of this sort would have been useful. '\\n\\n- we indeed think that the gambler's loss can be very useful when the strength of noise present is very strong. Since the current methods can hardly deal with extremely strong label noise (>0.7) especially because the noise rate becomes very hard to estimate at this stage. However, we also argue that the value for the proposed method should not be underestimated when the noise rate is small. For example, on CIFAR10, while the proposed method does not achieve SOTA results in the range (r=0.2-0.6), it still achieves significant improvement over simply training on the nll loss (by 10-30% absolute accuracy, see the updated table 2), and we think this deserves some merit. In comparison with methods such as CT, we note that CT outperforms the proposed method only marginally by about 2-5% accuracy in the rage (r=0.2-0.6) while requiring twice as many parameters and training time, and so we think another merit of the proposed method is its simplicity.\"}",
"{\"title\": \"Reply and experiment updates\", \"comment\": \"Hi! Thank you so much for your reply! I think your advice really helped us improving the paper. We have updated the experiment section of our paper to include experiment on the IMDB dataset, see table 1 (r=0.1-0.3) and also see appendix section M (r=0.02-0.08), where the gambler\\u2019s loss is shown to outperform the baseline and the AES criterion is shown to outperform validation early stopping method significantly. Besides, we removed the part that combine our method with CT (co-gambling), because, as argued in the original version, the two methods do not seem very compatible. In its place, we added a simple schedule to improve our method consistently, called LAES, which starts the training by five epochs of warmup (o=m) and then switch to smaller o.\\n\\nBesides this part, other updates to the experiments include: (1) section J: we include two more demonstrations of the three stage phenomenon (on IMDB and on openimage); (2) section K: we include some experiment to demonstrate that the proposed method might also be robust to asymmetric noise; (3) section L: on MNIST, we compare our method to the \\u201cupper bound\\u201d, i.e. training without the corrupted data. If you are interested , we also included two sections to elaborate on our theory part (Section A and section B). \\n\\nNow please let me address your specific questions.\\n\\nA1. \\u2018Specifically, the improvements on CIFAR-10 only appear for large corruption rates (0.7+), and performance is lower than the baselines for other corruption rates. This is a worrying problem, because it calls into question the value of the method on larger problems.\\u2019\\n- While the proposed method does not achieve SOTA results in the range (r=0.2-0.6), it still achieves significant improvement over simply training on the nll loss (by 10-30% absolute accuracy), and we think this deserves some merit. In comparison with methods such as CT, we note that CT outperforms the proposed method only by 2-5% accuracy in the rage (r=0.2-0.6) while requiring twice as many parameters and training time, and so we think another merit of the proposed method is its simplicity.\\n\\nA2. \\u2018At the top of page 3, the authors say that the idealized gap assumption \\u201cholds well for simple datasets such as MNIST and on datasets with very high corruption rate, where our method achieves best results, and less so on more complicated datasets such as CIFAR10\\u201d. The idealized gap assumption is behind the AES criterion, but Figure 5 suggests that the AES criterion works well on CIFAR-10, so what do the authors mean when they say the assumption doesn\\u2019t work as well on CIFAR-10? Is this just referring to the results?\\u2019 \\n- Sorry for this confusion! The same problem was also pointed to by reviewer 3. We removed this ambiguous sentence and added an example for this. It relates to the fact that on complicated dataset such as CIFAR10, the training accuracy on the clean dataset might be actually significant below 100%. While for MNIST, the training accuracy on the clean part is very close to 100%.\\n\\nA3. \\u2018Saying traditional label noise correction methods are \\u201cof no use when one is not aware of the existence of label noise\\u201d seems unfair. 
The FC method and others do not require foreknowledge of the corruption rate and do not harm performance in the absence of label noise, so they can also be said to automatically correct label noise.\u2019\\n- Yes, it is true that other methods, such as FC, when used without label noise, can be seen as a simple label smoothing method and should not harm learning. What we really meant was the cases in which the noise rate might be small (say, r=0.02-0.08) and might be hard to estimate; in these cases, other methods are unlikely to provide improvement when r is unknown. However, using the gambler\u2019s loss with some benign value of o is observed to actually improve the performance of the model (compared to training with the nll loss). For example, see section M (where o is set to 1.9 without any tuning).\\n\\nA4. \u2018\u201cFC, however, requires knowing the whole transition matrix, and is outperformed significantly by our method.\u201d This is not quite true, because Patrini et al. propose an estimate of the transition matrix as part of the Forward correction. Did you use the estimated or true transition matrix for the FC method? It would be good to clarify this in the paper.\u2019\\n- Yes, we used the estimation method proposed in the original paper, as we described in the related works section. When the transition matrix is exactly known, the Forward correction method can actually be quite strong (or even the best method).\\n\\nA5. We corrected the minor points you mentioned.\\n\\nAgain, thank you very much for the comments, and please let us know if you have any other questions!\"}",
"{\"title\": \"Reply (part 2)\", \"comment\": \"A7. 'Equation 1: So the loss function proposed is log(f(x)_y + (1/o) f(x)_m+1) . What is y here? The true label? Why is y called a point mass? Is this different from the cross-entropy loss + log loss on m+1 ?'\\n- Sorry for making this confusing! We have rewritten section 2 to make this as clear as we can. y is indeed the true label. Saying that y is a point mass simply means that it is a 0-1 loss (having value 1 for the correct label, and having 0 for the incorrect labels). The loss function function indeed takes the form log(f(x)_y + (1/o) f(x)_m+1), this has well studied information-theoretic properties (see chapter 6 of [1] for a detailed discussion on its intuitive meaning and mathematical properties). The function you mentioned might work in practice? But it is hard to give interpretation on such loss, and its theoretical properties are, to our knowledge, yet unknown.\\n\\nA8. 'For figure 3 again what datasets were used?'\\n-As we mentioned in the paper, the exact detail for figure 3 is given in the appendix section I, and as is described there, it is done in MNIST with corruption rate 0.5. Many more experiments are also shown there.\\n\\nA9. 'I do not understand equations 2 to 5. '\\n- Reviewer 3 also found this part a little confusing, and we agree that our original presentation of this part can indeed be greatly improved. We have reorganized and rewrote section 2 and added some intuitions of the mechanism at working (see the short paragraph above the current equation 5), in the hope to make this clearer. We also added appendix section A and appendix section B to clalrify further details. Please let us know if you still have any question about this part.\\n\\nA10. '\\u201c Making random bet will help with making money and a skilled gambler will not make such bets\\u201dWhy does making random bet help with making money? If random is good how can a skilled gambler exist in such a game? What is this skill?'\\n- Sorry! This is a typo! It should be \\u201c Making random bet will NOT help with making money and a skilled gambler will not make such bets\\\" \\n\\nA11. 'k denotes the sum of probability of predicting anything that is not y or m+1 (it does not denote prediction). '\\n- Hmmm, what we really meant was that 'k denotes the sum of PREDICTED probability of predicting anything that is not y or m+1'. We have updated section 2.1 to clarify this. Please also see appendix section A for more detailed information about the gambler's loss.\\n\\nA12. 'In the experiments section what was the symbol for the rate of corruption changed from epsilon to r. Are they different? '\\n- epsilon refers to (1-r), meaning the non-corrupt rate, so they are two different symbols.\\n\\nA13. 'What is nll? '\\n- nll is shorthand for negative log loss, which is also called cross entropy loss\\n\\nPlease let us know if you find any other parts confusing! We will do our best to revise the manuscript and clarify the points. \\n\\n[1] Thomas M. Cover. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing).\\n[2] https://arxiv.org/abs/1905.11604\"}",
"{\"title\": \"Reply (part 1)\", \"comment\": \"Hi! Thank you so much for the comment! I have read your comment carefully and I think, fortunately, many questions are due to misunderstanding, and we have updated the manuscript to make many points more clear (this includes a rewriting of section 2, and addition of section A and section B in the appendix). Before adding in experiments or rewriting sections, let me answer the questions that we can already provide answer to. We will let you know once we update the experiment section.\\n\\nA1. 'I cannot understand figure 1(a). The y-label says accuracy but it seems that the plot is about loss.'\\n-For figure 1a, please note that figure 1a has two y-axes; on the left it says accuracy, and on the right it says loss, also, please note that we consistently we used this 2-axes style in the paper (e.g. Fig.3, Fig.5, Fig.6).\\n\\nA2. 'What dataset was this and what architecture of DNN was used?'\\n-As is described in the title of Figure 1. This plot is for MNIST with corruption rate 0.5. Since the dataset is MNIST, the architecture is the one described in Table 3 in the appendix.\\n\\nA3. 'The plot shows that the DNN achieved a 100% accuracy in 5 epochs. Is this result meaningful?'\\n-Hmmm, it is a little hard to answer this question without a clear definition of 'meaningful'. The dataset is MNIST, and usually the accuracy reaches >95% using a DNN without label noise.\\n\\nA4. 'Before establishing a hypothesis based on this should the hypothesis not be tested on multiple datasets.The paper says that these stages are persistent across multiple architectures and datasets and as proof the paper says \\u2018we verified that\\u2019. Why can\\u2019t the reader see the experiments? By across datasets does the paper mean MNIST and CIFAR? By across architecture does the paper mean the two architectures mentioned in the appendix one each for MNIST and CIFAR respectively? '\\n-In addition to the two CNN architectures we described in the paper, two experiments are done using ResNet18 (please see Figure 5.c and 5.d, and search for the word ResNet). In fact, this 3-stage phenomenon is quite easily observable for many datasets when decent level of label noise is present. Since the paper is already 16 pages long, we did not include more experiments in the initial version, but as you suggested, we will also include a few plots from other dataset and architectures soon (we will let you know once we update the experiments). \\n-- 'we verified that .' was actually a type, we have removed this sentence. \\n\\nA5. 'The paper makes the assumption that label noise is symmetrically corrupted. Why and where does such an assumption hold? What happens to the proposed method if that is not true. '\\n- Since the criterion does not apply to assymetrically corrupted data, to deal with asymmetrically corrupted data, some other criterion is needed. However, simply using gambler's loss in this case should also improve the final performance. We can also update some toy experiment on this if time allows by the time the rebuttal period ends.\\n\\nA6. 'Assumption 2: During the gap stage the model has learned nothing about the corrupt data points. How is that even possible? '\\n- Empirically, the situation is that, on average, the model makes random prediction on the corrupted dataset. So there might be some corrupt points that the model actual learned, but the number of such points should be small compared to the ones the model has not learned. 
Moreover, this assumption can be verified by Figure 1, where the training loss on the corrupt part does not start to decrease significantly until the 20th epoch, by which point the training loss on the clean data is already very low. Theoretically speaking, this is because gradient descent tends to learn functions of increasing complexity, and the corrupt labels constitute a very high-complexity function, and so are very hard to learn compared with clean points [2].\"}",
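[Editor's illustration] A quick NumPy sketch of the symmetric corruption model discussed in A5 above. Uniform relabeling over the other m - 1 classes is the standard construction of symmetric label noise; the function name and signature are ours:

```python
import numpy as np

def symmetric_corrupt(labels, r, m, seed=0):
    # Flip each label with probability r to a class drawn uniformly
    # from the other m - 1 classes (symmetric label noise).
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(labels.shape[0]) < r
    offsets = rng.integers(1, m, size=labels.shape[0])  # in [1, m - 1]
    labels[flip] = (labels[flip] + offsets[flip]) % m
    return labels
```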
"{\"title\": \"Reply\", \"comment\": \"Hi! Thank you so much for your comment! We agree that the original writing needs a lot improvement for better clarity, and we have made updates according to your comments. We reorganized and rewrote large parts in section 2, 2.1 and 2.2 to make our theory part clearer. We also corrected sentences that we feel inappropriate.\\n\\nNow please let me address your questions specifically.\\nA1. '3 The assumptions \\\\hat{p}+\\\\hat{k}+\\\\hat{l}=1 is very strong to me. The events should be dependent. This makes all the theoretical analyses pseudo and not convincing at all. The authors may spend more effort to make the part clear, reasonable, and convincing.'\\n- We added in section A to explain this. We think it is a misunderstanding. In particular, see equation 12 and the discussion around it. \\\\hat{p}+\\\\hat{k}+\\\\hat{l}=1 is not an assumption and holds by construction. Please also see the current beginning part to section 2 (discussion around equation 1).\\n\\nA2. '(1 Why the derivative of Eq. (1) is Eq. (2)? The notation of f, f_\\\\theta, and f_w has been abused. The notation is confusing without explanation. It seems Eq. (2) is not correct and the following is not convincing.'\\n- Sorry for our sloppy notation in the original version! We have updated this section to make it clearer. Please check the current section 2.2 and let us know if the clarity is improved. We will work hard to make this as clear as possible. Please also see the newly added appendix section B if you are interested in further discussion regarding the robustness of gambler's loss.\\n\\nA3. (2 Why a small gradient will slower the model to fit the data? This is not clear and maybe not true.\\n- Yes, this is indeed an intuitive assumption, and might not hold for some special cases, since zero gradient means no learning has occurred. In fact, please note that the goal of (revised) section 2.2 is to explain, at a high level, a very surprising phenomenon, i.e. training with gambler's loss improves final accuracy when label noise is present. The actual mechanism might be very complicated and task-dependent, and a precise theoretical study of this phenomenon is extremely difficult given our current knowledge about deep learning theory. We added more intuition to this section, and we think that a clear theoretical understanding of this phenomenon is way beyond the scope of this work.\\n\\nA4. 'Many claims are very ambiguous. For example, \\\"the key point is that it always holds on some degree\\\", on which degree and why always holds?'\\n- Sorry for this confusion! We removed this ambiguous sentence and added an example for this. It relates to the fact that on complicated dataset such as CIFAR10, the training accuracy on the clean dataset might be actually significant below 100%.\\n\\nA5. 'In some cases, it even leads to a better memorization phenomenon\\\", in which cases and why lead to better memorization phenomenon?'\\n- There is no such sentence in our paper. In fact, we think this is a misreading of the two neighboring sentences in section 2:\\n\\\"As a result, this widens the test accuracy plateau by slowing down the memorization phenomenon. In some cases, it even leads to a better convergence speed and better peak performance (see Figure 4). \\\"\\n\\nA6. 'Some claims are even wrong. 
For example, \\\"Traditional methods in label noise often involves introducing a surrogate loss function that is specialized for the corrupted dataset at hand and is of no use when one is not aware of the existence of label noise\\\" This is just for some specific methods, not for the most of them.'\\n- Hmmm, thanks for pointing this out. Actually, what we really meant was that 'one common approach amongst others is to introduce a surrogate loss function', since the context was to introduce another loss function approach to the field. We have corrected this sentence.\\n\\nA7. We have also fixed the typo you mentioned.\\n\\nAgain, thank you so much for the review, and please check the newly revised section 2 and section A B in the appendix for your questions. Please let us know if you have more questions regarding section 2, and we will do our best to make it as clear as possible.\"}",
"{\"title\": \"Reply\", \"comment\": \"Hi Yilun! Thanks for your comment and for noticing our work! The paper you pointed us to is indeed interesting, and using determinant of the estimated joint matrix is very novel and seems to work very well (I personally learned a lot from reading your paper). We were not aware of this paper before and we think it is good for us to relate to this work in our paper. However, we decided that we will refrain from making a comparison with this method for the following two reasons: (1) let C denote the number of classes, then computing the determinant of the joint matrix is of O(C^3) complexity (e.g., for the numpy implementation), for tasks such as CIFAR100 or ImageNet, it does not look like the method will scale up easily, this is then followed by a matrix inversion, which is again of Omega(C^2.3) complexity; (2) the loss function also seems to have non-trivial batchsize dependence in order to make the estimated joint matrix full rank, for Imagenet, for example, this method needs at least 1000 batchsize, and to ensure that the matrix is full rank with high probability, the batchsize seems to need to be another order of magnitude larger. In short, we really want to compare with methods with similar computational complexity, and we will relate to this work in the paper. Still, we are very excited to hear about it.\"}",
"{\"title\": \"Related work/baseline missing\", \"comment\": \"Hi,\\n\\n[1] proposes the first loss function that is provably not sensitive to noise patterns and noise amount. Also, [1] does not require to know the noise patterns or noise amount beforehand. I wonder how the noise-robust function[1] performs in your setting. In experiments of [1], the noise pattern has both symmetry and asymmetry patterns, as well as diagonal-dominant and non-diagonal-dominant patterns. \\n\\nThank you!\\n\\n\\n[1] Yilun Xu, Peng Cao, Yuqing Kong, and Yizhou Wang. L_DMI: A novel information-theoretic loss function for training deep nets robust to label noise. NeurIPS 2019\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"\", \"update_after_rebuttal\": \"\", \"the_good\": \"The rebuttal and updated paper address many of my concerns. Most importantly, the updated paper demonstrates the three-stage phenomenon on Open Images and adds experiments on IMDB showing that the Gambler's loss with AES helps a lot. The LAES iteration introduced in the updated paper alleviates my concern about performance drop compared to the CT baseline at certain corruptions on CIFAR-10.\", \"the_bad\": [\"From Figure 12, it looks like the three-stage phenomenon doesn't hold on IMDB. Does AES provide additional benefit beyond the Gambler's loss on IMDB? This needs to be clarified with the way Figure 12 turned out.\", \"There is a serious missing citation [1] that should be included as a baseline. The proposed method in [1] is at least superficially similar to the Gambler's loss and also makes use of the fact that it is easier to fit clean labels than noisy labels. My apologies for not noticing this earlier.\"], \"overall\": \"I would still suggest acceptance, because the three-stage phenomenon is an interesting find that the authors make good use of. In light of the missing citation, though, I cannot raise my score.\\n\\n\\n[1]: Zhilu Zhang, Mert R. Sabuncu. \\\"Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels\\\". NeurIPS 2018.\\n\\n-----------------------------------------------------------------------\", \"summary\": \"This paper proposes a method to alleviate label noise. It opens with the observation of three distinct stages when training in the presence of label noise. Importantly, there is a \\u2018gap\\u2019 stage during which the network has not begun memorizing noisy labels and early stopping is ideal. The authors then observe that the Gambler\\u2019s loss (Ziyin et al., 2019) elongates the gap stage and propose an analytic early stopping (AES) criterion for identifying when to stop training.\\n\\nThe analysis of the AES criterion, e.g. in Figure 5, and the observation of a phase transition when tuning the o hyperparameter are quite interesting, and the latter observation is of practical value when using the AES criterion.\\n\\nThe AES criterion seems to be well-motivated, and the empirical evaluation of the Gambler\\u2019s loss with and without early stopping is good. The results are strong on MNIST but somewhat weak on CIFAR-10. Specifically, the improvements on CIFAR-10 only appear for large corruption rates (0.7+), and performance is lower than the baselines for other corruption rates. This is a worrying problem, because it calls into question the value of the method on larger problems. However, seeing as this is a distinct approach from the baselines and that it demonstrates some promise, I recommend borderline accept. The authors could raise my score by demonstrating more consistent gains on another larger-than-MNIST CV dataset or an NLP/speech dataset. Other points of concern that I have are listed below.\", \"major_points\": \"At the top of page 3, the authors say that the idealized gap assumption \\u201cholds well for simple datasets such as MNIST and on datasets with very high corruption rate, where our method achieves best results, and less so on more complicated datasets such as CIFAR10\\u201d. 
The idealized gap assumption is behind the AES criterion, but Figure 5 suggests that the AES criterion works well on CIFAR-10, so what do the authors mean when they say the assumption doesn\\u2019t work as well on CIFAR-10? Is this just referring to the results?\\n\\nSaying traditional label noise correction methods are \\u201cof no use when one is not aware of the existence of label noise\\u201d seems unfair. The FC method and others do not require foreknowledge of the corruption rate and do not harm performance in the absence of label noise, so they can also be said to automatically correct label noise.\\n\\n\\u201cFC, however, requires knowing the whole transition matrix, and is outperformed significantly by our method.\\u201d\\nThis is not quite true, because Patrini et al. propose an estimate of the transition matrix as part of the Forward correction. Did you use the estimated or true transition matrix for the FC method? It would be good to clarify this in the paper.\", \"minor_points\": \"\", \"there_are_a_few_grammatical_errors_and_typos_in_the_paper\": \"\\u201cor explicit regularization, this is also what is suggested by Abiodun et al. (2018)\\u201d (run-on sentence)\\\\\\n\\n\\u201cCIFAR10\\u201d should be \\u201cCIFAR-10\\u201d\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Updated review: Thanks for your comments. I feel the latest version of the paper is better than the previous version.\\nHowever, as stated by other reviewers as well, the claims of the paper are quite ambiguous. Another example from the author response is the point about how Chapter 6 of the Elements of Infomation Theory is related to Gambler's loss. This is not clear to me. I would not object to accepting the paper but I find it difficult to recommend accept for this paper. Perhaps the authors can be more clear in their claims. \\n\\n----------------------------------------------------------------------------------------------------------------------------------------------------------\", \"summary\": \"The paper focusses on the problem of noisy labels in supervised learning with deep neural networks. The paper, in turn, proposes an early stopping criterion for handling label noise. The early stopping criterion is dependent on a new loss function that is defined as the log of true label + weight on a reservation option? The paper shows that when the labels are corrupted then the propose early stopping criterion does better than early stopping criteria obtained via the validation set.\\n\\n\\\\The first section of the paper establishes that when label noise is present in the dataset, then there are three stages to training a deep neural network. \\nThe learning stage where the highest accuracy on the test set is achieved. \\nThe gap stage where test set accuracy goes down. \\nMemorization stage corresponds to when a deep neural network memorizes corrupt labels and test accuracy goes completely down. \\nI cannot understand figure 1(a). The y-label says accuracy but it seems that the plot is about loss. What dataset was this and what architecture of DNN was used? The plot shows that the DNN achieved a 100% accuracy in 5 epochs. Is this result meaningful? Before establishing a hypothesis based on this should the hypothesis not be tested on multiple datasets.\\nThe paper says that these stages are persistent across multiple architectures and datasets and as proof the paper says \\u2018we verified that\\u2019. Why can\\u2019t the reader see the experiments? By across datasets does the paper mean MNIST and CIFAR? By across architecture does the paper mean the two architectures mentioned in the appendix one each for MNIST and CIFAR respectively? \\n\\nThe paper makes the assumption that label noise is symmetrically corrupted. Why and where does such an assumption hold? What happens to the proposed method if that is not true.\", \"assumption_2\": \"During the gap stage the model has learned nothing about the corrupt data points.\\nHow is that even possible?\", \"equation_1\": \"So the loss function proposed is log(f(x)_y + (1/o) f(x)_m+1) . What is y here? The true label? Why is y called a point mass? Is this different from the cross-entropy loss + log loss on m+1 ?\\n\\nI do not understand equations 2 to 5. \\n\\nFor figure 3 again what datasets were used?\\n\\n\\u201c Making random bet will help with making money and a skilled gambler will not make such bets\\u201d \\nWhy does making random bet help with making money? If random is good how can a skilled gambler exist in such a game? 
What is this skill?\\n\\nk denotes the sum of probability of predicting anything that is not y or m+1 (it does not denote prediction). \\n\\nIn the experiments section what was the symbol for the rate of corruption changed from epsilon to r. Are they different? \\n \\nWhat is nll? \\n\\nIt seems that gamblers loss best shines when the corruption rate is as high as 80% . That is 80 percent of the data is corrupted. Does this mean that if I trained with only 20% of the non-corrupt data I would still get a 99% accuracy on MNIST (even without gamblers loss)? A comparison of this sort would have been useful. \\nOne astonishing result the paper presents is that with gambler\\u2019s loss even with 80% corrupt labels a 94% test accuracy is possible on MNIST dataset. I think this is significant, this raises the question that is it required to label all the data points ina dataset to achieve high accuracy or is it possible to achieve just as much with only 20% of the labels?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new loss function for dealing with label noise, claiming that the loss function is helpful in preventing overfitting caused by noisy labels. Some experiments show the effectiveness.\\n\\nThe theory part is unclear to me. (1 Why the derivative of Eq. (1) is Eq. (2)? The notation of f, f_\\\\theta, and f_w has been abused. The notation is confusing without explanation. It seems Eq. (2) is not correct and the following is not convincing. (2 Why a small gradient will slower the model to fit the data? This is not clear and maybe not true. (3 The assumptions \\\\hat{p}+\\\\hat{k}+\\\\hat{l}=1 is very strong to me. The events should be dependent. This makes all the theoretical analyses pseudo and not convincing at all. The authors may spend more effort to make the part clear, reasonable, and convincing.\\n\\nMany claims are very ambiguous. For example, \\\"the key point is that it always holds on some degree\\\", on which degree and why always holds? \\\"In some cases, it even leads to a better memorization phenomenon\\\", in which cases and why lead to better memorization phenomenon?\\n\\nSome claims are even wrong. For example, \\\"Traditional methods in label noise often involves introducing a surrogate loss function that is specialized for the corrupted dataset at hand and is of no use when one is not aware of the existence of label noise\\\" This is just for some specific methods, not for the most of them.\", \"typo\": \"\\\"We verify that . Therefore,\\\"\\n\\nOverall, I cannot understand why the proposed loss function works and cannot recommend acceptance for the current version.\"}"
]
} |
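The reviews in the row above debate the Gambler's loss of Ziyin et al. (2019), which Review #1 quotes as log(f(x)_y + (1/o) f(x)_{m+1}) with a reservation output m+1 and a payoff hyperparameter o. As a rough, hedged sketch of that quantity (the negative sign, tensor shapes, and value of o are assumptions, not the paper's exact implementation):

```python
import torch
import torch.nn.functional as F

def gamblers_loss(logits, targets, o=2.5):
    # The network outputs m+1 logits; the extra output is the 'reservation'
    # (abstention) option. Per-example loss: -log(p_y + p_{m+1} / o), where
    # o > 1 is the payoff hyperparameter whose phase transition the review
    # discusses. Sign convention and shapes are assumptions.
    probs = F.softmax(logits, dim=1)                            # (batch, m+1)
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # p_y
    p_reserve = probs[:, -1]                                    # p_{m+1}
    return -torch.log(p_true + p_reserve / o).mean()

# Toy usage: 4 examples, m = 10 classes plus one reservation output.
logits = torch.randn(4, 11, requires_grad=True)
targets = torch.randint(0, 10, (4,))
loss = gamblers_loss(logits, targets, o=2.5)
loss.backward()
```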
Hygq3JrtwS | On the Reflection of Sensitivity in the Generalization Error | [
"Mahsa Forouzesh",
"Farnood Salehi",
"Patrick Thiran"
] | Even though recent works have brought some insight into the performance improvement of techniques used in state-of-the-art deep-learning models, more work is needed to understand the generalization properties of over-parameterized deep neural networks. We shed light on this matter by linking the loss function to the output’s sensitivity to its input. We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints at using sensitivity as a metric for comparing the generalization performance of networks, without requiring labeled data. We find that sensitivity is decreased by applying popular methods which improve the generalization performance of the model, such as (1) using a deep network rather than a wide one, (2) adding convolutional layers to baseline classifiers instead of adding fully connected layers, (3) using batch normalization, dropout and max-pooling, and (4) applying parameter initialization techniques. | [
"Generalization Error",
"Sensitivity Analysis",
"Deep Neural Networks",
"Bias-variance Decomposition"
] | Reject | https://openreview.net/pdf?id=Hygq3JrtwS | https://openreview.net/forum?id=Hygq3JrtwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1Fz7lg0p-N",
"H1xu68SnsB",
"SkgmWi2Kor",
"Hkg-VbqFjB",
"S1llvxcYoS",
"SJxoegqtoH",
"BJgIJnMy5S",
"Ske_OjzaYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798736979,
1573832383605,
1573665531482,
1573654824682,
1573654615678,
1573654515495,
1571920862500,
1571789680468
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1961/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1961/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1961/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1961/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1961/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1961/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1961/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a definition of the sensitivity of the output to random perturbations of the input and its link to generalization.\\n\\nWhile both reviewers appreciated the timeliness of this research, they were taken aback by the striking similarity with the work of Novak et al. I encourage the authors to resubmit to a later conference with a lengthier analysis of the differences between the two frameworks, as they started to do in their rebuttal.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Sensitivity on the training set\", \"comment\": \"Thanks for your suggestion, we will add in the paper the differences with [1], and give a short summary of the mentioned points.\\n\\nPlease find below the calculated sensitivity on the MNIST train data set for a few trained networks (4 layer FC, 4 layer CNN, and VGG13 with various widths and number of channels). Since sensitivity is not using labeled data and the distribution of the input images are the same in test and train, the sensitivity value is the same when calculated on either set.\\n\\nS_test = [17627.77, 69103.00, 176774.00, 362343.86, 670139.63, 1087607.93, 148859.97, 2159199.86, 14964266.50, 42166734.79, 105627866.40, 221379445.12, 645672613.35, 1055347918.57, 1957933949.95, 2.54, 5.41, 18.52, 24.32]\\n\\nS_train = [17845.30, 69081.24, 176801.36, 362165.64, 663971.02, 1090227.46, 145624.26, 2184448.63, 15247152.58, 42963099.05, 105478721.26, 221472995.39, 652898741.97, 1052446483.39, 1965431193.4045093, 2.57, 5.65, 19.58, 26.77]\\n\\nThe Pearson correlation coefficient between the two = 0.9999925\", \"we_also_plotted_these_on_the_same_figure_versus_the_generalization_loss\": \"\", \"https\": \"//ibb.co/JQKW9Cm\"}",
"{\"title\": \"Thank you for your reponse\", \"comment\": \"Thanks for your response. I still believe that the connections to Novak et. al. should be discussed clearly in the paper. I also believe that the novelty of the paper is very limited since there is little difference between what is suggested in Novak et. al. and this paper. Having experiments on ConvNets is great but is not enough for a paper to be accepted at this conference. About the test loss, I agree with authors' response but that is only analytical and what I really care is empirical correlation with 0/1 test error instead of the cross-entropy loss. Finally you have mentioned that \\\"Although in the experiments presented in the paper S is computed on the testing data points, its value on train data exactly matches that on test data\\\". This is not clear to me at all and that is why I encourage you to calculate and report this measure on the training set.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Thank you very much for your structured feedback which we will use in our response.\", \"1__novelty\": \"Please refer to the reply to Reviewer 3, where we very thoroughly compared our work with [1] and pointed out, in particular, the differences between the two works.\", \"2__definition_of_the_test_loss\": [\"We would like to comment on this point from three aspects:\", \"We find it quite surprising to find a relationship between the cross-entropy loss (which depends on labels of the input data) and the sensitivity of the neural network output (which does not depend on the labels of the input data). In particular, please refer to Equation (8) where the left-hand side depends on the labels and the right-hand side does not depend on the labels.\", \"We can combine the bound found in [2] (refer to proposition 1 in [2] where the classification error is upper bounded by four times the regression loss) and the relation between cross-entropy loss and mean square error found in our work (Section 8.3), and find an upper bound for the classification error as a function of the cross-entropy loss.\", \"The relation between the variance term in the bias-variance decomposition and the sensitivity can be extended to any loss with such a decomposition. [3] defines the concepts of bias, variance, and noise for the multi-class classification error, but there is still no rigorous multi-class classification error decomposition (which might not be purely additive). If such a decomposition is found for the classification error, then the relation between sensitivity and classification error would follow.\"], \"3__using_test_data_in_the_complexity_measure\": \"Thanks for pointing this out. This is exactly why the sensitivity metric is intriguing since this is exactly what S is in principle doing. The sensitivity metric is a property of the network and not of the input data. Although in the experiments presented in the paper S is computed on the testing data points, its value on train data exactly matches that on test data, and the same conclusions can be made for both. So, as long as the unseen data follows the same distribution as the accessed training data, the link with the generalization loss remains the same.\\n\\n\\n[1] Novak, Roman, et al., \\\"Sensitivity and generalization in neural networks: an empirical study.\\\" arXiv preprint arXiv:1802.08760 (2018). \\n[2] Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, and Ioannis Mitliagkas. A modern take on the bias-variance tradeoff in neural networks. arXiv preprint arXiv:1810.08591, 2018.\\n[3] P. Domingos and G. Hulten. A unified bias-variance decomposition and its applications. In Proceedings of the 17th International Conference on Machine Learning, pages 231\\u2013238, 2000.\"}",
"{\"title\": \"Reply to Reviewer #3 (part 1/2)\", \"comment\": \"We would like to thank the reviewer for their thorough reading of our paper.\\n \\nThe intuition behind the correlation between sensitivity and generalization in neural networks goes back actually to 1995 [3]; even if that work was then limited to synthetic data. It wasn't until last year that Novak et al. [1] hinted on this correlation with an empirical study on fully connected neural networks in image classification datasets. They compare the sensitivity (as measured by the input-output Jacobian of the output of the softmax function) and the generalization gap (the difference between test and train accuracy) for trained feedforward fully connected neural networks with various depths, widths, and hyper-parameters, and conclude that the Jacobian norm is predictive of generalization depending on how close to the manifold of the training data the function is evaluated. \\n \\nThe sensitivity S and the Jacobian J are indeed conceptually similar, but there are some minor differences between the metrics in the two papers. S is computed before the softmax layer whereas J is computed after the softmax layer. Because of the chain rule, J depends on the derivative of the softmax function with respect to the logits, which is very low for highly confident predictors (the ones which assign a very high probability to one class and almost zero to the other classes). For instance, if the predictor erroneously assigns a high probability to a wrong class, the derivative of the softmax function is very low, resulting in a very low J, so J might be misleading in this case as it would indicate good generalization. In contrast, the sensitivity S does not depend on the confidence level of the predictor. \\n\\nOn the other hand, a practical motivation for using S instead of J is that in real-world applications where we are given multiple trained networks, the sensitivity metric S allows us to have an indication on the network architecture(s) with the best generalization ability without any access to their architecture, whereas computing J requires a backward pass and access to the architecture of the network.\\n \\nWhile [1] investigates the link between the norm of the Jacobian and the test error for feedforward fully connected neural networks, going beyond fully connected networks is needed in deep learning applications. Our work presents empirical results not only for convolutional networks but also for the state of the art neural network architectures such as VGGs and ResNets (Figure 1). \\n\\nShowing the correlation between sensitivity and generalization is only the first part of our work (Figure 1), which motivates the rest of the paper. The second and main part of the paper is to show how this relation sheds new light on understanding why certain techniques work in practice. In particular, we find a repeated link between the benefit of a large and diverse set of popular methods improving learning in deep-nets and the way they decrease S:\\n\\n- Batch Normalization (BN): \\nThere are quite a few results bringing insights on the reasons why BN works, here we provide an alternative explanation for the effectiveness of BN in terms of sensitivity (Figure 2). We further give a new viewpoint on the success of dropout and max-pooling. 
Networks with these methods have a lower sensitivity alongside with a lower generalization loss.\\n\\n- Initialization Techniques: \\nOur work presents the impact of different initialization techniques and rediscovers the effectiveness of He and Xavier techniques by computing the sensitivity (Section 5.2, Figure 3).\\n\\n- Comparing Different Architectures: \\nOur work presents empirical results on comparing depth versus width and convolutional versus fully connected networks (Section 5.1). From Figure 2 it is clear that the generalization ability of quite a few different architectures can be easily compared without the use of any labeled data, just by computing the sensitivity for the given trained architectures.\\n\\n- Sensitivity S of Untrained Networks: \\nOur work compares the sensitivity of untrained networks and the generalization of trained networks in Section 6.1 (Figure 4). This result is important in architecture search since it hints at the generalization ability of the network before the networks are trained. \\n\\nInterestingly, despite the crude approximations in our derivations, there is a strong alignment between Equation (8) and the empirical results (Figure 1). Even when the match is loose, this relation suggests a convincing explanation (refer to Figure 10 in Section 8.8). Also, it is easily transferable to other machine learning tasks such as regression tasks (refer to Figure 9 in Section 8.7).\"}",
"{\"title\": \"Reply to Reviewer #3 (part 2/2)\", \"comment\": \"Our work complements and reinforces the results presented in [1] by giving a new explanation in terms of sensitivity to the benefits of methods such as BN and dropout, and a new approximate tool to compare or pre-select different network architectures without requiring labeled data.\\n \\nArora et al. [2] provides a very interesting generalization error bound based on metrics that allow noise stability (layer cushion, etc.), whereas our paper presents a direct link between noise stability (in terms of an average case local sensitivity measure) and the generalization loss.\\n \\n[1] Novak, Roman, et al., \\\"Sensitivity and generalization in neural networks: an empirical study.\\\" arXiv preprint arXiv:1802.08760 (2018). \\n[2] Arora, Sanjeev, et al. \\\"Stronger Generalization Bounds for Deep Nets via a Compression Approach.\\\" International Conference on Machine Learning. 2018.\\n[3] Yannis Dimopoulos, Paul Bourret, and Sovan Lek. Use of some sensitivity criteria for choosing networks with good generalization ability. Neural Processing Letters, 2(6):1\\u20134, 1995.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper examines generalization performance of various neural network architectures in terms of a sensitivity metric that approximates how the error responds to perturbations of the input. A crude argument is presented for how the proposed sensitivity metric captures the variance term in the standard bias-variance decomposition of the loss. A number of experimental results are presented that show strong correlation between the sensitivity metric and the empirical test loss.\\n\\nUnderstanding the distinguishing characteristics of networks that generalize well versus networks that generalize poorly is a central challenge in modern deep learning research, so the topic and analyses presented in this paper are salient and will be of interest to most of the community. The experimental results are intriguing and the presentation is clear and easy to read. While some may object to the egregious simplifications utilized in \\\"deriving\\\" the sensitivity metric, I believe this kind of analysis should be welcomed if it produces new insights and helps explain otherwise opaque empirical phenomena. All told, if taken in isolation from prior work, I think the insights and empirical results presented in this paper are quite interesting and certainly sufficient for acceptance to ICLR.\\n\\nHowever, there is significant overlap with prior work that severely detracts from the novelty of the results presented here, and I think the community is already familiar with the paper's main conclusions. From the empirical viewpoint, [1] performs a very similar (and actually quite a bit more thorough) analysis, and reaches very similar conclusions. The authors do cite [1], but unless I missed something, their main argument for uniqueness is basically \\\"in experiments, we prefer S to the Jacobian, because in order to compute S it is enough to look at the network as a black box that given an input, generates an output, without requiring further knowledge of the model.\\\" While this may be useful from the practical standpoint for some non-differentiable models, I'm not convinced that this distinction is really significant in terms of building insights or new understanding. \\n\\nOne additional way this paper is distinct from [1] is that it includes a theoretical \\\"derivation\\\" for the sensitivity metric. While I found the argument interesting, from the theoretical perspective, [2] gives much more rigorous and insightful arguments that help explain the observed phenomena. \\n\\nOverall, I'm just not convinced this paper is novel enough to merit publication. But perhaps I've overlooked something, in which case I hope the author's response can highlight their unique contributions relative to prior work.\\n\\n[1] Novak, Roman, et al. \\\"Sensitivity and generalization in neural networks: an empirical study.\\\" arXiv preprint arXiv:1802.08760 (2018).\\n[2] Arora, Sanjeev, et al. \\\"Stronger Generalization Bounds for Deep Nets via a Compression Approach.\\\" International Conference on Machine Learning. 2018.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the connection between sensitivity and generalization where sensitivity is roughly defined as the variance of the output of the network when gaussian noise is added to the input data (generated from the same distribution as the training error).\\n\\nThe paper is well-written and the experiments are very comprehensive. There are however 3 major issues with the current approach:\", \"1__novelty\": \"Novak et al. 2018 suggests a very similar notation of sensitivity and they show correlation with generalization. Even though the authors site this work, they don't discuss the connection very clearly. In light of that work, there is very limited novelty in this paper.\", \"2__definition_of_test_loss\": \"Authors define the test loss to be cross-entropy but in almost all these tasks, what we care about is the task-loss which is 0/1 classification error on the test data and not the cross-entropy loss. These two loss behave very differently. In particular, the cross-entropy loss is very sensitive to the of variance of the output while 0/1 classification loss does not depend on it. Therefore, it is not surprising that there is high correlation between the output variance and the cross-entropy loss but it is not clear if this has anything to do with the test error.\", \"3__using_test_data_in_the_complexity_measure\": \"The goal of understanding generalization is not just to get correlation with the test error. One can always use a validation set to get a very good correlation. Even when we have limited data, we can always put a small portion of the data for validation without loosing much in the final performance. The main goal is to predict generalization without using any access to the distribution. In particular, we need properties that show how networks behave on new data instead of simply measuring a property on the new data. Therefore, using a measure that is evaluated on new data is not really helpful.\\n\\n\\n********************************\", \"after_author_rebuttals\": \"Authors have addressed one of my concerns (no 3) but the other two concerns are not addressed adequately. I increase my score to \\\"weak reject\\\" but not higher because of my concern about the novelty of the work in light of Novak et al. 2018.\"}"
]
} |
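The thread above never writes the sensitivity metric S down explicitly; Review #2 describes it informally as the variance of the network's output under Gaussian noise added to inputs from the training distribution. A rough Monte-Carlo sketch under that informal description (the noise scale sigma, the sample count, and the averaging over output units are assumptions; the paper's exact normalization may differ):

```python
import torch

def sensitivity(model, inputs, sigma=0.1, n_samples=20):
    # Monte-Carlo estimate: variance of the network outputs under Gaussian
    # perturbations of the inputs, averaged over examples and output units.
    # Only forward passes are needed, i.e. the model is treated as a black
    # box -- the practical advantage the authors claim over the Jacobian.
    model.eval()
    with torch.no_grad():
        outs = torch.stack([model(inputs + sigma * torch.randn_like(inputs))
                            for _ in range(n_samples)])  # (n_samples, N, C)
        return outs.var(dim=0).mean().item()

# Toy usage with a stand-in network; sigma and n_samples are illustrative.
net = torch.nn.Sequential(torch.nn.Linear(784, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 10))
x = torch.randn(32, 784)
print(sensitivity(net, x))
```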
H1eF3kStPS | Redundancy-Free Computation Graphs for Graph Neural Networks | [
"Zhihao Jia",
"Sina Lin",
"Rex Ying",
"Jiaxuan You",
"Jure Leskovec",
"Alex Aiken."
] | Graph Neural Networks (GNNs) are based on repeated aggregations of information across nodes’ neighbors in a graph. However, because common neighbors are shared between different nodes, this leads to repeated and inefficient computations. We propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN graph representation that explicitly avoids redundancy by managing intermediate aggregation results hierarchically, and eliminating repeated computations and unnecessary data transfers in GNN training and inference. We introduce an accurate cost function to quantitatively evaluate the runtime performance of different HAGs and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN graph representation by increasing the end-to-end training throughput by up to 2.8x and reducing the aggregations and data transfers in GNN training by up to 6.3x and 5.6x. Meanwhile, HAGs improve runtime performance by preserving GNN computation, and maintain the original model accuracy for arbitrary GNNs. | [
"Graph Neural Networks",
"Runtime Performance"
] | Reject | https://openreview.net/pdf?id=H1eF3kStPS | https://openreview.net/forum?id=H1eF3kStPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SpKGoqpcUJ",
"rkl9lZZhoB",
"SygZInyhsS",
"ryexiZJ2jB",
"r1gKPbyhor",
"S1x2xZ1njH",
"rJxFqXA1ir",
"ryxmxkCRtB",
"rkxBrYqaYH",
"Syly8HwpKH",
"BJlw11mpFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798736948,
1573814513595,
1573809225273,
1573806487552,
1573806432718,
1573806323892,
1573016465272,
1571901162861,
1571821885202,
1571808582853,
1571790559411
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1960/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1960/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1960/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1960/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1960/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1960/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1960/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1960/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1960/Authors"
],
[
"~Xiaojian_Wu1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new graph Hierarchy representation (HAG) which eliminates the redundancy during the aggregation stage and improves computation efficiency. It achieves good speedup and also provide theoretical analysis. There has been several concerns from the reviewers; authors' response addressed them partially. Despite this, due to the large number of strong papers, we cannot accept the paper at this time. We encourage the authors to further improve the work for a future version.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for the quick response\", \"comment\": \"It seems there is a slight misunderstanding in Theorem 1 and 2.\\n\\nTheorem 1 shows that HAG always maintains the exact same model accuracy as GNN-graph for both categories of GNN models (i.e., set and sequential AGGREGATE). In fact, training a GNN model on HAG produces the exact same activations/gradients/weights as traditional training on GNN-graph in each epoch.\\n\\nTheorem 2 and 3 describe the runtime performance (i.e., the execution time to train an epoch) of the HAGs discovered by our search algorithm, since there exist numerous HAGs functionally equivalent to the original GNN-graph. In particular, Theorem 2 proves that, for GNN models with sequential AGGREGATE, the search algorithm finds a HAG with globally optimal runtime performance (i.e., minimal execution time to train an epoch). Theorem 3 proves a similar bound for GNN models with set AGGREGATE.\\n\\nFigure 6 compares the training time of HAG and GNN-graph for GCN, which uses set AGGREGATE. We are happy to also include a time-to-accuracy comparison for a GNN model with sequential AGGREGATE in the final paper. Note that Theorem 1 proves that HAG maintains the same model accuracy as the original GNN-graph by design.\"}",
"{\"title\": \"Thank you for the response\", \"comment\": \"Thank you for the clarification for GraphSAGE. I understand.\\n\\n>The HAG graph representation achieves the same training and test results as the original GNN-graph representation for each GNN model, even though GNN models with different aggregation methods (i.e., set v.s. sequential) may obtain different accuracy--- this issue is orthogonal to the HAG optimizations.\\n>Theorem 1 in the paper proves the equivalence of HAG and GNN-graph representations for both training and inference. This means that HAG performs exactly the same computations as the traditional model training/inference but in a non-redundant way. This means, that HAG obtains exactly the same model as traditional training (but HAG is much faster).\\n>To address reviewer\\u2019s comment we have added an experiment to evaluate the training effectiveness of HAG (Figure 6 on page 13), which compares the time-to-accuracy performance between original GNN-graph representation and HAG, and show that HAG can reduce the training time by 1.8x while obtaining the same model accuracy.\\n\\nThanks for the comment.\\nI am completely fine for the exact case (i.e., Theorem 1). This case the performance should be the same.\\n\\nSo, Figure 6 is based on Theorem1 or Theorem 2? This part is still not clear.\\nIf this is exact one (based on Theorem 1), I still believe the accuracy comparison of sequential one is necessary, since we can make the method extremely fast with very poor accuracy.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for a thorough review and valuable questions. The reviewer has concerns about (1) how HAG affects model accuracy and (2) how sampling rate affects the runtime performance. We believe there are a few important misunderstandings, which we address in detail in our response below. We have also updated the paper to further clarify and emphasize these points.\\n\\n### Need to report the model accuracy\\nThe HAG graph representation achieves the same training and test results as the original GNN-graph representation for each GNN model, even though GNN models with different aggregation methods (i.e., set v.s. sequential) may obtain different accuracy--- this issue is orthogonal to the HAG optimizations.\\n\\nTheorem 1 in the paper proves the equivalence of HAG and GNN-graph representations for both training and inference. This means that HAG performs exactly the same computations as the traditional model training/inference but in a non-redundant way. This means, that HAG obtains exactly the same model as traditional training (but HAG is much faster).\\n\\nTo address reviewer\\u2019s comment we have added an experiment to evaluate the training effectiveness of HAG (Figure 6 on page 13), which compares the time-to-accuracy performance between original GNN-graph representation and HAG, and show that HAG can reduce the training time by 1.8x while obtaining the same model accuracy.\\n\\n### What is the trade-off between the sampling rate and the speedup\\nWe think there is a misunderstanding here. The reviewer seems to assume that HAG is designed for mini-batch training, probably because we use GraphSAGE as an example to demonstrate different aggregation functions in Table 1. In fact, HAG is designed for full-batch training, which is also the training method used in most existing GNN models, including GCN (Kipf & Welling, 2016), GIN (Xu et al., 2019), and SGC (Wu et al., 2019). We will fix this confusion by emphasizing the full-batch training setting in the introduction and use other GNN models (with full-batch training) as examples in Table 1.\\n\\n### Equations are used without explaining the meaning (e.g., AGGREGATE in Equation (1)) \\nWe apologize for the missing explanation in the equations. The AGGREGATE in Equation (1) can be arbitrary associative and commutative operations performed on a set (i.e., invariant to the order in which the aggregations are performed). We have updated the paper to clarify this.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for providing a thorough review and asking valuable questions. The reviewer raises several concerns about the broad applicability of HAGs. As we argue below, HAG is broadly applicable to the majority of GNN models, it can support directed and weighted graphs, and it provides significant speedups with no loss in model performance (both at training as well as prediction time). We have revised the paper to further clarify and emphasize all these points.\\n\\n### HAG does not support GAT\\nWe thank the reviewer for making this point. This is correct and HAG optimization does not apply to graph attention networks (GAT). The HAG graph representation is designed for GNNs with a neighborhood aggregation scheme (formally defined in Algorithm 1), and is applicable to most existing GNN models, including GraphSAGE, PinSAGE, GCN, GIN, SGC, DCNN, DGCNN, and many others. Thus, it is reasonable to conclude that HAG applies to a significant majority of GNN models. Because such GNN-graphs are processed individually in these GNN models, they contain significant redundant computation (up to 84%), and HAG can reduce the overall computation by 6.3x, while provably preserving the original model accuracy. \\nWe do appreciate the comment, and we have added a paragraph at the end of page 8 to discuss this limitation of HAG in the revised paper.\\n\\n### Can HAG support directed and weighted graph?\\nHAG can support directed and/or weighted graphs as long as the GNN models can be abstracted as in Algorithm 1. In particular, HAG can support directed graphs by changing N(v) in Algorithm 1 to be the set of incoming-neighbors of node v (instead of the set of all neighbors). For weighted graphs, HAG can incorporate edge weights in neighborhood aggregation by updating the AGGREGATE function in Algorithm 1 to consider edge weights. For weighted graphs, rather than identifying common subsets of neighbors, HAG identifies common neighbors with shared edge weights as redundant computation. The fact that existing GNN models in the literature do not consider edge weights and are designed for undirected graphs makes it hard to find a realistic benchmark to evaluate the performance of HAG on directed and weighted graphs.\\n\\nTo address the reviewer\\u2019s point we have added a discussion on potential extensions of our HAG algorithm to directed and weighted graphs in the revised paper.\\n\\n### The training effectiveness of HAG is questionable\\nThere is a slight misunderstanding here. HAG eliminates redundancy in GNN training while exactly maintaining the original computation (proved in Theorem 1), therefore it is guaranteed to preserve the original model accuracy by design. However, to address reviewer\\u2019s comment we have added an experiment to evaluate the training effectiveness of HAG (Figure 6 on page 13), which compares the time-to-accuracy performance between original GNN-graph representation and HAG, and shows that HAG can reduce the end-to-end training time by 1.8x while obtaining the same model accuracy. Thus, HAG leads to significant faster training time with no loss in model performance.\\n\\nWe have further clarified this in the main paper, because we see it as one of the important benefits of HAG that it maintains original model performance (by performing exactly the same computations), while leading to significant speed-ups.\"}",
"{\"title\": \"Response to Review#4\", \"comment\": \"We thank the reviewer for providing a thorough review and asking valuable questions. The reviewer raises a concern about the broad applicability of HAGs. HAG is broadly applicable to the majority of GNN models, including GraphSage, PinSage, GCN, GIN, SGC, DCNN, DGCNN, and many others. HAG provides significant speedups while provably preserving model accuracy (both for training and inference).\\n\\n### HAG does not support GAT\\nWe thank the reviewer for making this point. This is correct and HAG optimization does not apply to graph attention networks (GAT). The HAG graph representation is designed for GNNs with a neighborhood aggregation scheme (formally defined in Algorithm 1), and is applicable to most existing GNN models, including GraphSAGE, PinSAGE, GCN, GIN, SGC, DCNN, DGCNN, and many others. Thus, it is reasonable to conclude that HAG applies to a significant majority of GNN models. Because such GNN-graphs are processed individually in these GNN models, they contain significant redundant computation (up to 84%), and HAG can reduce the overall computation by 6.3x, while provably preserving the original model accuracy. \\nWe do appreciate the comment, and we have added a paragraph at the end of page 8 to discuss this limitation of HAG in the updated paper.\\n\\n### More GNN results in different models would make the paper more convincing\\nWe thank the reviewer for the constructive feedback. In the updated paper, we have evaluated HAG on more GNN models and observed similar or even better performance improvement. In particular, we have further evaluated HAG on GIN (Xu et al., 2019) and SGC (Wu et al., 2019). The results are shown in Figure 5 on page 12. Compared to the GCN model, our HAG optimizations achieve similar speedups on GIN and better speedups on SGC.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a new graph Hierarchy representation named HAG. The HAG aiming at eliminating the redundancy during the aggregation stage In Graph Convolution networks. This strategy can speed up the training and inference time while keeping the GNN output unchanged, which means it can get the same predict result as before. The idea is clear and easy to follow. For the theory part, I do not thoroughly check the theoretical proof but the theorem statement sounds reasonable for me. The experiment shows the HAG performs faster in both training and inference.\\n\\nGenerally speaking, I think this paper has good theory analysis, the speed-up effect is also good from the experimental result. However, I still have some concerns and comments.\\n\\n1. The algorithm seems hard to apply on the attention-based Graph Neural network, which achieves good performance in several benchmarks these years. In other words, the redundancy of the node aggregate only exists in the Graph Convolution model with the fix node weight, which is replaced by a dynamic weight in many latest models with higher performance. That weakens the empirical use of this algorithm.\\n\\n2. The authors state that the HAG can optimize various kinds of GNN models, but the experiment only shows the results on a small GCN model. More GNN results in different models and settings would make the algorithm more convincing.\\n\\nIn conclusion, I think this is a good paper. Regards the comments above, I prefer a grade around the borderline.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper aims to propose a speeding-up strategy to reduce the training time for existing GNN models by reducing the redundant neighbor pairs. The idea is simple and clear. The paper is well-written. However, major concerns are:\\n\\n1. This strategy is only for equal contribution models (e.g., GCN, GraphSAGE), not for methods which consider distinct contribution weights for individual node neighbor (e.g. GAT). However, in my opinion, for one target node, different neighbors should be assigned with different contributions instead of equal contributions.\\n\\n2. What kind of graphs can the proposed model be applied to? This paper seems to only consider unweighted undirected graphs. How about directed and weighted graphs? Even for an unweighted undirected graph, the symmetric information may also be redundant for further elimination. Then can HAG reduce this symmetric redundancy?\\n\\n3. There is no effectiveness evaluation comparing the original GNN models with the versions with HAG. The authors claim that, with the HAG process, the efficiency could be improved without losing accuracy. But there are no experimental results verifying that the effectiveness of the HAG-versions which could obtain comparable performance with the original GNN models for some downstream applications (e.g., node classification).\\n\\n--------------------------------------------------Update------------------------------------------------\\nThanks very much for the authors' feedback. The revised version has clarified some of my concerns. However, the equal-contribution (in Comment 1) is still a big one that the authors should pay attention. I increase my score to 3.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #32\", \"review\": \"In this paper, authors propose a way to speed up the computation of GNN. More specifically, the hierarchically aggregate computation graphs are proposed to aggregate the intermediate node and utilize this to speed up a GNN computation. Authors proof that the computation based on HAGs are equivalent to the vanilla computation of GNN (Theorem1). Moreover, for sequential aggregation, it can find a HAG that is at least (1-1/e)-approximation of the globally optimal HAGs (Theorem 3). These theoretical results are nice. Through experiments, the authors demonstrate that the proposed method can get faster computation than vanilla algorithms.\\n\\nThe paper is clearly written and easy to follow. However, there are some missing piece needed to be addressed.\\nI put 6 (weak accept), since we cannot put 5. However, current my intention about the score is slightly above 5.\", \"detailed_comments\": \"1. Experiments are only done for computational time comparison. In particular, for the sequential one, prediction accuracy can be changed due to the aggregation algorithm. Thus, it needs to report the prediction accuracy.\\n\\n2. In GraphSAGE, what is the sampling rate? It would be nice to have the trade-off between the sampling rate and the speedup. I guess if we sample small number of points in GraphSAGE, the performance can be degraded. In contrast, the proposed algorithm can get similar performance with larger sampling rate? Related to the question 1, the performance comparison is needed. \\n\\n3. Equations are used without not explaining the meaning. For instance AGGREGATE function (1), there is no definition how to aggregate.\"}",
"{\"comment\": \"Thanks for your interests in the paper.\\n\\nThe negligible memory overhead is because the intermediate aggregations nodes do not need to be memorized for back propagation, and our HAG implementation uses the same memory across all layers. Caching the intermediate results for 100K nodes (with 16 activations per node) requires approximately 6MB GPU memory, which is negligible compared to the overall memory usage (~6GB) for training COLLAB.\\n\\nFor the second question, our HAG approach can actually reduce the memory usage for edges, since a HAG contains 1.3x-5.6x fewer edges than the original graph representation. The two bottom charts in Figure 3 show the edge comparison between HAG and the original graph representation.\\n\\nWe will include more analysis on the memory overhead in the revised paper.\", \"title\": \"The HAG memory overhead is negligible (~0.1%), and HAG can reduce the number of edges by 1.3-5.6x.\"}",
"{\"comment\": \"In section 5.5, you mentioned \\\"by gradually increasing the capacity,the search algorithm eventually \\ufb01nds a HAG with 100K aggregation nodes, which consume 6MB of memory (0.1% memory overhead) while improving the training performance by 2.8\\u00d7.\\\" It seems that a lot of extra nodes are added but only 0.1% memory overhead is introduced, could you explain how this number was calculated?\\nAnd how much memory overhead will be introduced for extra edges?\", \"title\": \"clarification on memory overhead\"}"
]
} |
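The discussion above centers on how HAGs remove redundant neighbor aggregations. As a toy illustration of the redundancy being counted (not the paper's actual search algorithm, which uses a cost function and a capacity on aggregation nodes; the adjacency format here is an assumption), one greedy step that finds the most widely shared neighbor pair could look like:

```python
from collections import Counter
from itertools import combinations

def most_redundant_pair(adj):
    # Count, over all aggregation targets, how often each neighbor pair
    # co-occurs; the most frequent pair can be aggregated once and its
    # partial result reused, saving (count - 1) binary aggregations.
    counts = Counter()
    for nbrs in adj.values():
        counts.update(combinations(sorted(nbrs), 2))
    return counts.most_common(1)[0]

# Toy graph where nodes 2, 3 and 4 all aggregate over the pair (0, 1).
adj = {2: [0, 1], 3: [0, 1, 5], 4: [0, 1, 6], 5: [2, 3]}
print(most_redundant_pair(adj))  # ((0, 1), 3)
```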
HJxKhyStPH | Toward Understanding The Effect of Loss Function on The Performance of Knowledge Graph Embedding | [
"Mojtaba Nayyeri",
"Chengjin Xu",
"Yadollah Yaghoobzadeh",
"Hamed Shariat Yazdi",
"Jens Lehmann"
] | Knowledge graphs (KGs) represent the world's facts in structured forms. KG completion exploits the existing facts in a KG to discover new ones. The translation-based embedding model (TransE) is a prominent formulation for KG completion.
Despite the efficiency of TransE in memory and time, it suffers from several limitations in encoding relation patterns such as symmetry, reflexivity, etc. To resolve this problem, most of the attempts have circled around the revision of the score function of TransE, i.e., proposing a more complicated score function, such as Trans(A, D, G, H, R, etc.), to mitigate the limitations. In this paper, we tackle this problem from a different perspective. We show that existing theories corresponding to the limitations of TransE are inaccurate because they ignore the effect of the loss function. Accordingly, we pose theoretical investigations of the main limitations of TransE in the light of the loss function. To the best of our knowledge, this has not been comprehensively investigated so far. We show that by a proper selection of the loss function for training the TransE model, the main limitations of the model are mitigated. This is explained by setting an upper bound for the scores of positive samples, showing the region of truth (i.e., the region in which a triple is considered positive by the model).
Our theoretical proofs, together with experimental results, fill the gap between the capability of the translation-based class of embedding models and the loss function. The theories emphasize the importance of the selection of the loss function for training the models. Our experimental evaluations of different loss functions used for training the models justify our theoretical proofs and confirm the importance of the loss function for performance.
| [
"Knowledge graph embedding",
"Translation based embedding",
"loss function",
"relation pattern"
] | Reject | https://openreview.net/pdf?id=HJxKhyStPH | https://openreview.net/forum?id=HJxKhyStPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"AbjKCGwpnf",
"BkldNUq2sS",
"BJevC-Onsr",
"B1eqmUAsoH",
"B1erEVO5oH",
"rygRl-HcjB",
"H1xjq6N5ir",
"SJeLKcEqor",
"Hkg-SNR_or",
"HyenwvQbsS",
"rklmtk369H",
"BylVtzu35S",
"BygbP8ShYH",
"H1eX02mVYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736918,
1573852719960,
1573843406885,
1573803554282,
1573712941297,
1573699829979,
1573698963470,
1573698174159,
1573606457047,
1573103460380,
1572876155342,
1572795003777,
1571735128817,
1571204299142
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1959/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1959/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1959/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1959/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1959/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1959/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1959/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1959/Authors"
],
[
"~Jingpei_Lei1"
],
[
"ICLR.cc/2020/Conference/Paper1959/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1959/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1959/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1959/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper analyses the effect of different loss functions for TransE and argues that certain limitations of TransE can be mitigated by choosing more appropriate loss functions. The submission then proposes TransComplEx to further improve results. This paper received four reviews, with three recommending rejection, and one recommending weak acceptance. A main concern was in the clarity of motivating the different models. Another was in the relatively low performance of RotatE compared with [1], which was raised by multiple reviewers. The authors provided extensive responses to the concerns raised by the reviewers. However, at least the implementation of RotatE remains of concern, with the response of the authors indicating \\\"Please note that we couldn\\u2019t use exactly the same setting of RotatE due to limitations in our infrastructure.\\\" On the balance, a majority of reviewers felt that the paper was not suitable for publication in its current form.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"The Revised Version Of The Paper\", \"comment\": \"We would like to thank the reviewers for their valuable and constructive comments. We uploaded the revised version of the paper addressing the reviewers\\u2019 comments and suggestions.\", \"summary_of_changes\": \"1- Revision of the style and grammar (Reviewer 1,4)\\n2- Inclusion of histogram of the scores of triples to show the losses approximate the conditions (a-d) (Reviewer 4)\\n3- Training RPTransComplEx without grounding (reviewer 2)\\n4- Inclusion of the Figures corresponding to the relation pattern loss convergence (reviewer 2,4)\\n5- Experiments on TransComplEx (without relation pattern injection) with a bigger setting (bigger dimension, more negative samples) are included in the Appendix (reviewer 1,2,3)\\n6- Moving hyper-parameters in a table to the appendix (reviewer 4, 3)\\n7- Revision of some parts of the text to better show the novelty and importance of our work (reviewer 2)\"}",
"{\"title\": \"Response to Jingpei Lei\", \"comment\": \"\\\" BTW, I notice a small error in the last paragraph of proof for Lemma 3 in page 14. But the Lemma 3 in this paper is still correct.\\\"\", \"response\": \"Thank you for the point. $$\\\\theta = \\\\pi/2$$ is the necessity and sufficiency condition considering the equation 9.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"\\\"There are some typos in this paper. For example, in Line 11 of Section 4.3, the comma should be a period; in Section 5, the \\\"Dissuasion of Results\\\" should be \\\"Discussion of Results\\\"\", \"response\": \"Thank you, we fixed the typos.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"\\\"Also, the reported performance of TransE in [1] is much better than what is reported in the paper\\\"\", \"response\": \"[1] reported the result of TransE on WN18RR with different settings (Table5 and 7 of [1]). We decided to rerun experiments on TransE with a Margin Ranking Loss using the setting which we reported in order to have 1) a fair comparison and 2) a proper justification in the theories as the theories are related to TransE with different loss. We will update the results in the paper.\"}",
"{\"title\": \"Response to Reviewer 2: Regarding the novelty of the paper\", \"comment\": \"\\\"Using complex representations in TransComplEx seems also a straightforward application of the insights of ComplEx/Hole...\\\"\", \"response\": \"The main novelty of this work is to re-investigate the limitations of existing models in the light of loss functions in order to have a more accurate conclusion. Without taking the loss functions into account, the theories of limitations and capabilities are inaccurate. This is the main contribution of this work. TransComplEx is a case study used beside TransE to analyze the main limitations in the light of the loss function.\"}",
"{\"title\": \"Response to Reviewer 2: Regarding the evaluation 2: Analysis of the results and training on the test\", \"comment\": \"Thank you for the comment.\", \"comments\": \"\\\"Even more serious: Following again Section 5 (Dataset), it seems that the paper imputes all missing triples in the training set for symmetric and transitive relations (\\\"grounding\\\"). Hence, the models get to see _all_ true triples for these relation types and as such the models in this paper are trained on the test set.\\\"\", \"response\": \"Analysis of the results and \\\"train on the test\\\":\", \"1\": \"In order to investigate the effect of each of the losses (3, 5, 6), we trained RPTransComplEx3, RPTransComplEx5 and RPTransComplEx6 on the same data (triples and set of relation patterns as background knowledge). From the experiments, we found that the loss 5 (approximating the condition (c) ) obtained a better performance. This is consistent with our theories indicating that the models trained on the condition (c) have fewer limitations.\\nTherefore, a comparison of RPTransComplEx3, RPTransComplEx5, and RPTransComplEx6 (which are done on the same data and patterns) concludes that the loss 5 is more effective.\", \"2\": \"RPTransComplEx5 can be compared with other models injecting relation patterns (use triples and a set of relation patterns) such as RUGE, KALE etc. Please note that RUGE did grounding for rule injection. We used the same data (triples and relation patterns with the confidence of above 80%). Therefore, the comparison is fair because it is done with the same conditions (triples and relation patterns).\", \"3\": \"We then decided to investigate the performance of RPTransComplEx5 without using additional knowledge (i.e., relation patterns with confidence values) while we already found the loss 5 to be more effective based on theories and previous experiments. Therefore, we trained the TransComplEx with the best-reported loss (i.e. the loss 5 approximating the condition (c)) from the previous experiment, using only triples (not to use relation patterns with confidence and grounding) i.e., TransComplEx5. TransComplEx5 used the same dataset as the first class of embedding models reported in the table 2 and 3 used. The results also confirm the theories.\", \"4\": \"Given the relation pattern (in the form of Body -> Head), not too many triples in the test set exist in the grounding of Head. For example, our statistics show that only 0.7% (less than one percent) of the test set exist in the grounding of Head for symmetric in FB15K. Therefore, this does not affect our conclusion, especially when all RPTransComplEx# are trained and tested using the same data to conclude that which of the losses is more effective (less restrictive according to theories).\", \"5\": \"TransComplEx5 and RPTransComplEx5 obtain close performance. Therefore, the models trained with proper loss function (i.e. 5) encode the patterns properly by only training on triples (i.e., injection of relation patterns with/without grounding didn\\u2019t affect the performance significantly). It is further justified when the convergence of the losses of the patterns with and without injection are compared (the relation pattern loss properly converges even without injection). 
We will include the convergence figures in the paper.\", \"6\": \"We will report the results of RPTransComplEx5 without the relation patterns which have been grounded to further justify that the high performance is obtained by proper selection of the loss function lifting the limitations. We will additionally include the results of TransComplEx (trained only on triples) with different losses to further support the theories.\"}",
"{\"title\": \"Response to Reviewer 2: Regarding the evaluation 1: using modified dataset\", \"comment\": \"Thank you for the valuable comments.\", \"comments\": \"\\\" it seems from Section 5 (Dataset), that this paper is using a modified dataset\\\":\", \"response\": \"Actually, we used the same dataset (WN18rr, WN18, FB15K-237, FB15K) that have been extensively used for evaluation of KGEs by others and these data sets do not contain any information about the confidence of triples. Therefore, our models are not trained on high confidence triples. In Section-5 (Dataset), we already mentioned that the relation patterns (i.e., rules) with a lower confidence value are removed. That does not have anything to do with triples and their potential level of confidence. We used the relation patterns (rules) extracted by AMIE. These relation patterns (rules) were used (by doing grounding) in RUGE to be injected into the learning process. Each relation patterns (and not triples) used in RUGE has a confidence value. RUGE also only used relation patterns with confidence higher than 80%. We used the same dataset. \\nWe compare our models (trained by different losses) with two classes of models: 1) the models that have not used any relation patterns (rules) as background knowledge (such as RotatE and ComplEx, TransE etc), for injection and 2) the models used a set of relation patterns (rules) as background knowledge to inject them into the embedding models during the learning process (such as RUGE, KALE etc). To have a fair comparison, we trained the TransComplEx under two conditions. First: in order to compare with the first class of models, TransComplEx is trained using only triples (Table2,3\\u2026 TransComplEx row) and we did not use or inject any relation patterns into it. Second: in order to compare with the second class of models, RPTransComplEx used relation patterns (rules) as background knowledge to be injected into the learning process such as RUGE which trained ComplEx using relation patterns with confidence higher than 80%. Therefore, we included both of the models in the Table-2,3 to have a comprehensive evaluation. \\nMoreover, comparing TransComplEx5 and RPTransComplEx5 (which both are trained with the same loss function), we see that the results of TransComplEx trained using loss (5) are very close to RPTransComplEx5. We conclude that the model which is trained on only triples with the loss 5 (i.e. TransComplEx5) is capable of properly learning the most of patterns without using additional background knowledge (relation patterns) to be injected. We visualized the relation patterns losses convergence for TransComplEx5 and RPTransComplEx5 (respectively, without and with relation pattern injected). The convergence of the losses confirms that TransComplEx can properly learn the relation patterns without using additional knowledge to be injected. We will include the figures of relation pattern losses convergence of TransComplEx and RPTransComplEx in the paper. We did new experiments on TransComplEx and TransE as well as RotatE with different loss functions with a bigger setting. we will include them in the paper. The results are consistent with the theories corresponding to the limitations of different models. In this experiment, we didn\\u2019t use any relation patterns and the models are trained only using triples.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"\\\"Experiments: Can the authors provide examples of relations learned ...\\\"\", \"response\": \"Thank you for the important point. We are running experiments confirming our theories. We will include them in the paper. Our experiments confirm that most of the relation patterns are properly learned by the model (and for some of them even them without injection), showing the value of loss function.\"}",
"{\"title\": \"About \\\"The short-comings of TransE and improvements to the loss have been discussed quite extensively in prior work. \\\"\", \"comment\": \"Hello,\\n\\n Very sorry for the disturb. \\n I am interested in the limitations of TransE with different loss functions in this paper. \\n Furthermore, I think this is the main contribution of this paper, though I do not know whether the authors of this paper will agree with me or not. \\n First, these \\\"limitations\\\" is not just effect TransE. \\\"Limitations\\\" described in [1] effect all translation-based models, as well as RotatE [2]. \\n Second, this paper tell me most of these \\\"limitations\\\" are not real. Since, no one use the lost function in condition a) in training. Most of the translation-based models are trained in condition d) and the reported best performance of TransE in [2] uses a loss function which can be regarded as a special case of condition c). Most of current papers prove the \\\"limitations\\\" of TransE and other translation-based models on condition a), while it may be not reasonable enough. \\n I do not think the anaysis of \\\"limitations\\\" in this paper is very diffcult, but it is really novel to me. Hope to get another related prior work. \\n \\n I am a green finger in this scope and can not exhaust all related papers. I have no intention of offending and sorry for the disturb again. \\n\\n[1]Seyed Mehran Kazemi, David Poole: SimplE Embedding for Link Prediction in Knowledge Graphs. NeurIPS 2018: 4289-4300\\n[2]Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, Jian Tang: RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. ICLR (Poster) 2019\\n\\n BTW, I notice a small error in the last paragraph of proof for Lemma 3 in page 14. But the Lemma 3 in this paper is still correct.\\n \\\"To avoid contradiction, $\\\\alpha \\\\geq 1$. If $\\\\alpha \\\\geq 1$ we have cos($\\\\theta_{u,r}$) =$\\\\pi$/2\\\". \\n Here, the \\\"cos($\\\\theta_{u,r}$) =$\\\\pi$/2\\\" is not valid. \\n In fact, \\\"$\\\\theta_{u,r}$ =$\\\\pi$/2\\\" is a sufficient condition for \\\"$\\\\alpha \\\\geq 1$\\\". The scope of $\\\\theta_{u,r}$ is depend on $\\\\parallel \\\\textbf{u} \\\\parallel / \\\\parallel \\\\textbf{r} \\\\parallel$.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper revisits limitations of relation-embedding models by taking losses into account in the derivation of these limitations. They propose and evaluate a new relation encoding (TransComplEx) and show that this encoding can address the limitations previously underlined in the literature when using the right loss.\\n\\nThere seems to be merit in distinguishing the loss when studying relation encoding but I think the paper's analysis lacks proper rigor as-is. A loss minimization won't make equalities in (3) and (5) hold exactly, which the analysis do not account for. A rewriting of the essential elements of the different proofs could make the arguments clearer.\", \"paper_writing\": [\"The manuscript should be improved with a thorough revision of the style and grammar. Example of mistakes include: extraneous or missing articles, incorrect verbs or tenses.\", \"The 10-pages length is not beneficial, the recommended 8-pages could hold the same overall content.\", \"The option list on page 8 is very difficult to read and should be put in a table, e.g. in appendix.\", \"Parentheses are missing around many citations and equation references\"], \"theory\": \"Equation (2) and (4) do not seem to bring much compared to the conditions in Table 1. Eq. (3) and (5) show \\\"a\\\" loss function rather than \\\"the\\\" loss function since multiple choices are possible. \\\\gamma_1 should be set to 0 when it is 0 rather than staying in the equations.\\n* Minimizing the objective (3) and (5) will still not make the conditions in Table 1 hold exactly, because of slack variables.\", \"experiments\": [\"Can the authors provide examples of relations learned with RPTransComplEx# that go address the limitations L1...L6, validating experimentally the theoretical claims and showing that the gain with RPTransComplEx5 correspond to having learned these relations?\"]}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper analyses the effect of different loss functions for TransE and argues that certain limitations of TransE can be mitigated by chosing more appropriate loss functions. Furthermore, the paper proposes TransComplEx -- an adaption of ideas from ComplEx/HolE to TransE -- to mitigate issues that can not be overcome by a simply chosing a different loss.\\n\\nAnalyzing the behavior and short-comings of commonly-used models can be an important contribution to advance the state-of-the-art. This paper focuses on the performance of TransE, which is a popular representation learning approach for knowledge graph completion and as such fits well into ICLR.\\n\\nUnfortunately, the current version of the paper seems to have issues regarding methodology and novelty.\", \"regarding_the_experimental_evaluation\": \"The paper compares the results of TransComplEx and the different loss functions to results that have previously been published in this field (directly, without retraining). However, it seems from Section 5 (Dataset), that this paper is using a modified dataset, as the TransE models are only trained on high-confidence triples. All prior work that I checked doesn't seem to do this, and hence the numbers are not comparable.\", \"even_more_serious\": \"Following again Section 5 (Dataset), it seems that the paper imputes all missing triples in the training set for symmetric and transitive relations (\\\"grounding\\\"). Hence, the models get to see _all_ true triples for these relation types and as such the models in this paper are trained on the test set.\", \"regarding_novelty\": \"The short-comings of TransE and improvements to the loss have been discussed quite extensively in prior work. Using complex representations in TransComplEx seems also a straightforward application of the insights of ComplEx/Hole. As such, the main novelty would lie in the experimental results which, unfortunately, seem problematic.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors investigate the main limitations of TransE in the light of loss function. The authors claim that their contributions consist of two parts: 1) proving that the proper selection of loss functions is vital in KGE; 2) proposing a model called TransComplEx. The results show that the proper selection of the loss function can mitigate the limitations of TransX (X=H, D, R, etc) models.\\n\\nMy major concerns are as follows.\\n1.\\tThe motivation of TransComplEx and why it works are unclear in the paper.\\n2.\\tThe experiments might be unconvincing. In the experiments, the authors claim that they implement RotatE [1] in their setting to make a fair comparison. However, with their setting, the performance of RotatE is much worse than that in the original paper [1]. Therefore, the experiments might be unfair to RotatE.\\n3.\\tThere are some typos in this paper. For example, in Line 11 of Section 4.3, the comma should be a period; in Section 5, the \\\"Dissuasion of Results\\\" should be \\\"Discussion of Results\\\".\\n\\n[1] Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197, 2019.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"\", \"summary\": \"This paper list several limitations of translational-based Knowledge Graph embedding methods, TransE which have been identified by prior works and have theoretically/empirically shown that all limitations can be addressed by altering the loss function and shifting to Complex domain. The authors propose four variants of loss function which address the limitations and propose a method, RPTransComplEx which utilizes their observations for outperforming several existing Knowledge Graph embedding methods. Overall, the proposed method is well motivated and experimental results have been found to be consistent with the theoretical analysis.\\n\\nSuggestions/Questions:\\n\\n1. It would be great if hyperparameters listed in the \\u201cExperimental Setup\\u201d section could be presented in a table for better readability. \\n\\n2. In Section 2, the authors have mentioned that RotatE obtains SOTA results using a very large embedding dimension (1000). However, it gives very similar performance even with smaller dimensional embedding (such as 200) with 1000 negative samples. In Section 5, RotatE results with 200 dimension and 10 negative samples are reported for a fair comparison. Wouldn\\u2019t it be better to instead increase the number of negative samples in RPTransComplEx instead of decreasing negative samples in RotatE?\\n\\n3. In Table 3, it is not clear why authors have not reported their performance on the WN18RR dataset for their methods. Also, the reported performance of TransE in [1] is much better than what is reported in the paper. \\n\\n[1] Sun, Zhiqing, Zhi-Hong Deng, Jian-Yun Nie and Jian Tang. \\u201cRotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space.\\u201d ArXiv abs/1902.10197 (2019): n. pag.\"}"
]
} |
SylO2yStDr | Reducing Transformer Depth on Demand with Structured Dropout | [
"Angela Fan",
"Edouard Grave",
"Armand Joulin"
] | Overparametrized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality than when training from scratch or using distillation. | [
"reduction",
"regularization",
"pruning",
"dropout",
"transformer"
] | Accept (Poster) | https://openreview.net/pdf?id=SylO2yStDr | https://openreview.net/forum?id=SylO2yStDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CnZiloV-j",
"HJxQ91dooH",
"Hye41XHsoB",
"rye_ClKqsH",
"r1g895NcjB",
"HJlfzJEYiH",
"HkgC027tjH",
"S1xKtsXYjB",
"SkgVhtmtsH",
"BklKzkFpYS",
"S1xLPNk3YH",
"r1lYne7iFS",
"H1gM1Ef8Yr",
"Hkg3glcHtr",
"HklXwwmCOS",
"S1x8GuyW_B",
"rJg38HoyuS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798736890,
1573777291440,
1573765852005,
1573716176152,
1573698190469,
1573629706209,
1573629142085,
1573628801089,
1573628331770,
1571815185402,
1571710046111,
1571659953305,
1571328985707,
1571295220356,
1570809691515,
1569941517526,
1569858899883
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1958/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1958/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1958/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1958/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1958/AnonReviewer2"
],
[
"~Weng_Rongxiang1"
],
[
"ICLR.cc/2020/Conference/Paper1958/Authors"
],
[
"~Liyuan_Liu2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents Layerdrop, which is a method for structured dropout which allows you to train one model, and then prune to a desired depth at test time. This is a simple method which is exciting because you can get a smaller, more efficient model at test time for free, as it does not need fine tuning. They show strong results on machine translation, language modelling and a couple of other NLP benchmarks. The reviews are consistently positive, with significant author and reviewer discussion. This is clearly an approach which merits attention, and should be included in ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for your additional questions!\", \"comment\": \"Thanks for your fast response!\", \"re\": \"Figure 4, sorry about the confusion. We meant that Figure 4 (left) shows the performance of a full model trained with structured dropout (with LayerDrop being one of the choices) but with no layers pruned. Specifically, we train the 16 Layer Transformer + LayerDrop with the different structured dropout configurations shown in the bar chart (head dropout, layer dropout, sublayer dropout). There is no inference time pruning, so these results are the perplexities of a 16 layer model.\\n\\nOn Figure 4 (right), we show the performance of keeping a subset of 8 out of the 16 layers, varying the technique for choosing the 8 layers. This table varies pruning, so only evaluates LayerDrop (while the Left figure does not prune and evaluates different structured dropout). This is why the perplexity on the right figure is around 1.5 PPL worse, as it reflects the loss of 50% of the model capacity due to pruning.\\n\\nTo address your review point that these two diagrams are not aligned, we added Appendix Table 12 that is trained with Structured Dropout (with LayerDrop being one of the choices), but evaluating perplexity in the pruned to 8 layers regime, where the pruning is fixed at the every other layer technique. \\n\\nLet us know if you have any other questions. Thanks for the detailed review!\"}",
"{\"title\": \"Response to your comments\", \"comment\": \"Thank you for the detailed response!\\n\\nRe dev results, it would be nice to see dev results for all experiments, not just the MT ones. Also, figure 3 doesn't mention explicitly whether these are test or dev results.\", \"dropout\": \"I realize you assume dropout was properly tuned for your baselines, but I still think dropping a similar proportion of parameters in your baselines to the ones you drop with LayerDrop is an important baseline, and would be happy to see these results for at least for 1-2 tasks to ensure that the gains are not from standard heavier regularization.\", \"figure_4\": \"I am sorry, but this is still a bit confusing to me (I think you might have mixed left and right in your response). If the structured dropout comparison shows the full model (without LayerDrop?) then what is being compared here?\"}",
"{\"title\": \"Thanks for your comment!\", \"comment\": \"Thanks for your fast response!\", \"to_address_your_point_about_transformers_without_layerdrop\": \"(1) In a comparable setting, Transformer + LayerDrop is better than Transformer alone. See Table 1, Row 2 Baseline (29.3) and Row 4 LayerDrop (29.6) for NMT, higher is better. Table 2, Row 1 Baseline (18.7) and Row 3 LayerDrop (18.3) for LM, lower is better. Table 3, Row 1 Baseline (40.1) and Row 2 LayerDrop (40.5) for Summarization, higher is better. Table 4 displays results for BERT style pre-training in two sections: with BERT data only (top half) and with more data (bottom half). With BERT data only, the Baseline has 89.0 on MNLI and LayerDrop has 89.2. \\n\\n(2) For the deep SOTA models that we show- deeper models without LayerDrop do not work well because of overfitting and instability during training when the models are deep. For example, on neural machine translation, a 12 layer encoder model does not converge to good results. When we apply other techniques from the literature (see our response to Weng Rongxiang's comment on deeper NMT models), we can achieve BLEU of 28.3 on 12 layer encoder models without LayerDrop. However, our 12 layer model with LayerDrop is much better - we see BLEU 30.2. \\n\\nOn Language modeling, we see similar trends. There is worse performance with a 24 Layer Transformer than a 16 layer Transformer, due to the depth. By training with LayerDrop as a regularizer, we can improve the performance.\"}",
"{\"title\": \"Further comments on the coherence\", \"comment\": \"Thanks for the detailed response.\\n\\nAs I mentioned in the review, it is very encouraging to see simple method like layer dropping could provide smaller inference networks beating even the DistilBERT without any additional training.\\n\\nIt is good to have some sanity check about the performance of the Transformer network trained with LayerDrop. To my knowledge, I do not think there exists work comparing the interaction between LayerDrop and residual connections. However, the claimed SOTA results almost all come from training with larger network/data. It is hard to know whether LayerDrop is the sole reason responsible for the improvement.\"}",
"{\"title\": \"thanks for your review!\", \"comment\": \"Thanks for your review and all of your comments and questions. We have included our response below, and please let us know if you have additional questions. We have a long response as you had a lot of questions!\", \"re\": \"Minor comments- thanks! We have made those fixes and appreciate you reading the paper so carefully.\\n\\nTo answer your question about Figure 4- Yes, they are the same model, but the Figure on the right (comparing different types of dropout) is the full model with all of its layers, while the Figure on the left (comparing different strategies to prune) is the same model pruned to 8 layers with the different techniques. The gap in perplexity is from halving the model size. We updated the Appendix to include Table 12 comparing different types of structured dropout when pruning (e.g. the varying setting of the left side but with models pruned with Every Other Layer). Note that some of the pruned results are not competitive because the models have not been trained with LayerDrop and are thus not robust to pruning at inference time (e.g. Half FFN, Baseline, Head Dropout alone).\"}",
"{\"title\": \"Additional result incorporating Layer Sharing\", \"comment\": \"We added an additional experiment, an investigation of the question Can LayerDrop be combined with Layer Sharing?\\n\\nWe will add the following table that shows that layers can be shared and LayerDrop can be applied to them. As layer sharing reduces the parameter size, performance is expected to decrease as more layers are shared. However, when sharing chunks of two layers (e.g. layer 0 and layer 1 have the same weights, layer 2 and layer 3 have the same weights, etc) we only see a marginal effect on performance but about 50% fewer parameters. Extending to sharing larger quantities of layers, such as chunks of four layers, we see small decreases in performance likely as the model capacity as been reduced by the loss of parameters from layer sharing. Additional ways of adding back the model capacity could be examined in future work.\\n\\nModel | Valid \\nAdaptive Inputs | 18.4\\nAdaptive Inputs + LayerDrop | 18.2\\nLayerDrop Share Chunks of 2 | 18.2\\nLayerDrop Share Chunks of 4 | 18.9\\n\\nWe will add these results into the appendix.\"}",
"{\"title\": \"thanks for your review\", \"comment\": \"Thanks for the comments! We appreciate it. Please find our response below and let us know if you have any further questions.\", \"re\": \"Data Driven Pruning - Yes, we agree. We added citations to these dynamic inference methods.\\n\\nTo answer your question, the main difference is that in our Data Driven Pruning, it is a simpler approach that does not vary the model based on the input layers. Instead, we try to learn based on the dataset which layers are the most relevant, but at inference time forward a *fixed* set of layers, as you describe. We are very excited by the dynamic inference techniques and are interested in exploring them in future work.\"}",
"{\"title\": \"thanks for your review\", \"comment\": \"Thanks for the review. We have responded to your points below. Please let us know if you would like to see additional analyses or have further questions.\", \"re\": \"Even deeper models - Yes, this is possible. We show in Tables 1, 2, and 4 that this can be used to train models that are double the depth on Language Modeling, Machine Translation, and Sentence Pre-training benchmarks. Applications in ASR are possible as well. LayerDrop is a strong regularizer and stabilizes training as fewer layers are used each forward pass.\\n\\nWe have added new analyses and results to improve our paper. Appendix Table 10 displays the relationship between LayerDrop and standard Dropout. Appendix Table 12 displays the impact of varying different types of structure dropout (Head, Sublayer, Layer, etc) on pruned networks.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a method, LayerDrop, for pruning layers in Transformer based models. The goal is to explore the stochastic depth of transformer models during training in order to do efficient layer pruning at inference time. The key idea is simple and easy to understand: randomly dropping transformer layers during training to make the model robust to subsequent pruning. The authors perform empirical studies on several sequence modeling task to conclude that the proposed approach allows efficient pruning of deeper models into shallow ones without fine-tuning on downstream tasks. There are also empirical experiments done to demonstrate that the proposed approach outperforms recent model pruning techniques such as DistillBERT under comparable configurations.\", \"strengths\": [\"The technique seems to be simple to apply yet powerful and promising.\", \"Strong results from the pruned networks without fine-tuning on downstream tasks.\", \"Good ablation studies that help establish the connection to other pruning strategies and the internal of LayerDrop.\"], \"weaknesses\": \"- Stochastic depth has demonstrated a lot of significance for training in prior work. Although the end goal here (for pruning) is slightly different, the novelty is a little incremental. \\n\\nOverall, the paper is a good contribution given the current great interest of transformer-based models. The motivation is quite clear, and the writing is easy to follow. It is also a sensible approach given the strong regularization effect of stochastic depth.\", \"question\": \"Similar to Pham et al.'s work on applying stochastic depth to train very deep transformers for speech, do you expect LayerDrop to be helpful for training very deep transformer-based models for NLP tasks assuming memory is not a big constraint?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work explored the effect of LayerDrop training in efficient pruning at inference time. The authors showed that it is possible to have comparable performance from sub-networks of smaller depth selected from one large network without additional finetuning. More encouraging is that the sub-networks are able to perform better than the same network trained from scratch or learned based on distillation.\\n\\nBesides the promising results, I think the authors could make the presentation more coherent. Since the title is about \\\"reducing transformer depth on demand\\\", the focus is on pruning the network to meet inference requirements. But the authors spent a lot of space showing improved results on many tasks, which are mainly from learning a larger network or with additional data compared to the baselines. Then some of the results shown in the appendix, especially the ones referenced in the main text, could be brought into the main part.\\n\\nOn the other hand, I do not think it is adequate to argue the proposed method is a \\\"novel approach to train over-parameterized networks\\\". As the authors acknowledged, the layer dropping technique has been proposed in (Huang et al., 2016). Even though the authors extended this to different components of the network, the main focus is on layer dropping which is exactly the one proposed in (Huang et al., 2016). Actually, two layer dropping schedules were introduced in (Huang et al., 2016). One is the uniform dropping which is adopted in this work, the other is the linear decay dropping which is shown to achieve better performance (Huang et al., 2016). Even though more involved, it is interesting to see how the linear decay dropping works in terms of pruning.\\n\\nIt is intriguing to see that simple dropping method as every other could perform comparably to exhaustive search as shown in Figure 4 (right). Is this an artifact of the used dropping masks in training or something intrinsic to the method? The Data Driven Pruning approach, in a way, has the same flavor as the recently proposed dynamic inference methods [1,2] reducing the inference on a per-input basis. That is, different inference complexity will be given different inputs based on the inferred difficulty. The proposed method, on the other hand, assigns the same inference complexity to all the inputs but tries to learn strong sub-networks. It is worth mentioning these works and compare the differences.\\n\\n[1] Z. Wu, T. Nagarajan, A. Kumar, S. Rennie, L.S. Davis, K. Grauman, and R. Feris. BlockDrop: Dynamic inference paths in residual networks. CVPR 2018.\\n[2] X. Wang, F. Yu, Z.-Y. Dou, T. Darrell, and J.E. Gonzalez. SkipNet: Learning dynamic routing in convolutional networks. ECCV 2018.\"}",
"{\"comment\": \"Thanks for your comment. Of course the concept of dropping layers was explored in Stochastic Depth and this is described in our work. There are two reasons why we chose to say Layer Dropout:\\n\\n1. We think dropping layers is a subset of dropping structures (FFN, attention, attention heads, parts of FFN matrices, etc), which is not explored in the Stochastic Depth paper. As we drop layers, we say \\\"LayerDrop,\\\" but dropping other of these structures is also effective for pruning and regularization (see Figure 4, left). \\n\\n2. The Stochastic Depth paper showed that this technique can be used for regularization and training speed improvement, which we acknowledge in our work. However, the main goal of our paper is to make models smaller by pruning layers away. This is not explored in the Stochastic Depth work, but is an effect of dropping layers to make models shallower. So \\\"LayerDrop\\\" emphasizes both the training mechanism as well as the inference time goal- dropping layers.\", \"title\": \"Response to your comment\"}",
"{\"comment\": \"Thanks for your comment and for pointing out these references. We will add citations when we update the paper draft. When we use the pre-norm modification you suggest (from reference [1]) to train a Transformer Big architecture with 12 encoder layers, the model converges but does not achieve strong BLEU (we see 28.3).\", \"for_the_comparison_to_the_additional_works_you_cited\": \"We agree that our proposed techniques can be combined with these other works for improved results. However, we believe adding our techniques to a large variety of existing models is out of scope for this work. Adding LayerDrop allows for the training of 20 layer encoders, but we find the other model parameters need to be tuned to prevent overfitting on WMT en-de.\", \"title\": \"Response to your comment\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents LayerDrop, a simple method for dropping groups of weights (typically layers) jointly. Despite its simplicity (which is actually a big plus), the method seems to improve performance quite consistently on a range of NLP tasks. Moreover, it allows the authors to train very deep networks, that are very hard to train otherwise (according to the authors). For me the most exciting thing about this approach is that this training regime allows to prune the trained network at test time *without finetuning*, effectively getting a smaller, more efficient network for free. This is a great benefit compared to existing approaches that require retraining a smaller network for each costume size. While the method isn't really applicable to any size, and largely depends on the dropout rate the full model was trained on, I imagine it could serve as a starting point for other researchers to develop more flexible extensions that would allow for any size of network to be pruned at test time. I think this is a very strong submission and strongly advocate accepting it to ICLR.\", \"questions_and_comments\": \"1. The main thing missing for me is some more analysis on the runtime/energetic savings (e.g., in terms of FLOPs) of the proposed method. The authors argue (3.2.1) that approaches such as DropConnect are not necessarily more efficient, but do not analyze the efficiency of their pruned networks apart from the size of the pruned network. \\n\\n2. Similarly, details about the experiments are also somewhat lacking:\\na. how many GPUs were used to train the models? the authors mention 8 v100 in A.3, but I am not sure if this was the setup for all experiments. \\nb. Figure 7, which shows that LayerDrop also improves training speed, is very interesting and should be part of the main text in my opinion. Was this trend consistent for all experiments?\\nc. Similarly, presenting the total running time of the models (and not just words per second) would be helpful for reproducibility. \\nd. Finally, reporting dev and not only test results (e.g., in tables 1 and 2) would also facilitate future reproducibility efforts.\\n\\n3. Did the authors use a regular dropout? If I understand correctly, in A.1.3, the authors mention tuning the dropout rate between {0.2,0.3}. Was this done for all tasks? and was it done for the baseline models as well? Using dropout in the baseline model with a similar proportion as LayerDrop seems like an important baseline, and in particular it would be interesting to see whether the deep experiments (e.g., 40 layers on WT103) that are hard to train without LayerDrop could converge with regular dropout.\", \"minor\": [\"3.2: \\\"We present *an* regularization approach ...\\\" (should be \\\"a\\\")\", \"Table 2 is referred to before table 1, it might be clearer to switch them.\", \"In figure 4, it wasn't clear to me why \\\"Layer\\\" on the lefthand side is much better than \\\"Every other\\\" on the righthand side. Aren't these the same model variant?\", \"Missing year for paper \\\"Language models are unsupervised multitask learners\\\".\"]}",
"{\"comment\": \"In Table 1, I think authors should compare with the base model with 12 layers encoder. The 12/6 layers Transformer can be trained easily with a minor modification (post-norm to pre-norm) [1]. Furthermore, I am also interested in the comparison of advanced deep NMT models [1,2,3] with the same setting, e.g. 20 layers encoder. The proposed layer dropout may work with them to improve the effect and efficiency.\\n\\n\\n\\n[1] Wang et al., Learning Deep Transformer Models for Machine Translation. ACL 2019.\\n[2] Wu et al., Depth Growing for Neural Machine Translation. ACL 2019.\\n[3] Zhang et al., Improving Deep Transformer with Depth-Scaled Initialization and Merged Attention. EMNLP 2019.\", \"title\": \"Comments about NMT experiment\"}",
"{\"comment\": \"Thanks for your comment and sharing your related work! We will add a citation describing your previous paper on dense connections for LSTMs.\", \"re\": \"ensemble of smaller networks: We agree. The original Dropout paper had a nice interpretation as bagging several smaller models at training time, as you describe. At inference time, we find our method robust to the choice of which layers are pruned, possibly a result of this.\", \"title\": \"Response to your comment\"}",
"{\"comment\": \"Thanks for your interesting paper. I like the idea to prune w.o. fine-tuning.\\n\\nWe had some similar observations on layer-wise dropout and pruning language models without fine-tuning [1]. Specifically, we replace the residual connection with the dense connection, which allows us to drop any layers without deleting all subsequent ones. Although our method requires some modifications before being applied to BERT (as it requires the dense connection), I think these two methods are very related and have similar intuitions.\\n\\nBesides, in our experiments, we have an interesting observation: we found the shape of the final network (after pruning) have some randomness. We conjecture this is because the network trained with the layer-wise dropout, is actually an ensemble of many small networks (similar to the lottery ticket hypothesis [2]), and the pruning is actually trying to select one from these networks. \\n\\n1. Liu, Liyuan, et al. \\\"Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling.\\\" EMNLP 2018.\\n2. Frankle, Jonathan, and Michael Carbin. \\\"The lottery ticket hypothesis: Finding sparse, trainable neural networks.\\\" ICLR 2019.\", \"title\": \"Interesting paper and a related work.\"}"
]
} |
BJg_2JHKvH | Semi-Supervised Learning with Normalizing Flows | [
"Pavel Izmailov",
"Polina Kirichenko",
"Marc Finzi",
"Andrew Wilson"
] | We propose Flow Gaussian Mixture Model (FlowGMM), a general-purpose method for semi-supervised learning based on a simple and principled probabilistic framework. We approximate the joint distribution of the labeled and unlabeled data with a flexible mixture model implemented as a Gaussian mixture transformed by a normalizing flow. We train the model by maximizing the exact joint likelihood of the labeled and unlabeled data. We evaluate FlowGMM on a wide range of semi-supervised classification problems across different data types: AG-News and Yahoo Answers text data, MNIST, SVHN and CIFAR-10 image classification problems as well as tabular UCI datasets. FlowGMM achieves promising results on image classification problems and outperforms the competing methods on other types of data. FlowGMM learns an interpretable latent representation space and allows hyper-parameter-free feature visualization at real-time rates. Finally, we show that FlowGMM can be calibrated to produce meaningful uncertainty estimates for its predictions. | [
"Semi-Supervised Learning",
"Normalizing Flows"
] | Reject | https://openreview.net/pdf?id=BJg_2JHKvH | https://openreview.net/forum?id=BJg_2JHKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YxuYqHFjH9",
"YqfqyWENWa",
"rkeEIJV3ir",
"rJg2gb6ooB",
"S1gJ3l6ioS",
"H1xq0J6siH",
"BygksbcRtB",
"r1xTxK8CYr",
"SJlkOR-cFH",
"BkeV9headB"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1583772119369,
1576798736840,
1573826379997,
1573798131980,
1573798054858,
1573797842142,
1571885462873,
1571870964652,
1571589735153,
1570733196057
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1957/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1957/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1957/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1957/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1957/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1957/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1957/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1957/AnonReviewer1"
],
[
"~Arsenii_Ashukha1"
]
],
"structured_content_str": [
"{\"title\": \"Thoughts\", \"comment\": \"We respectfully disagree with this assessment. The paper makes a variety of contributions, including a method with substantial novelty. Nearly all classifiers are discriminative. Even approaches that use a generator typically involve a discriminator in the pipeline. For example, sometimes one learns a generator on unlabelled data, then recycles the representation as part of a discriminative classifier. Generative models are compelling because we are trying to create an object of interest. The challenge in generative modelling is that standard approaches to density estimation are poor descriptions of high-dimensional natural signals.\\n\\nThe method proposed in this paper, FlowGMM, is arguably one of the only end-to-end fully generative approaches to classification with normalizing flows, which is a very significant point of novelty. Just because the model involves a Gaussian mixture, and Gaussian mixtures have been used in other contexts, does not take away from the novelty. The method also provides a coherent approach to handling both labelled and unlabelled data, which are often treated separately in deep semi-supervised methods. We also propose a new type of probabilistic consistency regularization that significantly improves FlowGMM on image classification problems. And the method is also relatively interpretable and broadly applicable.\\n\\nWe appreciate the feedback and have made several modifications to the paper, including a more visible presentation of the contributions.\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper offers a novel method for semi-supervised learning using GMMs. Unfortunately the novelty of the contribution is unclear, and the majority of the reviewers find the paper is not acceptable in present form. The AC concurs.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to response\", \"comment\": \"Dear authors, thank you for updating the paper and addressing my concerns.\\n\\nI understand your points about examining the latent space and interpretability, but I am still not convinced that the emphasis of the paper is quite right. It is not so much about squeezing even more into the paper, it is more about changing the focus. Though I do appreciate that this is not possible for a rebuttal.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for detailed comments. We addressed the clarity issues that the reviewer identified in the updated version of the paper. In particular,\\nFlowGMM Sup and FlowGMM Supervised were both referring to the same method (we renamed the entries in the updates version), FlowGMM trained only using the labeled data. In Table 2, \\u201cFlowGMM Sup (All labels)\\u201d was trained in a fully-supervised setting, when labels are available for all data points (e.g. 50k labels in CIFAR-10), and \\u201cFlowGMM Sup ($n_l$ labels)\\u201d was trained only on $n_l$ labeled data (e.g. 4k data points for CIFAR-10).\\nIn all Tables we use classification accuracy as the predictive metric.\\n\\nRegarding the novelty of our model interpretation experiments, while we agree that GMMs are not novel, we believe that the combination of normalizing flows with GMMs is novel. We argue that while we could expect some of the observed properties would hold, they are not trivial and verifying them is important. In particular,\\nIf the data was generated from the FlowGMM model, we would indeed be sure a priori that the decision boundary between classes was passing through a low-density region. However, when we fit actual image data using FlowGMM the fit is not perfect, and there is no way of concluding that the same property would hold without experimentally verifying it. Further, the separation between classes in the latent space is of crucial importance for interpretation of FlowGMM, so we believe that it is important to study it explicitly.\\nAnother important observation about the latent spaces is that including unlabeled data does indeed push the decision boundary away from unlabeled data. This property is desired, and we believe that explicitly demonstrating it helps interpreting FlowGMM.\\n\\nWe agree that using the Chinese Restaurant Process GMM to automatically determine the number of classes in the data is an exciting direction for future work. However, it would require a non-trivial amount of effort, and methodological advancement, and the reviewer pointed out that \\u201cthe authors tried to squeeze too much into the paper\\u201d even with the current content of the paper. We do not agree that not being able to infer the number of classes is a major shortcoming of FlowGMM, as the setting when the number of classes is unknown is not typically considered in semi-supervised literature, and most of the existing semi-supervised methods are also not directly applicable in this setting. We plan to explore inferring the number of unlabeled classes in future work.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We appreciate the reviewer\\u2019s comments and advice. In response to (1), while it\\u2019s true that our model does not perform as well as the Pi model (Tarvainen and Valpola, 2017), the base network architecture in that work is substantially more powerful owing to it not being constrained with invertibility. When trained using all of the labels on CIFAR10 and no unlabeled data, the CNN from Tarvainen and Valpola (2017) has an error rate of 5.56 and the RealNVP architecture we use gets an error rate of 11.55.\\n\\nThe second point made was that although FlowGMM performance on NLP/tabular tasks is promising, the experiments needed a more thorough and careful comparison to supervised and semi-supervised baselines. We agree and have added additional baselines to this section. The performance of k-NN is very similar to the other supervised only methods on the UCI datasets but on the two text classification datasets the performance is substantially worse. We suspect this has to do with the way the BERT embeddings were originally trained for separation type tasks. The label propagation baseline applied in the paper uses a dense affinity matrix, hence challenges with scaling but we thank the reviewer for their suggestion and updated the semi-supervised baselines to include sparse k-NN based label spreading approach that uses a larger fraction of unlabeled data. For tuning the hyper parameters of these label spreading methods, we perform an independent grid search for each method on each dataset.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"0) We updated the caption of the Tables and specified the performance metrics.\\n1) We found that the performance of the methods in Table 1 had high variance, so we decided to adopt the following strategy. We train each method three times, and pick up the run that attained the best accuracy on a validation set. We then report the performance of that run on the test data (different from validation). This procedure is still fair, and is attainable in practice. In an updated version of the paper we will report the mean and std over multiple repetitions of this process.\\n\\n2) The performance of a supervised model (which was trained with all labels) shows the general capacity of the model. For example, on CIFAR-10, FlowGMM Sup (All labels), 2nd row of Table 2, is trained on 50k labeled examples, while FlowGMM Sup ($n_l$ labels), 5th row of Table 2, is trained on 4k labeled examples (unlabeled data is not used); reporting both accuracies shows the gap which appears when using much less data, and this gap is significantly decreased when we add unlabeled data. The testing data for a fixed dataset is the standard test split, and is the same across all models and all settings (supervised and semi-supervised).\\n\\n3) Like other methods of feature visualization (regularized optimization and inversion by optimization) our novel feature visualization method gives insight into what kinds of features activate a given channel and spatial location, a tool for understanding the intermediate representations and what is learned by the network. Unlike other feature visualization methods, our method does not require optimization or hyperparameters and hence can be performed at real time rates for interactive feature exploration.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper describes a normalising flow with the prior distribution represented by a Gaussian mixture model (GMM). The method, FlowGMM, maps each class of the dataset to a Gaussian distribution in the latent space by optimising the joint likelihood of both labelled and unlabelled data, thus making the method useful for semi-supervised problems. Predictions are made using the maximum a posteriori estimate of the class label. To make the method robust to small perturbations of the inputs, the authors introduce a novel consistency regularisation term to the total loss function, which maximises the likelihood of predicting the same class after a perturbation.\\nThe authors further examine the learnt latent space by considering two simple, synthetic datasets that can be easily visualised, showing that the latent space behaves in a way one would intuitively expect.\\nThe method is evaluated on both tabular and image data, showing promising results in terms of accuracy (presumably, see below). As the model is found to be overconfident in its predictions, the authors introduce a calibration scheme and empirically verify that it improves the uncertainty estimates. Lastly, the authors introduce a feature visualisation scheme and use it to illustrate the effect of perturbing the activations of the invertible transformations.\\n\\nI generally like the proposed method, which seems useful and intuitive. I am particularly happy with the discussion on uncertainty calibration, where the authors suggest an elegant addition to the model to increase the variance of the mixture components. I do, however, have significant concerns about the novelty of the paper as well as its structure and clarity, as detailed below. I do, therefore, not recommend it for acceptance.\\n\\nThe paper reads well, although I feel that it lacks some details and explanations. For example, in table 2, it is never mentioned what \\\"FlowGMM Sup\\\" refers to and if it is different from \\\"FlowGMM Supervised\\\". It is also not clear what \\\"(All labels)\\\" refers to - does it mean that labels were provided for the entire dataset or that the models were trained only on the small subset with labels? Or something else? Which performance metric is used in the tables? The accuracy, presumably, but this is never specifically stated. Similarly, the number of datapoints and ratio of labelled to unlabelled data for the synthetic datasets are not reported. They are not crucial to know but should be included for completeness.\\n\\nWhile the first half of the paper is informative and well-structured, the second half appears a bit less so. From experimentally verifying that the method works, the paper goes on to discuss uncertainty calibration, examine the latent space representations, and visualising the effect of feature perturbations. While I greatly appreciate the focus on interpreting the trained model, I think it appears somewhat chaotic, as if the authors tried to squeeze too much into the paper. For example, the feature visualisation technique is quite neat, but it works for any flow and is not really used for anything in the paper. I would suggest saving it for a dedicated paper.\\n\\nI am not convinced by the novelty of this paper. 
The authors list two contributions: 1) the model itself, 2) an empirical analysis with much focus on the interpretability of the model. While the model is, to my knowledge, indeed novel, the analysis is quite standard, and the interpretability even appears to be oversold. GMMs are nice and intuitive, but not novel in any way, yet the authors seem to be describing the properties of GMMs as specific to their method.\\nIn particular, the authors go to great lengths to show that the latent space representations cluster around the means of the mixture components and that the decision boundary lies in low-density regions of the latent space. I do not see why these properties should be so surprising since the method directly optimises the likelihood of the data under the mixture distribution. That this is also empirically observed is, of course, reassuring, but these observations are better suited for the appendix, in particular given that the paper went over the recommended page limit.\\nI think that much of the claimed second contribution follows directly from the GMM aspect of the model. Instead of claiming the standard GMM properties as contributions, I think the proposed consistency loss term should be highlighted as a contribution on its own. I find it elegant, and I guess it would be particularly useful for NLP tasks where sentences can be phrased in different ways but still mean the same.\\n\\nInstead of discussing the latent space, I would have preferred to see extra evaluations of the method, like convergence rates of both FlowGMM and FlowGMM-cons compared to the competing models. Furthermore, a major limitation of the model is that knowledge of the correct number of classes in the data - even in the unsupervised setting. The authors hint at extensions to mitigate this in the discussion (using a Chinese Restaurant Process GMM or by adding extra Gaussians to the mixture during training), but these should have been investigated in the current paper.\\n\\nIn conclusion, I think that the paper lacks novelty and that it spends far too much space on \\\"trivial\\\" properties of the model instead of addressing shortcomings, like the prior specification of the number of classes, which the authors even point out in the discussion.\", \"minor_comments\": [\"p 4, bottom: \\\"each with 1 hidden layers\\\" -> \\\"each with 1 hidden layer\\\"\", \"p 5, middle: \\\"FlowGMM is able to leveraged\\\" -> \\\"FlowGMM is able to leverage\\\"\", \"p 5, bottom: \\\"Table 5.1\\\" -> \\\"Table 1\\\"\"]}",
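Since several reviews in this thread paraphrase the FlowGMM objective, a compact sketch of the core computation may help: the per-class joint density combines the flow's change-of-variables term with a Gaussian component in latent space, labeled data maximize the joint likelihood, unlabeled data maximize the marginal, and prediction is the MAP class. This is our own reading of the method as described in the abstract and reviews; the flow interface (returning both the latent code and the log-determinant) and equal class priors are assumptions.

```python
import math
import torch

def class_log_joint(x, flow, means, log_sigma):
    """log p(x, y=k) for each class k, assuming uniform class priors."""
    z, log_det = flow(x)                  # assumed: flow(x) -> (z, log|det J|)
    d = z.shape[-1]
    sq = ((z.unsqueeze(1) - means) ** 2).sum(-1)           # [batch, K]
    log_gauss = (-0.5 * sq / log_sigma.exp() ** 2
                 - d * log_sigma - 0.5 * d * math.log(2 * math.pi))
    return log_gauss + log_det.unsqueeze(-1)

def ssl_nll(x_lab, y_lab, x_unlab, flow, means, log_sigma):
    # Labeled term: exact joint likelihood of (x, y).
    lp = class_log_joint(x_lab, flow, means, log_sigma)
    labeled = -lp.gather(1, y_lab.unsqueeze(1)).mean()
    # Unlabeled term: marginal likelihood, summing the mixture over classes.
    lp_u = class_log_joint(x_unlab, flow, means, log_sigma)
    unlabeled = -torch.logsumexp(lp_u, dim=-1).mean()
    return labeled + unlabeled

def predict(x, flow, means, log_sigma):
    # MAP class estimate; with equal priors this is the argmax of the joint.
    return class_log_joint(x, flow, means, log_sigma).argmax(-1)
```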
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a semi-supervised learning model, named as Flow Gaussian Mixture Model (FlowGMM). The model is learnt by maximizing the join likelihood of the labeled and unlabeled data with a consistency regularization.\\nThe authors demonstrate that the proposed model outperforms others on text classification; for image classification, the performance can be improved in future.\\nAlso the authors demonstrate that the model is interpretable via feature visualizations.\\nOverall the paper is fine written.\\nYet, The conclusion is not fairly supported and the paper could be much stronger with the issues discussed already but I don\\u2019t think its current form is ready yet.\", \"below_are_more_detailed_comments\": \"0) It would be nice to add the definition of the performance metric; without the definitions, none of the numbers in the tables would make sense. \\n1) The main result for text classification in Table 1 is reporting the best of 3 runs, which can\\u2019t support the conclusion that the proposed method outperforms the other. In general, it\\u2019s nice to provide statistical significance comparing two models or reporting the mean and std across multiple runs. \\n2) In Table 2, it\\u2019s not clear what conclusion could be drawn by comparing the performance of supervised and semi-supervised performance. Are the testing data points the same?\\n3) The feature visualization as discussed in Section 6.3 is not explained clear. Specifically, \\u201cgiving us insight into the workings of the model\\u201d is not clear; what exactly insight can we get and what exactly are the workings can we get?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper describes how to use normalising flows for Semi Supervised Learning (SSL). Briefly, the method consists in finding a (bijective) map for transforming a mixture of Gaussians into a density approximating the empirical data-distribution -- as usual for flow methods, the parameters are found through likelihood maximisation. This is a elegant approach that naturally exploits the standard so-called cluster assumption in SSL. The papers also shows how to incorporate a consistency-based regularisation within the method.\\n\\n\\nAlthough it is an elegant and simple approach, and the article is relatively well written, I think that the paper should be rejected because (1) on image classification tasks (and even with consistency regularisation), the performances are well-below the straightforward-to-implement \\\\Pi-model. (2) for tabular/NLP data, although the performances seems to be good, the comparison with standard methods could have been much better done -- I am still not convinced by the method. \\n\\nI agree with the authors that there are many situations where it is not possible to find good perturbation (eg. NLP / tabular / genomics / etc...). If the authors could demonstrate more carefully that their approach does lead to state-of-the-art performances in this type of situations, I do believe that the approach would be of great interest. Given that the methods does not work well at all for image classification, I think that he authors should have been much more careful with the comparisons with the standard methods when investigating the performances on NLP/tabular tasks.\\n\\n\\n(1) basic k-NN benchmark?\\n(2) basic dimension reduction (PCA / autoencoder / extract lower representation from a NN) associated with either k-NN or label-propagation?\\n(3) it is *not* difficult at all to implement label propagation with fast nearest-neighbours (eg. FAISS library) and sparse linear algebra on the full datasets. In the current submission, it has not been done for the NLP datasets.\\n(4) There are indeed several ways to compute distance / affinity within label-propagation-type approaches\\n(5) Brief description of parameter tuning for label-prop should be added\\n\\nI think that the method has a lot of potential and the fact that it is not competitive for computer vision task is not important. I encourage the authors to carry out more convincing numerical comparisons ing tabular/NLP/etc.. settings in order to strengthen the message of the paper. If convincing results can be obtained, I believe that the method has a lot of potential.\\n\\n[Edit after rebuttal]\\nI would like to thank the authors to have provided additional label propagation experiments and details -- the proposed method appears to be quite much better than this baseline approach, which is very reassuring and proves that it is worth exploring further this line of work.\"}",
"{\"comment\": \"Thank you for the interesting work. I suggest that the paper \\\"Semi-Conditional Normalizing Flows for Semi-Supervised Learning\\\" from ICML Workshop is relevant. The work also uses a class conditional prior in the form of Normalizing flow and GMM. The discussion of the difference between the methods will be useful.\", \"link\": \"https://invertibleworkshop.github.io/accepted_papers/pdfs/INNF_2019_paper_20.pdf\", \"title\": \"The relevant paper \\\"Semi-Conditional Normalizing Flows for Semi-Supervised Learning\\\"\"}"
]
} |
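The reviews in the record above describe FlowGMM as a bijective flow mapped onto a Gaussian mixture in latent space, trained by maximizing the joint likelihood of labeled and unlabeled data. For readers who want that objective concrete, here is a minimal PyTorch-style sketch; it is an illustration under assumptions (uniform mixture weights, diagonal covariances, a hypothetical `flow` callable returning latents and log-det-Jacobian), not the paper's implementation.

```python
# Minimal sketch of a FlowGMM-style semi-supervised objective (illustrative,
# not the authors' code). Assumes `flow` is an invertible network whose forward
# pass returns latents z and the log-determinant of its Jacobian.
import math
import torch
import torch.distributions as D

def flow_gmm_loss(flow, x_labeled, y, x_unlabeled, means, log_sigma, n_classes):
    # One diagonal Gaussian component per class in latent space: batch shape (K,).
    comp = D.Independent(D.Normal(means, log_sigma.exp()), 1)

    # Labeled term: log p(x, y) = log N(f(x); mu_y, Sigma_y) + log|det J|.
    z_l, logdet_l = flow(x_labeled)
    log_pz_l = comp.log_prob(z_l.unsqueeze(1))[torch.arange(len(y)), y]
    labeled_nll = -(log_pz_l + logdet_l).mean()

    # Unlabeled term: marginalize over components (uniform mixture weights assumed).
    z_u, logdet_u = flow(x_unlabeled)
    log_pz_u = torch.logsumexp(comp.log_prob(z_u.unsqueeze(1)), dim=1) - math.log(n_classes)
    unlabeled_nll = -(log_pz_u + logdet_u).mean()

    return labeled_nll + unlabeled_nll
```

This also makes Reviewer #3's point concrete: clustering around component means and low-density decision boundaries follow directly from maximizing this mixture likelihood.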
rJgD2ySFDr | Neural Communication Systems with Bandwidth-limited Channel | [
"Karen Ullrich",
"Fabio Viola",
"Danilo J. Rezende"
] | Reliably transmitting messages despite information loss due to a noisy channel is a core problem of information theory. One of the most important aspects of real world communication is that it may happen at varying levels of information transfer. The bandwidth-limited channel models this phenomenon. In this study we consider learning joint coding with the bandwidth-limited channel. Although classical results suggest that it is asymptotically optimal to separate the sub-tasks of compression (source coding) and error correction (channel coding), it is well known that for finite block-length problems, and when there are restrictions on the computational complexity of coding, this optimality may not be achieved. Thus, we empirically compare the performance of joint and separate systems, and conclude that joint systems outperform their separate counterparts when coding is performed by flexible learnable function approximators such as neural networks. Specifically, we cast the joint communication problem as a variational learning problem. To facilitate this, we introduce a differentiable and computationally efficient version of this channel. We show that our design compensates for the loss of information by two mechanisms: (i) missing information is modelled by a prior model incorporated in the channel model, and (ii) sampling from the joint model is improved by auxiliary latent variables in the decoder. Experimental results justify the validity of our design decisions through improved distortion and FID scores. | [
"variational inference",
"joint coding",
"bandwidth-limited channel",
"deep learning",
"representation learning",
"compression"
] | Reject | https://openreview.net/pdf?id=rJgD2ySFDr | https://openreview.net/forum?id=rJgD2ySFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"AuYsZoMBV",
"H1gA7XOEhS",
"r1g4BlWssB",
"r1ginQgssH",
"BJgDWQejsS",
"ryeWu1pp9S",
"B1g4UQY3Kr"
],
"note_type": [
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798736811,
1574368037648,
1573748795694,
1573745586992,
1573745407213,
1572880233187,
1571750732419
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1956/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1956/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1956/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1956/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1956/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1956/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"There was some support for this paper, but it was on the borderline and significant concerns were raised. It did not compare to the exiting related literature on communications, compression, and coding. There were significant issues with clarity.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"\", \"paper_summary\": \"The paper proposes to use ML methods, specifically neural networks, to learn source and/or channel coding systems, either jointly or separately. Specifically, they investigate these systems under the bandwidth-limited channel. They investigate their models applied to the task of transferring images across the channel.\", \"pros\": [\"The paper is in a difficult area, and needs to communicate ideas about communications and information as well as various deep learning models. The initial exposition does this well.\", \"The paper discusses ML-based communications models applied to the bandwidth limited channel and performs experiments to investigate joint vs. separate coding.\"], \"two_main_things_of_concern\": [\"This area (communications, compression, coding) is highly developed, yet there are no comparisons to practical, state of the art techniques in this paper. The experiments investigate using the authors' models to send images across a noisy channel. So shouldn't we see a comparison to an equivalent non-ML-based system that currently accomplishes this task, if only for perspective?\", \"The paper contains too much exposition and the structure makes it difficult to read. The authors' work seems to mostly be focused in sections 4 and 6, but prior work is summarized in section 5. The authors want to communicate a lot of ideas which the audience might not be familiar with, and this is a difficult task. However, the paper could be reorganized such that the ideas are clearer. Rate distortion is delegated to an appendix, but two pages are spent on basic communications. Overall it could be much more focused.\"], \"some_smaller_concerns\": [\"Pg. 3 \\\"hypothesis spaces can be searched increasingly quickly in an automated fashion, allowing researchers to search...\\\"\", \"I think this statement may be over-reaching. The authors also frequently use the term \\\"flexible function approximation\\\". Neural networks do have interesting approximation properties. But this is not a complete picture of deep learning, and the situation is much more nuanced than this. Where is the role of optimization and data? In order to transmit data with the author's algorithms, do we need to go out and collect a large body of examples in the specific domain, like CelebA? Because you don't need to do that to e.g. compress any image with JPEG, code the data using LDPC to send across a channel. Is this comparing apples and oranges?\", \"Pg. 6 \\\"Note, that a simple additive white Gaussian noise.... LDPC. However, in more general scenarios they do not perform as well and can be beaten by neural network architectures.....\\\"\", \"I think the claims in this paragraph need to be toned down a bit. LDPC does not perform well in regards to what? Can be beaten under what conditions? Decode efficiently with regard to what block length? I don't think the picture is as clear as painted here.\", \"Pg. 4 The statements here regarding the bandwidth-limited channel appear to be the focus of the author's work. This section should be expanded and explained. Reading the first paragraph then going to the second paragraph (\\\"To summarize...\\\"), there's sort of a disconnect. 
How do we know these two things are equivalent? What exactly is novel that was introduced?\"], \"small_typos\": [\"Pg. 2 final paragraph, some norm is used for the distortion, but this is not defined (either on pg. 2 or in App. B)\", \"Pg. 6 white is misspelled \\\"withe\\\"\"]}",
"{\"title\": \"Personal response AnonReviewer3\", \"comment\": \"Thank you for your review. In the following, we would like to address both of your questions.\", \"question_1\": \"Why does joint coding outperform separate coding specifically in the ML setting?\\n\\nThis is indeed very interesting and has sparked various discussions among ourselves as well. \\nThere is of course classical research that illuminates why solving the joint problem may be possible in some scenarios where solving the communication problem separately is not possible (under more realistic assumptions). \\n\\nMore specifically, in the ML context when separate coding is performed, it is understood that the channel coder receives the distribution of source embeddings. However, to be source agnostic, this distribution needs to be a generic (i.e. source data independent) distribution. For example, this could be a standard Gaussian as is used in a basic VAE. This, however, induces a bottleneck: The source coder needs to match the aforementioned generic prior distribution such that the channel coder receives the correct input. At the same time, if the source coder was to perfectly match this prior, there would be no mutual information I(source data; source embedding), and thus nothing would be learned. Joint coding does not suffer from this trade-off problem. We believe this is why it outperforms separate coding in our experiments.\", \"questions_2a\": \"Relevance of the bandwidth-limited channel.\\n\\nWe hope in our main rebuttal we could point out why modelling with the bandwidth limited channel has such a central role in modelling communication, and why introducing learning to coding is a relevant contribution.\", \"question_2b\": \"How does our approach extend to other domains (non-vision)?\\n\\nConcerning the dataset we used, we are currently running an experiment on imagenet to diversify our claim. \\nWe do believe, however, that our findings extend far beyond image datasets to video, language, audio and even beyond perceptual tasks. We believe that there is evidence that the architecture of the neural encoders and decoders that are employed will determine the success in these domain. Domain specific architectures are, however, not the focus of this work. Farsad et al. (2018) for example considers a language application.\\n\\n\\nWe thank the reviewer for any further feedback and are happy to discuss more if desired.\"}",
"{\"title\": \"Personal response to AnonReviewer2\", \"comment\": \"Thank you for starting this discussion, I hope we could clarify the main points of novelty in the paper and draw the connection to existing work sufficiently in our main rebuttal (1).\"}",
"{\"title\": \"Main rebuttal (1)\", \"comment\": \"We would like to thank the reviewers for their review and start the discussion of our paper.\\nIn the main rebuttal we would like to clarify novelty, the connection to other work, and why bandwidth-limited information transfer is a relevant problem. \\n\\n# NOVELTY\\n\\nIn our paper, we assess the research question: How can we best model communication when the level of information transfer varies? This classic problem of information theory is highly relevant in many communication settings, as elaborated in the next section.\\n\\nSpecifically, in this work we attempt to understand what the best modelling choices are for this problem when using flexible function approximators to encode and decode messages. We make three fundamental observations about the model class that should be used.\\n\\nFor best message reconstruction, we observe:\\n 1. Bandwidth-limited communication should be modelled as a joint coding problem. \\n 2. Flexible approximate prior distributions should be provided for decoders to marginalize over possible message encodings.\\n\\nWhen sampling from the communication model (e.g. when there is little transmitted information), we observe:\\n 3. Directly modelling the marginalization over missing information with an auxiliary latent variable decoder results in more reasonable (in distribution) decoded samples.\\n\\nThis assessment itself, and the resulting observations, are novel work to the best of our knowledge. Additionally, we propose a novel model for the bandwidth-limited channel (BWLC),as well as the novel concept of the auxiliary latent variable decoder.\\n\\n# OTHER WORK\\n\\nOur work is distinguished from other work in that it is the first to investigate the bandwidth-limited channel with the tools of machine learning.\\nRelated work in this field focuses on the assessment of joint coding vs. separate coding (see last paragraph section 5) and channel coding (2nd paragraph section 5) with learned function approximators. \\n\\n\\n# APPLICATIONS\\n\\nGeneral\\nThe BWLC model is applicable to many communication problems.\\nTypical examples involve communication over radio, telephone lines or WiFi. All three can be modelled as a BWLC with white noise. By extension, any radio signal, phone conversation, or internet traffic, such as video streaming, can be seen as a relevant application. \\n\\nReinforcement learning\\nIt is also possible to integrate communication algorithms such as those proposed here into multi-agent reinforcement learning systems to emulate more realistic communication between agents. \\n\\nRepresentation learning\\nAdditionally, the bandwidth-limited channel orders the latent representation according to its importance for reconstruction. Thus, our approach could be useful for representation learning by allowing for straightforward dimensionality reduction. There may be a connection to other dimensionality reduction methods such as PCA that could be explored in follow up work.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper is out of my research area. I could understand that the paper studies the message transformation with bandwidth-limited channels. It seems naturally the message transformation could be represented as a autoencoder model. The paper proposed variational model for this problem and it seems to me the paper employs the popular models in neural networks for example VAE, etc. Technically, what's new of this paper? Was it the auxiliary variable decoders? Is it that this class of algorithms/models firstly applied to this problem domain? To be honest the paper mentioned most of the terminologies in ML and seems that the paper wanted to connect to them, for example, ELBO, VAE, GAN, re-parameterization, etc. The paper provides experimental results on the designed model for bandwidth-limited channel.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper focuses on transmitting messages reliably by learning joint coding with the bandwidth-limited channel. The authors justify joint systems outperform their separate counterparts when coding is performed by flexible learnable function approximators. Their experiments show the advantage of their design decisions via improved distoration and FID scores.\", \"pros\": \"1. This paper is clearly written and well-structured in logic. For example, the authors use Figures 1 and 2 assist readers to catch the difference between joint communication system and separate communication system.\\n\\n2. This paper gives a reliazation of joint source-channel coding, especially to give auxilary latent variable decoders.\\n\\n3. This paper has been verified in both Gaussian channel and bandwidth-limited channel. The empirical results show the advantage of joint coding.\", \"cons\": \"1. Intuitively, you Section 4.3 should be better than Section 4.2. However, I don't see any difference or major items to justify this kind of benefits. Could you please explain why techniques in Section 4.3 can outperform these in Section 4.2.\\n\\n2. Although the authors verified their work on CelebA, it seems that the proposed method has very limited applications. If possible, the authors should do more datasets to verify their proposed method, which will be more useful to boarder readers.\"}"
]
} |
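The abstract in the record above introduces a differentiable bandwidth-limited channel in which missing information is modelled by a prior (mechanism (i)). As a rough illustration of how such a layer could look, here is a minimal sketch; the prefix-masking scheme, noise level, and standard-normal prior are assumptions made for exposition, not the paper's exact design.

```python
# Minimal sketch of a bandwidth-limited channel layer (an assumption-laden
# illustration, not the paper's exact model): a random prefix of the latent
# code is transmitted with additive white Gaussian noise, and untransmitted
# dimensions are replaced by prior samples, echoing mechanism (i) above.
import torch

def bandwidth_limited_channel(z, noise_std=0.1, prior_std=1.0):
    batch, dim = z.shape
    # Sample a bandwidth (number of transmitted leading dimensions) per example.
    b = torch.randint(1, dim + 1, (batch, 1), device=z.device)
    mask = torch.arange(dim, device=z.device).unsqueeze(0) < b  # (batch, dim)

    noisy = z + noise_std * torch.randn_like(z)   # AWGN on transmitted dims
    prior_fill = prior_std * torch.randn_like(z)  # prior samples for dropped dims
    return torch.where(mask, noisy, prior_fill)
```

Because gradients flow through the transmitted dimensions, an encoder trained against such a channel is pushed to order latent dimensions by reconstruction importance, which matches the "Representation learning" remark in the rebuttal above.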
SkeP3yBFDS | Reducing Computation in Recurrent Networks by Selectively Updating State Neurons | [
"Thomas Hartvigsen",
"Cansu Sen",
"Xiangnan Kong",
"Elke Rundensteiner"
] | Recurrent Neural Networks (RNN) are the state-of-the-art approach to sequential learning. However, standard RNNs use the same amount of computation at each timestep, regardless of the input data. As a result, even for high-dimensional hidden states, all dimensions are updated at each timestep regardless of the recurrent memory cell. Reducing this rigid assumption could allow for models with large hidden states to perform inference more quickly. Intuitively, not all hidden state dimensions need to be recomputed from scratch at each timestep. Thus, recent methods have begun studying this problem by imposing mainly a priori-determined patterns for updating the state. In contrast, we now design a fully-learned approach, SA-RNN, that augments any RNN by predicting discrete update patterns at the fine granularity of independent hidden state dimensions through the parameterization of a distribution of update-likelihoods driven entirely by the input data. We achieve this without imposing assumptions on the structure of the update pattern. Better yet, our method adapts the update patterns online, allowing different dimensions to be updated conditional to the input. To learn which to update, the model solves a multi-objective optimization problem, maximizing accuracy while minimizing the number of updates based on a unified control. Using publicly-available datasets we demonstrate that our method consistently achieves higher accuracy with fewer updates compared to state-of-the-art alternatives. Additionally, our method can be directly applied to a wide variety of models containing RNN architectures. | [
"recurrent neural networks",
"conditional computation",
"representation learning"
] | Reject | https://openreview.net/pdf?id=SkeP3yBFDS | https://openreview.net/forum?id=SkeP3yBFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BvbXerpLuY",
"B1xwoW0njB",
"BklWehDioS",
"rJgFtz44iB",
"B1lnoIMQiB",
"S1xB6zspFr",
"BJecY0BnFr",
"Byxm0iP5KH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736782,
1573867935089,
1573776361492,
1573302913028,
1573230243609,
1571824317427,
1571737218131,
1571613643017
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1955/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1955/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1955/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1955/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1955/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1955/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1955/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper introduces a new RNN architecture which uses a small network to decide which cells get updated at each time step, with the goal of reducing computational cost. The idea makes sense, although it requires the use of a heuristic gradient estimator because of the non-differentiability of the update gate.\\n\\nThe main problem with this paper in my view is that the reduction in FLOPS was not demonstrated to correspond to a reduction in wallclock time, and I don't expect it would, since the sparse updates are different for each example in each batch, and only affect one hidden unit at a time. The only discussion of this problem is \\\"we compute the FLOPs for each method as a surrogate for wall-clock time, which is hardware-dependent and often fluctuates dramatically in practice.\\\" Because this method reduces predictive accuracy, the reduction in FLOPS should be worth it!\", \"minor_criticism\": \"1) Figure 1 is confusing, showing not the proposed architecture in general but instead the connections remaining after computing the sparse updates.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"thanks for changes\", \"comment\": \"Thanks for the additional clarification and experiment, it helped to contextualize the difficulty of the problem and value added of the method. I've revised my score accordingly.\"}",
"{\"title\": \"Reply to reviewer 2 - Thank you for thoughtful review\", \"comment\": \"Thank you very much for your constructive review.\\n\\nPer your suggestion, we have run further experiments on more complicated data that we hope will convince you of both the merit of our proposed method along with this line of research with respect to previous approaches. We revised the paper, adding a new section to the beginning of the appendix that describes the \\u201cAdding\\u201d task, which asks a network to learn to sum two values in a long sequence of sampled values given a mask indicating the indices to be summed, along with our results. We invite you to take a look at the new section but we highlight our main findings as follows:\\n\\nTo add a new perspective on complexity in our experiments we tested long-term dependencies as part of this new experiment, following the lead of the literature [1-3]. As shown in our Appendix, we found that even in the presence of extremely long-term dependencies (up to 500 timesteps), our proposed SA-RNN solves the task perfectly with a very low number of FLOPs and very few state updates compared to the other data-reactive method SkipRNN. As expected, the standard GRU also solves the task while Random Skips does not.\", \"regarding_your_gradient_estimation_and_slope_annealing_question\": \"In our experiments, we used the straight-through estimator to be comparable to the literature, including [3] and [4]. Plus, we were pleased to have achieved our empirically-good results with these basic settings. However, we agree that this is an interesting question, as it is unlikely that the same estimator is the absolute best for all possible tasks. Thus, there is potentially room for further tuning by designing update pattern-specific gradient estimation. For slope-annealing we used the parameters and setting as described in [4], gradually increasing the slope of the hard sigma function as the model trains, starting at slope $\\\\alpha=1$ and increasing according to the schedule $a = \\\\min(5, 1+0.04*N_{epoch})$. We have added information about both of these settings to our experimental description in our paper to assure full reproducibility.\\n\\n[1] Henaff, M., Szlam, A., LeCun, Y. \\u201cRecurrent Orthogonal Networks and Long-Memory Tasks\\u201d, ICML 2015.\\n[2] Neil, D., Pfeiffer, M., Liu, S.-C., \\u201cPhased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences\\u201d, NeurIPS 2016.\\n[3] Campos, V., Jou, B., Giro-i-Nieto, X., Torres, J., Chang, S.-F. \\u201cSkipRNN: Learning to Skip State Updates in Recurrent Neural Networks\\u201d, ICLR 2018.\\n[4] Chung, J., Ahn, S., Bengio, Y. \\u201cHierarchical Multiscale Recurrent Neural Networks\\u201d, ICLR 2017.\"}",
"{\"title\": \"Response to Reviewer 3 - Thank you for positive feedback + expression improvement\", \"comment\": \"We thank you for your time and effort in reviewing our work and your positive response to our proposed method and experimental results, especially given your expertise in the area.\\n\\nConcerning your suggestions to improve the presentation itself, we have undertaken a careful round of proof-reading to improve the readability of the manuscript. In addition, we had a colleague in the English department provide additional editing suggestions. We have now uploaded a new version of the paper addressing both your specific edits as well as this general round of proof-reading. We invite you to take a look at the revised presentation.\", \"timing_comparisons\": \"Metrics such as wall-clock time greatly depend on factors outside the model such as implementation strategy, machine learning framework, and hardware specifics. To target the methodological differences between our compared methods, we compute and report the FLOPs instead. This is independent of hardware and implementation. Instead, it directly compares the computational requirements of the update-mechanisms, as described in [1] for a fairer comparison.\\n\\n[1] Campos et. al, \\u201cSkipRNN: Learning to Skip State Updates in Recurrent Neural Networks\\u201d, ICLR 2018.\"}",
"{\"title\": \"Respose to Reviewer 1 - Thank you for positive feedback.\", \"comment\": \"We greatly appreciate your positive feedback on our method, presentation, and experimental results, especially given your expertise in the area.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"A main problem with RNN is to update all hidden dimensions in each time step. The authors proposed selective-activation RNN (SA-RNN), which modifies each state of RNN by adding an update coordinator which is modeled as a lightweight neural network. The coordinator, based on the incoming data, makes a discrete decision to update or not update each individual hidden dimension. A multi-objective optimization problem is defined to both solving a sequential learning task and minimizing the number of updates in each time step. The authors evaluated their networks on three public benchmark datasets and achieved good results compared to the state-of-the-art ones.\\nThe papers is well-written. The idea proposed in this paper is interesting and it is presented very well. There is also an extensive evaluation.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper attempts to reduce computation in recurrent neural networks. Instead of artificially determining the update pattern for updating the states, the authors propose SA-RNN to predict discrete update patterns automatically through optimization driven entirely by the input data. Experiments on publicly-available datasets show that the proposed method has competitive performance with even fewer updates.\", \"pros\": \"Overall, I think the idea of this paper is clear and the whole paper is easy to follow. The experiments clearly show the advantage of the proposed method claimed by the authors.\", \"cons\": \"1.\\tSome expressions need to be improved. For example, in \\u201cThis way, representations can be learned while solving a sequential learning task while minimizing the number of updates, subsequently reducing compute time.\\u201d two \\u201cwhile\\u201ds are not elegant and there should be an \\u201cIn\\u201d before \\u201cthis way\\u201d. In \\u201cWe augment an RNN with an update coordinator that adaptively controls the coordinate directions in which to update the hidden state on the fly\\u201d, the usage of \\u201cin which to\\u201d is not right. I suggest the authors to thoroughly proofread the whole paper and improve the presentation.\\n2.\\tSince this paper focuses on the efficiency of RNN, I suggest the authors could provide the time complexity comparisons. Merely the comparisons on skip of neurons cannot show the advantage on the efficiency.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper proposes selective activation RNN (SA-RNN), by using an update coordinator to determine which subset of the RNN\\u2019s hidden state dimensions should be updated at a given timestep. The proposed loss term is then a sum of the original objective (e.g. classification) and a weighted sum of the probability that each dimension will be updated for each timestep. The method is evaluated on 3 time series datasets: Seizures, TwitterBuzz, Yahoo.\", \"decision\": \"Weak Reject. Although the authors tackle a challenging problem, their empirical results are lacking to provably demonstrate that their approach outperforms existing baselines.\\n\\nSupporting Arguments/Feedback: The authors compare SA-RNN to 5 baselines: random updates, clockwork RNN, phased LSTM, Skip RNN, and VC-GRU. Although I appreciated the authors\\u2019 comparison across the suite of methods with respect to various metrics (e.g. # FLOPS, proportion of neurons that weren\\u2019t updated, etc.), the experiments were conducted on datasets that were relatively simple. For example, in prior work, the empirical evaluations were on much larger-scale datasets such as Wikipedia [Shen et. al 2019], real clinical data sources [Liu et. al 2018], and Charades videos [Campos et. al 2018], among others. I would be very interested to see how this training procedure fairs when evaluated on much more complex tasks, and would make the results about computational speedups at train/test time much more convincing.\", \"questions\": [\"I\\u2019m curious if you tried different types of gradient estimators to get around the non-differentiability rather than the straight-through estimator. Also how was the slope-annealing conducted (e.g. annealing schedule)?\"]}"
]
} |
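The rebuttal in the record above states that SA-RNN binarizes its per-dimension update decisions with a hard sigmoid, trains through the discontinuity with the straight-through estimator, and anneals the slope as alpha = min(5, 1 + 0.04 * N_epoch). A minimal sketch of that gating mechanism follows; the coordinator wiring is simplified and all names are illustrative, not the authors' code.

```python
# Minimal sketch of SA-RNN-style per-dimension update gating (illustrative).
# The hard sigmoid is binarized at 0.5; gradients flow via the straight-through
# estimator; the slope follows the annealing schedule quoted in the rebuttal.
import torch

def slope(epoch):
    # alpha = min(5, 1 + 0.04 * N_epoch), as described above.
    return min(5.0, 1.0 + 0.04 * epoch)

def binary_update_gate(logits, alpha):
    # Hard sigmoid with annealed slope: max(0, min(1, (alpha * x + 1) / 2)).
    p = torch.clamp((alpha * logits + 1.0) / 2.0, 0.0, 1.0)
    hard = (p > 0.5).float()
    # Straight-through: forward uses the binary gate, backward uses p's gradient.
    return hard + p - p.detach()

def selective_update(h_prev, h_new, gate):
    # Update only the selected hidden dimensions; copy the rest from h_prev.
    return gate * h_new + (1.0 - gate) * h_prev
```

The decision note's wallclock concern is visible here: the gate is different per example and per dimension, so skipped updates save FLOPs but do not by themselves yield dense-hardware speedups.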
SygD31HFvB | A Novel Analysis Framework of Lower Complexity Bounds for Finite-Sum Optimization | [
"Guangzeng Xie",
"Luo Luo",
"Zhihua Zhang"
] | This paper studies the lower bound complexity for the optimization problem whose objective function is the average of $n$ individual smooth convex functions. We consider the algorithm which gets access to gradient and proximal oracle for each individual component.
For the strongly-convex case, we prove such an algorithm cannot reach an $\epsilon$-suboptimal point in fewer than $\Omega((n+\sqrt{\kappa n})\log(1/\epsilon))$ iterations, where $\kappa$ is the condition number of the objective function. This lower bound is tighter than previous results and perfectly matches the upper bound of the existing proximal incremental first-order oracle algorithm Point-SAGA.
We develop a novel construction to show the above result, which partitions the tridiagonal matrix of classical examples into $n$ groups to make the problem difficult enough for stochastic algorithms.
This construction is friendly to the analysis of the proximal oracle and can also be used in the general convex and average smooth cases naturally. | [
"convex optimization",
"lower bound complexity",
"proximal incremental first-order oracle"
] | Reject | https://openreview.net/pdf?id=SygD31HFvB | https://openreview.net/forum?id=SygD31HFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"FFwV3wZenW",
"rJetIwmnsH",
"HklQaUm2jS",
"r1xmsSQhiS",
"S1lhVSXhiB",
"rJxQHktG5r",
"H1ezmonaYS",
"S1xPXyAiYH",
"BkxbWS7wtr",
"SyxXlwcXYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798736753,
1573824336674,
1573824186681,
1573823898993,
1573823795943,
1572142907025,
1571830553702,
1571704607066,
1571398904543,
1571165930665
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1954/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1954/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1954/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1954/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1954/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1954/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1954/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1954/Authors"
],
[
"~Sebastian_U_Stich1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper considers a lower bound complexity for the convex problems. The reviewers worry about whether the scope of this paper fit in ICLR, the initialization issues, and the novelty and some other problems.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General Comments on Latest Revision\", \"comment\": \"We would like to thanks all official reviewers and Sebastian U Stich for their insightful and helpful comments.\\nWe have appended a new lower bound in the case of $\\\\kappa = \\\\mathcal{O}(n)$ for strongly-convex optimization, which matches the upper bound of IFO algorithm (Hannah et al., 2018) (see Table 1, Theorem 3.1 and Section 4.1 in our latest version of the paper). \\nWe also have included a comparison with other constructions in Appendix B.\"}",
"{\"title\": \"Thanks for your comments!\", \"comment\": \"Thanks for the reviewer's insightful and helpful comments.\\n\\n1. We have appended a new lower bound in the case of $\\\\kappa = \\\\mathcal{O}(n)$, which matches the upper bound of IFO algorithm (Hannah et al., 2018) (see Table 1, Theorem 3.1 and Section 4.1 in our latest version of the paper). Please note that the our lower bound analysis includes PIFO algorithms, while previous results (Hannah et al., 2018) only consider IFO one in this case. \\n\\n2. It is interesting to use our framework to analyze the lower bound of algorithms with adaptive sampling. This extension looks not easy and we would like to study it in future work. \\n\\n3. The main reason of using the construction Eq.5/6 is the decomposition $r({\\\\bf x})=\\\\sum_{i=1}^n r_i({\\\\bf x})$ (omit the constants) is friendly to the analysis of PIFO algorithms. Concretely, our construction holds ''only one'' property (Lemma 2.6) both for proximal and gradient operator, while the construction of (Lan and Zhou, 2017; Zhou and Gu, 2019) only holds this property for IFO, which leads their construction is invalid to analyze PIFO algorithms. \\n\\n4. We have included a comparison with other constructions in Appendix B. \\n\\nBriefly speaking\\na) The analysis in (Lan and Zhou, 2017; Zhou and Gu, 2019) employed an aggregation framework while this paper proposed a new decomposition framework.\\nAs we stated in Reply 2, the construction in (Lan and Zhou, 2017; Zhou and Gu, 2019) is only valid for analyzing IFO algorithms, while our decomposition framework can be easily extended to show the lower bound of PIFO algorithms. \\n\\nb) The analysis in (Woodworth and Srebro, 2016; Fang et al., 2018) considers a very complicated approach to dealing with the proximal operator (completely different from how to deal with gradient operator). In contrast, our construction holds ''only one'' property (Lemma 2.6) both for proximal and gradient operator, which makes the proof more concise. We also use our technique to prove the tight lower bound of PIFO algorithm when $\\\\kappa = \\\\mathcal{O}(n)$, which is a new result. \\n\\n5. The construction $f$ of (Lan and Zhou, 2017; Zhou and Gu, 2019) is from ${\\\\mathbb R}^{mn}$ to ${\\\\mathbb R}$ while our $r$ is from ${\\\\mathbb R}^{m}$ to ${\\\\mathbb R}$ (please see the detailed definitions of $r$ and $f$ in Appendix B), which provides an intuitive understanding why our construction requires a smaller dimension.\"}",
"{\"title\": \"Thanks for your comments!\", \"comment\": \"Thanks for the reviewer's insightful and helpful comments.\\n\\n1. We have appended a new lower bound in the case of $\\\\kappa = \\\\mathcal{O}(n)$, which matches the upper bound of IFO algorithm (Hannah et al., 2018) (see Table 1, Theorem 3.1 and Section 4.1 in our latest version of the paper). Please note that the our lower bound analysis includes PIFO algorithms, while previous results (Hannah et al., 2018) only consider IFO one in this case. \\n\\n2. We must emphasize that the constructions in our proof are novel and different from previous work, which makes our results stronger. We provide the detailed comparison of our technique with existing proofs in Appendix B of our latest version.\\n\\nBriefly speaking\\na) The analysis in (Lan and Zhou, 2017; Zhou and Gu, 2019) employed an aggregation framework while this paper proposed a new decomposition framework. The aggregation one is only valid for analyzing IFO algorithms, while our decomposition framework can be easily extended to show the lower bound of PIFO algorithms. \\n\\nb) The analysis in (Woodworth and Srebro, 2016; Fang et al., 2018) considers a very complicated approach to dealing with the proximal operator (completely different from how to deal with gradient operator). In contrast, our construction holds ''only one'' property (Lemma 2.6) both for proximal and gradient operator, which makes the proof more concise. We also use our technique to prove the tight lower bound of PIFO algorithm when $\\\\kappa = \\\\mathcal{O}(n)$, which is a new result.\\n\\n3. It is worth noting that the proposed analysis framework is general and NOT limited to convex optimization, which also can be used to study NON-CONVEX problems. And we have provided the results and proofs about nonconvex optimization in Appendix J.\"}",
"{\"title\": \"Thanks for your comments!\", \"comment\": \"Thanks for the reviewer's insightful and helpful comments.\\n\\n1. We have appended a new lower bound in the case of $\\\\kappa = \\\\mathcal{O}(n)$, which matches the upper bound of IFO algorithm (Hannah et al., 2018) (see Table 1, Theorem 3.1 and Section 4.1 in our latest version of the paper). Please note that the our lower bound analysis includes PIFO algorithms, while previous results (Hannah et al., 2018) only consider IFO one in this case. \\n\\n2. Our framework is valid for any initial point. As we stated at the bottom of Page 3, we can take $\\\\{{\\\\hat f}_i({\\\\bf x}) = f_i({\\\\bf x} + {\\\\bf x}_0)\\\\}_{i=1}^n$ into consideration if the initial point ${\\\\bf x}_0\\\\neq 0$. Then analyzing ${\\\\hat f}_i({\\\\bf x}) $ is similar to analyzing $f_i({\\\\bf x})$ with ${\\\\bf x}_0=0$.\\n\\n3. It is interesting to use our framework to analyze the lower bound of algorithms with adaptive sampling. This extension looks not easy and we would like to study it in future work.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proves a better complexity lower bound for stochastic PIFO optimizers on the problem of finite-sum minimization. The paper assumes that the objective function is the sum of n individual loss functions. It further assumes that (1) the optimizer initializes at a fixed point, and (2) at each iteration, it randomly and independently selects one loss function to update the parameter vector.\\n\\nTo prove the desired lower bound, the paper constructed a group of special loss functions, such that each individual loss depends on only 2 coordinates of the parameter vector (except for the regularization term). By this construction, if the parameter vector is initialized at 0, then the number of non-zero coordinates of it will grow slowly enough so that the parameter vector will stay in some low-dimensional subspace unless a large number of iterations is performed. Using this construction, the authors prove the lower bound for 4 different configurations of optimization problems.\\n\\nOverall, I think the results are very interesting. Similar ideas (the diagonal matrix used in this paper) have been widely adopted in proving complexity lower bound. The novelty of this paper appears to be that the diagonal matrix is partitioned into n groups to define the individual loss functions. Despite the tight lower bound, the assumption (1) and (2) above seems to be restrictive, but they are necessary for the analysis of this paper. If we allow the optimizer to initialize at a random point, or if the optimizer can adaptively choose the loss function at each iteration based on the parameter trajectory, then the analysis framework no longer applies. This is probably the main limitation of this work.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"** Summary\\nThe paper derives a novel lower bound on the complexity of optimizing finite-sum convex functions (under different assumptions) using algorithms that have access to point-wise evaluation of the function, its gradient, and proximal information. \\n\\n** Overall evaluation\\nFinite-sum convex functions are very common in machine learning problems and how the optimization complexity scales with their properties (e.g., condition number) and the number of components (e.g., number of samples in typical ML problems) is a very important question. This paper addresses the question from a lower bound point of view, showing that there is no proximal incremental first-order algorithm that can optimize such functions at an accuracy level of epsilon in less than a term which depends linearly with number of components n and sqrt(k) (k being the condition number). The paper fills an existing gap in the literature and it achieves two very interesting results:\\n1- The lower bound now matches an existing upper bound for Point-SAGA, showing that no better algorithm can exist (at least in a worst-case sense). \\n2- This result also illustrate that proximal algorithms are not necessarily more powerful than first-order methods that only access the gradient of the function. This is also very interesting, as it was still an open question whether proximal information could possibly give an advantage.\\n\\nThe paper is also well written, although some elements could be improved:\\n1- Def 2.4: the authors consider algorithms where the sampling distribution cannot adapt through iterations. Although this is standard, I am wondering whether adaptivity may buy anything in the performance or whether the lower bound applies to adaptive algorithms as well.\\n2- Although similar constructions to create worst-case functions were used before in deriving complexity lower bounds, it would be useful to have an intuition about the specific choice made in eg Eq.5/6 and how this enables the refined analysis presented in the paper.\\n3- More in general, I encourage the authors to illustrate how their techniques compare and differ from previous lower bound proofs.\\n4- In all theorems, the analysis is done by linking the dimension d to all other parameters of the problem. As pointed out by the authors, the requirements on the dimensionality in the theorems of this paper are milder than previous results. It would be helpful to illustrate how the lower bound would behave when the dimensionality changes and provide an intuition about the specific choice in the theorems\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors prove lower bounds on the number of queries required for optimizing sums of convex functions. They consider more powerful queries than the usual queries that provide function evaluation/gradient pairs for chosen summands. As was done in [1] (which is cited in the submission), in this work algorithms can also get the answer to a\\n\\\"prox query\\\" solving a regularized optimization problem for the chosen summand at the chosen point. For different classes of functions obtained through smoothness and (strong) convexity constraints, the lower bounds are on\\nthe number of queries needed by an algorithm to guarantee to approximate the minimum.\\n\\nThe main result is for the case that the summands are mu-strongly convex and L-smooth. Bounds for this case are often\\ngiven in terms of kappa = L/mu. An upper bound of O( (n + sqrt(kappa n) ) log(1/eps)) is known, and\\n[1] had proved a lower bound of Omega( n + sqrt(kappa n) log(1/eps)), which matches the second term of the upper bound, but leaves a log-factor gap for the first. This paper proves an Omega( (n + sqrt(kappa n) ) log(1/eps)) lower bound, but for a restricted class of algorithms that fix a probability distribution over the summands ahead of time, and randomize by repeatedly sampling independently at random from this fixed distribution. The iterates of the algorithm are also constrained to be in the span of the answers to previous queries. Thus, this new result is incomparable in strength with the result in [1]. Also, the authors of this paper mention early in the paper that kappa is often large relative to n.\\nBut even if kappa is on the same order as n, the second term of the upper bound dominates the first, and is matched by the lower bound in [1].\\n\\nThe authors point to some new techniques in their analysis. I can see some new elements, but my knowledge of the previous work in this area is not deep enough to evaluate technical novelty very well.\\n\\nI have some question about the extent to which this work is in scope for ICLR. An argument could go that since stochastic gradient methods are so important to deep learning, study of the foundations and limitations of those methods is in scope.\\nBut a lower bound for the convex case seems to be stretching this a little far. \\n\\nThis seems like a somewhat incremental contribution that would be of interest to a smallish subset of ICLR attendees.\\n\\n[1] Blake Woodworth and Nathan Srebro. Tight complexity bounds for\\noptimizing composite objectives. In NIPS, 2016.\"}",
"{\"comment\": \"1. Many thanks for your reviewing.\\n2. The algorithms in Theorem 2 of (Hannah et al., 2018) use IFO while algorithms in our results use PIFO. PIFO provides more information than IFO and it would be potentially more powerful than IFO in first order optimization algorithms. Moreover, we develop a novel analysis framework to deal with PIFO algorithms and our framework is much simpler and more straightforward than the approach in (Woodworth and Srebro, 2016) .\\n3. Indeed, (Hannah et al., 2018) obtained a slightly better result for IFO algorithms. However, a subtle adjustment like the approach in (Hannah et al., 2018) also can improve our result to match the upper bound in (Hannah et al., 2018) as well. And we will update the modified results as soon as possible.\", \"title\": \"Thanks for reviewing!\"}",
"{\"comment\": \"Interesting paper!\\nHow do your results (and assumptions) compare to e.g. the lower bounds in https://arxiv.org/abs/1805.07786?\\nThanks for clarification,\", \"title\": \"Lower bounds by Hannah et al.\"}"
]
} |
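For context on the construction that the abstract and reviews in the record above refer to, the classical worst-case instance is a quadratic built on a tridiagonal matrix; the following is a rough sketch with scaling constants and the strong-convexity term omitted, so it is an illustration of the idea rather than the paper's exact $f_i$.

```latex
% Classical hard instance behind tridiagonal lower-bound constructions
% (constants and the strong-convexity regularizer are omitted here).
\[
A \;=\;
\begin{pmatrix}
 2 & -1 &        &        \\
-1 &  2 & \ddots &        \\
   & \ddots & \ddots & -1 \\
   &        &   -1   &  2
\end{pmatrix},
\qquad
f(x) \;=\; \tfrac{1}{2}\, x^{\top} A x \;-\; \langle e_1, x \rangle .
\]
```

Because each row of A couples only adjacent coordinates, partitioning the rows into n groups to define the components f_i means a single oracle call on one f_i can extend the set of nonzero coordinates of the iterate by at most one — the "slow growth" argument that Reviewer #2 summarizes above.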
Skx82ySYPH | Neural Outlier Rejection for Self-Supervised Keypoint Learning | [
"Jiexiong Tang",
"Hanme Kim",
"Vitor Guizilini",
"Sudeep Pillai",
"Rares Ambrus"
] | Identifying salient points in images is a crucial component for visual odometry, Structure-from-Motion or SLAM algorithms. Recently, several learned keypoint methods have demonstrated compelling performance on challenging benchmarks. However, generating consistent and accurate training data for interest-point detection in natural images still remains challenging, especially for human annotators. We introduce IO-Net (i.e. InlierOutlierNet), a novel proxy task for the self-supervision of keypoint detection, description and matching. By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that we are able to simultaneously self-supervise keypoint description and improve keypoint matching. Second, we introduce KeyPointNet, a keypoint-network architecture that is especially amenable to robust keypoint detection and description. We design the network to allow local keypoint aggregation to avoid artifacts due to spatial discretizations commonly used for this task, and we improve fine-grained keypoint descriptor performance by taking advantage of efficient sub-pixel convolutions to upsample the descriptor feature-maps to a higher operating resolution. Through extensive experiments and ablative analysis, we show that the proposed self-supervised keypoint learning method greatly improves the quality of feature matching and homography estimation on challenging benchmarks over the state-of-the-art. | [
"Self-Supervised Learning",
"Keypoint Detection",
"Outlier Rejection",
"Deep Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=Skx82ySYPH | https://openreview.net/forum?id=Skx82ySYPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"rsg7Mx14yE",
"Bkes8ku3sS",
"SJg5VTHnsH",
"rJgDnhXcjH",
"Hkx7DhXqoH",
"Bylh6imcsr",
"S1gX9jm5sr",
"BygUyjQ5iH",
"HklcBcmcsr",
"HJeKkwvk5H",
"BkluihCpYr",
"rklyawTEYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736725,
1573842770946,
1573834033715,
1573694639359,
1573694555216,
1573694404377,
1573694347271,
1573694173822,
1573694017528,
1571940064841,
1571839136199,
1571243958654
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1953/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1953/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1953/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1953/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1953/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1953/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1953/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1953/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1953/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1953/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1953/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a solid (if somewhat incremental) improvement on an interesting and well-studied problem. I suggest accepting it.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We have updated it in the paper. Thank you for pointing this out.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"I thank the authors for their detailed response and paper revision. The new version of figure 3 and architecture details in Tables 6 and 7 in particular are good, and address most of my concerns. I am raising my score to a 6.\\n\\nCorrection in Table 6 - \\\"Tan Harmonic\\\" -> \\\"Tan Hyperbolic\\\" presumably?\"}",
"{\"title\": \"Response to Reviewer #3 (part 1/2)\", \"comment\": \"R3: The experiments are performed on HSequences dataset (wrongly called \\\"HPatches\\\", as HPatches dataset is literally image patches, not full images), showing noticable improvement over the state of the art.\\n\\nWe thank the reviewer for this clarification. We updated the text to emphasize the fact that we are evaluating on image sequences from the HPatches dataset.\", \"r3\": \"Why the IONet is used only for training? Wouldn\\u2019t it better to actually learn everything end-to-end, which is already done in paper and evaluate?\", \"the_reasons_why_we_only_use_io_net_for_training_are_two_fold\": \"(1) Our major contribution in this paper is showing that KeyPointNet can learn from a proxy task (IO-Net) to either train the descriptor directly or to improve it (when trained with the descriptor loss). We focus on improving the actual performance of KeyPointNet at training time, rather than using another network to improve the final matching during inference. Moreover, for a fair comparison with other methods we aimed to keep the homography component estimation the same (i.e. using OpenCV and RANSAC), thus showing that our superior results are due to our improved keypoints and descriptors.\\n(2) Inspired by Negative Sample mining, which is commonly deployed for metric learning, we perform a similar strategy by feeding the lowest-k score (non-border) keypoints to the IO-Net during training. We found that using the top-k score keypoints performs worse. However, at test time, it is not meaningful to take the keypoints with lowest score to estimate the homography, hence we do not use IO-Net for the evaluation.\"}",
"{\"title\": \"Response to Reviewer #3 (part 2/2)\", \"comment\": \"R3: How is association in training (e.g. on Fig.3) done, if multiple cells in img2 returns keypoint close to the same keypoint in img1?\\n\\nWe have updated the caption in Figure 3 to properly explain the differences between each scenario. In particular, the UnsuperPoint method in (a) forces keypoint predictions to be in the same cell. Our method in (b) predicts locations from the cell-center and allow keypoints to cross cell borders, which promotes better matching and aggregation. This implies that multiple keypoints from one image (e.g. blue keypoints) may have the same corresponding keypoint in the second image (e.g. red keypoints) which is the desirable behavior we aim for.\", \"r3\": \"List of contributions in abstract is inconsistent with 3rd paragraph in Introduction, which also lists contributions.\\nWe appreciate that the reviewer identified this inconsistency. We have updated the abstract and introduction to clarify the contributions of the paper in a consistent manner.\\n\\n[1] Shi, Wenzhe, et al. \\\"Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network.\\\" Proceedings of the IEEE CVPR. 2016.\", \"320x240\": \"Method | Repeat. | Loc. | Cor-1 | Cor-3 | Cor-5 | M.Score\\nSIFT+ 0.7 ratio | 0.474 | 0.993 | 0.497 | 0.752 | 0.798 | 0.287\\nSIFT+ 0.8 ratio | 0.474 | 0.993 | 0.529 | 0.788 | 0.852 | 0.287\\nSIFT+ 0.9 ratio | 0.474 | 0.993 | 0.564 | 0.836 | 0.879 | 0.287\\nSIFT + reciprocal | 0.474 | 0.993 | 0.605 | 0.859 | 0.895 | 0.287\", \"640x480\": \"Method | Repeat. | Loc. | Cor-1 | Cor-3 | Cor-5 | M.Score\\nSIFT+ 0.7 ratio | 0.498 | 1.062 | 0.507 | 0.790 | 0.862 | 0.290\\nSIFT+ 0.8 ratio | 0.498 | 1.062 | 0.548 | 0.826 | 0.905 | 0.290\\nSIFT+ 0.9 ratio | 0.498 | 1.062 | 0.571 | 0.862 | 0.910 | 0.290\\nSIFT + reciprocal | 0.498 | 1.062 | 0.584 | 0.864 | 0.917 | 0.290\"}",
"{\"title\": \"Response to Reviewer #1 (part 1/2)\", \"comment\": \"R1: I feel the additions to existing pipelines are well motivated but insufficiently explained. In particular, the explanation of the neural network architecture along with figures 1 and 2 leaves many details unclear to me. Phrases like \\\"a 1D CNN ... with 4 default setting residual blocks\\\" is to me insufficient - residual networks have many details such as Resnet V1 or V2 style (ie is there a path right through the network which doesn't hit any activation functions), what kind of normalization is applied, number of channels in each block, how to do skips between different spatial resolutions, etc.\\n\\nWe agree that many details involved in the description and implementation of our networks have been left out of the original paper because of the page limitation. To address this, we have added these details in the Appendix diagrams describing both KeyPointNet and IO-Net with sufficient network architecture description. (Section D, Tables 6 and 7).\", \"r1\": \"The two figures showing the architecture are very different in design, which is not in itself a problem but the relationship between them could be clearer. I feel that the 'matching' box in figure 1 is misleading because it implies that matching only happens for the IONet, but the loss function for location described in Eq 1 also requires matching keypoints between the image pair. I'm also unclear on the division between direct and indirect supervisory signal - all the 4 loss components have a clear purpose, but it's not obvious what this partitioning means. \\\"Indirect\\\" only appears in this figure and the caption - perhaps.\\n\\nWe thank the reviewer for pointing this out - we have updated the caption of Figures 1 and 2 to better explain the relationship between them as well as the combination of the explicit loss applied directly on the KeyPointNet outputs (score, location and descriptor) and indirect loss derived from IO-Net via the outlier rejection classification task.\"}",
"{\"title\": \"Response to Reviewer #1 (part 2/2)\", \"comment\": \"\", \"r1\": \"In the conclusion - \\\"even without an explicit loss\\\" - what is the difference between the loss functions used in this work, and an explicit loss?\\nWe refer to an explicit loss as one that is defined over each of the 3 target outputs (score, location, descriptor) as in Equations 1, 3, and 4. The inlier-outlier loss (Equation 5) however, does not penalize the KeyPointNet outputs directly but acts as an indirect supervisory signal that is able to generate distinguishable keypoint descriptors during matching.\\n\\n[1] Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. Superpoint: Self-supervised interestpoint detection and description. InProceedings of the IEEE Conference on Computer Vision andPattern Recognition Workshops, pp. 224\\u2013236, 2018b.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"R2: The description of the evaluation procedure was a bit vague. Is RANSAC being used to find correspondences? If so, perhaps error bars are necessary to account for variance across multiple runs?\\n\\nWe thank the reviewer for this suggestion. To address this we added more details about how we compute the homography in the Appendix. Specifically, to estimate the homography, we performed reciprocal descriptor matching and we used OpenCV\\u2019s findHomography method with RANSAC, error threshold 3 and a maximum of 5000 iterations. \\n\\nTo capture the variance induced by the RANSAC component during evaluation, we added Table 4 in the Appendix, where we perform 10 evaluation runs with different random seeds and report the mean and standard deviation.\", \"r2\": \"Overall, I think the improvements are a bit incremental, but the experiments seem to support the claim that they are beneficial. I had some concerns about the clarity of the paper, and would be willing to raise my rating if addressed.\\n\\nWe thank the reviewer for the feedback, and hope that the improvements to the paper's clarity and contributions sufficiently addresses these concerns.\"}",
"{\"title\": \"Response to Reviewers\", \"comment\": [\"First of all, we would like to thank all the reviewers for their feedback and useful suggestions, and we are glad to see the recognition of the novelty of our work, relevance, and state-of-the-art results. As the reviewers have pointed out, we acknowledge that our paper would certainly benefit from a clearer and more detailed technical presentation. To this end, we have updated the manuscript in the following ways:\", \"Clarified the description of our contributions in the related work, particularly in regards to UnsuperPoint.\", \"Added a detailed description of our neural network architecture in the Appendix.\", \"Improved the figures/tables and associated captions.\", \"As requested by the reviewers, we conducted new experiments to quantify (i) the standard deviation of our homography estimation (ii) additional qualitative and quantitative experiments on HPatches (viewpoint, illumination and specific sequences).\", \"Fixed minor corrections and typos that were accurately pointed out by the reviewers.\", \"To facilitate this rebuttal process, we would like to suggest referring to our updated manuscript and detailed comments on each of the reviewers\\u2019 points, that were addressed individually.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The following work proposes several improvements over prior works in unsupervised/self-supervised keypoint-descriptor learning such as Christiansen et al. One improvement is the relaxation of the cell-boundaries for keypoint prediction -- specifically allowing keypoints anchored at the cell's center to be offset into neighboring cells. Another change was the introduction of an inlier-outlier classifier network to be used as a proxy loss for the keypoint position and descriptors. They found the inlier-outlier loss to improve homography accuracy at 1 and 3 pixel thresholds.\", \"strengths\": \"-The ablation study seems complete\\n-Clear improvements over state of the art methods\\n\\nWeaknesses/improvements:\\n-The description of the evaluation procedure was a bit vague. Is RANSAC being used to find correspondences? If so, perhaps error bars are necessary to account for variance across multiple runs?\\n-Make it more clear in the related works about how the proposed method relates to Unsuperpoint. My understanding is that the proposed work is a somewhat incremental improvement over Unsuperpoint.\\n-Section 3.3 (Score learning) was a bit difficult to follow. I find it better to start by stating the high level goal of the loss function before going into the formulation.\\n-Captions for Tables 2 and 3 are lacking. At the very least, mention what the numbers being compared are.\\n\\nOverall, I think the improvements are a bit incremental, but the experiments seem to support the claim that they are beneficial. I had some concerns about the clarity of the paper, and would be willing to raise my rating if addressed.\", \"post_rebuttal\": \"The authors have adequately addressed my concerns regarding clarity. I have updated my rating to weak accept in agreement with the other reviews.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"#\\u00a0UPDATE following rebuttal\\n\\nScore increased to 6 due to architecture details in supplemental.\\n\\n\\n# Contributions\\n\\nThe paper contributes a self-supervised method of jointly learning 2D keypoint locations, descriptors, and scores given an input RGB image. The paper builds on previous work, adding:\\n\\n* A more expressive keypoint location regression, which allows each 8x8 pixel region to vote for a keypoint location outside its boundary\\n\\n* An upsampling step, similar to a U-net, to allow descriptors to be regressed with more detailed information\\n\\n* An additional proxy task for the total loss, based on outlier rejection.\\n\\nThe authors train on COCO by manually distorting images to generate pairs with known homography, and show competitive results for keypoint detection and homography estimation tasks.\", \"decision\": \"Weak reject. I would give this a 5 if the website allowed me to. A more detailed explanation of the neural network architecture, along with minor fixes described below, would make me increase my rating.\\n\\n\\nI feel the additions to existing pipelines are well motivated but insufficiently explained. In particular, the explanation of the neural network architecture along with figures 1 and 2 leaves many details unclear to me. Phrases like \\\"a 1D CNN ... with 4 default setting residual blocks\\\" is to me insufficient - residual networks have many details such as Resnet V1 or V2 style (ie is there a path right through the network which doesn't hit any activation functions), what kind of normalization is applied, number of channels in each block, how to do skips between different spatial resolutions, etc. The upsampling step for the descriptor head, which is claimed as a novel contribution, is not fully explained - \\\"fast upsampling\\\" implies (correctly) there are many variants of upsampling with different tradeoffs, but from the text I am unsure whether this is nearest neighbour upsampling, a ConvTranspose, etc. Similarly, \\\"VGG style block\\\" leaves some details unclear - whether the resolution downsampling is with a strided convolution / pooling / etc. Lots of the details are implied to be in other previous papers, but I feel that the paper would be hugely improved by exact architectural details.\\n\\nThere are various minor notational discrepancies in the paper - for example the outlier rejection is various defined as \\\"InlierOuterNet (IONet)\\\" and \\\"The Inlier-Outlier model \\\\emph{IO}\\\", which also seems to be the same as the function $C$ defined a paragraph above. Perhaps it is common in this part of the literature, but to me an encoder decoder network is more likely to either be an autoencoder, or for the decoder to output something in the same modality (eg in machine translation). To say that some VGG blocks are an encoder, and the heads which produce keypoint locations / score / descriptor is a decoder, implies all neural networks could be described as an encoder/decoder.\\n\\nThe two figures showing the architecture are very different in design, which is not in itself a problem but the relationship between them could be clearer. 
I feel that the 'matching' box in figure 1 is misleading because it implies that matching only happens for the IONet, but the loss function for location described in Eq 1 also requires matching keypoints between the image pair. I'm also unclear on the division between direct and indirect supervisory signal - all the 4 loss components have a clear purpose, but it's not obvious what this partitioning means. \\\"Indirect\\\" only appears in this figure and the caption - perhaps.\\n\\nThe term \\\"Anchor\\\" appears only once with no reference, below equation 3 - I appreciate this is an existing term in this subfield, but given that the start of section 3 goes as far as explicitly defining what it means to produce 2D keypoints for an image, I feel defining this term would make the descriptor loss much clearer. \\n\\nOne of the main contributions, that of allowing locations to regress outside their 8x8 area, sounds like a good idea but I feel that Figure 3 does not adequately show the benefit. In both a) and b), the blue estimates appear to be roughly as good as each other - clearly from the ablation a large benefit is gained from this innovation but perhaps a better illustrative example could be made here?\\n\\nOn a more positive note, I feel the components of the loss function are in general very clearly motivated and defined, and the description of training & data augmentation hyperparameters appears complete. If the description of the architecture could be improved that would result in a paper very amenable to reproduction.\\n\\nThe experiments are well explained, and the ablation of the various proposed components is good. I feel table 1 would be improved with error bars - given that the bold best score is not exclusively next to V4, but in many cases the difference between V4 and the best is ~1%, error bars from different training runs might make clearer that V4 is overall the best configuration.\\n\\nIn the conclusion - \\\"even without an explicit loss\\\" - what is the difference between the loss functions used in this work, and an explicit loss?\", \"minor_corrections\": \"The euclidean distance between descriptors is various notated as $d$ (section 3), $x$ (above equation 5) and $d$ again (below equation 5).\", \"typos\": \"\\\"normalzation\\\" -> \\\"normalization\\\", \\\"funcion\\\" -> \\\"function\\\", \\\"tripled\\\" -> \\\"triplet\\\".\"}",
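On the "fast upsampling" question raised here: one common variant, alongside nearest-neighbour and ConvTranspose, is the sub-pixel (pixel-shuffle) convolution of Shi et al., cited as [1] in the authors' response above. A minimal sketch, with illustrative channel counts:

    import tensorflow as tf

    def subpixel_upsample(feat, out_channels=256, factor=2):
        # Produce factor^2 * out_channels maps, then rearrange channels into space.
        x = tf.keras.layers.Conv2D(out_channels * factor ** 2, 3, padding='same')(feat)
        return tf.nn.depth_to_space(x, factor)  # (H, W, C*f^2) -> (f*H, f*W, C)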
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper is devoted to self-supervised learning of local features (both detectors and descriptors simultaneously). The problem is old yet not fully solved yet, because handcrafted SIFT is still winning the benchmarks. This work mostly follows and improves upon SuperPoint (DeTone et.al 2017) and the follow-up work UnsuperPoint (Christiansen et.al 2019) architecture and training scheme.\", \"the_claimed_contributions_are_following\": [\"use the recently published Neural Guided RANSAC as additional auxilary loss provider\", \"allowing the \\\"cells\\\" to predict keypoint location outside the cell while learning\", \"special procedure for improving descriptors interpolation\", \"The experiments are performed on HSequences dataset (wrongly called \\\"HPatches\\\", as HPatches dataset is literally image patches, not full images), showing noticable improvement over the state of the art.\"], \"strong_points\": [\"Method is sound, paper is mostly well written and results are good (may be too good, see questions).\"], \"questions\": \"1) Regarding descriptor interpolation, which is claimed as contribution. It is not clear for me, how different it is compared to SuperPoint one, which also do descriptor upsampling, so that network output is H x W x [256], i.e. full resolution. Could you please clarify the differences to it? Also, in Figure 2 it is not clear, how one can do \\\"feature concatenation\\\" for blocks with different spatial resolution.\\n\\n 2) Why the IONet is used only for training? Wouldn`t it better to actually learn everything end-to-end, which is already done in paper and evaluate? \\n \\n 3) How is association in training (e.g. on Fig.3) done, if multiple cells in img2 returns keypoint close to the same keypoint in img1? \\n\\n 4) HSequences consists of two subsets: Illumination and Viewpoint. Could you please report results per subset instead of per whole dataset? Could you also please specifically report results for the following image sequences: graffity, bark, boat, especially for 1-6 pairs and visualize matches (same way as in Figure 4-6)?\\n The reason that I am asking these, is results looks like too well and I suspect overfitting to a points, which are suitable for estimation (small) homography, not general-purpose points.\\n \\n 5) Could you please explain in more details, how did you do homography estimation precision benchmark? Specifically, was Lowe`s second nearest neighbor ratio used for filtering out wrong matches? If not, could you please repeat this experiments with it, at least for SIFT matches?\", \"small_comments\": \"- list of contributions in abstract is inconsistent with 3rd paragraph in Introduction, which also lists contributions.\\n \\n\\n***\\nOverall I like the work, but there are unclear moments to me. \\n\\n\\n****\\nAfter rebuttal comments. While this paper may appear not \\\"sexy\\\" I think it is quite valuable for the local features learning community: both for the main contributions, and small details and tricks evaluated inside. \\nI am happy to increase my score to strong accept.\"}"
]
} |
H1gBhkBFDH | B-Spline CNNs on Lie groups | [
"Erik J Bekkers"
] | Group convolutional neural networks (G-CNNs) can be used to improve classical CNNs by equipping them with the geometric structure of groups. Central in the success of G-CNNs is the lifting of feature maps to higher dimensional disentangled representations, in which data characteristics are effectively learned, geometric data-augmentations are made obsolete, and predictable behavior under geometric transformations (equivariance) is guaranteed via group theory. Currently, however, the practical implementations of G-CNNs are limited to either discrete groups (that leave the grid intact) or continuous compact groups such as rotations (that enable the use of Fourier theory). In this paper we lift these limitations and propose a modular framework for the design and implementation of G-CNNs for arbitrary Lie groups. In our approach the differential structure of Lie groups is used to expand convolution kernels in a generic basis of B-splines that is defined on the Lie algebra. This leads to a flexible framework that enables localized, atrous, and deformable convolutions in G-CNNs by means of respectively localized, sparse and non-uniform B-spline expansions. The impact and potential of our approach is studied on two benchmark datasets: cancer detection in histopathology slides (PCam dataset) in which rotation equivariance plays a key role and facial landmark localization (CelebA dataset) in which scale equivariance is important. In both cases, G-CNN architectures outperform their classical 2D counterparts and the added value of atrous and localized group convolutions is studied in detail. | [
"equivariance",
"Lie groups",
"B-Splines",
"G-CNNs",
"deep learning",
"group convolution",
"computer vision",
"medical image analysis"
] | Accept (Poster) | https://openreview.net/pdf?id=H1gBhkBFDH | https://openreview.net/forum?id=H1gBhkBFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JZQ1oKG3tv",
"ByeIRAAjiS",
"BJxluT0ijS",
"Hke2w8J7jH",
"rJgO04yXjr",
"HyxYiVyXjS",
"Hkg4Wa0MiH",
"BJgI5GfF9B",
"HyeS9jyxqS",
"SkgVdEbAFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736697,
1573805774139,
1573805416458,
1573217891538,
1573217487552,
1573217440736,
1573215483720,
1572573837756,
1571974029042,
1571849323626
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1951/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1951/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1951/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1951/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1951/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1951/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1951/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1951/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1951/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper describes principles for endowing a neural architecture with invariance with respect to a Lie group. The contribution is that these principles can accommodate discrete and continuous groups, through approximation via a base family (B-splines).\\n\\nThe main criticisms were related to the intelligibility of the paper and the practicality of the approach, implementation-wise. Significant improvements have been done and the paper has been partially rewritten during the rebuttal period.\\n\\nOther criticisms were related to the efficiency of the approach, regarding how the property of invariance holds under the approximations done. These comments were addressed in the rebuttal and the empirical comparison with data augmentation also supports the merits of the approach.\\n\\nThis leads me to recommend acceptance. I urge the authors to extend the description and discussion about the experimental validation.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"First thoughts (continued)\", \"comment\": \"P.s. With respect to implementation challenges: Here is a code snipped (also promised to Rev1) that illustrates the type of coding that needs to be done. See code link above for more detail.\\n\\n** In a \\u201cgroup class\\u201d file \\u201cSE2.py\\u201d we define:\", \"class_h\": \"# Group product: two rotation angles simply add up\\n def prod( h_1, h_2 ):\\n return h_1 + h_2\\n # Group inverse: rotation angle changes sign\\n def inv( h ):\\n \\t return \\u2013h\\n # Logarithmic map: mapping the angle to the interval [0,2pi]\\n def log( h ):\\n \\t return tf.mod(h + np.pi, 2*np.pi) - np.pi\\n # The action on Rn: describes a rotation of the coordinate grid\\n # The input is a transformation parameter h, and a coordinate grid xx. The output is the transformed coordinate grid.\\n def left_action_on_Rn( h, xx ):\\n x = xx[...,0]\\n y = xx[...,1]\\n th = h[0]\\n x_new = x * tf.cos(th) - y * tf.sin(th)\\n y_new = x * tf.sin(th) + y * tf.cos(th)\\n # Reformat c\\n xx_new = tf.stack([x_new,y_new],axis=-1)\\n # Return the result\\n return xx_new\\n\\n** In the main file used to build the architecture the library is called via:\\n\\ngroup_name = 'SE2'\\ngroup = importlib.import_module('gsplinets.group.'+group_name)\\nlayers = gsplinets.layers(group) \\n...\\n\\n# Lifting layer:\\ntensor = inputs\\nl1 = layers.ConvRnG( tensor, N_out, k_size, h_grid)\\ntensor = tf.nn.relu(l1.outputs)\\n# G-conv layer\\nl2 = layers.ConvGG( tensor, N_out, k_size, h_grid)\\ntensor = tf.nn.relu(l2.outputs)\\n...\"}",
"{\"title\": \"Revised paper\", \"comment\": \"We thank the reviewers again for their time and valuable and constructive feedback. In our revision we have carefully addressed the suggestions and discussion points of each reviewer. We believe that this considerably improved the paper. Although all reviewers agree that the paper presents solid work, they also agree that paper is heavy on the math which makes it hard to read: \\u201cthe intuitive nature of the core ideas could be better conveyed e.g. by fancy diagrams.\\u201d We fully agree and this has been the core focus of our revision. In addition to new figures and clarifications we also added extra experiments (G-CNNs\\u2019 relation to data-augmentation) by which we addressed questions/remarks by Rev2 and Rev3. The main changes are as follows.\\n\\n** In order to improve readability of the paper and make it accessible to a wider audience we made the following modifications:\\n * We put great effort in crafting a new introductory figure (Fig. 1) and believe that it intuitively illustrates the main components of G-CNNs and their relations to the part-whole/capsule viewpoint.\\n * We also included a new figure (Fig. 2) that illustrates the idea of defining convolution kernels on the Lie algebra via the Log-map.\\n * Additionally we added a concrete example of the group structures and the actual group convolution operators in the main body of the text, and wrote out explicit examples for several groups in the appendix B. Moreover, we added two new illustrations (Fig. 6 and 7) for the group representations, which are core components in the theory and experiments.\\n * The main theorem is now better introduced and explained (if you want your networks to be equivariant, than you should use G-CNNs).\\n * In several places we slightly rewrote technicalities or inserted an additional brief explanation.\\n\\n** Rev3 had a related concern on whether or not the theory is too complicated to be actually implemented. We hope that the added examples and illustrations alleviate this concern. We furthermore now anonymously provide the code used in the experiments (see link above) and share an open access repository after publication.\\n\\n** Rev2 had several points for discussion regarding related work and the limitations of the method. We have addressed these in detail in our first response, but we also believe that in our thorough literature study and discussions in the paper itself we already addressed these in our first submission (see e.g. app C.2 \\\"Gauge equivariant networks\\\"). \\n\\n** Rev2 expressed concerns about the method being only approximately equivariant due to discretizations. The experiments show that networks greatly benefit from (both scale and rotation) equivariance which is provided by the G-CNNs. We further experimentally addressed the equivariance property with new experiments in which we compare model training with and without rotation augmentation. From this we drew the following conclusions:\\n * \\u201c\\u2026 comparing the models with and without $90^\\\\circ$ augmentation show that such augmentations are crucial for the 2D model but hardly affect the $SE(2)$ model. Moreover, the $SE(2)$ model *without* outperforms the 2D model *with* augmentation. This confirms the theory: G-CNNs guarantee both local and global equivariance by construction, whereas with augmentations valuable network capacity is spend on learning (only) global invariance. 
The very modest drop in the $SE(2)$ case may be due to discretization of the network on a grid after which it is no longer purely equivariant but rather approximately, which may be compensated for via augmentations.\\u201d\\n\\n** Rev3 had a question on how our method relates to data-augmentation. This is answered by the above.\"}",
"{\"title\": \"First thoughts\", \"comment\": \"Thank you for such a careful analysis of the paper! We also thank you for identifying some points for improvement; we address these in our revision and believe it leads to a much improved paper. We discuss them below. We are currently working on updating the manuscript. If in the meantime you have additional questions we would be happy to respond to them!\\n\\n***\\n\\u201c[Readability] For readers who are not familiar with Lie groups, this paper is very hard to follow. \\n(1) For Theorem 1, the authors are suggested to give some illustrative explanation. Besides, what is \\u201cStab_G\\u201d? \\n(2) The architecture of G-CNN, i.e., the 3 types of layers, are directly given in Eqs. (5)-(7) without examples, illustrative examinations, or visual illustrations. \\n(3) Fig. 1 can be modified for better readability. \\u201c\\n\\n(1) We are working on an illustration of theorem 1 for the case of roto-translation equivariant networks, and will place this in appendix B.2 and refer to it in the main text. We will provide extra explanation for each layer to give a more context.\\n(2) We will subsequently add a paragraph in which equations (5)-(7) are given explicitly for the roto-translation group and write out the equations for several other groups in Appendix B.\\n(3) We are working on an improved introductory figure.\\nAll in all these modifications will probably add another page to the main body of the paper, but of course we still aim to stay within the 10 page limit. Stay tuned for the revision.\\n\\n***\\n\\u201c[Experiments] The proposed G-CNN has some similarities with data augmentation (like rotation, scaling) based CNN. Then, how better can the G-CNN perform than CNN with data augmentation? More experiments on this point are suggested, and relevant theoretical explanations will be appreciated. \\u201c\\n\\nWe initially left out discussions regarding augmentation as these are addressed in prior work on G-CNNs, but we realize that it is a too important connection to ignore. So we are currently trying to find a way to fit this into the revision.\", \"there_are_mainly_two_arguments_why_g_cnns_are_preferred_over_data_augmentations\": \"1. Data augmentations transform the inputs globally and are not able to deal with local transformations/symmetries. G-CNNs handle both local and global symmetries.\\n2. By using data augmentations you let the network learn how to deal with such transformations. It thus has to spend valuable network capacity on this. G-CNNs on the other hand have the appropriate geometric structure encoded in them and therefore do not have to spend valuable network capacity on learning geometric behavior, but rather can spend it all on learning effective representations.\\n\\nWe do remark however, that data-augmentations and G-CNNs happily live together, and that data augmentations can still be used to improve performance, in particular when the augmentations include transformations that are not covered by the Lie group.\\n\\n***\\n\\\"[Implementation] Considering the complicated mathematics in this paper, I am afraid that implementation of the proposed G-CNN is also very hard. It would be better for the authors to discuss the implementation. In my mind, if the implementation is not so hard, then the formulation of G-CNN can also be simplified for better readability. 
\\u201c\\n\\nIn order to achieve a generic viewpoint on equivariance we make an abstraction step (and speak of representations of groups), and this step is indeed somewhat mathematically demanding, but it eventually allows us to develop the code in a modular (object oriented) and generic way. The specific equations for Lie group CNNs layers, e.g. for roto-translation equivariance, are however very readable and similar to the conventional convolution operators. We will provide such explicit examples in the revision in Appendix B, but we are trying to fit a concrete example in the main body of the text as well for the revision.\\n\\nVia the abstractions made in this paper a developer/researcher interested in implementing G-CNNs for a particular transformation group only has to define the group structure of the sub-group H that he/she wants to combine with translations (e.g. to build translation+rotation networks, translation+scalings networks, translations+skewing networks,\\u2026) and all the layers are automatically derived. \\n\\nWe hope to be able to convince you of the tractability of implementing the theory by anonymously providing examples of implementations for the 2D roto-translation and scale-translation groups (as python classes), together with the g-splinets (as it is currently called) tensorflow library via the link above (here on openreview.net). We are working on this give an update when we submit the final revision. The code will appear on GitHub after the accept/reject decision is made, with minimal working examples and the script used to generate the results.\"}",
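Following the modular pattern of the SE2.py snippet shown earlier in this thread, a hedged sketch of what a scale sub-group class could look like for translation+scaling networks (the method names mirror that snippet; the dilation action is our assumption for how positive scalings act on R^n):

    import tensorflow as tf

    class H:
        # Sub-group of scalings (R_{>0}, *): products multiply, inverses divide.
        def prod( h_1, h_2 ):
            return h_1 * h_2
        def inv( h ):
            return 1.0 / h
        # Logarithmic map: scalings become translations on the Lie algebra.
        def log( h ):
            return tf.math.log(h)
        # The action on Rn: dilate the coordinate grid.
        def left_action_on_Rn( h, xx ):
            return h * xx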
"{\"title\": \"First thoughts (continued)\", \"comment\": \"***\\n\\u201cTheorem 1 seems important but it is a bit cryptic. What is the statement \\\"a kernel satisfying such and such properties gives rise to an equivariant CNN\\\"? Or \\\"A CNN is equivariant if and only the kernel satisfies such and such properties\\\"? \\u201c\\n\\nThese are excellent questions and raise a valid concern regarding the readability. We will improve the presentation by elaborating on the theorem in the text and by adding additional illustrations in appendix B where concrete examples are discussed (also in response of reviewer 3). The summary is as follows. In CNNs we work with feature maps and transformations between them. In general these layers can be very complicated and are described by two-argument kernel operators. But if we want to constrain such layers to be equivariant w.r.t. translations then it turns out that we are only allowed to use group convolutions which are fully described by only a single-argument convolution kernel.\\n \\nIn image analysis we are used to working with 2D feature maps (functions on X=R^2). Now the theorem says that if we want to stick to working with 2D feature maps (X=Y=R^2) and want to have equivariance w.r.t. not just translations, but also rotations (so to SE(2)), then our only option is to work with isotropic (Eq. 4) convolution kernels (since $\\\\mathbb{R}^2 \\\\equiv SE(2)/SO(2)$). If we do not want to have any isotropy constraints on the convolution kernels, than we need to lift the data to higher dimensional feature maps (Y=SE(2)). This then defines lifting correlations. \\n\\nIn general the theorem gives you a recipe for obtaining the type of layer that you are allowed to use given a choice of group to which you want to be equivariant to, and given a preferred domain on which to represent the feature maps.\\n\\n***\\n\\u201cConcerningly, the paper is closely related to a few other papers using the spline CNN idea or at least the idea of taking a fixed set of functions and moving it around on the homogeneous space by acting on it with select group elements, most notably \\\"Roto-translational convolutional neural networks for medical image analysis\\\" by Bekkers et al.. The main difference of the present paper relative to that one is that the idea is fleshed out in a little more detail and is generalized from SE(2) to arbitrary Lie groups. However, conceptually there is little that is new. \\u201c\\n\\nWe agree on related work, and remark that in fact the paper by Bekkers et al. inspired us to propose a comprehensive generalization of their method (also stated in the main text). We however do not agree that in the current paper the idea is just \\u201cfleshed out in a little more detail\\u201d and that \\u201cconceptually there is little that is new\\u201d. We believe that precisely on a conceptual level we made a significant contribution by realizing that splines can be defined on arbitrary Lie groups by defining them on the Lie algebra. This viewpoint is entirely unique and is in no way considered in the paper by Bekkers et al., where they were only able to construct B-splines on SE(2) using the group parameterization since the sub-group of rotations is 1-dimensional. By the proposed generalization we are able to apply the theory to a very large class of problems that do not just involve rotations. 
The fact that we can now do this is both theoretically as well as practically demonstrated.\\n\\n***\\n\\\"In such a situation it would be important to present convincing experiments. Unfortunately in the present paper, results are only presented on 2 datasets, and the algorithm is basically only compared to different versions of itself, rather than state of the art competitors. \\\"\\n\\nThe paper proposes CNN layers that can be used in any CNN architecture. As such, the purpose of the experiments is not to outperform any of those architectures in literature (which to choose?) but rather show (1) that group convolutional layers should be used when equivariance is desired and (2) that we can now actually build G-CNNs (for the first time) that are not based on roto-translations (e.g. scale-translation CNNs). We believe that only by comparing the method to different versions of itself (which includes standard 2D CNN architecture design) we are able to draw sensible conclusions and gain insight in how it behaves in different settings.\\n\\n***\\n\\\"The paper is clearly written but the intuitive nature of the core ideas could be better conveyed e.g. by fancy diagrams.\\\"\\n\\nWe agree that an intuitive exposition of the method is important (also mentioned by reviewers 1 and 3). We are working on a new introduction figure.\"}",
"{\"title\": \"First thoughts\", \"comment\": \"Thank you for your thorough analysis of the paper and for raising points for discussion which we are happy to address in the following. We are currently working on updating the manuscript. If in the meantime you have additional questions we would be happy to respond to them!\\n\\n***\\n\\u201cIn contrast to the Gauge equivariant and Fourier approaches that have recently appeared, here the authors simply put a B-spline basis on local patches of the homogeneous space and move the basis elements around explicitly by applying the group action. \\u201c\\n\\nWe provided a detailed discussion about the connection of this work to the theory of gauge equivariant CNNs in appendix C.2 and summarized this in the introduction. It turns out that the two viewpoints are equivalent in certain settings: we choose the gauge frames to be left-invariant vector fields generated by the Lie group structure. In a related way as is done in our paper, gauge equivariant CNNs also \\u201csimply move a kernel around\\u201d and align it with a particular vector field (gauge field). In the gauge paper, however, a particular grid/manifold is chosen that allows for discrete convolutions and as such avoid interpolation. In this respect, we prefer to invert the \\u201cin contrast to \\u2026 simply\\u2026\\u201d statement, and remark that in order to apply the gauge CNN framework to other cases (such as meshes or manifolds in general), one has at some point to resort (analytic) kernel representations that can be sampled at arbitrary points on the manifold. The proposed B-splines enable that. We agree that they are simple functions, but that is precisely why they are nice to work with.\\n\\nFourier methods are a different story. These are also wonderful techniques that do not necessarily require a specific discretization grid. I would say that such methods are your method of choice when dealing with compact (unimodular) groups, but these methods do not generalize well to other types of manifolds.\\n\\nThe purpose of this paper is to explore new ways to represent data and build learning architectures. A particular result is that in the B-spline Lie G-CNN viewpoint we can adopt conventional engineering heuristics such as working with localized, deformable and atrous convolutions, which is simply not possible in a Fourier basis.\\n\\n***\\n\\u201cHowever, there is a constant need for interpolation. What is more more significant is that both the homogeneous space and the group need to be discretized and in general that cannot be done in a regular manner (no notion of a uniform grid on SO(3) for example). The authors assure us that \\\"we find that it is possible to find approximately uniform B-splines... e.g. by using a repulsion model\\\". I am not sure that it is so simple. This is one of those things where the idea is straightforward but the devil is in the details. \\u201c\\n\\nWe are a big fan of Fourier methods and irreps to steer convolution kernels (w.r.t. trafo parameters), they allow to work exclusively with the coefficients without ever having to sample them. This, however, requires specialized activation functions (several are proposed e.g. in the works by Worrall et al. 2017, Weiler et al. 2018a, Kondor 2018 and others alike). Again, these methods work well on rotation groups, but do not generalize well to other groups. 
\\n\\nInterestingly, however, in popular techniques for spherical convolutions (both Cohen 2018b and Esteves et al 2018a) one does in fact rely on sampling of the data on the sphere (with grids that are non-uniform). They rely on a sequence of spherical harmonic fits, exact convolutions in \\u201cFourier\\u201d domain, followed by sampling again on the sphere such that element-wise nonlinearities can be applied in a conventional way. They are highly effective despite the fact that after applying such nonlinearities (1) the functions leave the spherical harmonic basis in which they were expressed and (2) the networks are not fully equivariant anymore due to the non-uniform grid. As in many real world applications one has to make a trade-off between mathematical beauty and computational efficiency or pragmatism. \\n\\nRegarding discretizations on uniform grids. As remarked in the main body of the paper, uniform local grids can always be constructed on Lie groups. However, on compact groups one has to be careful that the grid does not start to overlap with itself, as can happen with SO(d). Luckily on compact groups repulsion models also always work as due to the periodic nature one has that the repulsing forces do not send elements outside of the domain. \\n\\nFinally, in response to \\u201cthe constant need for interpolation\\u201d. We do not regard the need for interpolation as a limitation. Computationally, interpolation (in our case actually just sampling of the kernels) only occurs with every transformation in the sub-group H that is sampled, and only on for the convolution kernels.\"}",
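As a toy illustration of the repulsion model mentioned above, approximately uniform grids on SO(2) (points on the circle) can be obtained by gradient steps on a pairwise repulsion energy; the step size, iteration count and force clipping below are arbitrary assumptions:

    import numpy as np

    def repulse_on_circle(n, steps=500, lr=0.01, seed=0):
        th = np.random.default_rng(seed).uniform(0, 2 * np.pi, n)
        for _ in range(steps):
            diff = th[:, None] - th[None, :]
            diff = np.mod(diff + np.pi, 2 * np.pi) - np.pi  # periodic distance
            np.fill_diagonal(diff, np.inf)                  # ignore self-pairs
            force = np.sum(np.sign(diff) / (diff ** 2 + 1e-6), axis=1)
            th = np.mod(th + lr * np.clip(force, -50, 50), 2 * np.pi)
        return np.sort(th)  # ends up close to a uniform grid on [0, 2*pi)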
"{\"title\": \"First thoughts\", \"comment\": \"Thank you for reading and for providing your high-level summary (which is correct ;)). We agree that the paper relies on advanced mathematical/geometrical concepts. We found it important to build up the proposed framework in a mathematically coherent and solid way, and the abstractions help us to make generalizations, grasp the broader picture (see also paragraphs and appendices on related work) and eventually implement the theory in an accessible, object-oriented way.\\n\\nNevertheless, we also find it important that the paper is accessible to a wide audience. As such, we will open-source the code (see also the code snipped as a response to reviewer 3) and work on new figures and add extra clarifications of the theory in the main text. \\n\\nWe are currently working on updating the manuscript. If in the meantime you have additional questions we would be happy to respond to them!\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a neural network architecture which that enables the implementation of group convolutional neural networks for arbitrary Lie groups. This lifts a significant limitation of such models which were previously confined to discrete or continuous compact groups due to tractability issues.\\nI'm afraid that this paper is over my head. It relies heavily on field-specific terminology and as such is likely to be accessible to a relatively small subset of researchers. This looks to me like a solid contribution, however I'm really not qualified to judge.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an (approximately) equivariant neural network architecture for data lying on homogeneous spaces of Lie groups. In contrast to the Gauge equivariant and Fourier approaches that have recently appeared, here the authors simply put a B-spline basis on local patches of the homogeneous space and move the basis elements around explicitly by applying the group action.\\n\\nThe approach is appealing in its simplicity and generality. No need to worry about irreducible representations and Fourier transforms, the formalism works for virtually any Lie group, no problem with non-compact groups. However, there is a constant need for interpolation. What is more more significant is that both the homogeneous space and the group need to be discretized and in general that cannot be done in a regular manner (no notion of a uniform grid on SO(3) for example). The authors assure us that \\\"we find that it is possible to find approximately uniform B-splines... e.g. by using a repulsion model\\\". I am not sure that it is so simple. This is one of those things where the idea is straightforward but the devil is in the details.\\n\\nTheorem 1 seems important but it is a bit cryptic. What is the statement \\\"a kernel satisfying such and such properties gives rise to an equivariant CNN\\\"? Or \\\"A CNN is equivariant if and only the kernel satisfies such and such properties\\\"?\\n\\nConcerningly, the paper is closely related to a few other papers using the spline CNN idea or at least the idea of taking a fixed set of functions and moving it around on the homogeneous space by acting on it with select group elements, most notably \\\"Roto-translational convolutional neural networks for medical image analysis\\\" by Bekkers et al.. The main difference of the present paper relative to that one is that the idea is fleshed out in a little more detail and is generalized from SE(2) to arbitrary Lie groups. However, conceptually there is little that is new.\\n\\nIn such a situation it would be important to present convincing experiments. Unfortunately in the present paper, results are only presented on 2 datasets, and the algorithm is basically only compared to different versions of itself, rather than state of the art competitors.\\n\\nThe paper is clearly written but the intuitive nature of the core ideas could be better conveyed e.g. by fancy diagrams.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, a framework for building group CNN with an arbitrary Lie group G is proposed. Generally, such a group CNN consists of 3 types of layers: a lifting layer which lifts a 2D image to a 3D data (G-image) whose domain is G; a group correlation layer which computes a 3D G-image from a 3D G-image; and a projection layer from a 3D G-image to a 2D image. To implement the convolutions in the lifting layer and group correlation layer which are defined in the continuous setting, the B-Spline basis functions are applied to expand the convolution kernels. Experimental results on tumor clarification and landmark localization show the superiority over CNN.\", \"advantages\": \"1. A flexible framework for group convolutional neural network is proposed with strong theoretical support in Theorem 1.\\n2. Familiar properties of convolutions from classical CNN design (like localized, atrous, and deformable convolutions) can also be implemented in G-CNN using specified B-Spline basis functions.\\n3. In comparison with standard CNN, the effectiveness of the B-Spline-based G-CNN is validated through experiments on two typical data sets.\", \"weakness\": \"1. [Readability] For readers who are not familiar with Lie groups, this paper is very hard to follow. \\n(1)\\tFor Theorem 1, the authors are suggested to give some illustrative explanation. Besides, what is \\u201cStab_G\\u201d? \\n(2)\\tThe architecture of G-CNN, i.e., the 3 types of layers, are directly given in Eqs. (5)-(7) without examples, illustrative examinations, or visual illustrations.\\n(3)\\tFig. 1 can be modified for better readability. \\n \\n2. [Experiments] The proposed G-CNN has some similarities with data augmentation (like rotation, scaling) based CNN. Then, how better can the G-CNN perform than CNN with data augmentation? More experiments on this point are suggested, and relevant theoretical explanations will be appreciated. \\n\\n3. [Implementation] Considering the complicated mathematics in this paper, I am afraid that implementation of the proposed G-CNN is also very hard. It would be better for the authors to discuss the implementation. In my mind, if the implementation is not so hard, then the formulation of G-CNN can also be simplified for better readability.\"}"
]
} |
rkxNh1Stvr | Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel | [
"Xin Qiu",
"Elliot Meyerson",
"Risto Miikkulainen"
] | Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough: the uncertainty (i.e. risk or confidence) of that prediction must also be estimated. Standard NNs, which are most often used in such tasks, do not provide uncertainty information. Existing approaches address this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not predict as accurately as standard NNs. In this paper, a new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN. The behavior of the NN is captured by modeling its prediction residuals with a Gaussian Process, whose kernel includes both the NN's input and its output. The framework is justified theoretically and evaluated in twelve real-world datasets, where it is found to (1) provide reliable estimates of uncertainty, (2) reduce the error of the point predictions, and (3) scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient for building real-world NN applications. | [
"Uncertainty Estimation",
"Neural Networks",
"Gaussian Process"
] | Accept (Poster) | https://openreview.net/pdf?id=rkxNh1Stvr | https://openreview.net/forum?id=rkxNh1Stvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"H9crcczRGZ",
"HyebnYqnoB",
"HyeqxqPniB",
"rJxjWOxhjr",
"BklZCwgnsB",
"HJxRrPxhiS",
"BkleE8xhor",
"HkgA-Il2jH",
"Skl8jSxnjr",
"ryeapbx2sB",
"BJgfqye3sH",
"ByeMeJe3oS",
"ryx5FWhcqS",
"S1gTU-Ev5S",
"Syge0X1BqB",
"HJx95fEwYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736668,
1573853609255,
1573841393826,
1573812226776,
1573812169000,
1573812038013,
1573811751682,
1573811717594,
1573811614375,
1573810629047,
1573810058257,
1573809897761,
1572680066162,
1572450645297,
1572299719655,
1571402385732
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1948/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1948/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1948/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1948/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a method to model uncertainty in deep learning regressors by applying a post-hoc procedure. Specifically, the authors model the residuals of neural networks using Gaussian processes, which provide a principled Bayesian estimate of uncertainty. The reviewers were initially mixed and a fourth reviewer was brought in for an additional perspective. The reviewers found that the paper was well written, well motivated and found the methodology sensible and experiments compelling. AnonReviewer4 raised issues with the theoretical exposition of the paper (going so far as to suggest that moving the theory into the supplementary and using the reclaimed space for additional clarifications would make the paper stronger). The reviewers found the author response compelling and as a result the reviewers have come to a consensus to accept. Thus the recommendation is to accept the paper.\\n\\nPlease do take the reviewer feedback into account in preparing the camera ready version. In particular, please do address the remaining concerns from AnonReviewer4 regarding the theoretical portion of the paper. It seems that the methodological and empirical portions of the paper are strong enough to stand on their own (and therefore the recommendation for an accept). Adding theory just for the sake of having theory seems to detract from the message (particularly if it is irrelevant or incorrect as initially pointed out by the reviewer).\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to \\\"Response to all Responses\\\" of Reviewer #4\", \"comment\": \"Thanks for the quick response! We were happy to be able to supply some last minute feedback below.\\n\\n1. On \\u201c## Demos\\u201d comment\\n\\nGood suggestion! We have generated the plots for \\u201cR+I\\u201d (predicting residuals with only input kernel) and \\u201cR+O\\u201d (predicting residuals with only output kernel) in the same way as we did for RIO in Figure 9, 10 and 11. In the updated version of the paper, please see Figures 12, 13 and 14 in Appendix D. 3 regarding \\u201cR+I\\u201d, and Figures 15, 16 and 17 in Appendix D.3 regarding \\u201cR+O\\u201d. From the results, the output kernel indeed helps a lot in problems where input kernel does not work well (\\u201cCT\\u201d and \\u201cMSD\\u201d), and it also shows more robust performance in terms of improvement ratio (IR) in most datasets. However, it is still generally worse than full RIO. The conclusion is that output kernel is very helpful, but combining input kernel with output kernel is the best.\", \"more_details_and_clarification_for_the_results\": \"1). \\u201cR+I\\u201d shows an extremely low IR in \\u201cCT\\u201d dataset (Figure 14), after investigation, this is because the input kernel itself is not able to learn anything from the complex high-dimensional input space, so it treats everything as noise. As a result, it keeps the NN output unchanged during correction in most cases. Applying output kernel instead solves the issue.\\n 2). After comparing Figure 10, 13 and 16, it can be observed that the behaviors of RIO are either a mixture or selection between R+I and R+O. This means RIO with I/O kernel is able to choose the best kernel among these two or combines both if needed.\\n\\n2. On \\u201c## Prediction vs Uncertainty\\u201d comment\\n\\nGood suggestion, thanks. RIO focuses on the pretrained NN\\u2019s internal uncertainty about its predictions, i.e., indeed the uncertainty implied by mistakes made on the training set, like you suggest. We will make this perspective clear in the final version of the paper, and discuss how it relates to other forms of uncertainty.\\n\\n3. On \\u201c## Theory\\u201d comment\\n\\nThanks for the suggestion to simply state the extension to $r_f(.)$ and $r_g(.)$ in the high-level portion of the theory. We agree that this makes the intuition even more clear, and will add it to the final version of the paper.\\n\\nWe agreed that the current theory does not cover all the aspects that leads to RIO\\u2019s success. We think the improvement of RIO results from both \\u201cindistinguishability\\u201d and \\u201cmisspecification\\u201d. We will discuss this point and make it clear in the final version of the paper. However, we find that the current theory also captures part of the motivations of the approach, and provides a novel perspective, as well as insights on the connection between NN and GP. We can move part of the theory into the appendix, and move more detailed empirical studies into the main paper.\\n\\nThanks for all your help\\u2014we believe the paper has much improved as a result!\"}",
"{\"title\": \"Response to all responses\", \"comment\": \"Thanks for these thorough responses. They give a clearer picture of your method. As I stated before, I think the method is a very good idea, but that the theory discussion (still) does more to obfuscate the method than to motivate it.\\n\\n## Demos\\n\\nThese experiments are helpful.\\n\\nWould you be able to include plots of what _just_ the input kernel or _just_ the output kernel would do? I'm particularly interested in how the output kernel adds value here.\\n\\n## Coverage experiments\\n\\nI appreciate this discussion, and agree that coverage is not the end-all. It is useful to have the context about what the GP method is doing.\\n\\n## Prediction vs Uncertainty\\n\\nI take the point that prediction and uncertainty are clearly related. I still wish there were more discussion about what kind of uncertainty is being quantified, and what uncertainty is not quantified. For example, as you stated, for a completely overfit NN, your uncertainty estimate would be zero. Perhaps this is what you're getting at with Theorem 2.2, but I would find it more compelling if were described, e.g., in terms of uncertainty implied by mistakes made in the training set.\\n\\n## Theory\\n\\nI appreciate that Section 2.2 has been rewritten, and it is indeed much clearer now.\\n\\nI still feel like there's too much going on in this theory section. The main point is that the decomposition of the residual function may be more favorable to model with a GP than the decomposition of the label function. I like your characterization of $f(.)$ and $g(.)$ in terms of _apparent_ signal and noise, and it would be straightforward to extend this $r_f(.)$ and $r_g(.)$, and simply state that the expected error depends on $\\\\|g\\\\|$ and $\\\\|r_g\\\\|$ in each case, so if $\\\\|r_g\\\\|$ is smaller than $\\\\|g\\\\|$, then you're going to do better.\\n\\nThe additional assumptions made here to construct a case where this condition happens to hold are only confusing, and I would bet that these assumptions do not hold in the experiments. (I do appreciate that these are now articulated clearly as assumptions). For example, in the CT experiment, I assume you fit SVGP using an RBF kernel (which is stationary): did you check whether the kernel from fitting the GP alone is proportional to the kernel from fitting the GP after the NN? I predict that the length scale of the kernel from fitting the GP alone is much larger than the length scale of the kernel from fitting the GP after the NN. I imagine in most cases where the noise variance changes dramatically between SVGP and RIO that the input kernels look very different.\\n\\nEven though the qualitative behavior happens to match the predictions of the theory, to me, this theory does not capture why you see the behavior that you do. As I stated in the original review, given that most of these GP baselines are using stationary kernels, differences in misspecification between the raw labels vs the residuals seems like a much more plausible explanation for RIO's success.\"}",
"{\"title\": \"Response to Review #4 (4 out of 4)\", \"comment\": \"Q. On concern that the arguments for lower NN+GP errors are too narrow to be useful:\", \"a\": \"This is indeed a compelling idea in general. However, in practice, there is nothing inherent to RIO that assumes the input and output kernels of GP are stationary, and both the input kernel and the output kernel can contain both stationary and non-stationary components. The \\u201cmisspecification\\u201d issue is a problem-specific kernel tuning problem, which is general to all GP-related approaches. Therefore, we simplify this aspect in order to highlight properties specific to RIO in particular.\"}",
"{\"title\": \"Response to Review #4 (3 out of 4)\", \"comment\": \"As described in the Main Response at the top of the first part of the response, this part of the response contains responses to specific comments regarding the theory in Section 2.2. Here, each original review comment has been briefly summarized to make the response more concise.\\n\\nQ. On concern that the discussion focuses on too much on prediction and too little on uncertainty:\", \"a\": \"Indeed, this is the main point, and not much theoretical complexity is required to establish it at a high level. Section 2.2. has been rewritten to reflect this perspective. The goal of the more involved theory is to provide a concrete instantiation of the point, i.e., to identify a class of scenarios in which the high-level motivation produces the desired behavior.\"}",
"{\"title\": \"Response to Review #4 (2 out of 4)\", \"comment\": \"More details regarding the unreliability of CI coverage metric:\\n1. The plots for dataset \\u201cyacht\\u201d, \\u201cENB/c\\u201d, \\u201cprotein\\u201d, \\u201cSC\\u201d, and \\u201cCT\\u201d show that the RIO variants are more optimistic than SVGP in 95% and 90% CI coverage, but becomes more conservative than SVGP in 68% CI coverage. If one approach is really more conservative than the other one, then it should have wider CI coverage at different confidence levels consistently. This phenomenon alerts us that the empirical CI coverage may mislead the comparison. \\n2. In addition, for \\u201cCT\\u201d dataset, SVGP has an extremely high RMSE of ~52 while RIO variants only have RMSEs of ~1. However, SVGP still shows acceptable 90% and 68% CI coverage, and even has over-confident coverage for 68% CI. After investigation, what actually happened is that SVGP was not able to extract any useful information from the high-dimensional input space, so it treated all the outcomes as simply noise. As a result, SVGP shows a very large RMSE compared to other algorithms, and the mean of its predicted outcome distribution is always around 0. Since SVGP treats everything as noise, the estimated noise variance is very high, and the estimated 95% CI based on this noise variance is overly high and covers all the test outcomes in most cases. When the estimated 90% CI is evaluated, the big error in mean estimation and big error in noise variance estimation cancel most part of each other by chance, i.e., the estimated 90% CI is mistakenly shifted by erroneous mean then the overly wide noise variance fortunately covers slightly more than 90% test outcomes. Similar thing happens to the estimated 68% CI, but now the error in noise variance cannot fully cover the error in mean, so the coverage percentages are below 68%, indicating over-confidence. This investigation shows how noisy the empirical CI coverage may be.\\n3. To have a clearer picture about what RIO and SVGP do regarding CI coverage, we added an experiment that shows the distribution of CI coverages for all confidence levels (from 1% to 99%), and plot the results for RIO and SVGP in the same figure (Figure 6, 7, and 8 in Appendix D.2). From Figure 6, 7 and 8, it can be seen that SVGP also shows more \\u201coptimistic\\u201d CI coverages in many cases (\\u201cairfoil\\u201d, \\u201cCCPP\\u201d, \\u201cprotein\\u201d, \\u201cCT\\u201d, and confidence levels below 70% in \\u201cYacht\\u201d). One interesting observation is that SVGP tends to be more \\u201cconservative\\u201d for high confidence levels (>90%), even in cases where they are \\u201coptimistic\\u201d for low confidence levels. After investigation, this is because SVGP normally has an overly high noise variance estimation (also comes with a higher prediction RMSE in most cases), so it has a higher chance to cover more points when the increase in CI width (influenced by noise variance) surpasses the erroneous shift of mean (depending on prediction errors). This can explain why the original 95% and 90% CI coverage plots may suggest that SVGP is more \\u201cconservative\\u201d. In summary, we can not easily draw solid conclusions from these CI coverage metrics regarding the ability of approaches in predictive uncertainty estimation. 
A method that simply learns the distribution of the labels would perform well in CI coverage metric, but it cannot make any meaningful point-wise prediction.\", \"details_on_using_nlpd\": \"We agree that being conservative is better than over-confident in real-world applications, but we also think a more accurate uncertainty estimation makes sense \\u2014- it will provide more useful information for decision making. A good example would be SVGP in \\u201cCT\\u201d dataset, its 95% CI covers 100% of the testing outcomes, which is very conservative. However, no useful information can be extracted from this extremely wide CI: it simply covers everything. We think a good balance between being accurate and being conservative is important. This is why we choose the NLPD metric to measure the performance in uncertainty estimation. According to [1], \\u201cThe NLPD loss favours conservative models\\u201d (see Fig. 7 in [1]), but it also penalizes both over- and under-confident predictions. It is widely used in literature as a reasonable measure of Bayesian models [2]. During our testing, NLPD indeed returns more reliable evaluations of the uncertainty estimation (one good example would be that SVGP has a very high NLPD loss in \\u201cCT\\u201d dataset). Based on all the above considerations, we prefer NLPD as the performance metrics for uncertainty estimation.\\n\\n[1] Joaquin Quin \\u0303onero-Candela, Carl Edward Rasmussen, Fabian Sinz, Olivier Bousquet, and Bernhard Scholkopf. \\u201cEvaluating predictive uncertainty challenge\\u201d. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pp. 1\\u201327, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg.\\n[2] Andrew Gelman, Jessica Hwang, and Aki Vehtari. \\u201cUnderstanding predictive information criteria for bayesian models\\u201d. Statistics and Computing , 24(6):997\\u20131016, Nov 2014.\"}",
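Both diagnostics debated here are mechanical to compute for a Gaussian predictive distribution. A minimal numpy/scipy sketch (array names are hypothetical) of the per-level CI coverage curve behind Figures 6-8 and of the NLPD metric the authors prefer:

```python
import numpy as np
from scipy.stats import norm

def ci_coverage_curve(y, mu, sigma, levels=np.arange(0.01, 1.0, 0.01)):
    # Empirical coverage of the central credible interval of N(mu, sigma^2)
    # at every confidence level from 1% to 99%.
    half_widths = norm.ppf(0.5 + levels / 2.0)          # z-score per level
    inside = np.abs(y - mu)[:, None] <= sigma[:, None] * half_widths
    return levels, inside.mean(axis=0)

def gaussian_nlpd(y, mu, sigma):
    # Average negative log predictive density; penalizes both over- and
    # under-confident predictions, unlike raw coverage.
    return np.mean(0.5 * np.log(2 * np.pi * sigma ** 2)
                   + (y - mu) ** 2 / (2 * sigma ** 2))
```

Plotting coverage against `levels` reproduces the over/under-confidence patterns described above, while `gaussian_nlpd` penalizes the degenerate "everything is noise" fit that coverage alone can reward.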
"{\"title\": \"Response to Reviewer #2 (2 out of 2)\", \"comment\": \"Q4: \\u201cFollowing equation (7), you claim that \\u201cIn other words, RIO not only adds uncertainty estimation to a standard NN\\u2014it also makes its predictions more accurate, without any modification to its architecture or training\\u201d. Could you please verify and justify how RIO makes predictions of NNs more accurate? In this statement, I guess that you consider the results given in Theorem 2.6. However, you should not that the error functions given in Theorem 2.6 are calculated in a cascaded manner, i.e., by applying a GP at the output of a NN.\\u201d\", \"a4\": \"We recognize that our statement \\u201cmakes its predictions more accurate\\u201d leads to a confusion here. RIO is designed as a supporting tool that can be applied on top of a pre-trained NN, so what we mean here is that RIO can calibrate/correct the output of that pre-trained NN --- it does not change the performance of the pre-trained network itself. The error functions given in Theorem 2.6 correctly reflects the standard usage of RIO, i.e., a NN is pre-trained, then RIO is applied to it afterwards. To avoid confusion, we have modified the statement to \\u201cit also provides a way to calibrate NN predictions\\u201d.\", \"q5\": \"\\u201cThe main proposal of the paper is that RIO makes it possible to estimate uncertainty in any pretrained standard NN. In order to verify that proposal, you should improve the experiments, esp. using larger datasets with larger neural networks, including deep neural networks.\\u201d\", \"a5\": \"Thanks for this constructive comment. We have added new experiments that show RIO's off-the-shelf applicability to modern deep convolutional architectures on large datasets. It was applied to a recent pre-trained NN for age estimation based on DenseNet [1][2], which has 121 layers. The dataset is IMDB, which is the largest open source dataset of face images with age labels for training [3]. The pretrained NN and all data preprocessing were taken exactly from the official code release. RIO substantially improves upon the prediction errors of the pre-trained NN, outperforms SVGP in terms of both prediction error and uncertainty estimation, and yields realistic confidence intervals (See Table 3 in the main paper for more details). We have added these results into the main paper.\\n\\n[1] Tsun-Yi Yang, Yi-Hsuan Huang, Yen-Yu Lin, Pi-Cheng Hsiu, and Yung-Yu Chuang. \\u201cSSR-net: A compact soft stagewise regression network for age estimation\\u201d. In Proc. of IJCAI, pp. 1078\\u20131084, 2018.\\n[2] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. \\u201cDensely connected convolutional networks\\u201d. In Proc. of CVPR, pp. 4700\\u20134708, 2017.\\n[3] https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/\"}",
"{\"title\": \"Response to Reviewer #2 (1 out of 2)\", \"comment\": \"Thank you for your constructive comments. Please see our responses to each of your concerns below:\", \"q1\": \"\\u201cThe proposed method can be applied to any machine learning algorithm. It is not clear why you focus on employment of the proposed method for vanilla NNs.\\u201d\", \"a1\": \"RIO can indeed be applied to other machine learning algorithms, but we believe vanilla NNs are a good choice for this paper for two reasons: (1) As the first paper on RIO, it makes sense to focus it on the analysis and demonstration of RIO\\u2019s abilities without the complexity of multiple platforms, and (2) vanilla NNs are very common model used by practitioners, making the results relevant to many people. We have discussed the motivation and reasons for which we choose standard NN as the focus in the main paper. However, since it is insightful to also test the generality of RIO, we have added a whole set of experiments that instead use random forest models, as described in A2 below.\", \"in_more_details\": \"1. Since this is the very first paper that introduces RIO, focusing on one widely used model allows us to do a thorough and deep investigation into the new approach, both theoretically and empirically. These detailed analysis and results should be very informative for practitioners who are using vanilla NNs. Including different approaches may lose this focus and depth.\\n2. Vanilla NN is arguably the most commonly used model among practitioners for making point predictions, but it also creates a lot of inconvenience and risks due to the lack of uncertainty information. Our target is to develop a tool that is practical and useful for the practitioner community, so choosing vanilla NN to demonstrate the effectiveness of RIO would be most appropriate as the first milestone.\", \"q2\": \"\\u201cHave you applied RIO for other learning algorithms as well?\\u201d\", \"a2\": \"We have added the experimental results on Random Forests for all RIO variants and all datasets. Please see Table 7 in Appendix D.6 for full details of the experiments and results. To summarize, RIO performs the best or equals the best method (based on statistical tests) in 9 out of 12 datasets in terms of both RMSE and NLPD. In addition, RIO significantly improves the performance of original RF in 11 out of 12 datasets. These empirical results verifies the robustness and broad applicability of RIO. Full details of the results are included in the appendix, and we also referred to this as a concrete example when we discuss the extensibility of RIO in future work.\", \"q3\": \"\\u201cCould you please explain more precisely, how you utilize which particular properties of NNs in RIO, and/or how RIO helps quantification and improvement of uncertainty of NNs particularly?\\u201d\", \"a3\": \"We use the expressivity of NNs, which means that they can learn complex structure that a GP would treat as noise. This point has been clarified in the revised version of Section 2.2. Similarly, RIO is particularly well-suited for NNs, because their expressivity makes it difficult to quantify their uncertainty with simpler analytical methods. However, RIO can be easily extended to other kinds of regression models as well, e.g., the new experiments in Appendix D.6 show that they work with random forests.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Many thanks for your recognition of our work! We really appreciate your encouraging words about our contributions. Please see below for our responses to your concerns:\", \"q1\": \"\\u201cThe theoretical section 2.2 feels a bit rushed, I think it would be worth sharing the high level intuition behind some of the theory first before going into the details.\\u201d\", \"a1\": \"Thanks for this constructive suggestion. We have rewritten Section 2.2 to make the high level intuition and motivation more clear. A big picture summary was added to the beginning of section, followed by an intuitive discussion of the approach. Prose has also been added to improve the flow of the section and clarify the predictions and conclusions drawn from the theoretical model.\", \"q2\": \"\\u201cSection 2.4 could be more explicit about what \\\"large scale\\\" means. I.o.w. from a practical point of view, the method is only limited by approximate inference for Gaussian processes. Anno 2019 this is ...\\u201d\", \"a2\": \"Thanks for bringing up this point. Yes, the scalability of RIO is only limited by the approximate GP method. In order to quantitatively define what is a \\u201clarge scale\\u201d dataset, we analyzed existing public regression datasets (as of November 2019). Based on the distribution of their sizes, a regression dataset can be considered \\u201clarge scale\\u201d (~top 10% in size) if the product of its number of data points and number of features is larger than 1 million. We have added a clarification in Section 2.4 to make the definition of \\u201clarge scale\\u201d more explicit. Among the datasets tested in this paper, 3 of them (\\u201cSC\\u201d, \\u201cCT\\u201d, \\u201cMSD\\u201d) fulfill this criterion, and they are ~1.7 million, ~20 million and ~46 million, respectively. RIO shows strong performance in all three \\u201clarge scale\\u201d datasets, so the scalability of RIO is demonstrated.\"}",
"{\"title\": \"Response to Review #4 (1 out of 4)\", \"comment\": \"Thanks for this thorough and detailed review of our work, particularly with regards to the theory. The overarching concern was that the motivation, details, and implications of the theory were unclear, and it would be more compelling if the detailed behaviors of RIO can be demonstrated using concrete examples. To address these concerns, we have added more concrete empirical demonstrations of the detailed behaviors of RIO, regarding both output correction and confidence interval coverage. We have also rewritten Section 2.2 in the newly uploaded version of the paper, aiming to clarify the assumptions, the motivation of each step, and the conclusions drawn. We believe this update addresses the overarching concern, and addresses many of the specific comments in the process. We will respond to your concerns regarding \\u201c## Demos I Wish I Had Seen\\u201d and \\u201c## Coverage Experiments\\u201d first, then reply to all your concerns related to theory.\", \"q\": \"comments within \\u201c## Coverage Experiments\\u201d section\", \"a\": \"Confidence interval (CI) coverage indeed is a concrete and straightforward performance metric, which is why we included the 95%/90%/68% CI coverages for all algorithms in all datasets in the Appendix. However, after a deeper investigation, including an additional experiment (Figure 6, 7 and 8 in Appendix D.2), we found this performance metric to be noisy and potentially misleading. Drawing conclusions from it requires lengthy qualifications; given the page limits of the main text, we believe such discussions are better presented in the appendix. In contrast, NLPD loss is more reliable, which is why it is the primary measure in this paper. This choice is now explained in the main text, and a justification given in the appendix.\", \"more_details_on_the_first_study\": \"In the first empirical analysis, we randomly pick a run for each tested dataset, and plot the distributions of ground truth labels(outcomes), original NN predictions and predictions corrected after RIO. The results are summarized in Figure 9 of Appendix D.3. Based on the results, it is clear that RIO is not simply shrinking predictions together. Instead, RIO tends to calibrate each NN prediction accordingly. The distribution of outputs after RIO calibration may be a shift, or shrinkage, or expansion, or even more complex modifications of the original NN predictions, depending on how different are NN predictions from ground truths. As a result, the distribution of RIO calibrated outputs are always closer to the distribution of ground truths. One interesting behavior can be observed for \\u201cprotein\\u201d dataset (row 3, rightmost plot): after applying RIO, the range of whiskers shrunk and the outliers disappeared, but the box (indicating 25 to 75 percentile of the data) expanded. This behavior shows that RIO is actually trying to calibrate each point differently. To provide more details, the point-wise comparisons between NN outputs and RIO-corrected outputs for the same experimental runs as in Figure 9 are shown in Figure 10 of Appendix D.3. From Figure 10, RIO shows different calibration behaviors accordingly. 
If we compare the plots in Figure 10 to the corresponding ones in Figure 9 (they are for the same run on the same dataset), it is clear that all these different calibration behaviors actually make sense, and they are generally leading to more accurate predictions of ground truths.\", \"more_details_on_the_second_study\": \"In the second empirical study, we define a new performance metric called \\u201cimprovement ratio\\u201d (IR), which is the ratio between number of successful corrections (successfully reducing the error) and total number of data points. For each run on each dataset, we calculate this IR value, and the distribution of IR values over 100 independent runs (random dataset split except for MSD, random NN initialization and training) on each dataset is plotted in Figure 11 of Appendix D.3. According to the results, the IR values for RIO are above 0.5 in most cases. For 7 datasets, IR values are above 0.5 in all 100 independent runs. For some runs in \\u201cyacht\\u201d, \\u201cENB/h\\u201d, \\u201cCT\\u201d, and \\u201cMSD\\u201d, the IR values are above 0.8 or even above 0.9. All these observations show that RIO is making meaningful corrections instead of random perturbations. Results in Figure 11 also provides useful information for practitioners: Although not all RIO calibrations improve the result, most of them do.\"}",
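The improvement ratio defined above is likewise simple to reproduce; a sketch assuming 1-D arrays of targets and predictions:

```python
import numpy as np

def improvement_ratio(y, nn_pred, rio_pred):
    # Fraction of points whose absolute error shrinks after RIO's
    # correction; a value above 0.5 means the corrections help more
    # often than they hurt, as reported for most datasets above.
    return np.mean(np.abs(y - rio_pred) < np.abs(y - nn_pred))
```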
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your positive evaluation of our work. Please see our point-to-point responses to your comments:\", \"q1\": \"\\u201cIn more practical settings we cannot assume that NN is always trained well. In this case, does the proposed method perform much worse than GP?\\u201d\", \"a1\": \"No. The study did actually include several cases where the original NN performs poorly, and RIO still performed better than or comparably to GP. This result was not emphasized in the original paper; we have made it clear in the revised version.\", \"in_more_details\": \"We paid special attention to evaluating the robustness of RIO during the design of the experiments. As stated in the experimental setups, for each dataset (except for MSD, in which the dataset split is strictly predefined by the provider), 100 independent runs are conducted. During each run, the dataset is randomly split, and the NN is randomly initialized. Moreover, we are using a standard training procedure without over-tuning that are commonly used by practitioners. All these steps enable us to cover different training situations and generate NNs with different qualities. The performance distributions of original NN and corresponding RIO/GP are plotted in Figure 3 in the original paper. From Figure 3, the trained NNs show diverse performance in terms of prediction RMSE (horizontal axis), and RIO is able to consistently improve the performance of NN into a level that is better or comparable to GP even though the original performance of NN is poor. For the datasets in which original NN performs much worse than GP, namely \\u201cairfoil\\u201d, \\u201cCCPP\\u201d and \\u201cMSD\\u201d, the performance after applying RIO becomes better than GP instead. More specifically, for \\u201cairfoil\\u201d, the original RMSEs of some NNs are above 5.5 (dots on the right side) while corresponding GP in the same runs only have RMSEs below 4.0, applying RIO to these NNs achieves RMSEs below 3.5. Similar pattern can be observed in \\u201cCCPP\\u201d. For \\u201cMSD\\u201d, although the original RMSEs of NN are above 17.0 in some cases, comparing to ~9.6 for GP, RIO is able to reduce these RMSEs to similar level (~9.8) as GP or even better (~9.4). These experimental results demonstrate that RIO is still robust even in the cases where NN are not well trained.\", \"q2\": \"\\u201cIs this the only proposal for fitting the residuals for uncertainty estimation? Is there any other similar approach? I would like to see more discussions on other related methods and how the idea is different.\\u201d\", \"a2\": \"We have done a thorough literature review, and to the best of our knowledge, this is the only work that fits residuals for uncertainty estimation. The unique characteristic of RIO is that it is designed as a supporting tool to augment pre-trained NNs. In contrast, all other existing methods are designed as independent models that need to be trained from scratch. Considering the popularity of NNs among practitioners and the lack of uncertainty information in NN predictions, we do think a tool that can be directly applied on top of pre-trained NNs provides practical value. We have expanded the discussions in \\u201cIntroduction\\u201d section and \\u201cRelated Work\\u201d section to emphasize this point.\", \"q3\": \"\\u201cSummarizing the whole procedure in an algorithm could make things clearer.\\u201d\", \"a3\": \"Thanks for your constructive comment. 
We have added an algorithm in the Appendix (Algorithm 1 in Section C) that describes the procedure of RIO. We have added a note in the main paper to refer interested readers to the algorithm.\"}",
"{\"title\": \"Summary of Main Revisions\", \"comment\": \"We want to thank all the reviewers for their effort in reading the paper and providing constructive comments. We have addressed all the main concerns, and have updated the paper accordingly. The main updates are as follows:\\n\\n-New experiments:\\n -Application of RIO to age estimation with pretrained DenseNet \\n -Application of RIO with Random Forest regressors\\n-New empirical analysis:\\n -Detailed analysis on the behavior of RIO in error correction\\n -Detailed analysis on confidence interval coverage\\n-Clarification of theory:\\n -Section 2.2 has been rewritten to clarify the high level intuition, the motivation of each step, and the conclusions drawn. \\n\\nWe have also responded to the specific comments of each reviewer in replies to their reviews.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper focuses on the model inference of neural networks (NN). The authors propose to use NN for the model, and fit the prediction residuals with a Gaussian process with input/output (IO) kernel. This kernel considers both input x and output y. The authors show that the NN+GP scheme has lower generalization error compared with solely using GP or NN to fit the model. Also, the IO kernel generalizes better than input kernel I, and output kernel O, in Gaussian process modeling. In experiments, the authors evaluate various methods in terms of several metrics to show that the proposed procedure gives better uncertainty estimation and more accurate point estimation.\\n\\nIn general it is a good paper, with good applications. The motivation is clear. The key idea of this paper is pretty common in statistical inference. \\n1.\\tIn more practical settings we cannot assume that NN is always trained well. In this case, does the proposed method perform much worse than GP? \\n2.\\tIs this the only proposal for fitting the residuals for uncertainty estimation? Is there any other similar approach? I would like to see more discussions on other related methods and how the idea is different. \\n3.\\tSummarizing the whole procedure in an algorithm could make things clearer.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"# Summary\\n\\nThe authors propose a method for post-hoc correction and predictive variance estimation for neural network models. The method fits a GP to the model residuals, and learns a composite kernel that combines two kernels defined on the input space and the model\\u2019s output space (called RIO, R for residual, and IO for the input-output kernel). The authors suggest that residual means and variances at test points can then be calculated explicitly using predictive distributions from the GP. The authors run a large panel of experiments across a number of datasets, and compare to a number of methods that draw connections between neural networks and GP\\u2019s. In addition, the full method is compared to a number of methods that utilize only some components of the full RIO method. In these experiments, the RIO method generally shows strong performance in both RMSE and NLPD compared to these baselines.\\n\\n# Feedback\\n\\nOverall, this is a neat method. It has the flavor of a number of other composite ML methods that have worked well in the past---e.g., boosting and platt scaling---but is different enough to stand on its own. The experimental results are quite promising.\\n\\nHowever, I am torn about the paper, because the theoretical discussion of the method is quite convoluted and seems either irrelevant or incorrect. I wish that the authors had spent more time with small demonstrations of what the procedure does in some simple settings. This would give practitioners considering the method far more intuition about when they would expect it to work and fail than the current theoretical discussion.\\n\\n## Uncertainty Discussion is Lacking\\n\\nThe motivation and discussion sell this method as an uncertainty quantification method, but almost all of the theoretical development revolves around prediction correction. The methods properties as an uncertainty quantification tool are underdeveloped.\\n\\nThe only theoretical point made about uncertainty estimation is Theorem 2.7, which states that the scalar variance of the GP \\u201cnugget\\u201d is positively correlated with the variance of the NN\\u2019s residuals. Providing a scalar summary of noise is not particularly compelling for a method advertised as a point-prediction uncertainty quantification method. In addition, it is not clear what probability distribution the \\u201ccorrelation\\u201d is defined over. The argument made in the proof seems quite obvious: if a GP is used to model a noisier process (i.e., residuals with a larger variance), it will in some cases classify that variability as independent noise.\\n\\nIf the authors wanted to focus on the properties of their method as an uncertainty quantification tool, they could discuss the assumptions underlying the GP error estimates, and when they would be likely to diverge from practical properties like predictive interval coverage. For example, because the base NN predictor is treated as fixed, it seems that this method ignore uncertainty that stems from the NN fit due to random initialization. Likewise, it seems that this method would not quantify uncertainty from resampling the data and obtaining a new NN predictor. 
)The coverage experiments in the appendix seem to confirm this -- generally, the predictive intervals generally under-cover the predicted values.) It\\u2019s fine if the method doesn\\u2019t quantify these types of uncertainty, but discussion of these types of issues would be far more welcome than the current convoluted theory in Section 2. This discussion might not yield theorems, but it would give practitioners useful guidelines for deciding whether the particular scheme would likely work for this application.\\n\\n## Problems with the Error Correction Argument\\n\\nThe theory section, especially 2.2, was very difficult to parse. First, as a matter of style, a sequence of Lemma and Theorem statements are given without defining most of the notation used therein, and with almost no prose providing context or intuition. In the buildup to the theorems, it is also unclear which assertions about the decompositions of y_i are assumptions about the true data generating process, and which assertions are specifications of a particular GP model.\\n\\nThe substance also has some issues. I think the intention in this section is to get to a rather simple variance decomposition of the labels y. The question is how much variation in y or the residual is represented in the posterior predictive mean of a particular GP. It seems reasonable that in some cases, the structure in the residual may be more amenable to modeling with a stationary GP than the structure in the raw labels y. It is not clear that all of the theoretical complexity here is necessary to make this point.\\n\\nInstead, the authors make a convoluted argument that attempts to establish that the errors from the NN + GP approach will be smaller under very general circumstances. The argument is phrased somewhat ambiguously (it is not clear exactly what is being assumed, and what is corresponds to the specification of a working model), but depending on how one reads this section, the argument makes statements that are either too broad to be correct, or too narrow to be relevant.\\n\\nThe argument decomposes for the raw labels and the residuals into pieces that a GP can \\u201ccapture\\u201d or \\u201crepresent\\u201d, and parts that it cannot. The two equations are:\\n\\ny_i = f(x_i) + g(x_i) + \\\\xi_i\\nR_i = (y_i - h_NN(x_i)) = r_f(x_i) + r_g(x_i) + \\\\xi_i\\n\\nf(.) and r_f(.) represent the portions of the label and residual processes, respectively, that the GP \\\"captures\\\". It is assumed that the GP will model this portion correctly, and leave the \\u201cepsilon-indistinguishable\\u201d portion g(.) or r_g(.) untouched. The argument then assumes that f(.) and r_f(.) will have proportional kernels, and so it is possible to show that the predictions of residuals based on r_f(.) will have smaller predictive variance than predictions based on f(.) as long as the variation represented by r_g(.) is smaller than the variation represented by g(.).\\n\\nOn its face, this argument raises some red flags. Because h_NN(.) is allowed to be an arbitrary function, the argument here should be symmetric. Why can\\u2019t we also get a guaranteed variance reduction by adding h_NN(.) to y rather than subtracting it? Perhaps some of this is captured in the parameter \\\\delta, which quantifies the reduction in variation represented in r_g(.) vs g(.), but the argument that the kernel of r_f(.) can be no larger than the kernel f(.) in terms of trace (that is, the proportionality constant \\\\alpha is not greater than 1) does not make sense. 
If h_NN(x_i) is simply -f(x_i), then these arguments would not go through. At the very least, conditions need to be articulated about the properties of h_NN(.).\\n\\nSome of the strangeness comes from the fact that this is a poor model of most prediction problems, where the main issue with fitting a GP is not \\u201cindistinguishability\\u201d, but misspecification. Consider a process y_i that is non-stationary; say g(.) has a linear trend in some component of x. A GP with a stationary covariance kernel fit to this process (such as RBF) will attempt to explain the variation due to the linear trend with a variance kernel that encodes long-range dependence. On the other hand, if this trend were removed by a base model like an NN, the residuals would have a very different structure (perhaps they would be stationary), and in this case, the GP would fit the data with a very different covariance kernel. \\n\\nUnfortunately, it does not seem like the formalism here can express a notion of misspecification at all. In the theory, it is assumed that the GP will only model the portion of the labels y_i for which it is property specified (in this case, f(.)). This generally does not occur in practice, as in the example above. It might be possible for this to apply in some circumstances, but the authors give no conditions (e.g., that the process y_i be stationary). Based on this assumption, the authors assert that the fitted GP to f(.) and r_f(.) will have the same covariance kernel parameters up to some proportionality constant \\\\alpha. Much of their theoretical argument depends on this proportionality. But this proportionality cannot apply in general, and again, no conditions are given for when we might expect this to hold.\\n\\nIt would be far more compelling if the authors proposed the very standard approach to modeling data via covariance kernels, where one first models non-stationary portions of the data with a base model, then models the correlation in the residuals with something like a GP. This is the bread-and-butter approach in, say, timeseries analysis (see, e.g., the Shumway and Stoffer textbook https://www.stat.pitt.edu/stoffer/tsa4/tsa4.htm), and the approach in this paper could be framed similarly.\\n\\n## Demos I Wish I Had Seen\\n\\nI wish the authors had presented some demonstrations of what the GP does to the fitted values of an NN. Giving a demonstration of how the output kernel modifies predicted values, for example, would give some nice intuition the value added by this portion. I suspect that this step essentially performs something like Platt scaling, but for continuous outcomes, by shrinking predictions together so that they better match the overall distribution of observed labels. Perhaps the mechanism is different. At any rate, it would be useful to understand where the information gain is coming from, and this would be far better expressed concretely in terms of a toy data example than the theoretical arguments that are given.\\n\\n## Coverage Experiments\\n\\nI wish the coverage experiments evaluating predictive intervals were included in the main text. 
As far as uncertainty quantification evaluations go, coverage is one of the few assessments that does not rely on the model itself (unlike NPLD, which uses the model\\u2019s own log-likelihood), and can be phrased as a concrete performance guarantee.\\n\\nHere, the goal for predictive intervals is to cover the true prediction value _at least as often_ as the nominal rate (95% intervals should cover the truth _at least_ 95% of the time), not merely that coverage be \\u201cclose\\u201d to the nominal rate. This asymmetric evaluation gives you a concrete guarantee that the uncertainty estimate is conservative. The coverage experiments show that this method quite systematically under-covers compared to the end-to-end SVGP method, which generally satisfies this coverage property. I think this is important information to include about the model, and generally I think this behavior results from the fact that uncertainty is not propagated from the NN fit. This should be presented clearly in the main text.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper solves an interesting scientific and applied problem: can we construct an algorithm to predict uncertainties without re-training/modifying existing neural network training algos? The authors propose a novel technique (called RIO) which leverages existing neural network but use both the input as well as the output of the neural net as an input to a GP which regresses on the residual error of the neural network. The authors describe the theoretical foundations as well as show empirical results on multiple datasets.\", \"my_thoughts_on_the_paper\": [\"The paper is well written and from section 2.1 it is clear how one could re-produce their method.\", \"The theoretical section 2.2 feels a bit rushed, I think it would be worth sharing the high level intuition behind some of the theory first before going into the details.\", \"Section 2.4 could be more explicit about what \\\"large scale\\\" means. I.o.w. from a practical point of view, the method is only limited by approximate inference for Gaussian processes. Anno 2019 this is ...\", \"The empirical section is particularly strong and contains a wide variety of experiments with detailed analysis.\", \"As a result, I think this is a good piece of scientific work that could be interesting to the wider community.\", \"Although I did not re-run the results, the authors do share full source code for their results.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposes a new framework (RIO) to estimate uncertainty in pretrained neural networks. For this purpose, RIO employs Gaussian Processes whose kernels are calculated by kernel functions of input and output samples and the corresponding target values.\", \"The proposed approach is interesting and the initial results are promising. However, there are various major and minor problems with the paper:\", \"The proposed method can be applied to any machine learning algorithm. It is not clear why you focus on employment of the proposed method for vanilla NNs.\", \"Have you applied RIO for other learning algorithms as well?\", \"Could you please explain more precisely, how you utilize which particular properties of NNs in RIO, and/or how RIO helps quantification and improvement of uncertainty of NNs particularly?\", \"Following equation (7), you claim that \\u201cIn other words, RIO not only adds uncertainty estimation to a standard NN\\u2014it also makes its predictions more accurate, without any modification to its architecture or training\\u201d. Could you please verify and justify how RIO makes predictions of NNs more accurate? In this statement, I guess that you consider the results given in Theorem 2.6. However, you should not that the error functions given in Theorem 2.6 are calculated in a cascaded manner, i.e., by applying a GP at the output of a NN.\", \"The main proposal of the paper is that RIO makes it possible to estimate uncertainty in any pretrained standard NN. In order to verify that proposal, you should improve the experiments, esp. using larger datasets with larger neural networks, including deep neural networks.\"], \"after_rebuttal\": \"I read the comments of the other reviewers and response of the authors. Most of my questions were addressed in the rebuttal, and the paper was improved. However, there is still room to improve the paper with additional analysis using state-of-the-art algorithms on benchmark datasets, and to improve presentation of the work. Therefore, I improve my rating to Weak Accept.\"}"
]
} |
HJem3yHKwH | EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks | [
"Sanchari Sen",
"Balaraman Ravindran",
"Anand Raghunathan"
] | Ensuring robustness of Deep Neural Networks (DNNs) is crucial to their adoption in safety-critical applications such as self-driving cars, drones, and healthcare. Notably, DNNs are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications. In this work, we propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks. EMPIR is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. EMPIR overcomes this limitation to achieve the ``best of both worlds", i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble. Further, as low precision DNN models have significantly lower computational and storage requirements than full precision models, EMPIR models only incur modest compute and memory overheads compared to a single full-precision model (<25% in our evaluations). We evaluate EMPIR across a suite of DNNs for 3 different image recognition tasks (MNIST, CIFAR-10 and ImageNet) and under 4 different adversarial attacks. Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs. | [
"ensembles",
"mixed precision",
"robustness",
"adversarial attacks"
] | Accept (Poster) | https://openreview.net/pdf?id=HJem3yHKwH | https://openreview.net/forum?id=HJem3yHKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"xbVi20gvP",
"HklscZ5doS",
"SJx1vWc_sr",
"B1x6WxqOoB",
"HJgE7loI5H",
"BkgEnMcyqS",
"H1xg6DlTFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736639,
1573589395327,
1573589335175,
1573588997292,
1572413468176,
1571951276231,
1571780536131
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1946/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1946/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1946/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1946/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1946/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1946/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposed to apply emsembles of high precision deep networks and low precision ones to improve the robustness against adversarial attacks while not increase the cost in time and memory heavily. Experiments on different tasks under various types of adversarial attacks show the proposed method improves the robustness of the models without sacrificing the accuracy on normal input. The idea is simple and effective. Some reviewers have had concerns on the novelty of the idea and the comparisons with related work but I think the authors give convincing answers to these questions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their positive comments. As correctly pointed out by the reviewer, this work was intended to showcase an alternative low-cost approach to increasing the robustness of deep learning models through the use of low-precision models, without sacrificing accuracy on the original unperturbed examples. Also, as discussed in our response to reviewer 1, other defense strategies like adversarial training, input gradient regularization, defensive distillation and full-precision ensembles suffer from limitations of increased training time, increased model size or increased inference time. However, the development of several hardware platforms and software libraries supporting low-precision operations has decreased the training and inference times for low-precision models allowing us to achieve increased robustness with minimal training, inference and model size overheads.\\n\\nWe would like to clarify that the adversarial attacks on the low-precision models weren\\u2019t performed at full-precision. The attacked model was a low precision model utilizing quantized weights and activations. However, the gradients used in the attack generation were not quantized, allowing the adversary to launch a stronger attack. We have updated the paper to include this clarification.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their comments. Please find the detailed responses to the individual concerns below.\", \"suitability_for_iclr\": \"We believe that ICLR is the right venue for our paper for two main reasons. First, the high cost associated with ensemble models is often ignored by the machine learning community when considering its advantages in terms of increased performance and robustness. Their high memory and compute footprint can even be prohibitive on resource-constrained devices such as IoT edge devices and wearables. As an alternative, we propose mixed-precision ensembles and illustrate their advantages in terms of both higher robustness and low compute and memory overhead. Second, over the past few years, ICLR has published many papers on low precision networks [1][2]. This work builds on the existing works and demonstrates an additional advantage of these low precision networks.\\n\\n[1] Aojun Zhou et al. \\u201cIncremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights\\u201d. ICLR 2017.\\n[2] Angus Galloway et al. \\u201cAttacking Binarized Neural Networks.\\u201d ICLR 2018.\", \"additional_baselines_for_benchmarks_other_than_mnist\": \"We did not present additional baselines for the CIFAR-10 and AlexNet benchmarks as they did not yield networks with higher adversarial accuracies and <5% drop in unperturbed accuracies compared to the full-precision baselines. To illustrate the point further, we present the results for CIFAR-10 with defensive distillation and input gradient regularization below. The distillation process was implemented with a softmax temperature of T = 100, the gradient regularization was realized with a regularization penalty of lambda = 200.\\n\\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\\nDefense strategy | Unperturbed Accuracy | CW | FGSM | BIM | PGD | Average Adversarial\\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\\nDefensive distill. | \\t63.84 | 31.4 | 14.4 | 5.83 | 4.08 | 13.93\\nInp. Grad. Reg. | \\t74.91 | 12.58 | 10.06 | 12.72 | 10.43 | 11.45\", \"comparison_with_other_mechanisms_proposed_as_a_defense_for_adversarial_attacks\": \"Among the plethora of works on increasing robustness, we have chosen to compare our work with some of the most cited and popular defense strategies, namely, FGSM based adversarial training, defensive distillation and input gradient regularization, due to page restrictions and implementation efforts. However, as requested by Reviewer 4, we have also updated the paper to include the comparison with PGD-based adversarial training. We would like to highlight that our approach stands out from previous work in terms of drastically lower overheads. Adversarial training, input gradient regularization and defensive distillation all increase the overall training time significantly, while ensembling with full-precision models increase the overall model size and inference time several-fold. In contrast, with the development of hardware that natively supports low-precision operations and the development of libraries that can take advantage of these low precision computation engines (Ex: CUDA 10 on Turing GPUs https://devblogs.nvidia.com/cuda-10-features-revealed/), the training and inference times for low-precision models are decreasing remarkably (https://devblogs.nvidia.com/int4-for-ai-inference/). 
We exploit this advantage of low-precision models to achieve increased robustness with minimal increases in overall training and inference times (training and executing two low precision models in addition to one full-precision model).\"}",
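For reference, the input-gradient-regularization baseline in the table above penalizes the gradient of the training loss with respect to the input. A hypothetical TF2 sketch of the loss definition (lambda = 200 as reported; the image-shaped reduction axes and all names are assumptions, and training this end-to-end would require second-order gradients):

```python
import tensorflow as tf

def grad_reg_loss(model, x, y, lam=200.0):
    # Cross-entropy plus an L2 penalty on the gradient of the
    # per-example loss w.r.t. the input (assumes NHWC image inputs).
    with tf.GradientTape() as tape:
        tape.watch(x)
        ce = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    g = tape.gradient(ce, x)  # input gradient of the loss
    penalty = tf.reduce_mean(tf.reduce_sum(tf.square(g), axis=[1, 2, 3]))
    return tf.reduce_mean(ce) + lam * penalty
```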
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for their comments. Please find the detailed responses to the individual concerns below.\", \"related_efforts_on_ensembles_and_unique_contribution_of_this_work\": \"We thank the reviewer for pointing out the additional related work; we have updated the related work section to include these works. However, we feel that our work makes a significant contribution above the previous work when it comes to computationally-efficient defenses. Previous approaches either increase the training time greatly (adversarial training, input gradient regularization and defensive distillation), or increase the inference memory and compute footprint several-fold (full-precision ensembles). As a result, these approaches are inapplicable to resource-constrained systems (like IoT edge devices and wearables). This is the problem addressed in our work.\\n\\nRecent years have seen a tremendous growth in efforts towards DNNs optimized for computational efficiency [1] [2]. Following the same motivation, this work demonstrates a computationally-efficient approach of utilizing mixed-precision ensembles to increase robustness while maintaining unperturbed accuracy. Advances in hardware that natively supports low-precision operations and software libraries that can take advantage of these low precision computation engines (Ex: CUDA 10 on Turing GPUs https://devblogs.nvidia.com/cuda-10-features-revealed/) infact allow low-precision models to execute much faster than their full-precision counterparts (https://devblogs.nvidia.com/int4-for-ai-inference/), thereby restricting the inference time overheads of EMPIR to <25%. Further, unlike other popular non-ensemble defense techniques like adversarial training, input gradient regularization and defensive distillation, our approach doesn\\u2019t increase training time. The overall idea is simple, but effective. We believe that its successful implementation, as demonstrated in this paper, is an important step towards realizing computationally efficient and robust DNNs. \\n\\n[1] A.G Howard et al. \\u201cMobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications\\u201d. ArXiv, abs/1704.04861 (2017).\\n[2], Forrest N. Iandola et al. \\u201cSqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size.\\u201d ArXiv abs/1602.07360 (2017).\", \"comparison_with_pgd_adversarial_training\": \"Since there is a plethora of efforts on increasing robustness, we restricted the comparisons to a few popular representative works due to space and time restrictions. FGSM adversarial training results were presented instead of PGD adversarial training because it converges faster. However, as requested by the reviewer, we have updated the paper to include the following results on PGD adversarial training [3], and we will include additional results in the final paper. The adversarial training was performed on adversarial examples generated with a maximum possible perturbation of epsilon = 0.3.\\n\\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\\nNetwork | Approach | Unperturbed Accuracy | CW | FGSM | BIM | PGD | Average Adversarial\\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\\nCIFARconv | PGD Adv. 
Train | 73.55 | 12.62 | 12.45 | 10.97 | 8.52 | 11.14\\nCIFARconv | EMPIR | 72.56 | 48.51 | 20.61 | 24.59 | 13.34 | 26.76\\n\\nAs evident from the above values, for the CIFARconv benchmark, EMPIR achieves a higher average adversarial accuracy than PGD adversarial training. EMPIR is able to achieve this improvement with zero training overhead, whereas PGD adversarial training increases the training time significantly because of the need to construct adversarial examples during training (One PGD adversarial training epoch is 22x slower than a clean training epoch on an RTX 2080 Ti GPU). \\n\\n[3] A. Madry et al. \\u201cTowards Deep Learning Models Resistant to Adversarial Attacks.\\u201d ArXiv abs/1706.06083 (2017).\"}",
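The PGD baseline added above constructs its training examples with the standard Madry et al. attack. A framework-agnostic numpy sketch, where grad_fn is assumed to return the gradient of the loss with respect to the input; eps = 0.3 matches the response, while the step size and iteration count are assumptions:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.3, alpha=0.01, steps=40):
    # L-infinity PGD: random start, iterated signed-gradient ascent,
    # and projection back into the eps-ball and valid pixel range.
    x_adv = np.clip(x + np.random.uniform(-eps, eps, x.shape), 0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep pixels valid
    return x_adv
```

PGD adversarial training then simply trains on `pgd_attack(x, ...)` batches, which is why each epoch costs roughly `steps` extra forward/backward passes, consistent with the 22x slowdown reported above.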
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper suggests using ensemble of both full-precision and low-bits precision models to defense adversarial examples.\\n\\nFrom methodological point of view, this idea is quite straightforward and not novel, since there are already several works that applied ensemble methods to improve the robustness of NNs, including the Strauss et.al 2017 and (the following references are not included in the manuscript)\\n\\\"Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong\\nWarren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song\\\" \\n\\\"Ensemble Adversarial Training: Attacks and Defenses\\nFlorian Tram\\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel\\\" .\\n\\\"Improving Adversarial Robustness via Promoting Ensemble Diversity\\nTianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu \\\" ICML 2019\\n\\nThough these methods only considered combining full-precision models, the idea is the same in essence and let the low-bits networks involve into the ensemble is quite natural and straightforward. So I don't think the methodology contribution of this paper is enough for publication.\\n\\nWhen checking the empirical results, the compared baselines miss a very common-used and strong baseline PGD adversarial training. And also the performance of this ensemble is not significant. \\n\\nConsidering the weakness of the paper both in methodology development and empirical justification, this work does not merit publication from my point of view.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I think the paper reads well. It proposes to use ensembles of full precision and low-precision models in order to boost up robustness to adversarial attacks. It relies on the fact that low precision models are known to be more robust to adversarial attacks though performing poorly, while ensembling generally boosting up performance.\\n\\nI think the premise of the paper is quite clear, and the results seem to be intuitive. At a high level one worry that I have is if ICLR is the right conference for this work. \\n\\nI would have expected maybe a more thorough empirical exploration. E.g. using resnets for ImageNet rather than AlexNet. Providing more baselines for the larger (and more reliable datasets) rather than MNIST which might be a bit misleading. I think the work does a decent job at looking at different number of components in the ensemble and analyzing the proposed method, but maybe not enough comparing and exploring other mechanism proposed as a defense for adversarial attacks. \\n\\nHowever I think the message is clear, the results seem decent and I'm not aware of this being investigated in previous works.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose an ensemble of low-precision networks as a solution to providing a neural network with solid adversarial robustness whilst also providing good accuracy.\\n\\nI found the paper easy to read with a high quality introduction and background, the results are very convincing and the idea is simple but intriguing. I think this will shift the community towards seriously considering low precision networks a partial solution to adversarial attacks (alongside adversarial training).\\n\\nI could not work out from the paper whether the adversarial attacks on the low-precision networks were performed at full precision. I.e. someone could clone the low-precision networks, cast them to full precision, perform an adversarial attack like FGSM and then evaluate on the quantized network. It would be good to clarify this (or make it clearer in the text how you handle this).\"}"
]
} |
HklXn1BKDH | Learning To Explore Using Active Neural SLAM | [
"Devendra Singh Chaplot",
"Dhiraj Gandhi",
"Saurabh Gupta",
"Abhinav Gupta",
"Ruslan Salakhutdinov"
] | This work presents a modular and hierarchical approach to learn policies for exploring 3D environments, called `Active Neural SLAM'. Our approach leverages the strengths of both classical and learning-based methods, by using analytical path planners with a learned SLAM module, and global and local policies. The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies). Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies. Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our approach over past learning and geometry-based approaches. The proposed model can also be easily transferred to the PointGoal task and was the winning entry of the CVPR 2019 Habitat PointGoal Navigation Challenge. | [
"Navigation",
"Exploration"
] | Accept (Poster) | https://openreview.net/pdf?id=HklXn1BKDH | https://openreview.net/forum?id=HklXn1BKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"OYMnCO123b",
"BJgKgprsjr",
"B1xj41B9sS",
"Bklq_0EqsH",
"SylTeTEqsH",
"SyeU6jNqor",
"r1e0iqQIqB",
"SklnIsw2FB",
"S1lLOLkZYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736608,
1573768433445,
1573699379088,
1573699186111,
1573698805373,
1573698494293,
1572383397770,
1571744596464,
1570989678395
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1945/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1945/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1945/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1945/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1945/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1945/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1945/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1945/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper presents a method for visual robot navigation in simulated environments. The proposed method combines several modules, such as mapper, global policy, planner, local policy for point-goal navigation. The overall approach is reasonable and the pipeline can be modularly trained. The experimental results on navigation tasks show strong performance, especially in generalization settings.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"After rebuttal\", \"comment\": \"Thank you for your answers, they address my questions.\"}",
"{\"title\": \"Author Response\", \"comment\": \"We thank the reviewers for the helpful feedback. The reviewers have appreciated our realistic experimental design, strong generalization results, and ablation studies. They found our experiments convincing in their comparisons with the state of the art. We are glad that our effort to tackle real-world aspects of the exploration problem in the context of navigation was appreciated.\\n\\nThe reviewers had requested some clarifications and had some suggestions about related work. We provide these clarifications in individual responses to the reviewers below and have made minor revisions in the paper accordingly.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for valuable comments and suggestions. We address the concerns and answer the questions below:\", \"regarding_comparison_with_cmp\": \"CMP was originally designed for the pointgoal navigation task and trained with Imitation Learning. We provided a comparison with CMP on the pointgoal task in the supplementary material. We also tried running CMP for the exploration task, however, it did not perform well in the initial set of experiments due to multiple reasons:\\na) Firstly, there is no ground-truth trajectory in the exploration task, so CMP needs to be trained with reinforcement learning rather than imitation learning. As the reviewer pointed out, CMP uses VIN as a differentiable planner and VINs do not perform as well with reinforcement learning. For example, the results in the original VIN paper show that the performance of VIN drops from 99.3% (using imitation learning) to 82.5% (using reinforcement learning) when trained on small 16x16 mazes (see Table 1 for IL results and Table 3 in Appendix for RL results: https://arxiv.org/pdf/1602.02867.pdf). We are working with orders of magnitude larger maps which makes it even more difficult for VINs to learn planning using reinforcement learning.\\nb) Another complication is that both CMP and VINs were originally designed for and tested in a grid-based environment with 90-degree rotation and no motion noise. Switching to 10-degree rotations with motion noise creates aliasing effects in the map which makes it difficult to learn fine-grained navigation. We consulted with an author of CMP to ensure that our implementation is correct.\", \"regarding_related_literature_on_hierarchical_rl\": \"Thank you for the suggestions. We have added the relevant literature to the related work section.\\n\\n\\n> The figures 1 and 2 have been completely redone, but they are not completely clear.\\nWe have updated the figures to add more labels and correct some typos in the revised version of the manuscript.\", \"regarding_the_role_of_the_sensor_output\": \"We understand the source of confusion. The reviewer is correct that sensors normally provide relative positions. However, they provide the position relative to the starting position of the robot, but we need the sensor\\u2019s estimate of the position relative to the position at the last step for aligning egocentric map predictions between consecutive frames. In order to predict the pose change, we first align the egocentric map predictions at consecutive steps (using relative pose from sensor\\u2019s estimate) and then pass it through the learned pose estimator to refine the sensor\\u2019s estimate. The intuition is that by looking at the egocentric predictions of the last two frames, the pose estimator can learn to predict the small translation and/or rotation that would align them better. The pose estimator is trained using supervised learning. More details of the pose estimation model are provided in Appendix D.\\n\\n\\n> The authors mention that unexplored area is considered as free space for planning. What consequences did this have in case of unexplored obstacles? I guess the problem was delegated to the local policy, which needed coping with these issues?\\nThe reviewer is correct, the local policy learns to avoid obstacles too close to the agent which are not visible in the frame and thus unexplored. 
We briefly discuss this in Section 6.1 Local Policy ablations.\", \"regarding_pointgoal_navigation_task\": \"The central problem we are tackling in this paper is that of exploration in realistic settings (with realistic pose noise, etc). We moved the PointGoal results into supplementary as we felt that they distract the reader from the main message of the paper, and because we wanted to stay as close as possible to the soft 8-page limit rather than to the hard 10-page limit. We will further emphasize the PointGoal results in the main body of the paper. The supplementary material contains all relevant details already.\"}",
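To illustrate the pose-refinement idea described in the response above, here is a minimal sketch; the module name, architecture, and affine-warp parameterization are assumptions made for illustration, not the authors' implementation. Two consecutive egocentric map predictions are pre-aligned with the sensor's relative pose estimate, and a small CNN reads the residual misalignment as a pose correction, trained with supervision against the ground-truth pose change.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseRefiner(nn.Module):
    """Predict a correction to the sensor's relative pose (dx, dy, dtheta)
    from two consecutive egocentric map predictions of shape (N, 1, H, W)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 3))

    def forward(self, map_prev, map_curr, sensor_pose):
        # Warp the previous map by the sensor's pose estimate; translation
        # here is in normalized grid coordinates for simplicity (a real
        # implementation would convert metric pose to map cells).
        dx, dy, th = sensor_pose.unbind(-1)
        cos, sin = th.cos(), th.sin()
        theta = torch.stack([cos, -sin, dx, sin, cos, dy], -1).view(-1, 2, 3)
        grid = F.affine_grid(theta, map_prev.size(), align_corners=False)
        warped = F.grid_sample(map_prev, grid, align_corners=False)
        # If the sensor pose were exact, `warped` would match `map_curr`;
        # the network turns the residual misalignment into a correction.
        return sensor_pose + self.net(torch.cat([warped, map_curr], dim=1))
```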
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for helpful feedback. We address your concerns below.\", \"q\": \"Experiments are small, more challenging domains would yield negative results.\", \"a\": \"On the contrary, we believe our experimental setup is as realistic and challenging as it gets, significantly more so than any prior work in the area:\\nSimulation environments are scans of real environments, so they retain the visual complexity.\\nActuation noise models are derived from real robot runs (vs no noise, or artificial Gaussian noise in past works).\\nThe model generalizes to new Matterport domains out-of-the-box (Tables 1 and 3).\\nThe model also works in the real-world (which in our opinion is the most challenging and useful domain).\", \"regarding_fast_marching_method\": \"It is a simple shortest path planning algorithm which we implement using a few lines of Python code using an off-the-shelf package. One can also use other shortest path algorithms, such as A* or Djikstra\\u2019s instead of the Fast Marching Method.\", \"regarding_grus\": \"Recurrent layers are commonly used in navigation models. The motivation is that the agent needs to have some memory of prior observations to navigate effectively. In our case, we need memory to get feedback of obstacles not visible in the current frame. Among the types of recurrent units, we found that both LSTMs and GRUs gave similar performance, and we chose GRU as it was slightly faster.\", \"regarding_the_definition_of_exploration\": \"\\u2018Exploration\\u2019 is a slightly overloaded term having different meanings in the context of navigation and in the context of exploration-exploitation trade-off in RL. We would like to point out that the definition of exploration used in the paper is not our definition but has been used in the navigation literature for over two decades [for eg. 1 - 6]. The same definition of exploration is also used in recent machine learning papers tackling exploration in the context of navigation, for example, Chen et al. [7] (ICLR 2019), Fang et al. [8] (CVPR 2019). We use the same definition to keep the terminology consistent in the literature.\\n\\nRegarding references on exploration in RL, thanks for pointing out the error. We agree that the suggested references are much more relevant, and have revised the paper to correct it. The Schmidhuber, 91 reference was originally added for curiosity-based exploration in RL but was incorrectly referenced over revisions.\\n\\nThanks for pointing out the typos, those have been corrected in the revision.\\n\\n[1] B. Yamauchi, \\u201cA Frontier Based Approach for Autonomous Exploration,\\u201d in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), 1997, pp. 146\\u2013151.\\n\\n[2] F. Amigoni and A. Gallo, \\u201cA Multi-Objective Exploration Strategy for Mobile Robots,\\u201d in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2005, pp. 3861\\u20133866\\n\\n[3] W. Burgard, M. Moors, C. Stachniss, and F. Schneider, \\u201cCoordinated Multi-Robot Exploration,\\u201d IEEE Transactions on Robotics, vol. 21, no. 3, pp. 376\\u2013 386, 2005.\\n\\n[4] R. Sim and N. Roy, \\u201cGlobal A-Optimal Robot Exploration in SLAM,\\u201d in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), April 2005, pp. 661\\u2013666.\\n\\n[5] F. 
Amigoni, \\u201cExperimental Evaluation of Some Exploration Strategies for Mobile Robots,\\u201d in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2008, pp. 2818\\u2013 2823.\\n\\n[6] Dirk Holz, Nicola Basilico, Francesco Amigoni, and Sven Behnke. Evaluating the efficiency of frontier-based exploration strategies. In ISR 2010 (41st International Symposium on Robotics) and ROBOTIK 2010 (6th German Conference on Robotics), pages 1\\u20138. VDE, 2010.\\n\\n[7] Tao Chen, Saurabh Gupta, and Abhinav Gupta. Learning exploration policies for navigation. In ICLR, 2019\\n\\n[8] Kuan Fang, Alexander Toshev, Li Fei-Fei, and Silvio Savarese. Scene memory transformer for embodied agents in long-horizon tasks. In CVPR, 2019.\"}",
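Since the response above notes that the Fast Marching Method planner amounts to a few lines of Python with an off-the-shelf package, here is a sketch along those lines using scikit-fmm; the grid layout and greedy waypoint selection are illustrative rather than the paper's exact planner (and, matching the response to Reviewer #2, unknown cells would simply be marked traversable):

```python
import numpy as np
import skfmm  # pip install scikit-fmm

def distance_to_goal(traversable, goal):
    # Geodesic distance from every free cell to the goal; obstacle cells
    # are masked out so the front cannot propagate through them.
    phi = np.ones(traversable.shape)
    phi[goal] = -1  # sign change places the propagating front at the goal
    phi = np.ma.MaskedArray(phi, mask=~traversable)
    return skfmm.distance(phi)

def next_waypoint(dist, pos):
    # Greedy short-term goal: the 8-neighbor geodesically closest to the goal.
    r, c = pos
    nbrs = [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    nbrs = [(i, j) for i, j in nbrs
            if 0 <= i < dist.shape[0] and 0 <= j < dist.shape[1]
            and not np.ma.is_masked(dist[i, j])]
    return min(nbrs, key=lambda p: dist[p])
```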
"{\"title\": \"Thank you.\", \"comment\": \"We thank the reviewer for the motivational feedback! We are glad that our efforts to tackle the challenges involved in the real-world aspects of mobile robotics and navigation are appreciated.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper describes ANM, active neural mapping, to learn policies for efficiently exploring 3d environments. The paper combines classical methods with learning based approaches, allowing the final system to work competitively with raw sensory inputs without requiring unreasonable amounts of training samples.\\n\\nI think this is a well-written \\\"ML-systems paper\\\" and I'm especially happy that real-world aspects of mobile robots are taken into account. I was able to follow the overall idea of the approach as well as the description of the three components. I also think that the experiments are well done, showing convincingly ANMs competitive performance and demonstrate, through the ablation studies, the importance of its constituting parts.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new architecture and policy for coverage maximization (which the authors call exploration). Overall the paper is well written, but I have some major concerns. However I am not an expert in navigation / robotics so i have given myself the lowest confidence for this paper.\\n\\nMy highest level concern is that this approach seems extremely complicated (eg Figs 1 and 2), as well as employing several sub-algorithms as part of the procedure (eg Fast Marching Method). It's not clear to me why any of the components are necessary, though I do appreciate the ablation study. But even within that ablation not all components are ablated (e.g., why GRU units?). My experience suggests that extremely complicated architectures such as this one are brittle and don't generalize (and it goes against Sutton's 'bitter lesson'). The fact that the experiments are so small does not help. Perhaps more challenging domains would yield negative results. Further, how tuned are the baselines? And it seems that the baselines are general RL agents and not optimized for coverage maximization like this architecture. The authors say \\\" We will also open-source the code\\\", has this been done? Open-sourcing would help others reproduce the results since as it stands I think this is too complicated to be reproduced. The level of intricacy makes me think that perhaps this paper is more suited to a robotics conference.\\n\\nSecondly, the paper mentions exploration a lot, but it's not clear to me how this is a principled exploration strategy. Exploration is not in fact defined as \\\"visit as much area as possible\\\" or \\\"maximize the coverage in a fixed time budget\\\", as the authors suggest. In fact the sentences \\\"We follow the exploration task setup proposed by Chen et al. 2019 where the objective is to maximize the coverage in a fixed time budget. [The] coverage is defined as the total area in the map known to be traversable\\\" appears twice in this manuscript. Exploration is better defined within the context of the explore-exploit tradeoff, whereby an agent must sometimes take sub-optimal actions in order to learn more about the environment in the hope of possibly increasing it's long-term return. Conflating 'coverage-maximization' and exploration is confusing. I think the paper should be rewritten to de-emphasize exploration and instead talk about coverage-maximization, which is more accurate.\\n\\n\\\"Exploration has also been studied more generally in RL for faster training (Schmidhuber, 1991).\\\" I certainly would *not* cite Schmidhuber 91 as the canonical reference of exploration in RL. Far, far, more appropriate would be either the Sutton+Barto RL book (which doesn't do a great job covering exploration but is at least a decent overall reference) or the works of Auer 2002 and Jaksch et al 2010, and related papers. The Schmidhuber citation should be removed and replaced with a few that actually make sense in this context.\\n\\nI don't understand how the goals (especially long-term) are generated and trained. Is the long-term goal trained using the reward signal? 
This is not properly explained.\\n\\n\\\"and summarize major these below\\\" typo, probably should be themes or theses?\\n\\n\\\"agnet pose\\\" typo.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper describes a method for visual robot navigation in simulated environments. In terms of overall objectives and targeted reasoning, the current approaches can be roughly divided into two groups: i) learning tasks requiring high-level reasoning for navigation involving the detection and discovery of objects and their affordances and eventually also requiring to process language input, and ii) simpler navigation task involving geometry and the detection of free (navigable space): point goal, maximizing coverage etc. The former target more complex problems but the agents are more difficult (currently up to impossible) to transfer to real environments, whereas the latter directly target problems which can currently realistically used in real world scenarios.\\n\\nThe paper is of the second group, and addresses one of the currently investigated problems in robot navigation and mapping, namely whether learned navigation is superior to traditional planning algorithms, and whether the two different approaches can be integrated. It proposes to separate the task into long-term and short-term goals, which is not new per se, but the proposed formulation is quite interesting. In particular, the integration of the \\u201chandcrafted\\u201d planar (front propagation) into the learned framework solves a couple of issues with sample efficiency of learned methods, while still keeping some flexibility of learning over the 100% traditional approaches.\\n\\nI will be upfront \\u2013 I already reviewed an earlier version of this paper for NeurIPS 2019, where this paper unfortunately did not pass. I was actually a favorable reviewer at this time and was defending it. The paper has been improved since and I would be happy to see it pass. I still have a couple of questions, some of which are similar to the ones I raised in the NeurIPS review (others have been addressed since).\\n\\nWhile I do agree that the targeted tasks might be considered less exciting then tasks involving high level semantics, I do also think that these tasks are far from solved as soon as we try to implement them in real life scenarios. I do think that the proposed paper is an interesting step forward.\\n\\nThe advantages of the proposed method are \\u201cbought\\u201d with a couple of key design choices, in particular the handcrafted non-differentiable long-term path planner. The downside of this is that the loss signals can\\u2019t be backpropagated through the planner, which restricts the mapping module to very simple mapping information, basically free /navigational space. End-to-end training of navigation could in principle learn to map objects and affordances which are discovered through the task and not hardcoded or even learned with supervision, which also must be known in advance. This means that the contribution is limited to simpler navigational tasks like the tested exploration and PointGoal. In contrast, other work from the literature uses differentiable planners (eg cited CMP (Gupta et al 2017), using value Iteration Networks (cited Talmar et al. 
2016)), which allows fine-tuning.\\n\\nThe mapping network, which is learned with supervision, is a general encoder-decoder network which needs to translate from projective first-person views to egocentric bird\\u2019s-eye views. It thus needs to learn projective geometry from data, although projective geometry could be used as structure for the network, given camera calibration, which has been done in other work:\\n\\n-\\tChen et al., 2019\\n-\\tGupta et al., 2017\\n-\\tHenriques et al., 2018\\nAnd a couple of others.\\n\\nSeveral improvements have been made since the NeurIPS submission, some of which I had addressed in my review. The experiments are quite convincing in their comparisons with the state of the art, in particular the generalization performances:\\n- generalization from Gibson (training) to Matterport (testing)\\n- generalization from exploration (training) to PointGoal (testing).\\nA couple of the results have been removed from the NeurIPS submission; unfortunately, I think they should be kept in.\\n\\nI appreciated the realistic sensor model fitted to real data measured with a Locobot robot, and the ablation studies, which indicated the contributions of the different planner modules and of pose estimation. The role of the short-term planner has been made clearer in the new paper.\\n\\nI found it interesting that the stellar performance at the Habitat AI challenge was removed from the new paper \\u2013 this method (or at least a preceding version) won the challenge. But I do understand that this choice was motivated by some remarks of the NeurIPS fellow reviewers regarding the simplicity of the PointGoal task of the challenge.\\n\\nA couple of less positive aspects, and questions:\\n\\nOn the downside, and following the remarks on literature above, I still think that the results should be compared with CMP, the main competitor of this method. \\nI think this is the main shortcoming of the paper, in particular since CMP is able to perform end-to-end training because the planner is differentiable (value iteration networks, NIPS 2016).\\n\\nThe literature w.r.t. hierarchical planning is very far from exhaustive and lots of work is missing, including recent work \\n\\nEmbodied Question Answering, Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra, CVPR 2018 \\n(and several follow-up papers)\\n\\nbut also quite classical work like the literature around the options framework, with the following starting point:\\n\\nR.S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181\\u2013211, 1999.\\n\\nAnd many other papers.\\n\\nThe figures 1 and 2 have been completely redone, but they are not completely clear. In particular, several intermediate representations/maps/images are not commented or labeled; they should be annotated with the symbols from the text. \\n\\nThe role of the sensor output is not clear. Sensors normally provide relative positions \\u2026 but the text seems to indicate absolute pose. Some details are lacking.\\n\\nIn \\u201c\\u2026 to predict the pose change between the two maps \\u2026\\u201d it is unclear what is done here. Is this self-supervision?\\n\\nThe authors mention that unexplored area is considered as free space for planning. What consequences did this have in the case of unexplored obstacles?
I guess the problem was delegated to the local policy, which needed to cope with these issues?\\n\\nThe last paragraph before the conclusions briefly mentions experiments and comparisons but without giving any details. This is unfortunate, since there is still space available (the paper length is 8.5 pages).\"}"
]
} |
rklMnyBtPB | Adversarial Robustness Against the Union of Multiple Perturbation Models | [
"Pratyush Maini",
"Eric Wong",
"Zico Kolter"
] | Owing to the susceptibility of deep learning systems to adversarial attacks, there has been a great deal of work in developing (both empirically and certifiably) robust classifiers, but the vast majority has defended against single types of attacks. Recent work has looked at defending against multiple attacks, specifically on the MNIST dataset, yet this approach used a relatively complex architecture, claiming that standard adversarial training can not apply because it "overfits" to a particular norm. In this work, we show that it is indeed possible to adversarially train a robust model against a union of norm-bounded attacks, by using a natural generalization of the standard PGD-based procedure for adversarial training to multiple threat models. With this approach, we are able to train standard architectures which are robust against l_inf, l_2, and l_1 attacks, outperforming past approaches on the MNIST dataset and providing the first CIFAR10 network trained to be simultaneously robust against (l_inf, l_2, l_1) threat models, which achieves adversarial accuracy rates of (47.6%, 64.3%, 53.4%) for (l_inf, l_2, l_1) perturbations with epsilon radius = (0.03,0.5,12). | [
"adversarial",
"robustness",
"multiple perturbation",
"MNIST",
"CIFAR10"
] | Reject | https://openreview.net/pdf?id=rklMnyBtPB | https://openreview.net/forum?id=rklMnyBtPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"MtIbxeI2r",
"SyeDf91Ojr",
"r1g1oFy_ir",
"BJeDyFyOsr",
"HklqfdkdjB",
"H1g-LzbbiB",
"rJxZsWWWiS",
"SJxeVZ-bjS",
"rJgRBxZ-iB",
"SklAX8hm5H",
"HylxMAzTtB",
"SJl0iGAcFH",
"HkxGjYo5FH",
"SkexvzFOKB",
"H1xW1_ptdS",
"HylF9H3F_S",
"rkeCf_Ukur",
"rkeBvmUJOr",
"S1ehCh5pPr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_comment",
"official_review",
"comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798736579,
1573546511305,
1573546391444,
1573546206891,
1573546001859,
1573093961368,
1573093784971,
1573093671746,
1573093445574,
1572222502357,
1571790344077,
1571639973747,
1571629466116,
1571488343690,
1570523096659,
1570518416592,
1569839126434,
1569837917184,
1569725652041
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1944/AnonReviewer3"
],
[
"~Anthony_Wittmer1"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1944/AnonReviewer2"
],
[
"~Anthony_Wittmer1"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"~Anthony_Wittmer1"
],
[
"ICLR.cc/2020/Conference/Paper1944/Authors"
],
[
"~Anthony_Wittmer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Thanks to the authors for submitting the paper and providing further explanations and experiments. This paper aims to ensure robustness against several perturbation models simultaneously. While the authors' response has addressed several issues raised by the reviewers, the concern on the lack of novelty remains. Overall, there is not enough support among the reviewers for the paper to be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updated paper\", \"comment\": [\"As noted above, we have made the following changes to the paper to reflect the feedback you have provided:\", \"We have added a substantial discussion on the different risk tradeoffs between threat models that the various algorithms obtain. In summary, the simpler generalizations result in unclear tradeoffs, while MSD consistently minimizes worst-case performance over the union. This was added to the last paragraph of Section 5.\", \"We have updated Figures 2 and 3 to allow for the more systemic comparison to baseline defenses by merging them with the corresponding figures in the Appendix, as suggested.\"]}",
"{\"title\": \"Updated paper\", \"comment\": [\"As noted above, we have made the following changes to the paper to reflect the feedback you have provided:\", \"We have added the experiment requested, where we show model performance on CIFAR-10-C. This is in Appendix E, and reflects our previous comment.\", \"We have adjusted the text to make it clear that Schott et al. use the fact that the L-infinity defense overfits to L-infinity perturbations as motivation for their paper, to avoid the misunderstanding that you brought up.\"]}",
"{\"title\": \"Updated paper\", \"comment\": \"As noted above, we have made the following changes to the paper to reflect the feedback you have provided:\\n\\n+ We have added the experiment requested, where we train on Linfinity and L1, while evaluating on L2. This is in Appendix E, and reflects the additional discussion in the comment above.\"}",
"{\"title\": \"Revisions to the paper\", \"comment\": \"In light of the reviewer feedback, we have made a number of changes to the paper which we outline in this comment. These changes reflect nearly all of the constructive suggestions that we have received from the reviewers.\\n\\nOf course, we are aware that there seems to be a fundamental disagreement over the importance of evaluating an adversarial defense outside of its threat model, which we discussed in a longer, earlier comment. Despite this being a completely non-standard metric for evaluating adversarial defenses throughout the literature, we have gone ahead and incorporated all of the suggestions for adding these experiments into the updated paper. Note that all of the relevant work that we compare to does *not* do any evaluation of this sort, so it is rather unprecedented for this to suddenly become a necessary requirement.\", \"summary_of_changes\": [\"We have added a substantial discussion on the different risk tradeoffs between threat models that the various algorithms obtain, as requested by Reviewer 2. In summary, the simpler generalizations result in unclear tradeoffs, while MSD consistently minimizes worst-case performance over the union. This was added to the last paragraph of Section 5.\", \"We have updated Figures 2 and 3 to allow for the more systemic comparison to baseline defenses by merging them with the corresponding figures in the Appendix, as requested by Reviewer 2.\", \"We have added the experiment requested by Reviewer 3, where we show model performance on CIFAR-10-C. This is in Appendix E, and reflects the additional discussion we've had with Reviewer 3 on OpenReview.\", \"We have adjusted the text to make it clear that Schott et al. use the fact that the L-infinity defense overfits to L-infinity perturbations as motivation for their paper, to avoid the misunderstanding that Reviewer 3 brought up.\", \"We have added the experiment requested by Reviewer 1, where we train on Linfinity and L1, while evaluating on L2. This is in Appendix E, and reflects the additional discussion we've had with Reviewer 1 on OpenReview.\", \"Lastly, we remind our reviewers that we have already stuck to a very high standard for an extensive adversarial evaluation within the threat model, following best practices in the field and using a wide variety of gradient and non-gradient based attacks, which is among the most comprehensive evaluations present in the literature.\"]}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your feedback.\\n\\nClarification on \\u201coverfitting\\u201d: \\nFirst, we would like to clarify that we took the language of \\u201cadversarial training overfits to the L-infinity norm\\u201d directly from \\u201cTowards the first adversarially robust neural network model on MNIST\\u201d by Schott et al., which was published at last year\\u2019s ICLR, and is where the claim comes from (you can see this in the abstract). Of course, this is by no means a central point of the paper, and we merely wished to contextualize the result with relevant research on the same topic. We are quite willing to adjust the wording (e.g. the referenced phrasing with respect to overfitting and universal robustness).\", \"on_the_motivation_and_significance_of_msd\": \"While it is correct that the straightforward baselines work, they only work to some degree and are suboptimal when measuring their performance *with respect to the robust performance metric at which they attempt to minimize*, namely the robust optimization objective which is the performance against the union of threat models. On both MNIST and CIFAR10, we see a substantial increase in robust performance (5% and 6% respectively) on the union threat model from MSD over the baselines. This shows that the baselines, while they work to some extent, make various implicit tradeoffs that don\\u2019t actually minimize the robust objective that they are trying to minimize, and so MSD is a more direct, explicit way of minimizing the robust loss over the union adversary. The baselines themselves are also not consistent across the datasets: PGD-Aug performs poorly on MNIST, while PGD-Worst performs poorly on CIFAR10, whereas MSD is consistent across both problems. \\n\\nIt is unfortunate that you think the approach is deficient in creativity and generality. Rather, we believe that the simplicity of the method adds to its strength, showing that even simple approaches can perform quite well without resorting to complex procedures. MSD is also general in that it can utilize any first-order iterative method for adversarial generation, and is not an image-specific defense (it is at least as generally applicable as the standard adversarial training approach for a single threat model).\", \"on_generalizing_to_unforeseen_corruptions\": \"Defending against attacks outside of the threat model has never been a goal of adversarial training, and has little theoretical justification for why this would be the case. As such, performance comparisons on out-of-threat-model attacks like CIFAR-10-C, while potentially interesting, are completely orthogonal to the point of the paper. See our general comment here for a more detailed discussion: https://openreview.net/forum?id=rklMnyBtPB¬eId=rJgRBxZ-iB\\n\\nHowever, despite this, since CIFAR-10-C isn\\u2019t too expensive to evaluate, we ran this anyways just to see what happens and got the following mean accuracies:\", \"standard_model\": \"66.52%\", \"pgd_worst\": \"70.8%\", \"pgd_aug\": \"76.84%\", \"msd\": \"74.22%\\n\\nSo indeed, all of the approaches appear to improve model performance on CIFAR-10-C in comparison to standard training to some degree. However, because none of the models were explicitly trained to minimize these sorts of corruptions, we refrain from making any further conclusions.\", \"on_the_budgets_chosen_for_l1_and_l2\": \"The chosen budgets all come from the literature. 
For MNIST, we chose the same budget as that used in \\u201cTowards the First Adversarially Robust Neural Network Model on MNIST\\u201d [Schott et al. 2019] in order to be directly comparable and most fair in the comparison. For CIFAR10, the budgets come from \\u201cTowards Evaluating the Robustness of Neural Networks\\u201d [Carlini & Wagner 2017], though we used a smaller L1 budget to account for the difference from L0 to L1 and to not entirely subsume the other threat models.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your review and the provided suggestions.\", \"on_the_comparison_to_baseline_defenses\": \"Combining the robustness curves is a great suggestion, thank you. We do in fact have the accuracies as a function of radius for PGD-aug and PGD-worst (we had put them in the appendix as Figures 4-7 for lack of space), but we can certainly combine them into a single plot for MNIST and CIFAR10. \\n\\nAs for the baselines at robust-ml.org, since the setting we study is the union of multiple threat models, we focus on baselines which also study defending against multiple threat models. To our knowledge, the only baseline on robust-ml.org which does this is the ABS model by Schott et al., which we explicitly compare to in our paper.\", \"on_comparing_the_performances_against_individual_attacks_and_the_corresponding_risk_tradeoffs\": \"The main point is that while comparing individual threat models leads to an unclear conclusion about risk tradeoffs as you pointed out, the conclusion for the reader is quite clear when measuring performance in the \\u201call attacks\\u201d mode. This is the metric that makes the most sense, since this is exactly the robust optimization objective being minimized by all the algorithms, and has a simple interpretation as measuring performance when failure in even a single threat model is unacceptable. \\n\\nIf one wishes to instead defend against a different mixture of attacks, then it makes more sense to change the robust optimization objective to reflect the different mixture of attacks using MSD, rather than trying to obtain it ad-hoc with PGD-aug or PGD-worst using a different threat model. Please see our general comment here for a more detailed discussion on measuring and comparing performance: https://openreview.net/forum?id=rklMnyBtPB¬eId=rJgRBxZ-iB\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your feedback. We definitely aimed for this to be a \\u201cnatural\\u201d extension of adversarial, and we view the simplicity of the approach to be an advantage of the approach over relying on more complex methods. As a minor note, the extension goes beyond finding the worst-case projection: it is important to also consider the individual steepest descent directions for each threat model, so there is no singular gradient step like in PGD.\", \"on_convergence\": \"We do agree that studying the convergence properties could be interesting, however, this is not our focus and is out of the scope of this paper. This is actually a fairly complex problem: the convergence properties of steepest descent for a *single* norm (to our knowledge) for deep networks is not quite known.\", \"on_performance\": \"The main point is that while comparing individual threat models leads to an unclear conclusion about risk tradeoffs as you pointed out, the conclusion for the reader is clear when measuring performance in the \\u201call attacks\\u201d mode. This is the metric that makes the most sense, since this is exactly the robust optimization objective being minimized by all the algorithms, and has a simple interpretation as measuring performance when a failure in even a single threat model is unacceptable. We will adjust the paper accordingly to make this more obvious. Please see our general comment here for a more detailed discussion on comparing performances, as well as on generalizing outside the threat model used during training: https://openreview.net/forum?id=rklMnyBtPB¬eId=rJgRBxZ-iB\"}",
"{\"title\": \"General Response to R1, R2, R3\", \"comment\": \"A common theme throughout the reviews focuses on the performance of the MSD trained model on different threat models. Namely, 1) the performance of the model on individual threat models (subsets of the considered threat region), and 2) the ability of the model to generalize beyond the threat model it was trained on. We discuss these points in this comment, but at a high level, the main message is that the most natural metric to use for evaluating the various algorithms is the *robust objective being minimized*, which MSD does best. All other metrics (individual threat models or attacks outside the threat model) are simply not what any of these algorithms are trying to minimize. We will adjust the text to make this clear.\\n\\n1) On the performance for individual threat models: \\nAs you likely are well aware, there is very rarely free lunch in adversarial robustness: at some point, tradeoffs between various metrics (e.g. standard vs robust accuracy, or robust performance against different threat models) become inevitable. As pointed out, the other training procedures (Worst PGD and PGD Aug) do achieve different trade-offs between the various threat models, and for specific individual threat models, can achieve better performance on those specific threat models. \\n\\nHowever, the tradeoffs that these methods achieve are suboptimal when measuring performance against the *union* of multiple threat models, which is the goal of this paper (and importantly, also the mathematical objective for the robust optimization problem). By no means do we claim MSD to have, for example, the best performance on L-infinity robustness for MNIST. An adversarial attack on the union of threat models is successful if it succeeds within *any* of the threat models. This is the objective that all the methods (MSD, Worst PGD, PGD Aug) attempt to minimize, and this is where we see the advantage of MSD: it is able to achieve the best performance when the union of threat models is taken as a whole. \\n\\nThe takeaway here is that yes, there are indeed different tradeoffs obtained by the various methods, however, MSD is most effective at finding the tradeoff that maximizes the goal of robust performance to the union of perturbation sets, which is directly the robust optimization objective and thus the most natural metric. On the other hand, the alternatives (Worst PGD and PGD Aug) find some suboptimal tradeoff that doesn\\u2019t quite maximize the robust optimization objective for the union of multiple sets, despite being seemingly obvious ways to do so. \\n\\nWe do not claim to achieve top performance on individual threat models or standard accuracy (neither of which is directly the goal of the robust optimization problem for the union threat model), and so while it would be nice if this were the case, it is certainly not expected and may not even be possible. To give an analogous example, when studying threat models for a single norm, we do not carve up the threat model into subsets and compare performance within the various subsets (at least, not beyond plotting robustness curves for different radii, which we do in this paper).\", \"if_it_is_still_insisted_that_we_compare_the_individual_threat_models\": \"as Reviewer 2 discussed, it becomes unclear how to evaluate the various tradeoffs. 
On the other hand, the robust optimization objective, or the performance against the union of threat models, is directly the goal of all these algorithms and leaves the reader with a clear interpretation: it measures the performance of the model when a failure under any threat model constitutes an overall failure, which MSD is able to do best.\\n\\n2) On the ability of the defense to generalize beyond the threat model: \\nBeing able to generalize beyond the threat model on which a model has been trained has never been a goal of adversarial training. In most cases, there is little to no principled reason for why we would believe this to occur, and empirically the answer in the adversarial examples literature tends to be that it does not. Generalizing beyond the threat model used in training is completely orthogonal and not at all a goal of this paper, let alone a goal of adversarial training. \\n\\nRather, the goal of this paper is to present a structured way in which the threat model can be expanded *during* training, as this is the only scenario in which we would expect the defense to generalize. The way to defend against a new threat model would be to add it to the set of threat models and use one of the methods presented in this paper to defend against the union of threat models.\"}",
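The "all attacks" metric discussed in these responses has a simple per-example form: an input counts as robust only if the model survives every attack on it. A sketch (the attack callables, e.g. L_inf/L_2/L_1 PGD, are placeholders):

```python
import torch

def union_robust_accuracy(model, attacks, loader):
    # Accuracy under the union threat model: a failure within *any* single
    # threat model counts as an overall failure for that example.
    correct = total = 0
    for x, y in loader:
        ok = torch.ones_like(y, dtype=torch.bool)
        for attack in attacks:           # each returns an adversarial batch
            x_adv = attack(model, x, y)  # attacks need gradients, so no no_grad here
            with torch.no_grad():
                ok &= model(x_adv).argmax(dim=1) == y
        correct += ok.sum().item()
        total += y.numel()
    return correct / total
```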
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to do adversarial training on multiple L_p norm perturbation models simultaneously, to make the model robust against various types of attacks.\\n\\n[Novelty] I feel this is just a natural extension of adversarial training. If we define the perturbation set in PGD to be S, then in general S can be union of perturbation set of several L_p norm, and the resulting algorithm will be MSD (everytime you do a gradient update and then find the worst case projection in S). It would be interesting to study the convergence of this kind of algorithms, since S is no longer convex, the projection is trickier to define. Unfortunately this is not discussed in the paper. \\n\\nIn terms of experiments, this is an interesting data point to show that we can have a model that is (weakly) robust to L1, L2 and Linf norms simultaneously. However, the results are not surprising since there's more than 10% performance decreases compared to the original adversarial training under each particular attack. So it's still not clear whether we can get a model that simultaneously achieves L1, L2, Linf robust error comparable to original PGD training. \\n\\n[Performance] \\n- It seems MSD is not always better than others (worst PGD and PGD Aug). For MNIST, MSD performs poorly on Linf norm and it's not clear why.\\n- There's significant performance drop in clean accuracy, especially MSD on MNIST data. \\n\\n[Suggestions]\\n- As mentioned before, studying the convergence properties of the proposed methods will be interesting. \\n- It will be interesting if you can train on a set of perturbation models and make it also robust to another perturbation not in the training phase. For instance, can we apply the proposed method to L{1,inf} in training and generalize to L2 perturbation? \\n\\n=====\\nThanks for the response. I still have concerns about novelty so would like to keep my rating unchanged.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper adversarially trains models against l_p norms where p is of there different values. They then propose a method which does somewhat better than the obvious way of adversarially training against more than one l_p perturbation.\\nThe motivation for the paper is limited, in that they suggest previous works have suggested adversarial training itself \\\"overfits\\\" to the given l_p norm. This isn't surprising that it works, since the straightforward baseline works. They make it seem surprising by suggesting that ABS suggested adversarial training is doomed and cannot provide robustness to l_1, l_2, l_\\\\infty norms simultaneously. The other motivation is that this is a step toward studying an expanded threat model, but the authors have not demonstrated that the learned representations are any bit more robust to common corruptions (could the authors show the generalization performance on CIFAR-10-C or generalization to unforeseen corruptions?). Without further evidence, we are left to believe this only helps for this narrow threat model. Overall the paper is deficient in creativity and generality, so I vote for rejection.\", \"small_comments\": \"> take more time than a single norm, it is a step closer towards the end goal of truly robust models, with adversarial robustness against all perturbations.\\nPlease show model performance on CIFAR-10-C since if the model is more robust, it should hopefully be more robust to stochastic adversaries.\\n\\n> has claimed that adversarial training \\u201coverfits\\u201d to the particular type of perturbation used to generate the adversarial examples\\nWouldn't this be that l_\\\\infty training fits specifically to l_\\\\infty examples, not that robust optimization cannot handle more than one norm at a time? Who is claiming that?\\n\\n> First, we show that even simple aggregations of different adversarial attacks can achieve competitive universal robustness against multiple perturbations models without resorting to complex architectures.\\nI am not sure this was in doubt. The phrase \\\"universal robustness\\\" is misleading.\\n\\nHow were the budgets chosen for l_2 and l_1? Those values seem small.\"}",
"{\"comment\": \"Yes, it is effective to improve the robustness by taking mutiple time to train with mutiple adversaries.\\n\\nHowever, it is better to develop well-generalized model to align better with **human perception**, rather than roughly taking plenty of time to train with all adversaries. Besides the L_p norm restricted adversaries, there are some unrestricted adversaries, such as spatial attack, semantic attack and so on.\", \"title\": \"Yes\"}",
"{\"comment\": \"Defending against one norm, as you probably already know, only defends against a single adversary. While defending against multiple norms does, in fact, take more time than a single norm, it is a step closer towards the end goal of truly robust models, with adversarial robustness against all perturbations.\", \"title\": \"Multiple Norms\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary of the paper: The paper describes adversarial training aiming to build models that are robust to multiple adversarial attacks - with L_1, L_2 and L_inf norms. The method is a based on adversarial training against a union of adversaries. That union is created by taking (projected) gradient steps like PGD (Kurakin 2017), but choosing the maximal loss over GD steps for L1, L2, L_inf at each step.\", \"strengths\": \"The topic is trendy and interesting. The proposed algorithm is simple and easy to implement. The experimental results demonstrate improvement over several baselines.\", \"weaknesses\": \"-- I am missing a more systematic comparisons to baseline defenses in the experiments. Figures 2 and 3 should have shown the accuracy as a function of radius also for PGD-aug, PGD-worst, Schott et al. Also what about comparisons to the latest SoTA defenses, e.g. recent baselines from from \\nwww.robust-ml.org/defenses/. \\n\\n-- An implicit expectation from this paper is that it addresses the key issue of \\\"Defend against one attack but face a different attack\\\". The paper could have done more to advance our understanding of this issue. Specifically: \\n\\nThe approach improves over baselines for the \\\"all attacks\\\" mode, but under-performs compared with PGDaug and PGDworst when attacked with a single norm (Tab 1). \\n\\nWhile this is expected and probably cannot be avoided, it leaves the reader with an unclear conclusion about risk tradeoffs. It would have been useful to clarify the regime of mixtures of attacks where the various approaches are best. For instance, if one uses a of mix attack samples from the three norms, what mixtures would it be best to defend using MSD, wand what mixtures would it be best to use PGD-aug? or ABS?\"}",
"{\"comment\": \"Thanks for the reply.\\n\\nIt looks like a greedy algorithm, which chooses the worst case for various (P_1, P_2 or P_inf) norm at each iteration. However, in this way, the adversarial example at the last iteration is not necessarily the worst case for the attack process. Maybe reinforcement learning can help.\\n\\nDo the authors have any insight about integating different norm to one specific norm, which may take less traning time rather than increasing the training time by the number of various norm?\", \"title\": \"Thanks for the reply.\"}",
"{\"comment\": \"Yes, you are correct. More precisely, the algorithm does a projection back to the relevant l_p ball (which turns out to be clipping in case of l_inf). However, as mentioned above, the \\\"switch in l_p choice\\\" happens only according to the \\\"loss value\\\" of the current step. So, even though it may seem to you that the perturbation value is getting 'reduced', the iterant is actually moving to a point with a higher loss value, and such transitions are found to be beneficial to the training.\", \"title\": \"Clarification\"}",
"{\"comment\": \"Sorry, it may not be clear.\\n\\nIn order to control the adversarial perturbations in the specific norm bounded, it needs the clipping operation at the ending of attack. Later clipping operation may affect the adversarial perturbations generated by previous norm. \\n\\nFor example, previous norm is L_2 norm, so on some pixels, the perturbations are zero, and on some pixels, the perturbations are larger than 10. If the next norm is L_{\\\\infty } with the epsilon 8, the clipping operation will reduce the perturbations (larger than 10) of the previous attack to 8.\", \"title\": \"Clipping operation\"}",
"{\"comment\": \"Thank you for your interest. At each iteration, MSD aims to maximize the loss of adversarial perturbation that is generated after taking a step in the direction of either P_1, P_2 or P_inf adversary and projecting back to the corresponding perturbation ball. The decision of the next iteration is agnostic of what happened in the previous one, so any decision taken 'later' is only taken to improve open the 'previous' loss value. In practice, MSD is found to benefit by 'switching' norm decisions during the descent iterations.\", \"title\": \"MSD : adversarial perturbation w.r.t. iteration\"}",
"{\"comment\": \"Great work.\\n\\nFor the combination of mutiple pertubation by PGD augmentation with all perturbations , the method seems like another type of ensemble adversarial training[1], which trains with different adversaries.\\n\\nFor multi steepest descent(MSD), how to control the adversarial perturbations in the various norm bounded? At each iteration, the norm to be chosen may be different, and later norm may affect the adversarial perturbations generated by previous norm.\\n\\n[1] Ensemble Adversarial Training: Attacks and Defenses. ICLR 2018\", \"title\": \"small question\"}"
]
} |
SylzhkBtDB | Understanding and Improving Information Transfer in Multi-Task Learning | [
"Sen Wu",
"Hongyang R. Zhang",
"Christopher Ré"
] | We investigate multi-task learning approaches that use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-activated models. Our key observation is that whether or not tasks' data are well-aligned can significantly affect the performance of multi-task learning. We show that misalignment between task data can cause negative transfer (or hurt performance) and provide sufficient conditions for positive transfer. Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtained a 2.35% GLUE score average improvement on 5 GLUE tasks over BERT LARGE using our alignment method. We also design an SVD-based task re-weighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset. | [
"Multi-Task Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=SylzhkBtDB | https://openreview.net/forum?id=SylzhkBtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"smZzLCl-ZDT",
"NvlGf_jfLY6",
"nKybGpU3_5",
"SJeBOI5jjB",
"SylK5Ywoir",
"SkgiOcavjB",
"r1x_6VTwsS",
"rJea5zaPir",
"rkee1Z6PiS",
"rkl9a2T3KS",
"SJeNT1T3YH",
"rJlRuaviKB"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1690909206717,
1601178928110,
1576798736550,
1573787245398,
1573775761189,
1573538419096,
1573536960119,
1573536404705,
1573535959724,
1571769537613,
1571766204485,
1571679606106
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1943/Authors"
],
[
"~Joshua_Yee_Kim1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1943/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1943/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1943/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1943/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1943/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1943/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1943/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1943/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1943/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Re: Open Source Code\", \"comment\": \"Dear Joshua,\\n\\nThanks for your inquiry. We built all of our multitask learning experiments using emmental, which can be publicly accessed in the following GitHub repository:\", \"https\": \"//github.com/SenWu/emmental-tutorials\\n\\nIf you need help with setting up experiments in our paper, feel free to email us. We're happy to help you set things up.\\n\\nApologies for the delay in response.\\n\\nHongyang\"}",
"{\"title\": \"Open Source Code\", \"comment\": \"Hi Authors,\\n\\nThank you for the very interesting work. Could I please follow-up on link to the open source code regarding, \\\"Our code will be open-sourced after the reviewing process.\\\"\\n\\nBest Regards,\\nJoshua\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Many existing approaches in multi-task learning rely on intuitions about how to transfer information. This paper, instead, tries to answer what does \\\"information transfer\\\" even mean in this context. Such ideas have already been presented in the past, but the approach taken here is novel, rigorous and well-explained.\\n\\nThe reviewers agreed that this is a good paper, although they wished to see the analysis conducted using more practical models. \\n\\nFor the camera ready version it would help to make the paper look less dense.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for raising this question\", \"comment\": \"We use alternative optimization in our implementation of Alg 1. For each epoch, we iterate over all the task batches. If the current batch is from task $i$, then the SGD is applied on $A_i$ and $R_i$. The other parameters are fixed. We have revised Alg 1 to clarify this step and also included the description of our SGD implementation in Appendix C.3.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your detailed rebuttal and efforts for making the paper better.\\n\\nMost of my confusions have been solved and the only unclear point might be in question (2):\\n\\n\\u201cStep 3, how to jointly minimize R_1,\\\\dots, R_k, A_1, \\\\dots, A_k ?\\u201d\\nI am wondering the high level solution. For example If we are only considering a simple linear model, jointly optimization means that we simply set a global parameter $\\\\phi = (R_1,\\\\dots, R_k, A_1, \\\\dots, A_k) $ and apply gradient descent over $\\\\phi$ of the loss (seems more difficult). Or we use alternative optimization (seems more common), for each optimization step we fix all parameters except one parameter and optimize only that one parameter.\\n\\nOverall I keep my current decision and think it is indeed a good paper.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": [\"We thank the reviewer for the positive feedback and for the appreciation of our theoretical contribution and our effort on writing. We respond to the two comments under \\u201ccons\\u201d here.\", \"> \\u201cThere is not much of novelty in the algorithm and architecture. Their method is very similar to domain adaptation but for multi-learning setting.\\u201d\", \"We do agree that in domain adaptation, it is well-understood that the divergence between the source and target distributions can cause negative transfer. Hence, the general recipe is to correct this divergence by matching the source distribution to the target. In the multi-task setting, however, the interaction/interference between the tasks is much more complicated, e.g. positive and negative effects can happen at the same time (e.g. figure 6). To determine the type of interference, we provide a theoretical framework to study this question in linear and ReLU models and we develop theory to identify the components which cause positive and negative transfers.\", \"We would like to emphasize that our covariance alignment algorithm and SVD-based reweighing scheme are both consequences derived from our theory. The additional experiments we added into the revision verify that the alignment algorithm can correct misaligned task data for linear models (Appendix C.5), and we have shown that it works well for highly non-linear networks (Sec. 3.2). Our insight for these empirical results is that there exists an alignment matrix that corrects the differences between the task covariances, which can cause negative effect in MTL. We believe that this insight is applicable to more sophisticated architectures.\", \"In addition to the algorithms, our theoretical framework provides some general rules of thumb and tools to help MTL in practice, including i) We show that the capacity of the shared MTL module should not exceed the total capacities of all the STL modules; ii) We propose the cosine similarity score to measure the similarities of task data and track the progress of the alignment procedure.\", \"> \\u201cIn the Theorem 2, they have assumed parameter $c <= 1/3$. They have not provided any insight of how much restrictive this assumption is.\\u201d\", \"In Theorem 2, the assumption that $c <= 1/3$ arises when we deal with the label noise of task 2. If there is no noise for task 2, then this assumption is not needed. If there is noise for task 2, this assumption is satisfied when $\\\\sin(\\\\theta_1, \\\\theta_2)$ is less than $1/(3\\\\kappa(X_2))$. This is satisfied when the two single-task models are close enough, which is intuitively necessary to guarantee positive transfer. Indeed our experiments also show that the value of $\\\\sin(\\\\theta_1, \\\\theta_2)$ affects performance (Figure 8 and 9 in Appendix C.4).\", \"Theorem 2 guarantees positive transfers in MTL, when the source and target models are close enough and the number of source samples is large. While the intuition is folklore in MTL, we provide a formal justification in the linear and ReLU models to quantify the phenomenon.\", \"We have added these discussions to provide more insight on Theorem 2 into the revision in Appendix B.2.2.\"]}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for appreciating the contribution of our work, and we are grateful to the suggestions which help improve our work. We added experiments to evaluate our method on linear and ReLU models regarding the suggested gap between our theory and the experiments. We added the related work [1] and provided comparative experiments with [1]. We addressed the parts that are confusing as suggested. Here are our responses regarding each comment.\", \"response_to_major_comments\": [\"> \\u201cOne issue with the submission is that there is a significant gap between the theory and experimental sections as theory only covers linear models and the experiments don\\u2019t include linear models and purely focus on deep networks.\\u201d\", \"We have added experiments to evaluate our alignment method (Alg 1) on linear and ReLU-activated models for synthetic data (Appendix C.4). Our method can indeed help these models in addition to deep networks.\", \"> \\u201cAdditional assumptions (1D labels, same input dimensionality across all tasks) should be emphasised to clarify limitations of all derivations.\\u201d\", \"We have revised Sec. 2.1 to emphasize that the labels are 1D in our model (the same input dimensionality assumption is also stated in Sec. 2.1). A multi-label problem with k types of labels can be modeled by k tasks with the same covariates but different labels.\", \"> \\u201cWhere previous work addressed model similarity it often looks at models in the context of existing datasets (i.e. taking the data into account to describe boundaries etc) such that the emphasised novelty at looking at data similarity is to be taken with a grain of salt.\\u201d\", \"We agree that if we compare two task models trained from the datasets, their similarity already depends on the data. In our experience in looking at LSTM/CNN/MLP models on sentiment analysis, however, we have observed that just measuring the similarity of the model weights or feature outputs is too crude to tell whether or not MTL shows positive or negative benefit, even for a single layer. This is likely due to the differences between the task data and the specific model used. And there is currently no theoretical framework to answer this question.\", \"To provide a more precise answer, we formulate this question in a simple setup. Our theory disentangles the model part and the data part. By doing so, figure 2 shows that task data similarity plays a second-order effect after controlling model similarity to be the same. Our intuition is that this arises from the shift of the covariance matrices between the task data. Our theory formalizes the covariances and our experiments show the benefit of aligning the covariance matrices on deep networks.\", \"We have revised the third paragraph in the intro to make it more clear.\", \"> \\u201cWhile the model with non-linear activation is mentioned at places, nearly all theorems rely on the linear model instead such that it might make sense to either work towards generalising the theorems or emphasising that most only apply to linear models.\\u201d\", \"We thank Reviewer 2 for pointing these out. We have extended the theoretical result of Sec. 2.2 so that it applies to ReLU settings. The theoretical result of section 2.3 also applies to ReLU settings. So the only result which does not apply to the ReLU setting is proposition 3 in Sec. 2.4. 
The question of characterizing the optimization landscape in non-linear ReLU models is not well-understood based on the current theoretical understanding of neural networks. We think this is an open research direction and we have stated this question in the revised version.\"], \"response_to_minor_comments\": \"> \\u201cy is used as label and as data terminology at different parts of the text\\u201d\\n\\n - Thanks for pointing out this issue. We have corrected the use of y in the revision.\\n\\n > \\u201cthe model in the first set of experiments has lower capacity than most models individually, suggesting that the capacity should be smaller even for individual tasks to prevent overfitting. An ablation over model capacities is mentioned but missing for 3.3\\u201d\\n\\n - We have added an ablation study over model capacities to show the performance of MTL and STL as we vary the capacities (Appendix C.5). This indicates the best performing capacities we choose in Figure 6. We also added plots on CNN/MLP to show the same results.\\n\\n > \\u201ccomparison against existing multitask loss weighting techniques should be performed [1]\\u201d\\n\\n - Thanks for pointing out this work. We have compared our method (Alg 1) to the techniques in [1]. This is added as a benchmark in Sec. 3.1. Our SVD-based scheme performs favorably on the ChestX-ray14 dataset. The results are in Sec. 3.2 and ablation results are in Sec. 3.3 and Appendix C.5. Across all 14 tasks, our scheme outperforms [1] by 1.3% AUC score.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for appreciating our theoretical insights and the strength of our work. We appreciate the reviewer\\u2019s effort to provide detailed comments which we have incorporated in the revision. Here are our changes with respect to each comment.\\n\\n > \\u201c1. I suggest the author to merge the Figure 3 and Data generation (Page 4) part for a better presentation. e,g which \\u201cdiff.covariance\\u201d is task 3 or 4? And why we use different rotation matrix Q_i ?\\u201d\\n\\n - We have revised Figure 3 and the Data generation paragraph in Sec. 2.3 to clarify the two issues.\\n - Figure 3 is simplified to 2 curves without affecting the message. The caption connects the figure to the generation process. And the generation process is revised accordingly in reference to the figure.\\n - The different rotation matrices Q_i are used to create a \\u201ccovariate shift\\u201d between the two tasks. As Figure 3 shows, this shift leads to a negative transfer of MTL in the regime where the number of source data points is small.\\n\\n > \\u201c2. In algorithm 1 (Page 5), I suggest the author use a formal equation (like algorithm 2) instead of descriptive words.\\n -- Step 2, I have trouble in understanding this step.\\n -- Step 3, how to jointly minimize R_1,\\\\dots, R_k, A_1, \\\\dots, A_k ? we use loss (3) or other losses?\\n -- I suggest that the author release the code for a better understanding.\\u201d\\n\\n - We have revised the description of algorithm 1 to define a formal loss. To minimize the modified loss over the alignment matrices and the output layers, we use standard training procedures (mini-batch SGD, cf. Appendix C.3). Our code will be open-sourced after the reviewing process.\\n\\n > \\u201c3. For theorem 2, can we find some \\u201coptimal\\u201d c to optimize the right part ? Since 6c + \\\\frac{1}{1-3c}\\\\frac{\\\\epsilon}{\\\\X_2\\\\theta_2} might be further optimized.\\u201d\\n\\n - The error bound $6c + \\\\frac{1}{1-3c}\\\\frac{||\\\\epsilon||}{||X_2\\\\theta_2||}$ decreases with c so the smaller c is the better. We have revised Theorem 2 and added a discussion regarding the error bound (Appendix B.2.2).\\n\\n > \\u201c4. In section 3.3. (Figure 6) of the real neural network, the model capacity is the dimension of Z or simply the dimension before last fc-layer?\\u201d\\n\\n - The model capacity is the dimension before the last fc-layer. We have revised this sentence to make it clear.\\n\\n > \\u201c5. Some parts in the appendix can be better illustrated:\\n (a) I am not clear how proposition 4 can derive proposition 1.\\n (b) Page 15, proving fact 8: last line \\\\frac{1}{k^4}sin(a^{prime},b^{prime}) should be \\\\frac{1}{k^4}sin^{2}(a^{prime},b^{prime}).\\u201d\\n\\n - (a) We have revised proposition 4 so that it becomes more clear that it can derive proposition 1. In particular, proposition 4 states that the subspace of the shared module is all that matters, hence having the $\\\\{\\\\theta_i\\\\}$\\u2019s in its column span suffices. Proposition 1 instantiates this intuition.\\n - (b) Thanks for catching the typo. We have fixed it.\"}",
"{\"title\": \"Summary of the revision and the response\", \"comment\": \"We thank all the reviewers for the positive feedback and the detailed comments. In response to the reviewers\\u2019 suggestions, we have revised our paper, including three sets of additional experimental results to consolidate our results as follows.\\n\\n 1- Clarify our model assumption on 1D label and how to model multi-label problems, the example of figure 3 and the data generation description, and the theory part (formal description of Alg 1, extendable to ReLU or not, discussion on Theorem 2 in Appendix B.2.2).\\n\\n 2- Additional experiments on linear and ReLU models to validate our alignment method (Appendix C.5). This confirms that our method (Alg 1) can help linear models in addition to deep networks, as Reviewer #2 asked about.\\n\\n 3- We conduct an additional experiment to compare our SVD-based reweighting scheme to the loss weighting techniques of Kendall et al.\\u201918, as Reviewer #2 requested. On the ChestX-ray14 dataset, we found that our method improves performance by 1.3% AUC score compared to the suggested work (Sec. 3.2).\\n\\n 4- Additional ablation studies on model capacities to further validate our results (Appendix C.5), as Reviewer #2 asked about.\\n\\nFinally, we respond to all the comments raised by the reviewers in detail. The comments have all been incorporated into the revision.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper analyzed the principles for a successful transfer in the hard-parameter sharing multitask learning model. They analyzed three key factors of multi-task learning on linear model and relu linear model: model capacity (output dimension after common transformation), task covariance (similarity between tasks) and optimization strategy (influence of re-weighting algorithm), with theoretical guarantees. Finally they evaluated their assumptions on the state-of-the-art multi-task framework (e.g GLUE,CheXNet), showing the benefits of the proposed algorithm.\", \"main_comments\": \"This paper is highly interesting and strong. The author systematically analyzed the factors to ensure a good multi-task learning. The discovering is coherent with with previous works, and it also brings new theoretical insights (e.g. sufficient conditions to induce a positive transfer in Theorem 2). The proof is non-trivial and seems technically sound.\\n\\nMoreover, they validated their theoretical assumptions on the large scale and diverse datasets (e.g NLP tasks, medical tasks) with state-of-the-art baselines, which verified the correctness of the theory and indicated strong practical implications.\", \"minor_comments\": \"\", \"the_main_message_of_the_paper_is_clear_but_some_parts_still_confuse_me\": \"1. I suggest the author to merge the Figure 3 and Data generation (Page 4) part for a better presentation. e,g which \\u201cdiff.covariance\\u201d is task 3 or 4 ? And why we use different rotation matrix Q_i ? \\n\\n2. In algorithm 1 (Page 5) , I suggest the author use a formal equation (like algorithm 2) instead of descriptive words.\\n -- Step 2, I have trouble in understading this step.\\n -- Step 3, how to jointly minimize R_1,\\\\dots, R_k, A_1, \\\\dots, A_k ? we use loss (3) or other losses ?\\n -- I suggest that the author release the code for a better understanding.\\n\\n3. For theorem 2, can we find some \\u201coptimal\\u201d c to optimize the right part ? Since 6c + \\\\frac{1}{1-3c}\\\\frac{\\\\epsilon}{\\\\X_2\\\\theta_2} might be further optimized \\n\\n4. In section 3.3. (Figure 6) of the real neural network, the model capacity is the dimension of Z or simply the dimension before last fc-layer ?\\n\\n5. Some parts in the appendix can be better illustrated:\\n (a) I am not clear how proposition 4 can derive proposition 1.\\n (b) Page 15, proving fact 8: last line \\\\frac{1}{k^4}sin(a^{prime},b^{prime}) should be \\\\frac{1}{k^4}sin^{2}(a^{prime},b^{prime}). \\n\\n\\nOverall I think it is a good work with interesting discoverings for the multi-task learning. I think it will potentially inspire the community to have more thoughts about the transfer learning.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The submission investigates multitask learning (MTL) and develops new theories around MTL with linear models and linear+ReLU. In the experimental section, the authors improve performance in sentiment analysis on subtasks of the GLUE benchmark (building on BERT - highly non-linear neural network) and show a SVD-based task loss reweighting scheme on an multi-label image classification dataset.\\n\\nThe submission is overall well written though some paragraphs (2.1-2.3, in particular the example section) would benefit from additional effort towards clearer sentences. One issue with the submission is that there is a significant gap between the theory and experimental sections as theory only covers linear models and the experiments don\\u2019t include linear models and purely focus on deep networks. The benefits of a bottleneck in multitask learning are well known (based empirical results). However, it is helpful that the additional theoretical results (given strong assumptions) provide some grounding. \\nWhile the model with non-linear activation is mentioned at places, nearly all theorems rely on the linear model instead such that it might make sense to either work towards generalising the theorems or emphasising that most only apply to linear models.\\n\\nAdditional assumptions (1D labels, same input dimensionality across all tasks) should be emphasised to clarify limitations of all derivations. Where previous work addressed model similarity it often looks at models in the context of existing datasets (i.e. taking the data into account to describe boundaries etc) such that the emphasised novelty at looking at data similarity is to be taken with a grain of salt.\\n\\nOverall, the paper contributes to the conversation around multitask learning but would benefit from comparing again external work on multitask learning (e.g. see under minor) and from bridging between theory and experiments (e.g. experiments with the models described in the theory section - linear/ReLU).\", \"minor\": [\"y is used as label and as data terminology at different parts of the text.\", \"the model in the first set of experiments has lower capacity than most models individually, suggesting that the capacity should be smaller even for individual tasks to prevent overfitting.\", \"An ablation over model capacities is mentioned but missing for 3.3\", \"comparison against existing multitask loss weighting techniques should be performed [1]\", \"[1] Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics\", \"Alex Kendall, Yarin Gal, Roberto Cipolla 2017\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies how to improve the multi-task learning from both theoretical and experimental viewpoints. More specifically, they study an architecture where there is a shared model for all of the tasks and a separate module specific to each task. They show that data similarity of the tasks, measured by task covariance is an important element for the tasks to be constructive or destructive. They theoretically find a sufficient condition that guarantee one task can transfer positively to the other; i.e. a lower bound of the number of data points that one task has to have. Consequently, they propose an algorithm which is basically applying a covariance alignment method to the input.\\nThe paper is well-written, and easy to follow.\", \"pros\": \"A new theoretical analysis for multi-task learning, which can give insight of how to improve it through data selection.\\nThey empirically show that their algorithm improves the multi-task learning on average by 2.35%.\", \"cons\": \"There is not much of novelty in the algorithm and architecture. Their method is very similar to domain adaptation but for multi-learning setting.\\nIn the Theorem 2, they have assumed parameter c <= 1/3. They have not provided any insight of how much restrictive this assumption is.\"}"
]
} |
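As a rough illustration of the alternating optimization described in the rebuttal above, here is a minimal sketch with a toy parameterization y_i ≈ r_i(B(A_i x)): per-task alignment A_i, shared module B, and per-task output head r_i. The linear layers, the squared loss, and the choice to update the shared B on every batch are assumptions for illustration only, not the paper's specification (the paper's Alg 1 and Appendix C.3 give the actual procedure).

```python
# Hypothetical sketch of alternating mini-batch SGD for an Alg-1-style model.
# On a batch from task i, gradient steps touch A_i, r_i and (by assumption
# here) the shared module B; the other tasks' parameters stay fixed.
import torch
import torch.nn.functional as F

def train_aligned_mtl(task_loaders, d, hidden, epochs=10, lr=1e-2):
    k = len(task_loaders)
    A = [torch.nn.Linear(d, d, bias=False) for _ in range(k)]       # alignment A_i
    B = torch.nn.Linear(d, hidden, bias=False)                      # shared module
    r = [torch.nn.Linear(hidden, 1, bias=False) for _ in range(k)]  # output heads
    opts = [torch.optim.SGD([*A[i].parameters(), *r[i].parameters(),
                             *B.parameters()], lr=lr) for i in range(k)]
    for _ in range(epochs):
        for i, loader in enumerate(task_loaders):  # iterate over task batches
            for x, y in loader:
                opts[i].zero_grad()
                loss = F.mse_loss(r[i](B(A[i](x))).squeeze(-1), y)
                loss.backward()
                opts[i].step()
    return A, B, r
```

Nothing here depends on linearity: swapping the Linear layers for deeper modules leaves the alternating loop itself unchanged.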
ryGWhJBtDB | Hyperparameter Tuning and Implicit Regularization in Minibatch SGD | [
"Samuel L Smith",
"Erich Elsen",
"Soham De"
] | This paper makes two contributions towards understanding how the hyperparameters of stochastic gradient descent affect the final training loss and test accuracy of neural networks. First, we argue that stochastic gradient descent exhibits two regimes with different behaviours; a noise dominated regime which typically arises for small or moderate batch sizes, and a curvature dominated regime which typically arises when the batch size is large. In the noise dominated regime, the optimal learning rate increases as the batch size rises, and the training loss and test accuracy are independent of batch size under a constant epoch budget. In the curvature dominated regime, the optimal learning rate is independent of batch size, and the training loss and test accuracy degrade as the batch size rises. We support these claims with experiments on a range of architectures including ResNets, LSTMs and autoencoders. We always perform a grid search over learning rates at all batch sizes. Second, we demonstrate that small or moderately large batch sizes continue to outperform very large batches on the test set, even when both models are trained for the same number of steps and reach similar training losses. Furthermore, when training Wide-ResNets on CIFAR-10 with a constant batch size of 64, the optimal learning rate to maximize the test accuracy only decays by a factor of 2 when the epoch budget is increased by a factor of 128, while the optimal learning rate to minimize the training loss decays by a factor of 16. These results confirm that the noise in stochastic gradients can introduce beneficial implicit regularization. | [
"SGD",
"momentum",
"batch size",
"learning rate",
"noise",
"temperature",
"implicit regularization",
"optimization",
"generalization"
] | Reject | https://openreview.net/pdf?id=ryGWhJBtDB | https://openreview.net/forum?id=ryGWhJBtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"w7yCvVN-vG",
"rkgI_uAYjB",
"rye3zaZ7or",
"HkemB2WXiB",
"ryltuFZ7sr",
"BJgmhEfTcH",
"rJxkq6waYr",
"r1l1CEFwKr",
"S1xad3lftr",
"rJgnJRYXuS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798736522,
1573673070492,
1573227796418,
1573227579139,
1573226865471,
1572836523087,
1571810695511,
1571423431417,
1571060852621,
1570115043953
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1942/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1942/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1942/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1942/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1942/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1942/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1942/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1942/Authors"
],
[
"~Guodong_Zhang1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Authors provide an empirical evaluation of batch size and learning rate selection and its effect on training and generalization performance. As the authors and reviewers note, this is an active area of research with many closely related results to the contributions of this paper already existing in the literature. In light of this work, reviewers felt that this paper did not clearly place itself in the appropriate context to make its contributions clear. Following the rebuttal, reviewers minds remained unchanged.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for the response.\", \"comment\": \"I've read your response and my score remains unchanged because I haven't seen any update of the paper.\"}",
"{\"title\": \"Response to review\", \"comment\": \"We thank the reviewer for their comments.\\n\\nAlthough our primary contributions are empirical, we also provided a detailed theoretical discussion in section 2, where we give a clear and simple account of why the two regimes arise. Although previous authors have also discussed some of these results, there are differences between our conclusions, as we discussed in our responses to the other two reviewers.\\n\\nWe would also like to emphasize that we make a significant contribution to the debate regarding SGD and generalization. While many papers have proposed that small batches may generalize better than large minibatches, it was recently pointed out by Shallue et al. that none of these experiments provide convincing evidence for this claim, because no experiment to date has compared small and large batch training under a constant step budget with a realistic learning rate decay schedule while independently tuning the learning rate at each batch size. We are the first to run this experiment and conclusively establish that SGD noise does enhance generalization in popular models/datasets. We believe this is an important contribution.\\n\\nWe also provide intriguing results as we vary the epoch budget, which demonstrate that the optimal learning rate which maximizes the test accuracy does not decrease as the epoch budget rises. This supports the notion that SGD has an optimal \\u201ctemperature\\u201d which biases it towards solutions that generalize well. Additional experiments in the appendix G go further and study how the optimal learning rate schedule changes as we increase the epoch budget.\"}",
"{\"title\": \"Response to review\", \"comment\": \"We thank the reviewer for their helpful comments.\\n\\nWe agree that our most surprising results are for SGD under constant step budgets or unlimited epoch budgets. However the behaviour of SGD under constant epoch budgets has generated a lot of debate in the literature in recent years, and we felt it was important to address this simple case first. We agree that some of the observations in sections 2 and 3 have already been made in previous work, however there are also several important differences:\\n\\n1. Ma, Bassily and Belkin also introduced the notion of two regimes, however their theory holds for convex losses in the interpolating regime. We will discuss their contribution explicitly in the updated text. Our discussion in section 2 clarifies why the two regimes arise in practical deep learning models for which these conditions may not hold.\\n\\n2. Our paper is the first to relate the two regimes of SGD to the popular analogy between SGD and stochastic differential equations (SDEs). As we show in later sections, this perspective is crucial to understanding the influence of batch size and learning rate on test accuracy. A common criticism of this analogy is that SGD noise is not Gaussian when the batch size is small. To our knowledge, we are the first to show that the analogy between SGD and SDEs holds for non-Gaussian short-tailed noise (appendix B). \\n\\n3. We clarify the differences to some other recent papers in our reply to reviewer 1.\\n\\nTwo reviewers complained that it was difficult to tell from the text which contributions are novel and which also appear in previous works. We apologise for this. It was not our intention and we will edit sections 1 and 2 to ensure that this is resolved and that the above points are reflected in the text. \\n\\nTurning to our generalization experiments in sections 4 and 5. We agree that many authors have proposed that SGD noise enhances generalization. Most notably, Keskar et al. argued that large minibatches perform worse than small minibatches on the test set, even when both achieve similar performance on the training set. However their experiments do not provide convincing evidence for this claim, because they tuned the learning rate with small batches and then used the same learning rate value with large batches. A convincing experiment should independently tune the learning rate at all batch sizes under a constant step budget, and it should use a realistic learning rate decay schedule. \\n\\nIndeed, Shallue et al. recently argued that no existing paper has provided convincing evidence that small batch sizes generalize better than large batch sizes under constant step budgets, and they state in their abstract \\u2018We find no evidence that larger batch sizes degrade out-of-sample performance\\u2019. Meanwhile, Zhang et al. argued that optimization in deep learning is well described by a noisy quadratic model which predicts that increasing the batch size should always enhance performance under constant step budgets. To our knowledge, our experimental results in section 4 are the first to provide convincing evidence that very large minibatches do perform worse than small batch sizes on the test set, even under constant step budgets and when the learning rate is independently tuned. We believe this is an important contribution. 
Meanwhile, our results in section 5 suggest that SGD has an optimal temperature early in training which promotes generalization and is independent of the epoch budget.\\n\\nIn response to the reviewer\\u2019s specific comments:\\n\\n1) Looking at Figure 1c, while the optimal learning rate at 8k with Momentum is 4, the error bars at this batch size range from 4 to 32. These error bars can be very large in the curvature regime, precisely because the optimal learning rate is close to instability.\\n\\n2) Yes, Momentum will help under constant step budgets if the batch size is large, since it enables us to achieve larger effective learning rates which are beneficial for generalization. We will add additional experiments to the text to clarify this.\\n\\n3) We will clarify the meaning of warm up, epoch budget and step budget as requested.\"}",
"{\"title\": \"Response to review\", \"comment\": \"We thank the reviewer for their helpful comments.\\n\\nPlease could the reviewer clarify why they felt our work muddies the debate regarding large-batch training? We demonstrate that one can initially increase the batch size with no loss in test accuracy by simultaneously increasing the learning rate. However for very large batch sizes the test accuracy degrades under both constant epoch and constant step budgets.\\n\\nWe agree that some of our observations under constant epoch budgets in sections 2 and 3 have been made in previous work. However there are also several important differences:\\n\\n1. Our paper is the first to relate the two regimes of SGD to the popular analogy between SGD and stochastic differential equations (SDEs). As we show in sections 4 and 5, this perspective is crucial to understanding the influence of batch size and learning rate on test accuracy. A common criticism of this analogy is that SGD noise is not Gaussian when the batch size is small. To our knowledge, we are the first to show that the analogy between SGD and SDEs holds for non-Gaussian short-tailed noise (appendix B). \\n\\n2. Zhang et al. argued that Momentum only helps in the large batch limit. However, their analysis is based on the noisy quadratic model, which cannot explain the results we observed on the test set in sections 4 and 5. These experiments clearly demonstrate that, unlike the SDE perspective, the noisy quadratic model is not an appropriate model for predicting test set performance in deep learning. Their work also does not clarify the assumptions under which linear scaling of the learning rate should arise.\\n\\n3. Our empirical results in section 3 are similar to Shallue et al., however their work argues that there is no reliable relationship between learning rate and batch size. We draw a very different conclusion: the learning rate usually obeys linear scaling, but linear scaling only holds theoretically when the assumptions we specify are satisfied. Linear scaling may not hold in cases where these assumptions break down (e.g., language modelling).\\n\\n4. The observation that the test accuracy is independent of batch size in the noise dominated regime is a natural consequence of the SDE analogy, since any two training runs which integrate the same SDE should sample final parameters from the same probability distribution. We will clarify this in the updated text.\\n\\nTwo reviewers complained that it was difficult to tell from the text which contributions are novel and which also appear in previous works. We apologise for this. It was not our intention and we will edit sections 1 and 2 to ensure that this is resolved and that the above points are reflected in the text. \\n\\nTurning to our generalization experiments in sections 4 and 5. It is true that a number of papers in recent years have claimed that SGD noise enhances generalization. However Shallue et al. recently argued no previous work had provided convincing empirical evidence for this claim. Indeed in their abstract, they state \\u2018We find no evidence that larger batch sizes degrade out-of-sample performance\\u2019. In another recent paper, Zhang et al. argued that optimization in deep learning is well described by a noisy quadratic model which predicts that increasing the batch size should always enhance performance under constant step budgets. 
\\n\\nCrucially, to establish that SGD noise enhances generalization, one must show that small batch sizes generalize better than large batch sizes under constant step budgets, with realistic learning rate decay schedules, and one must independently tune the learning rate at each batch size. In section 4, we are the first authors to perform this experiment and confirm that the final test accuracy of SGD does degrade for very large batch sizes under both constant epoch and constant step budgets, contradicting the claims of both Shallue et al and Zhang et al. Furthermore, we show in section 5 that the optimal SGD temperature which maximizes the test accuracy is almost independent of the epoch budget. These results provide the first convincing empirical evidence that SGD noise does enhance generalization in well-tuned networks with learning rate decay schedules. We believe this is an important contribution.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper is an empirical contribution regarding SGD arguing that it presents two different behaviors which the authors name a noise dominated regimen, and a curvature dominated regime. They observe that the behaviors seem to arise in different batch sizes\\n\\nThe authors derive empirical conclusions and perform experiments in different settings. The paper is well-written and the experimental setup seems to be carefully carried out. \\n\\nI find the observations interesting, but the contribution is empirical and not entirely new. It would be nice if there were some theoretical results to back up the observations.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the properties of SGD as a function of batch size and learning rate. Authors argue that SGD has two regimes: a noise dominated regime (small batch size) and curvature dominated regime (large batch size). Authors conduct through numerical experiments highlighting how learning rate changes as a function of batch size (initially linear growth and then saturates). The critical contribution of this work appears to be the observation that large batch size can be worse than small under same number of steps demonstrating implicit regularization of small batch size.\\n\\nThe two regime claim of the paper is not really novel. These regimes are fairly well covered by previous works (e.g. Belkin et al as well as others). When it comes to experiments, constant epoch budget is also fairly well understood and the behavior in Figure 1 is not really surprising (as the eventual training performance gets worse with large batches).\\n\\nThe interesting part in my opinion is the experiments on constant steps. Authors verify large batch size reduces test accuracy while improving train. I believe these experiments are novel and the results are interesting. Besides CIFAR 10, authors test this hypothesis in two other datasets while tuning the learning rate. On the other hand, contribution is somewhat incremental given observations made by related literature (Keskar et al and others).\", \"some_remarks\": \"1) In Table 1, batch size 16k has effective LR of 32. However in Figure 1c SGD with momentum at batch size 8k uses an effective LR of 4. Can you explain this inconsistency i.e. why is there such a huge jump from 4 to 32 (in reality we expect the effective LR to stay constant in the curvature regime). I also understand that one is constant epoch and other is constant step. However 4 to 32 seems a bit inconsistent.\\n\\n2) Does momentum help in constant step budget (with sufficiently large steps so that training loss is small)?\\n\\n3) Readability: Consider explaining what is meant by \\\"warm-up\\\", \\\"epoch budget\\\", \\\"step budget\\\" clearly and upfront.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper attempts to clarify the debate on large-batch neural network training, particularly on the relationship between learning rate, batch sizes and test performance. The authors claim two contributions towards understanding how the hyper-parameters of SGD affect final training and test performance: (1) SGD exhibits two regimes with different behaviours and (2) large-batch training leads to degradation of test performance even with same step budgets.\\n\\nOverall, the authors did a comprehensive study on large-batch training with the support of extensive experiments. But I'm concerned with the novelty and contributions of this paper. I tend to reject this paper because (1) the first contribution of the paper is not new as it has already been recognized by a few paper that SGD exhibits two different regimes; (2) this paper makes the debate of large-batch training even muddier.\", \"main_argument\": \"The paper does not do a great job in clarify the debate. Particularly, the authors mixed their observations up with the results of published works, making it hard to identify the contributions of this paper. For example, the two regimes mentioned in the paper has been identified by a few other works and the contribution of this paper is just to verify them again. Also, I find the experiments done in section 3 and 4 are similar to previous works and even the conclusions are similar. The only new observation I'm aware of in these two sections is that the training loss and test accuracy are independent of batch size in the noise dominated regime.\\n\\nBack to introduction section, the goal of this paper (as claimed in the beginning of second paragraph) is to clarify the debate. But does this paper really achieves this goal? In terms of learning rate scaling, this paper gets similar conclusions as Shallue et al. (2018). In terms of the difference between vanilla SGD and SGD with momentum, Zhang et al. (2019) already argued that the difference depends on specific batch sizes and SGD with momentum only outperforms SGD in the curvature dominated regime. \\n\\nI think the authors should instead focus on the discussion of generalization performance and the observation that training loss and test accuracy are independent of batch size in noise dominated regime. To my knowledge, this part is novel and interesting. \\n\\nIn summary, I'm inclined to reject this paper given the current version. However, I think the paper is still worth reading if the authors can reorganize the paper and I might increase my score if my concerns get resolved.\"}",
"{\"comment\": \"There are many theoretical and empirical papers on this topic, however we believe there is not yet consensus in the community. As we emphasized in the introduction of our paper, our main contribution is to provide clarity with substantial empirical evidence supporting both the existence of two distinct SGD regimes, as well as the existence of implicit regularization arising from the noise in the gradient estimate. As we mention in the paper, some of the theoretical predictions we discuss have been known for a long time and derived under multiple different assumptions, and we have tried to cite multiple papers for each claim where appropriate.\\n\\nWe cite a number of papers which discuss the notion that SGD exhibits qualitatively different behaviors at different batch sizes in the paragraph immediately preceding the bullet points that you mention. We are happy to add your paper to this list. Most recent theory papers in deep learning have focused on the behavior of SGD in the small batch \\\"noise dominated\\\" regime, and we cite these appropriately when we discuss this regime in depth in section 2.\\n\\nIt is well known that full batch Momentum converges faster than gradient descent, and there are a number of papers from the 90s onward which prove that SGD and Momentum are equivalent in the small batch small learning rate limit so long as the momentum coefficient is not too large. We cite many of these papers in section 3, and we also mention that Shallue et al. observed this phenomenon empirically last year. We are happy to include your paper in this list too.\\n\\nWe would like to clarify that we do already cite your work multiple times in the main text, including in the introduction when we state that many of our theoretical results can be derived from different assumptions.\", \"title\": \"Response to comments\"}",
"{\"comment\": \"Hi,\\n\\nIn terms of two points (in the second paragraph) you made in the intro, I think you need to cite previous work properly. \\n\\n1. \\\"In the noise dominated regime, the final training loss and test accuracy are independent of\\nbatch size under a constant epoch budget, and the optimal learning rate increases as the\\nbatch size rises. In the curvature dominated regime, the optimal learning rate is independent\\nof batch size, and the training loss and test accuracy degrade with increasing batch size. The\\ncritical learning rate which separates the two regimes varies between architectures.\\\"\\n\\nYou should give credits to previous work on that as it's not a new observation.\\n\\n3. \\\"SGD with Momentum and learning rate warmup do not outperform vanilla SGD in the noise\\ndominated regime, but they can outperform vanilla SGD in the curvature dominated regime.\\\"\\n\\nMy paper \\\"Which Algorithmic Choices Matter at Which Batch Sizes? Insights From a Noisy Quadratic Model\\\" already made this point empirically and theoretically with some assumptions.\", \"title\": \"Minor Comments\"}"
]
} |
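For reference, the small-learning-rate relations this thread keeps returning to can be written compactly. The block below is a standard paraphrase of the SDE/temperature view invoked in the responses above; the symbols and exact normalizations are our assumptions, not the paper's definitions.

```latex
% One SGD step with learning rate \epsilon, batch size B, dataset size N
% (a standard paraphrase; normalizations are assumptions, not the paper's).
\[
  \omega_{t+1} = \omega_t - \epsilon\,\hat{g}(\omega_t), \qquad
  \mathbb{E}[\hat{g}] = \nabla L(\omega_t), \qquad
  \operatorname{Cov}(\hat{g}) \propto \tfrac{1}{B}\,\Sigma(\omega_t),
\]
\[
  g \;\approx\; \frac{\epsilon N}{B}
  \quad\text{(SDE noise scale / ``temperature'')}, \qquad
  \epsilon_{\mathrm{eff}} \;=\; \frac{\epsilon}{1-m}
  \quad\text{(effective learning rate with momentum $m$)}.
\]
```

On this reading, linear scaling (epsilon proportional to B) holds the temperature fixed in the noise-dominated regime, while in the curvature-dominated regime the optimal learning rate is pinned near the largest stable step size and so stops growing with B.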
SkxWnkStvS | Searching for Stage-wise Neural Graphs In the Limit | [
"Xin Zhou",
"Dejing Dou",
"Boyang Li"
] | Search space is a key consideration for neural architecture search. Recently, Xie et al. (2019a) found that randomly generated networks from the same distribution perform similarly, which suggest we should search for random graph distributions instead of graphs. We propose graphon as a new search space. A graphon is the limit of Cauchy sequence of graphs and a scale-free probabilistic distribution, from which graphs of different number of vertices can be drawn. This property enables us to perform NAS using fast, low-capacity models and scale the found models up when necessary. We develop an algorithm for NAS in the space of graphons and empirically demonstrate that it can find stage-wise graphs that outperform DenseNet and other baselines on ImageNet. | [
"neural architecture search",
"graphon",
"random graphs"
] | Reject | https://openreview.net/pdf?id=SkxWnkStvS | https://openreview.net/forum?id=SkxWnkStvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YJswe4ejDS",
"B1ljIu0jjr",
"HyxpXICoor",
"Hyxu-HCojB",
"BklbTSL0Fr",
"B1xkzupttH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798736493,
1573804114993,
1573803556927,
1573803264124,
1571870136678,
1571571719095
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1941/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1941/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1941/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1941/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1941/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a graphon-based search space for neural architecture search. Unfortunately, the paper as currently stands and the small effect sizes in the experimental results raise questions about the merits of actually employing such a search space for the specific task of NAS. The reviewers expressed concerns that the results do not convincingly support graphon being a superior search space as claimed in the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Responses to reviewer #1\", \"comment\": \"I agree that their proposed model allows for more architectures but in practice it is not much stronger than WS-G.\\n\\nWe have updated results and as graph sizes increase, performance gaps become more apparent and we go up to Densenet 264 where connectivity improvements results in improvements of up to 0.8%.\\n\\n\\nThe argumentation with respect to parameters is unclear to me. On one hand, you manually influence the number of parameters, on the other you argue that you use less parameters. Obviously, you chose that your baselines have more parameters.\", \"control_over_the_number_of_parameters\": \"The single hyperparameter we can adjust for every stage is the growth rate c. A node that has k input will have kc input channels and c output channels. Here k is determined by the randomly sampled graph (different for each of the six training sessions) and out of our control. Thus, our control over the number of parameters is imprecise. We try to match all parameters. When that's not possible, we err on giving the baselines more parameters in order to create a harsh test.\\n\\nHow do results for WS-G look like if you reduce its parameters to match yours?\\nAs many newly added experiments suggest, in the range we investigated, more parameters always lead to performance improvements.\\n \\nSpecifically, we produced two variants of WS-G that have slightly parameter counts. We report the average of six training sessions.\\n\\n\\t\\t ImageNet-2012 Val\\t\\t\\t\\tImageNet V2 Test\\t\\t\\t\\n\\t # Param \\tTop 1\\tStdev\\tTop 5\\tStdev\\tTop 1\\tStdev\\tTop 5\\tStdev\\nWS-G 169 +\\t14.54M\\t77.11\\t0.06\\t 93.44\\t0.05\\t 65.23\\t0.41 \\t85.84\\t0.15\\nWS-G 169\\t14.23M\\t76.94\\t0.06\\t 93.37\\t0.07\\t 65.18\\t0.23\\t 85.79\\t0.13 \\n\\n\\nThe results show that, even reducing the parameters from 14.54 M to 14.23M has a discernible effect on the performance (a reduction of 0.17% on ImageNet and 0.05% on ImageNet V2)\\n\\n\\nIn fact, you were searching for an architecture on CIFAR-10 but you did not report your results here. Instead you only report your transferred results to ImageNet. Is it possible that you also report results on CIFAR-10? \\n\\nWe answer this in common responses 5.\\n\\n\\nFinally, you do not discuss that your graph contains only one kind of node. In many NAS methods the search space contains various types of operations. Do you think this is a problem? Is there a trivial way to extend your method to cover this as well?\\n\\nThe goal of this paper is to optimize only the connections between homogeneous nodes, but each node can contain multiple different operations. As the reviewer rightly guessed, extending this to allow different operations in the same graph is possible but beyond the scope of this paper. For example, the digraphon formulation provides a way to have different types of connections in the graph. Digraphon is concerned with the direction of connections, but we can easily employ different activation functions, pooling, or any other neural operators as connections.\"}",
"{\"title\": \"Responses to reviewer #2\", \"comment\": \"We thank the reviewer for useful insight and comments. Here are responses to individual questions.\\n\\n1. It simply ignore all other NAS works and just compares with the baseline DenseNet and random deletion/walk (WS-G). \\n\\nMost works on NAS are concerned with the structure of a single cell. After a cell is found, many cells are stacked on top of each other in order to build large-capacity models. This approach is orthogonal and complementary to our work, which is concerned with the connections among such cells. Thus, a direct comparison with these works would not provide evidence that could support or contradict our main claim.\\n \\nFew papers aim to optimize the stage-wise graph. This is at least partially due to the lack of methods to scale a small graph learned on small datasets to match the needs of a large dataset, which this paper provides. We did compare with an existing work that considers the stage-wise graph, which is the WS model found by the randomly wired network paper. Xie et al. (2019) showed that the WS model is competitive with several NAS works including AmoebaNet, PNAS and DARTS.\\n\\nDespite that, the gain (accuracy +0.17% than DenseNet baseline) is very marginal compared to other approaches: random-wire (accuracy +2% than resent50 baseline), FBNet (accuracy +2% than MobileNetv2 baseline).\\n\\nAs discussed in the general response (1a), we have updated the paper with more experiments with improved results (up to 0.8% over DenseNet). The main goal of the experiments is to create fair comparisons and isolate the effect of the stage-wise\\n\\n2. According to Section 5.1, the search is performed on CIFAR-10, but there is no evaluation on CIFAR-10 at all. The only results are reported for ImageNet instead, which is kind of strange.\\n\\nAs of results on CIFAR-10, recent performance improvements on are mostly achieved by regularization techniques rather than neural architecture. For this reason, we are afraid that CIFAR-10 may not have enough discriminating capability to separate different baselines. Instead, we added many more experiments, including on the newly proposed ImageNet V2 test set. Some results we have on CIFAR-10 are: 93.80% for WS and 93.93% for the graph we found.\"}",
"{\"title\": \"Common responses to all reviewers\", \"comment\": \"We thank the reviewers for valuable comments and responses.\\n\\nWe have uploaded a revised version of the paper including the following changes. \\nMore extensive experiments on bigger networks and an additional test set, ImageNet V2, which provides a more accurate estimate of generalization performance. The same method on bigger graphs yields bigger performance gaps up to 0.8% over DenseNet. \\n\\nImprovements in writing to further clarify our main points.\", \"on_our_contribution\": \"Most existing work on architecture transfer in NAS focus on the cell structure, which is stacked consequentially to build large networks. In this paper, we study the problem of transfering and expanding the stage-wise graph from a small dataset to a large dataset. We fill a gap in NAS research because (1) few work investigated the search for stage-wise graphs and (2) there is no known algorithm for transferring small stage-wise graphs.\\nTo validate our approach, we applied the transfer technique on two graphs. First, We expand the WS(4, 0.25) graph, defined on 32 nodes, to the graph of 64 nodes used in Denset-264. Second, we expand the 11-node graph we found on CIFAR-10 to various DenseNet settings. We showed that, after expansion, both maintain their performance lead over DenseNet. \\n\\nThe purpose of our experiment is to show that this approach is feasible and beneficial under fair comparisons. We use the same setup as much as possible across all baselines. We feel this should be encouraged as this helps in isolating the contribution of the proposed technique. \\n\\nAs of results on CIFAR-10, recent performance improvements on are mostly achieved by regularization techniques rather than neural architecture. For this reason, we are afraid that CIFAR-10 may not have enough discriminating capability to separate different baselines. Instead, we added many more experiments, including on the newly proposed ImageNet V2 test set. Some results we have on CIFAR-10 are: 93.80% for WS and 93.93% for the graph we found. \\nA small technical comment is that we improved the accuracies of the DenseNet-121 group due to improved use of the PyTorch API (switching to nn.sequential improves performance)\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new graphon-based search space. Unlike most other NAS works that search for exact network structures, this paper aims to search for the random graph distribution with graphon. Overall, it provides some new angles for NAS search space design, but the experimental results are very weak.\\n\\n1. It simply ignore all other NAS works and just compares with the baseline DenseNet and random deletion/walk (WS-G). Despite that, the gain (accuracy +0.17% than DenseNet baseline) is very marginal compared to other approaches: random-wire (accuracy +2% than resent50 baseline), FBNet (accuracy +2% than MobileNetv2 baseline).\\n2. According to Section 5.1, the search is performed on CIFAR-10, but there is no evaluation on CIFAR-10 at all. The only results are reported for ImageNet instead, which is kind of strange.\\n\\nGiven these weak results, I cannot accept this paper in the current form.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a new search space based on graphons and explore some of its benefits such as certain theoretical properties. The architecture search shares similarities with DARTS. An important difference is that the network parameters are not shared.\\nThe paper is well-written and the authors consider that the typical reader will not be familiar with graphons. I agree that their proposed model allows for more architectures but in practice it is not much stronger than WS-G. The argumentation with respect to parameters is unclear to me. On one hand, you manually influence the number of parameters, on the other you argue that you use less parameters. Obviously, you chose that your baselines have more parameters. How do results for WS-G look like if you reduce its parameters to match yours? In fact, you were searching for an architecture on CIFAR-10 but you did not report your results here. Instead you only report your transferred results to ImageNet. Is it possible that you also report results on CIFAR-10? Finally, you do not discuss that your graph contains only one kind of node. In many NAS methods the search space contains various types of operations. Do you think this is a problem? Is there a trivial way to extend your method to cover this as well?\"}"
]
} |
S1xWh1rYwB | Restricting the Flow: Information Bottlenecks for Attribution | [
"Karl Schulz",
"Leon Sixt",
"Federico Tombari",
"Tim Landgraf"
] | Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. In this work, we adopt the information bottleneck concept for attribution. By adding noise to intermediate feature maps, we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings. The method’s information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision. | [
"Attribution",
"Informational Bottleneck",
"Interpretable Machine Learning",
"Explainable AI"
] | Accept (Talk) | https://openreview.net/pdf?id=S1xWh1rYwB | https://openreview.net/forum?id=S1xWh1rYwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mKhBtMIUv0x",
"0PGiUwlVkQa",
"BxGBa1vmPl",
"K3CKL6W4A",
"dsIQuI8DCU",
"r1xdC7Dnjr",
"H1eii7P2jr",
"ryx1P7wnsB",
"SyxnBRLhoB",
"S1xbOKTJ9S",
"BJeXnpJAKB",
"BJxHG52nFB",
"BJgJBmM7YH"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1588881849673,
1585606948789,
1581789001018,
1578467731143,
1576798736463,
1573839824441,
1573839779331,
1573839702650,
1573838404483,
1571965288851,
1571843499421,
1571764749274,
1571132214902
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1940/Authors"
],
[
"~Saeid_Asgari_Taghanaki1"
],
[
"ICLR.cc/2020/Conference/Paper1940/Authors"
],
[
"~Mark_Sandler1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1940/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1940/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1940/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1940/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1940/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1940/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1940/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1940/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Infomask paper\", \"comment\": \"Dear Saeid,\\n\\nthanks for pointing us to your paper. We now reference your work. The main difference is that in your work the information bottleneck is already added during the training of the network. In contrast, our methods works aims at already trained network (post-hoc explanations). This is also reflected how we restrict the amount of information. \\n\\nBest,\\nLeon\"}",
"{\"title\": \"Overlaps with the Infomask paper\", \"comment\": \"Dear authors, we found overlaps in the methodology of your paper with our published Infomask paper: https://arxiv.org/abs/1903.11741\\n\\nIt is highly appreciated if you could please highlight the differences.\\n\\nThanks\"}",
"{\"title\": \"Summary of Changes\", \"comment\": [\"We want to summarize our changes since the original submission:\", \"Include Sanity Checks (Adebayo et al., 2018)\", \"Add Figure 4: Different depth and beta values\", \"Include LRP with parameters \\u03b1=1, \\u03b2=0\", \"Include some additional references\"], \"improved_presentation\": [\"switch to seismic color map.\", \"migrate the overloading of X by introducing a new variable for the intermediate representation R.\", \"include heatmaps in the appendix without overlay on not-cherry-picked samples.\", \"redo diagrams of Per-Sample and Readout with tikz (increase beauty).\", \"fixed several minor issues (grammar, wording, clarity).\", \"Many of these changes were encouraged by the feedback of our reviewers.\", \"Thank you for accepting us for a talk!\"]}",
"{\"title\": \"Some related work...\", \"comment\": \"Nice paper!\", \"also_see\": \"Information-Bottleneck Approach to Salient Region Discovery by Zhmoginov et all,\", \"https\": \"//arxiv.org/abs/1907.09578 also explores information bottleneck for similar tasks on simple datasets.\\n\\nTheir (our) model is somewhat different, but relies on similar concept of finding the regions that preserve most of the mutual information between masked image and the labels. It would be interesting if the differences were articulated in this paper.\"}",
"{\"decision\": \"Accept (Talk)\", \"comment\": \"All three reviewers strongly recommend accepting this paper. It is clear, novel, and a significant contribution to the field. Please take their suggestions into account in a camera ready version. Thanks!\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review 2\", \"comment\": \"Thank you very much for your extensive and helpful comments. We addressed changes in the paper in a general comment. Concerning your specific comments:\\n\\n> [...]Some of the design- and implementation-choices needed to render the intractable info bottleneck objective tractable could perhaps be discussed and potentially even improved in light of recent results in other fields (Bayesian DL, deep latent-variable generative models, and variational methods for deep neural network compression),[...]\\n\\nYes, a more complex variational approximation of Q(Z) could make our approximation of the mutual information even more accurate. As a simple normal distribution already yielded good results, we did not further explore this direction. However, it would be an interesting subject for future work.\\n\\n> A short section of current shortcomings/limitations could be added to the discussion.\\n\\nWe agree! We added the following paragraph to the conclusion stating the following limitations:\\n\\nGenerally, we would advise to use the Per-Sample Bottleneck over the Readout Bottleneck. It performs better and is more flexible as it only requires to estimate the mean and variance of the feature map. The Readout Bottleneck has the advantage of producing attribution maps with a single forward pass once trained. Images with multiple object instances provide the network with redundant class information. The Per-Sample Bottleneck may therefore discard some of the class evidence. Even for single object instances, the heatmaps of the Per-Sample Bottleneck may vary slightly due to the randomness of the optimization process.\\n\\n\\n> II) Perturbation-based approaches that inject noise (into the input image directly) have been proposed previously. Most notably: Visualizing and Understanding Atari Agents, Greydanus et al. 2018 and potentially follow-up citations. It would be interesting to compare both works empirically, but perhaps also theoretically/conceptually. Could the Greydanus work be related to applying the noise directly to the input image along with some additional constraints?\\n\\nThank you for the suggestion! Greydanus et al. blurs parts of the input images and then measures the drop in the output of the policy network and the value function. We think this method could be seen as an extension of Occlusion. Instead of setting image patches to zero, they are blurred, effectively removing high-frequency image information. Greydanus et. al do not apply noise to the input image and they also do not optimize the amount of blur. We cited the work as an Occlusion type method. We have searched the follow-up citations, but were not able to find any methods that apply noise for attribution purposes. \\n\\n> Is there a particular reason for this choice of colormap? While it seems to be roughly perceptually uniform (which is of course good), why not choose a simple sequential colormap (instead of a rainbow-like one)? At least the use of red and green at the same time should rather be avoided to maximize colormap readability under the most common forms of color vision deficiencies.\\n\\nWe share your concerns and updated the colormap to red for positive attribution and blue for negative attribution.\\n\\n> Just a pointer - no need to act on this for the current paper. 
Large parts of the field of neural network compression are concerned with a similar kind of attribution - the question is which weights/neurons/filters are relevant and which ones are not and can thus be removed from the network without loss in accuracy. Information-bottleneck style objectives (or the closely related ELBO / variational free energy) in conjunction with sparsity inducing priors have been proven to be quite fruitful. See e.g. Variational Dropout Sparsifies Deep Neural Networks, Molchanov et al. 2017 for interesting work, that aims at learning the variance of Gaussian noise that is injected into neural network weights using a similar construction and variational objective as shown in this paper. Perhaps some ideas can be borrowed/translated for future, improved versions of the method from that body of literature (Molchanov 2017, but also more sophisticated follow-up work).\\n\\nIndeed, there exist interesting parallels to neural network compression. We agree that both areas could enrich each other. Thanks for pointing this out!\"}",
"{\"title\": \"Response to Review 3\", \"comment\": \"Thank you very much for your comments. We addressed the majority of changes to the paper in a general response. Concerning your specific comments:\\n\\n> \\u201cHow close is the \\\"heat map in beta=10/k\\\" to the \\\"ground-truth heatmap\\\"?\\u201d\\n\\nIt is not clear to us what you mean by \\u201cground-truth heatmap\\u201d. There is no human-labeled set of heatmaps available to evaluate attribution methods. To evaluate how well the attribution mass is localized, we used the \\u201ebbox\\u201c metric which calculates the proportion of most relevant scores falling within the object\\u2018s bounding box. Thus, we use the bounding box labels as ground-truth proxy for localization performance, and we find that beta=10/k performs best: For the ResNet-50, on average 62 % of the highest attribution values are contained in the respective bounding box which is 15.2% higher than the best baseline.\\n\\n\\n> However, according to Table 1, only when beta is smaller than 1/k, the accuracy of the model does not degrade too much. \\n> Try betas in a broader range including very small betas, e.g. [0.0001/k, 0.001/k,....,1/k,10/k], for both Table one and visualization.\\n \\nWe agree with your suggestion and added a comparison of heatmaps for beta values from 0.1/k to 1000/k in a new figure (fig. 4). We found that beta = 0.1/k resulted in more information flowing through the network and producing more vague heatmaps. For beta = 1000/k, heatmaps are uniform with very low values (< 0.1 bits / pixel) meaning almost all information is discarded. \\nWe updated table 1 to also include the Per-Sample Bottleneck. \\n\\n> However, I am not sure if I totally agree with the claim \\\"If L_1 is zero for an area, we can guarantee that no information from this area is used for prediction.\\\"\\n----- Given L_1=0 really implies that no information of the corresponding region is used for the certain beta, but is this true for the original model (beta=0)? Table one shows that different beta would lead to very different downstream task accuracy.\\n\\nWe agree that that sentence could be clearer. In the introduction, we already described it clearer: \\u201c[..] areas scored irrelevant are indeed not necessary for the network's prediction.\\u201d We incorporated your feedback and changed the sentence to: \\u201cIf L_I is zero for an area, we can guarantee that information from this area is not necessary for the network's prediction. Information from this area might still be used when no noise is added.\\u201d \\n\\n> Specific to the two approaches you proposed, can you explain/motivate in what situations per-sample bottle would be better and in what cases we should prefer ReadOut bottleneck?\\n\\nWe addressed the issues you raised, added additional content (stimulated by the other two reviewers) and hope you agree we have significantly improved the manuscript. We would appreciate if our efforts would be rewarded with an updated rating of \\u201cAccept\\u201d. Thank you!\"}",
"{\"title\": \"Response to Review 1\", \"comment\": \"Thank you for your extensive and helpful comments and thorough review of the paper. We summarized our modifications in a general comment. We respond inline to your specific comments:\\n\\n> I'm not sure why the new degradation metric is a useful addition. What does it add that MoRF and LeRF don't capture on their own independently?\\n\\nWe agree that the integral between MoRF and LeRF does not capture anything not already implicitly contained in the MoRF and LeRF curves. However, when comparing different MoRF or LeRF curves visually, it is not always obvious which method performs better overall, as the paths may intersect (see Appendix G). Calculating the integral between the MoRF and LeRF paths yields a single scalar, which is directly comparable and while capturing the objective to perform well in both the MoRF and LeRF task. \\n\\n> I think [1] would be a nice addition to the evaluation section as it tests for something qualitatively different than the various metrics from section 4. It would also be a good addition to the related work.\\n\\nThanks for pointing it out to us. We added the weight randomization sanity check [1] to the evaluation section and compare our method to the others. \\n\\nRegarding your minor comments / presentation issues:\\n* we removed the p(x) in eq. 11. \\n* we now mention the range of lambda when it is introduced \\n* we introduced a new variable R to denote intermediate feature maps\\n\\n> \\\"indicating that all negative evidence was removed.\\\" I think this should read \\\"indicating that only negative evidence was removed.\\\"\\n\\nThank you, we updated the paper accordingly.\\n\\n> \\\"The bottleneck is inserted into an early layer to ensure that the information in the network is still local\\\". I'd like this to be explored a bit more. Though deeper feature maps are certainly more spatially coarse they still might be somewhat \\\"local\\\". To what degree to they loose localization information? My equally vague alternative intuition goes a bit differently: The amount of relevant information flowing through any spatial location seems like it shouldn't change that much, only the way its represented should change. If the proposed visualizations were the same for every choice of layer then it would confirm this intuition. That would also be an interesting result because most if not all of the cited baseline approaches (where applicable) produce qualitatively different attributions at different layers (e.g., see Grad-CAM).\\n\\nThis is indeed an interesting question and we included a new figure which compares different layer depths. The figure backs your intuition that the spatial locations of important features should remain approximately the same. However, deeper layers have larger FOVs, so that the representations are not guaranteed to stay in the exact spatial location. Deeper layers also have drastically smaller spatial resolution, limiting the resolution of the heatmap. For very early layers, the heatmaps are sparser.\"}",
"{\"title\": \"General Response to the Reviews\", \"comment\": [\"We want to thank the reviewers for the extensive feedback, their helpful comments, and suggestions. We appreciate the effort and time you invested very much! We respond to each review below individually. Here is a short summary of our improvements:\", \"A new figure shows the effect of varying layer depth and varying values for beta\", \"included Per-Sample Bottleneck to Table 1\", \"we added \\\"sanity checks\\\" (Adebayo et. al, 2018) to our evaluation section\", \"we fixed typos, integrated minor comments, and improved the presentation\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary\\n---\\n\\n(motivation)\\nLots of methods produce attribution maps (heat maps, saliency maps, visual explantions) that aim to highlight input regions with respect to a given CNN.\\nThese methods produce scores that highlight regions that are in a vague sense \\\"important.\\\"\\nWhile that's useful (relative importance is interesting), the scores don't mean anything by themselves.\\nThis paper introduces another new attribution method that measures the amount of information (in bits!) each input region contains, calibrating this score by providing a reference point at 0 bits.\\nNon-highlighted regions contribute 0 bits of information to the task, so they are clearly irrelevant in the common sense that they have 0 mutual information with the correct output.\\n\\n(approach - attribution methods)\\nAn information bottleneck is introduced by replacing a layer's (e.g., conv2) output X with a noisy version Z of that output.\\nIn particular, Z is a convex combination of the feature map (e.g., conv2) with Gaussian noise with the same mean and variance as that feature map.\\nThe weights of the combination are found so they minimize the information shared between the input and Z and maxmimize information shared between Z and the task output Y.\\nThese weights are either optimized on\\n1) a per-image basis (Per-Sample) or\\n2) predicted by a model trained on the entire dataset (Readout).\\n\\n(approach - evaluation)\", \"the_paper_uses_3_metrics_with_differing_degrees_of_novelty\": \"1) The bbox metric rewards attribution methods that put a lot of mass in ground truth bounding boxes.\\n2) The original Sensitivity-n metric from (Ancona et al. 2017) is reported with a version that uses 8x8 occlusions.\\n3) Least relevant image degredation is compared to most relevant image degredation (e.g., from (Ancona et al. 2017)) to form a new occlusion style metric.\\n\\n(experiments)\\nExperiments consider many of the most popular baselines, including Occlusion, Gradients, SmoothGrad, Integrated Gradients, GuidedBP, LRP, Grad-CAM, and Pattern Attribution. They show:\\n1) Qualitatively, the visualizations highlight only regions that seem relevant.\\n2) Both Per-Sample and Readout approaches put higher confidence into ground truth bounding boxes than all other baselines.\\n3) Both Per-Sample and Readout approaches outperform all baselines almost all the time according to the new image degredation metric.\\n\\n\\nStrengths\\n---\\n\\nThe idea makes a lot of sense. I think heat maps are often thought of in terms of the colloquial sense of information, so it makes sense to formalize that intuition.\\n\\nThe related work section is very well done. The first paragraph is particularly good because it gives not just a fairly comprehensive view of attribution methods, but also because it efficiently describes how they all work.\\n\\nThe results show that proposed approaches clearly outperform many strong baselines across different metrics most of the time.\\n\\n\\nWeaknesses\\n---\\n\\n\\n* I'm not sure why the new degredation metric is a useful addition. 
What does it add that MoRF and LeRF don't capture on their own independently?\\n\\n* I think [1] would be a nice addition to the evaluation section as it tests for something qualitatively different than the various metrics from section 4. It would also be a good addition to the related work.\\n\\n\\nMissing Details / Points of Confusion\\n---\\n\\n* I think there's an extra p(x) in eq. 11 in appendix D.\\n\\n* I think the variable X is overloaded. In eq. 1 it refers to the input (e.g., the pixels of an image) while in eq. 2 it refers to an intermediate feature map (e.g., conv2) even though it later seems to refer to the input again (e.g., eq. 3). Different notation should be used for intermediate feature maps and inputs.\\n\\n\\nPresentation Weaknesses\\n---\\n\\n* In section 3.1 is lambda meant to be constrained in the range [0, 1]? This is only mentioned later (section 3.2) and should probably be mentioned when lambda is introduced.\\n\\n* \\\"indicating that all negative evidence was removed.\\\" I think this should read \\\"indicating that only negative evidence was removed.\\\"\\n\\n\\nSuggestions\\n---\\n\\n\\\"The bottleneck is inserted into an early layer to ensure that the information in the network is still local\\\"\\nI'd like this to be explored a bit more. Though deeper feature maps are certainly more spatially coarse they still might be somewhat \\\"local\\\". To what degree to they loose localization information? My equally vague alternative intuition goes a bit differently: The amount of relevant information flowing through any spatial location seems like it shouldn't change that much, only the way its represented should change. If the proposed visualizations were the same for every choice of layer then it would confirm this intuition. That would also be an interesting result because most if not all of the cited baseline approaches (where applicable) produce qualitatively different attributions at different layers (e.g., see Grad-CAM).\\n\\n\\n[1]: Adebayo, Julius et al. \\u201cSanity Checks for Saliency Maps.\\u201d NeurIPS (2018).\\n\\n\\nPreliminary Evaluation\\n---\", \"clarity\": \"The paper is clearly written.\", \"originality\": \"The idea of using the formal notion of information in attribution maps is novel, as is the bbox metric.\", \"significance\": \"This method could be quite significant. I can see it becoming an important method to compare to.\", \"quality\": \"The idea is sound and the evaluation is strong.\\n\\nThis is a very nice paper in all the ways listed above and it should be accepted!\\n\\nPost-rebuttal comments\\n---\\n\\nThe author responses and other reviews have only increased my confidence that this paper should be accepted.\"}",
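To make the construction summarized in this review concrete, the following is a minimal, self-contained sketch of the described per-sample noise injection. The sigmoid parametrization of the mixing weights, the per-map Gaussian statistics, and the exact form of the information term are our own assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def per_sample_bottleneck_loss(feat, alpha, model_head, target, beta):
    """One loss evaluation for a per-sample information bottleneck.

    feat:       detached intermediate feature map R, shape (1, C, H, W)
    alpha:      learnable logits of the same shape; lambda = sigmoid(alpha)
    model_head: the (frozen) classifier layers after the bottleneck
    target:     class index tensor of shape (1,)
    beta:       trade-off between prediction loss and information loss
    """
    lam = torch.sigmoid(alpha)
    mu, std = feat.mean(), feat.std()
    eps = mu + std * torch.randn_like(feat)

    # Z is a convex combination of the real features and matched noise.
    z = lam * feat + (1.0 - lam) * eps

    # Upper bound on I(R; Z): KL between N(lam*feat + (1-lam)*mu, ((1-lam)*std)^2)
    # and the noise distribution N(mu, std^2), averaged over locations.
    mu_norm = lam * (feat - mu) / std
    var_norm = (1.0 - lam) ** 2
    info_loss = 0.5 * (mu_norm ** 2 + var_norm - torch.log(var_norm) - 1.0).mean()

    ce_loss = F.cross_entropy(model_head(z), target)
    return ce_loss + beta * info_loss
```

A per-sample heatmap would then follow by optimizing `alpha` with a gradient method for a number of steps and reading off the per-location information term; a readout variant would instead train a network to predict `alpha` across the whole dataset.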
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an information-bottleneck-based approach to infer the regions/pixels that are most relevant to the output. For all the metrics listed in the paper, the proposed approaches all achieve very good performance. It turns out, the proposed two architectures are better (at least alternative) choices to the other existing attribution methods.\\n\\nI do agree that the proposed two models (Per-Sample and Readout) can be used to roughly infer regions of interest, which has been strongly supported by the comprehensive experiments. To minimize equation (6), we need to make beta*L_I small. Minimizing L_{CE} in (6) tries to maximize the mutual information between Z and output (labels); while minimizing L_I with respect to weight beta would try to inject noise to each dimension of Z. However, L_{CE} needs to ensure it can get enough information for prediction, and thus would prevent the noise injection process for \\u201cthe key regions\\u201d. By choosing reasonable beta (similar to variational information bottleneck), the proposed approaches are capable to highlight key regions used for prediction.\\n\\nOverall, I think the method is elegant for approximately estimating the relevance score map.\\nBelow are some of my (minor) questions/concerns:\\n\\n1. What we learned = What we want?\\nThe proposed approach seeks a sort of \\u201csparse heatmap\\u201d. \\nThe larger the beta, the more regions/pixels would be suppressed while smaller beta might fail to suppress non-important regions in the image.\\nIn the paper, the beta used for calculating the per-sample bottleneck is among [100/k , 10/k, 1/k].\\nThe beta for ReadOut bottleneck is 10/k.\\nHowever, according to Table 1, only when beta is smaller than 1/k, the accuracy of the model does not degrade too much. \\nWhen using beta=10/k to get the \\\"heat map\\\" (where 10/k is the best choice of per-smaple bottleneck for degradation task), how close is the \\\"heat map in beta=10/k\\\" to the \\\"ground-truth heatmap\\\"?\\nTo better understand the proposed methods, I have a small suggestion:\\n------ Try betas in a broader range including very small betas, e.g. [0.0001/k, 0.001/k,....,1/k,10/k], for both Table one and visualization. \\nFix a few images and visualize the heatmap given different betas.\\nWe might better see how the growth of beta changes the heatmap.\\n\\n2. About zero-valued attributions.\\nI agree with you that equation (5) is an upper bound of MI (eq (4)).\\nHowever, I am not sure if I totally agree with the claim \\\"If L_1 is zero for an area, we can guarantee that no information from this area is used for prediction.\\\"\\n----- Given L_1=0 really implies that no information of the corresponding region is used for the certain beta, but is this true for the original model (beta=0)? Table one shows that different beta would lead to very different downstream task accuracy.\\n\\n3. Specific to the two approaches you proposed, can you explain/motivate in what situations per-sample bottle would be better and in what cases we should prefer ReadOut bottleneck?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\nThe paper proposes a novel perturbation-based method for computing attribution/saliency maps for deep neural network based image classifiers. In contrast to most previous work on perturbation-based attribution, the paper proposes to inject carefully crafted noise into an early layer of the network. Importantly, the noise is chosen such that it optimizes an information-theoretically motivated objective (rate-distortion/info bottleneck) that ensures that decision-relevant signal is flowing while constraining the overall channel-capacity, such that decision-irrelevant signal is blocked from flowing. The flow of signal is controlled by the amount of noise injected, which translates into a certain amount of mutual information between input image regions and noisy activations/features. This mutual information can be visualized in the input image, but it also has a clear, quantitative meaning that is readily interpretable. The paper introduces two ways to construct the injected noise, based on the information bottleneck. Resulting attribution maps are computed and evaluated on VGG-16 and ResNet-50 (on ImageNet), and are compared against an impressive number of previously proposed attribution methods. Importantly, the paper uses three different quantitative measures to compare the quality of attribution maps. The proposed method performs well on all three measures.\\n\\nContributions\\ni) Derivation of a novel method for constructing attribution maps. Importantly, the method is grounded on solid theoretical footing for extracting minimal relevant information (rate-distortion theory / information bottleneck method).\\n\\nii) Proposal of a novel quantitative measure to compare quality of pixel-level attribution maps in image classification, and extension of a previously reported method.\\n\\niii) Evaluation and comparison against a large body of state-of-the-art attribution methods.\\n\\nQuality, Clarity, Novelty, Impact\\nThe paper is clear and well written, with a nice introduction to the information bottleneck method. Experiments are well described and hyper-parameter settings are given in the appendix. To the best of my knowledge, the proposed method is sufficiently novel and the application of the information bottleneck framework to pixel-level attribution has not been reported before. Some of the design- and implementation-choices needed to render the intractable info bottleneck objective tractable could perhaps be discussed and potentially even improved in light of recent results in other fields (Bayesian DL, deep latent-variable generative models, and variational methods for deep neural network compression), but I currently don\\u2019t consider this a major issue. To me personally the work in convincing and mature enough to vote for acceptance - perhaps most importantly it lays important groundwork for important connections to the theory of relevant information and puts a lot of much needed emphasis on objective evaluation of attribution methods (i.e. without subjective visual judgement of saliency maps). 
My suggestions below are aimed at helping improve the paper even further.\\n\\n\\nImprovements\\nI) A short section of current shortcomings/limitations could be added to the discussion.\\n\\nII) Perturbation-based approaches that inject noise (into the input image directly) have been proposed previously. Most notably: Visualizing and Understanding Atari Agents, Greydanus et al. 2018 and potentially follow-up citations. It would be interesting to compare both works empirically, but perhaps also theoretically/conceptually. Could the Greydanus work be related to applying the noise directly to the input image along with some additional constraints?\\n\\n\\nMinor Comments\\na) Is there a particular reason for this choice of colormap? While it seems to be roughly perceptually uniform (which is of course good), why not choose a simple sequential colormap (instead of a rainbow-like one)? At least the use of red and green at the same time should rather be avoided to maximize colormap readability under the most common forms of color vision deficiencies.\\n\\nb) Just a pointer - no need to act on this for the current paper. Large parts of the field of neural network compression are concerned with a similar kind of attribution - the question is which weights/neurons/filters are relevant and which ones are not and can thus be removed from the network without loss in accuracy. Information-bottleneck style objectives (or the closely related ELBO / variational free energy) in conjunction with sparsity inducing priors have been proven to be quite fruitful. See e.g. Variational Dropout Sparsifies Deep Neural Networks, Molchanov et al. 2017 for interesting work, that aims at learning the variance of Gaussian noise that is injected into neural network weights using a similar construction and variational objective as shown in this paper. Perhaps some ideas can be borrowed/translated for future, improved versions of the method from that body of literature (Molchanov 2017, but also more sophisticated follow-up work).\"}",
"{\"comment\": \"Hello,\\n\\nwe found a bug in our code that had minor effects on our results. When calculating the KL-divergence, we used \\\"log(s)\\\" instead of \\\"log(s**2)\\\" where \\\"s\\\" is the standard deviation. We re-run the evaluation and provide an screenshot of the updated results: https://gist.github.com/attribution-bottleneck/07ee0959bbd8b8ac36f9dba476301dd8 .\\nThe degradation task for VGG is now also performed on the full ImageNet validation set and on 8x8 and 14x14 tiles. Using the correct log variance, we found that the Per-Sample bottleneck even improved a bit on the degradation task. We will update the paper once the rebuttal period starts.\", \"title\": \"bug with minor effects on the results\"}"
]
} |
H1gx3kSKPS | Stein Bridging: Enabling Mutual Reinforcement between Explicit and Implicit Generative Models | [
"Qitian Wu",
"Rui Gao",
"Hongyuan Zha"
] | Deep generative models are generally categorized into explicit models and implicit models. The former assumes an explicit density form whose normalizing constant is often unknown; while the latter, including generative adversarial networks (GANs), generates samples using a push-forward mapping. In spite of substantial recent advances demonstrating the power of the two classes of generative models in many applications, both of them, when used alone, suffer from respective limitations and drawbacks. To mitigate these issues, we propose Stein Bridging, a novel joint training framework that connects an explicit density estimator and an implicit sample generator with Stein discrepancy. We show that the Stein Bridge induces new regularization schemes for both explicit and implicit models. Convergence analysis and extensive experiments demonstrate that the Stein Bridging i) improves the stability and sample quality of the GAN training, and ii) facilitates the density estimator to seek more modes in data and alleviate the mode-collapse issue. Additionally, we discuss several applications of Stein Bridging and useful tricks in practical implementation used in our experiments. | [
"generative models",
"generative adversarial networks",
"energy models"
] | Reject | https://openreview.net/pdf?id=H1gx3kSKPS | https://openreview.net/forum?id=H1gx3kSKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"P1GpmaOruZ",
"B1l7gurisS",
"rkeBXBZiiH",
"BJxgEG-joH",
"B1xDd0essH",
"BkeDzdV5oH",
"SJeaJON9oH",
"HyepbwEcsS",
"SklGwvQpKH",
"rklud-o3tH",
"SJe9euyhFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736434,
1573767146596,
1573750045119,
1573749288154,
1573748334630,
1573697551499,
1573697509223,
1573697285203,
1571792730243,
1571758447707,
1571710962199
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1939/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1939/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1939/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1939/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1939/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1939/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1939/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1939/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1939/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1939/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a generative model that jointly trains an implicit generative model and an explicit energy based model using Stein's method. There are concerns about technical correctness of the proofs and the authors are advised to look carefully into the points raised by the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Treat $r^2$ as an auxiliary for $ \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}_E}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right]$\", \"comment\": \"Basically, we introduce an auxiliary variable $r^2$ to represent the value of $ \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}_E}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right]$. The minimization over $r$ and the inequality constraint $ \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}_E}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right] \\\\leq r^2$ forces $r^2=\\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}_E}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right]$.\"}",
"{\"title\": \"following\", \"comment\": \"What do you mean by min ... + \\\\lambda r^2 : E[...] <= r^2 in the derivation following?\"}",
"{\"title\": \"This is a consequence of the definition of the density ratio\", \"comment\": \"According to the definition $h=d\\\\mathbb{P}/d\\\\mathbb{P}_E-1$, we have $d\\\\mathbb{P}=(1+h)d\\\\mathbb{P}_E$. Replacing $d\\\\mathbb{P}$ with $(1+h)d\\\\mathbb{P}_E$ on the left side yields the right side.\"}",
"{\"title\": \"can't see how the proof works\", \"comment\": \"Hi,\\n\\nCan you explain how this line works?\\n\\n\\\\begin{aligned}\\n\\\\min_{h:\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\left\\\\{\\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] + \\\\lambda_2\\\\cdot \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}}\\\\left[\\\\frac{\\\\nabla_x h(\\\\mathbf{x})^\\\\top}{1+h(\\\\mathbf{x})} k(\\\\mathbf{x},\\\\mathbf{x}') \\\\frac{\\\\nabla_x h(\\\\mathbf{x}')}{1+h(\\\\mathbf{x}')}\\\\right] \\\\right\\\\}\\\\\\\\\\n= & \\\\min_{h:\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\left\\\\{\\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] + \\\\lambda_2\\\\cdot \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}_E}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right] \\\\right\\\\}.\\n\\\\end{aligned}\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We appreciate your comments, and we apologize for the typos that cause some confusion.\\n\\n\\n1. To begin with, we apologize for insufficient illumination of the motivation why we consider a Stein discrepancy as a bridge between two models. Here we provide a more thorough discussions from three perspectives. \\n\\nFirstly, there are many of applications where we do need both of the explicit density (at least an energy value that can distinguish high-quality samples and low-quality ones) and sample generation. In the introduction part, we discuss some of them, like sample evaluation, data augmentation for insufficient observation and outlier detection. In our experiments, we apply our model to address data insufficiency and outlier detection. \\nWe also observe in the literature of GAN, quite a bit is devoted to the discussion of estimating the likelihood for the obtained generator. \\n\\nSecondly, jointly training two models can presumably compensate and reinforce each other in the training process. Although the ideal global optimum of an individual explicit or implicit model can both guarantee that the model exactly captures the data distribution, when it comes to practical training, an individually learned model could suffer from many issues like mode collapse and training unstability that could lead to undesirable performance. Hence, one important motivation of joint training is to let one model regularize the other and help it avoid the local optima or stablize the training. We verify these arguments in Section 3.2 and 4. \\n\\nThirdly, in some specific tasks, we need to add some induction bias to the model but it is often the case that it is easy for one model to incorporate the induction bias while it is harder for another. For example, if we want to obtain a certain type of generated samples, then it is difficult to mathematically enforce some constraints on the implicit model, but we can consider a truncated density/energy function for explicit model. In this case, the explicit model can guide the implicit one to generate the samples that meets the requirements through joint training. If we specify energy model as PixelCNN, it would be easy to add induction bias that could control pixel-level features of generated images. In fact, we do some additional experiments where we replace the original deep energy model as PixelCNN++, and we achieve better results of generated samples with inception score 7.20.\\n\\n2. The approximations in the first version are not necessary and we apologize for the confusion due to some typos. 
We modify this part in the updated version as below:\\n\\n\\\\[\\n\\\\begin{aligned}\\n& \\\\min_{h:\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\{\\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] + \\\\lambda_2\\\\cdot \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}}[\\\\nabla_x\\\\log(1+h(\\\\mathbf{x}))^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x\\\\log(1+h(\\\\mathbf{x}'))] \\\\}\\\\\\\\\\n= & \\\\min_{h:\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\left\\\\{\\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] + \\\\lambda_2\\\\cdot \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}}\\\\left[\\\\frac{\\\\nabla_x h(\\\\mathbf{x})^\\\\top}{1+h(\\\\mathbf{x})} k(\\\\mathbf{x},\\\\mathbf{x}') \\\\frac{\\\\nabla_x h(\\\\mathbf{x}')}{1+h(\\\\mathbf{x}')}\\\\right] \\\\right\\\\}\\\\\\\\\\n= & \\\\min_{h:\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\left\\\\{\\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] + \\\\lambda_2\\\\cdot \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}_E}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right] \\\\right\\\\}.\\n\\\\end{aligned}\\n\\\\]\\n\\n3. Yes, the expectation should be over $\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}_E$. Thanks for pointing this out.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We appreciate very much your helpful comments and apologize for the typos that cause some confusion.\\n\\n1. We add the following reasoning in the updated version:\\n\\n\\\\[\\\\begin{aligned}\\n &\\\\min_{h:\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\left\\\\{\\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] + \\\\lambda_2\\\\cdot \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right] \\\\right\\\\} \\\\\\\\\\n = & \\\\min_{r\\\\ge0} \\\\min_{h:\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\left\\\\{\\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] + \\\\lambda_2 r^2: \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right] \\\\leq r^2 \\\\right\\\\} \\\\\\\\\\n = & \\\\min_{r\\\\ge0} \\\\min_{h:\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\left\\\\{r\\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] + \\\\lambda_2 r^2: \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right] \\\\leq 1 \\\\right\\\\}\\\\\\\\\\n = & \\\\min_{r\\\\ge0} \\\\ \\\\left\\\\{\\\\lambda_2 r^2 - r||D||_{H^{-1}(\\\\mathbb{P}_E;k)} \\\\right\\\\}\\\\\\\\\\n = & -\\\\frac{1}{4\\\\lambda_2} ||D||_{H^{-1}(\\\\mathbb{P}_E;k)}.\\n \\\\end{aligned}\\n \\\\]\", \"note_that_we_slightly_change_the_definition_of_the_kernel_sobolev_dual_norm_just_to_get_a_cleaner_result\": \"\\\\[\\n\\t||D||_{H^{-1}(\\\\mathbb{P};k)} := \\\\sup_{u\\\\in C_0^\\\\infty}\\\\left\\\\{\\\\langle D,u \\\\rangle_{L^2(\\\\mathbb{P})}:\\\\ \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}}\\\\left[\\\\nabla_x h(\\\\mathbf{x})^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x h(\\\\mathbf{x}')\\\\right] \\\\leq 1,\\\\ \\\\mathbb{E}_{\\\\mathbb{P}}[h]=0\\\\right\\\\}.\\n\\t\\\\]\\n\\n2. As you pointed out, the approximation is indeed unnecessary, and we modify the proof (see the second bullet in response to reviewer 2). $t$ should appear in (15) -- we apologize again for the typo and as above-mentioned, in the updated version, we slightly change the definition of the kernel Sobolev dual norm just to get a cleaner result without explicitly having $t$.\\n\\n3. Thanks for pointing out the confusing terminology. We have changed all `density' into `log-density' or `energy'.\\n\\n4. We have modified the update rule according to your suggesion in the updated version.\\n\\n5. We have changed the assumed energy model to $p(x)=\\\\exp(-\\\\frac{1}{2}x^2-\\\\phi x)$ to avoid the infinite normalizing constant, and the results remain the same.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We appreciate very much your comments and suggestions. We apologize for the typos and unjustified claims in the first version and we correct them in the updated version.\\n\\n1. As you kindly mentioned, we can directly consider $S(\\\\lambda_1P_{real}+\\\\lambda_2P_G, P_E)$ when $\\\\lambda_1+\\\\lambda_2=1$ (after suitable scaling). In fact, in this case\\n\\\\[\\nS(\\\\lambda_1P_{real}+\\\\lambda_2P_G, P_E) = \\\\lambda_1 S(P_{real}, P_E) + \\\\lambda_2 S(P_G, P_E).\\n\\\\]\\nSince $\\\\lambda_1P_{real}+\\\\lambda_2P_G$ is a mixture, we can generate samples from it by the usual sampling scheme for mixture models which involve sampling from $P_{real}$ and $P_G$, separately. So theoretically and computationally, the above two approaches make little difference. \\n\\nOne may think to just minimize $S(\\\\lambda_1P_{real}+\\\\lambda_2P_G, P_E)$ (without $W(P_G,P_{real})$). We tried this approach and the numerical results are not good.\\n\\nThere is an advantage of using the $\\\\lambda_1 S(P_{real}, P_E) + \\\\lambda_2 S(P_G, P_E)$ formulation: this is when we actually have a density model for $P_E$ and we want to use MLE for estimating $P_E$ from data, then we can use\\n\\\\[\\n-\\\\lambda_1 {\\\\cal E}_{P_{real}} [P_E] + \\\\lambda_2 S(P_G, P_E).\\n\\\\]\\n\\nWe empirically compared using shared Stein critics and two different Stein critics and found that there was little difference for the performance. So we used shared Stein critics to reduce computational cost. \\n\\n2. We add the following reasoning to justify the swap of $\\\\min_G$ and $\\\\max_D$.\\n\\nUsing the notations in the paper, we consider\\n\\\\[\\\\begin{aligned}\\n\\\\min_{h\\\\in L^1(\\\\mathbb{P}_E):\\\\mathbb{E}_{\\\\mathbb{P}_E}[h]=0} \\\\max_{D:\\\\mathrm{Lip}(D)\\\\leq 1} \\\\bigg\\\\{ \\\\mathbb{E}_{\\\\mathbb{P}_E}[D] + \\\\mathbb{E}_{\\\\mathbb{P}_E}[hD] - \\\\mathbb{E}_{\\\\mathbb{P}_{real}}[D] + \\\\lambda_1\\\\mathcal{S}(\\\\mathbb{P}_{real},\\\\mathbb{P}_E) \\\\\\\\\\\\quad \\\\quad + \\\\lambda_2\\\\cdot \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\sim\\\\mathbb{P}}[\\\\nabla_x\\\\log(1+h(\\\\mathbf{x}))^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') \\\\nabla_x\\\\log(1+h(\\\\mathbf{x}'))]\\\\bigg\\\\},\\n\\\\end{aligned}\\n\\\\]\\nwhere $h(\\\\mathbf{x}) := d\\\\mathbb{P}/d\\\\mathbb{P}_E(\\\\mathbf{x}) -1$ (see next bullet for the validity of this formulation). Without loss of generality, we can only consider those $D$'s with $D(\\\\mathbf{x}_0)=0$ for some element $\\\\mathbf{x}_0$, as a constant shift does not change the value of $\\\\mathbb{E}_{\\\\mathbb{P}_E}[(1+h)D]-\\\\mathbb{E}_{\\\\mathbb{P}_{real}}[D]$.\\nThe space of Lipschitz functions that vanish at $\\\\mathbf{x}_0$ is a Banach space, and the subset of 1-Lipschtiz functions is compact (Weaver (1999)). Moreover, $L^1(\\\\mathbb{P}_E)$ is also a Banach space. The above verifies the condition of Sion's minimax theorem, and thus the claim is proved.\\n\\n3. We apologize for omitting details in the proof of Theorem 1 and 2. We add more explanations in the updated version. 
\\n\\nFor the transition from (14) to the $\\\\inf_{\\\\mathbb{P}}$ in the proof of Theorem 1, we add the following reasoning:\\n\\nAssume $\\\\mathbb{P}_G$ exhausts all continuous probability distributions.\\nFrom the definition of kernel Stein discrepancy\\n\\\\[\\n\\\\mathcal{S}(\\\\mathbb{P},\\\\mathbb{P}_E) = \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{x}'\\\\in\\\\mathbb{P}} [(\\\\nabla_x \\\\log \\\\mathbb{P}(\\\\mathbf{x}) - \\\\nabla_x \\\\log \\\\mathbb{P}_E(\\\\mathbf{x}))^\\\\top k(\\\\mathbf{x},\\\\mathbf{x}') (\\\\nabla_x \\\\log \\\\mathbb{P}(\\\\mathbf{x}') - \\\\nabla_x \\\\log \\\\mathbb{P}_E(\\\\mathbf{x}'))],\\n\\\\]\\n$\\\\mathcal{S}(\\\\mathbb{P},\\\\mathbb{P}_E)$ is infinite if $\\\\mathbb{P}$ is not absolutely continuous with respect to $\\\\mathbb{P}_E$.\\nHence, it suffices to consider those $\\\\mathbb{P}$'s that are absolutely continuous with respect to $\\\\mathbb{P}_E$.\\n\\nFor the swapping of $\\\\min$ and $\\\\mathbb{E}$ in the proof of Theorem 2, we add the following justification: the exchanging of $\\\\min$ and $\\\\mathbb{E}$ follows from the interchangebability principle (Theorem 7.80 in Shapiro (2009)).\\n\\n4. We have supplemented the derivation for (8) (which changes to (6) in the updated version) in Appendix D.1. \\n\\n5. We kindly clarify that the Stein critic can be a function $f: R^d\\\\rightarrow R^{d'}$ where $d'$ does not necessarily equal to $d$, see definition 2.1 in Liu et al. (2016). The only requirement for $f$ is Stein identity condition, i.e.,\\n$$E_p[A_pf(x)] = E_p[\\\\nabla_x \\\\log p(x) f(x)^\\\\top + \\\\nabla_x f(x)] = 0.$$\\nSuch property induces a measurement of difference between two distributions $p$ and $q$ as $\\\\mathbb E_p[A_qf(x)]$, which is a $d\\\\times d'$ matrix. The Stein identity guarantees that $ E_p[ A_qf(x)]=0, \\\\forall f$ in some function space, if and only if $p=q$. So a general Stein discrepancy can be written as $\\\\phi( E_p[A_qf(x)])$ where $\\\\phi$ is an operation that transforms a $d\\\\times d'$ matrix into a scalar. A common choice of $\\\\phi$ is trace operation on condition that $d'=d$. Note that one can also consider $d'\\\\neq d$ and use other forms for $\\\\phi$, like matrix norm, as is indicated in Liu et al. (2016). Therefore, for practical implementation we use $d'=1$ and further simply $\\\\phi$ as an average of each dimension of $E_p(A_qf(x))$.\"}",
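As a quick sanity check on the Stein identity discussed in point 5, here is a minimal Monte Carlo illustration for a one-dimensional standard Gaussian. The test function, sample size, and mismatched distribution are our own illustrative choices, added to make the identity tangible; they are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

# For p = N(0, 1): grad_x log p(x) = -x.
# Stein identity: E_p[ f'(x) + score(x) * f(x) ] = 0 for suitable f.
f = np.tanh(x)            # bounded smooth test function
f_prime = 1.0 - f ** 2    # its derivative
score_p = -x

print(f"E_p[A_p f] ~= {(f_prime + score_p * f).mean():.4f}")  # close to 0

# Under a mismatched q = N(0, 2), score_q(x) = -x / 2, the Stein
# operator of q applied under p no longer averages to zero:
score_q = -x / 2.0
print(f"E_p[A_q f] ~= {(f_prime + score_q * f).mean():.4f}")  # clearly non-zero
```

A kernelized Stein discrepancy, as used in the paper, aggregates such violations over a whole function class rather than a single hand-picked f.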
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper has a lot of typos and needs a lot of proofreading and is not in shape for being reviewed.\", \"summary_of_the_paper\": \"The paper proposes to train implicit model such as gan and an explicit model (Energy Based ) jointly . The GAN is trained using WGAN-GP objective or the original JS objective (we have a discriminator D and Generator G). The energy based model (E) is trained using Stein Divergence with a fixed kernel k or a learned critic who's parameters are denoted pi in the paper. Note that the critic of the stein divergence is vector valued. This paper propose to add a regularization loss on the stein divergence between the generator G (implicit model ) and the explicit model (E). This gives a training objective \\n\\n$\\\\min_{G,E} W(P_r, G) + \\\\lambda_1 S(P_r, P_E)+ \\\\lambda_2 S(P_{G}, P_{E})$\", \"in_the_paper_the_stein_critic_is_shared_between_the_two_stein_divergence_which_means_that_the_authors_are_rather_considering\": \"$S(\\\\lambda_1 P_r + \\\\lambda_ 2P_{G}, P_{E})$\\n\\nPaper shows the effect of this additional coupling between the two models as a regularization on the Discriminator D and on the critic of the stein divergence. \\n\\nThen the effect of the regularization is also show in terms of convergence in the optimization on a bilinear game, and in the convex concave case. \\n\\nExperiments are given showing the benefits of the joint training.\", \"there_are_too_many_concerns_with_this_papers\": \"\", \"1__the_first_one_was_mentioned_above_if_the_critic_is_shared_then_you_better_be_considering\": \"$S(\\\\lambda_1 P_r + \\\\lambda_ 2P_{G}, P_{E})$\\n\\n2- In equation 4, the problem is $\\\\min_{G} \\\\max_{D}$ it is swapped.\\n\\n3- There a lot of gaps in the proofs of Theorems 1 and 2. The transition from equation 14 to the $\\\\inf_{\\\\mathbb{P}}...$ is not explained and seems flawed. In theorem 2 , the proof is too short and swapping of $\\\\min$ and $\\\\mathbb{E}$ is not backed rigoursly. \\n\\n4- Again in Equation 8, it is not clear how the Stein terms were computed , the appendix does not give the derivations either.\\n\\n5 - Authors say that the Stein critic have similar architecture to the GAN critic , which indicates an error in the implementation in the neural case for stein critic. Stein critic has to be vector valued, after checking the code of this paper on GitHub, stein critic maps to a real value in the code , which is flawed. The critic of stein needs to map the image to an image , which actually quite expensive.\", \"typos\": \"\", \"abstract\": \"without explicitly defines -> defining\\nmultimodal data . has been -> have \\nwithout explicit defines -> defining\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to train a GAN and an EBM jointly, and bridge them using a Stein discrepancy. The paper claims it leads to novel regularization effect on both models, and stablizes the optimization process. Experiments on MNIST and CIFAR-10 show improvement in sample quality and outlier detection.\\n\\nBoth the idea and the experiment results are interesting. However, the derivations contain too many typos and are in general confusing, and I cannot confirm their correctness. Therefore I cannot recommend acceptance.\", \"specifically_the_proof_of_theorem_1_seems_problematic\": \"1. In the proof you claim (15) equals $\\\\frac{-1}{4\\\\lambda_2}\\\\lVert D-t\\\\rVert_{H^{-1}}$. But (15) could only simplify to \\n$\\\\frac{1}{\\\\lambda_2} ( E[D\\\\cdot(\\\\lambda_2 h)] + E_{x,x'}[\\\\nabla(\\\\lambda_2 h(x))^T k(x,x') \\\\nabla(\\\\lambda_2h(x'))],$\\nwhere h is unconstrained. Compare this with the definition of the $H^{-1}$ norm,\\n$sup_h \\\\{E[D\\\\cdot h]: E_{x,x'}[\\\\nabla h(x)^T k(x,x') \\\\nabla h(x')] \\\\le 1\\\\},$\\nhow did you drop the inequality constraint on h?\\n2. The transformation from the original objective (14) to (15) is strange as well. In the proof you claim the minimization problem below \\\"invoking Lagrangian duality gives\\\" could only turn to (15) after \\\"applying the approximation log(1+a)=a+O(a^2)\\\" and \\\"a further approximation\\\". But you can turn it into\\nE[(D-t)h]+\\u03bb E_{x,x'~P_E}[\\u2207h(x)^T k(x,x') \\u2207h(x')]\\nsimply by simplifying the gradient terms. Also, why did the $t$ disappear in (15)?\\n\\nThere are also typos and issues elsewhere. To list a few:\\n3. Energy-based models are not generally referred to as \\\"explicit models\\\", since the normalization constant is intractable. I would suggest to replace the occurrences of (log) \\\"density\\\" with \\\"energy\\\" to avoid confusion.\\n4. The GD update rule of (6) is incorrect; the optima should also be (0, 1), instead of (1, 0).\\n5. On the second line on Page 8, the unnormalized log density cannot be x^2+\\\\phi x, as the normalization constant would then be infinity.\\n\\nFor these reasons, I believe this paper needs a thorough proofreading before it can be reviewed efficiently.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposes a training objective that combines three terms:\", \"A Stein discrepancy for learning a energy model with intractable normalizing constant\", \"A Wasserstein GAN objective for learning an implicit neural sampler.\", \"A Stein discrepancy for minimizing the distance between distributions defined by the energy model and the GAN.\", \"The third term is called \\\"Stein bridging\\\" by the authors. It seems pretty difficult to motivate such a bridging term because from the first glance this term does not add anything to learning the two models from data. So I'm wondering how the authors motivate themselves to study this modification. This is my main concern about the paper if this term appears simply because that energy model has an unnormalized density while from GANs we can sample, and Stein discrepancy is best applicable to such pair of distributions.\", \"Apart from the concern on motivations, I tried to follow the arguments in Section 3, as the bridging term is justified as regularization to both models. However, I think the proof of Theorem 1 is incorrect:\", \"I don't think it makes sense from \\\\nabla \\\\log (1 + h(x)) to (1 + h(x))\\\\nabla h(x), even if the taylor expansion suggested by the authors is applied.\", \"In the next step, \\\"Consider a further approximation\\\", this approximation basically sets 1 + h(x) to 1, if h(x) is approximately 0, then P=P_E..\"], \"minor\": [\"IN proof of Theorem 1, the expectation should be always over x,x'~P_E instead of x~P, right?\"]}"
]
} |
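The reviews in the record above center on the kernelized Stein discrepancy used as the bridging term between the energy-based model and the GAN. As a reference point for that discussion, here is a minimal NumPy sketch of the standard U-statistic KSD estimator for an RBF kernel; the function name `rbf_ksd`, the fixed bandwidth, and the toy Gaussian score are our own illustration choices and are not taken from the paper or its code.

```python
import numpy as np

def rbf_ksd(x, score, h=1.0):
    """U-statistic estimate of the squared kernelized Stein discrepancy
    between samples x of shape (n, d) and the model whose score function
    (gradient of the log-density) is `score`, using an RBF kernel."""
    n, d = x.shape
    s = score(x)                               # (n, d) model score at each sample
    diff = x[:, None, :] - x[None, :, :]       # (n, n, d) pairwise differences
    sq = (diff ** 2).sum(-1)                   # (n, n) squared distances
    k = np.exp(-sq / (2 * h ** 2))             # RBF kernel matrix
    # Stein kernel u_p(x_i, x_j) for the RBF kernel, all four terms in closed form
    term1 = s @ s.T                            # s(x_i)^T s(x_j)
    term2 = (s[:, None, :] * diff).sum(-1) / h ** 2   # s(x_i)^T (x_i - x_j) / h^2
    term3 = -(s[None, :, :] * diff).sum(-1) / h ** 2  # s(x_j)^T (x_j - x_i) / h^2
    term4 = d / h ** 2 - sq / h ** 4           # trace of the mixed kernel Hessian
    u = k * (term1 + term2 + term3 + term4)
    np.fill_diagonal(u, 0.0)                   # drop i == j terms (U-statistic)
    return u.sum() / (n * (n - 1))

# Toy check: the score of a standard normal is -x, so samples from N(0, I)
# should give a KSD estimate near zero, while shifted samples should not.
rng = np.random.default_rng(0)
score = lambda x: -x
print(rbf_ksd(rng.normal(size=(500, 2)), score))        # close to 0
print(rbf_ksd(rng.normal(size=(500, 2)) + 2.0, score))  # clearly positive
```

Note that this is the fixed-kernel form of the Stein term; the paper under review additionally learns a vector-valued critic, which Review #1 above argues was implemented incorrectly.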
Sygg3JHtwB | Step Size Optimization | [
"Gyoung S. Na",
"Dongmin Hyeon",
"Hwanjo Yu"
] | This paper proposes a new approach for step size adaptation in gradient methods. The proposed method called step size optimization (SSO) formulates the step size adaptation as an optimization problem which minimizes the loss function with respect to the step size for the given model parameters and gradients. Then, the step size is optimized based on alternating direction method of multipliers (ADMM). SSO does not require the second-order information or any probabilistic models for adapting the step size, so it is efficient and easy to implement. Furthermore, we also introduce stochastic SSO for stochastic learning environments. In the experiments, we integrated SSO to vanilla SGD and Adam, and they outperformed state-of-the-art adaptive gradient methods including RMSProp, Adam, L4-Adam, and AdaBound on extensive benchmark datasets. | [
"Deep Learning",
"Step Size Adaptation",
"Nonconvex Optimization"
] | Reject | https://openreview.net/pdf?id=Sygg3JHtwB | https://openreview.net/forum?id=Sygg3JHtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DA8TJOjW4U",
"S1xEkJKysr",
"ryeaLodyoS",
"HkxuABPC5S",
"S1l1vLzkqB",
"rkx1fl00YH",
"HJx3p4tTYH",
"SkeABj_TKH",
"Bkecwky9KH",
"Syx5cIUHFS",
"Syg0LyARdH",
"r1xG0jhCuB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_comment",
"comment",
"comment"
],
"note_created": [
1576798736403,
1572994779745,
1572993877444,
1572922831681,
1571919447436,
1571901447154,
1571816644350,
1571814214443,
1571577698453,
1571280529724,
1570852693665,
1570847689689
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1938/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1938/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1938/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1938/Authors"
],
[
"~Jianlin_Su1"
],
[
"ICLR.cc/2020/Conference/Paper1938/Authors"
],
[
"~Jianlin_Su1"
],
[
"ICLR.cc/2020/Conference/Paper1938/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1938/Authors"
],
[
"~Junxiang_Wang1"
],
[
"~Junxiang_Wang1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper is rejected based on unanimous reviews.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to review #2\", \"comment\": \"We agree with your comment, so we will append a conclusion or discussion section.\\n\\nThe performance improvement on CIFAR-10 and CIFAR-100 datasets is not marginal. Could you give a reference that achieves similar performance using RMSProp and Adam on ResNet-18?\\n\\nWe first defined step size adaptation as a constrained optimization problem and converted it into a solvable problem by applying linearization and introducing slack variables. Then, we analyzed convergence of the proposed method with L2 regularization that is the most common regularization technique. To alleviate bad convergence problem, we developed the upper bound decay that is a generalized technique of the step size decay. Furthermore, we extended the proposed method into the stochastic learning environments. Thus, this paper is not just a list of existing methods. In this work, is there really no technical contribution? You should provide some references to criticize the performance improvement and technical contributions.\"}",
"{\"title\": \"Response to review #4\", \"comment\": \"(1, 2) We don't know what you're pointing out. SSO always showed faster convergence speed than RMSProp and Adam. In addition, SSO consistently showed the performance improvement with relatively large initial learning rate (e.g., 0.5). Note that RMSProp and Adam commonly use very small initial learning rate (e.g., 0.001`). Thus, your comments are incorrect. Furthermore, SSO showed comparable convergence speed with L4-Adam and AdaBound while improving the generalization significantly.\\n\\n(3) We tuned hyperparameters of the competitors with the grid search and achieved the experimental results similar to other papers and GitHub repositories on CNN and ResNet-18. Especially, for L4-Adam and AdaBound, we used the best hyperparameters suggested in their original papers.\\n\\nYou should provide clearer and more understandable review.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a new step size adaptation in first-order gradient methods. The proposed method establishes a new optimization problem with the first-order expansion of loss function and the regularization, where the step size is treated as a variable. ADMM is adopted to solve the optimization problem.\\n\\nThis paper should be rejected because (1) the proposed method does not show the convergence rate improvement of the gradient method with other step sizes adaptation methods. (2) the linearization of the objective function leads the step size to be small ($0<\\\\eta<\\\\epsilon$), which could slow down the convergence in some cases. (3) the experiments generally do not support a significant contribution. In table 1, the results of the competitor are not with the optimal step sizes. The limit grid search range could not verify the empirical superiority of the proposed method.\", \"minor_comments\": \"The y-axis label of (a) panel in each figure is wrong. I guess it should be \\\"Training loss \\\".\"}",
"{\"title\": \"Unrealistic training environments\", \"comment\": \"I agree with your concern because eta is trivial as epsilon for g^T v > 0 and zero for g^T v < 0 on the loss function without the regularization term. Nonetheless, this solution is mathematically correct due to the linearity of the simplified loss function.\\n\\nFurthermore, if the regularization term is added or minibatch is used, the solution is no longer trivial. Your concern is about the training environments that the training dataset perfectly represents the test dataset (non-overfitting) and the size of the training dataset is tiny (non-minibatch). These training environments are unrealistic in deep learning, so I did not consider them in designing the method.\\n\\nThanks.\"}",
"{\"title\": \"trivial if no regularization term\", \"comment\": \"if without regularization term, the optimal \\u03b7 of eq.(3) is just \\u03f5, which is just a trivial result and makes no sense.\"}",
"{\"comment\": \"Linearization was not applied to obtain a closed-form solution, but to simplify the severely complex and nonlinear loss function. In this process, the approximation error inevitably occurs, so there is no reason to unnecessarily linearize the regularization term when it is simple (e.g., convex).\\n\\nSSO can be derived without regularization term, but we did not consider this training environment because most objective functions for training deep neural networks include the regularization term to improve training or testing performances.\\n\\nThanks.\", \"title\": \"Linearization on the objective function\"}",
"{\"title\": \"How do we explained why just expanding J(\\u03b8) but not \\u2126(\\u03b8)?\", \"comment\": \"An excellent job but still some confusions.\\n\\nWhy we just expanding J(\\u03b8 \\u2212 \\u03b7v) as J(\\u03b8)\\u2212\\u03b7 g^T v, but not \\u2126(\\u03b8 \\u2212 \\u03b7v) as \\u2126(\\u03b8)\\u2212\\u03b7 g^T v?\\n\\nI know you may want to get a closed form like eq.(14), but it is not a sufficient reason in my opinion. I think we must demonstrate that ignoring higher order of J(\\u03b8 \\u2212 \\u03b7v) is reasonable.\\n\\nMeanwhile, can we do it while the loss has no regularizer term?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"First, I would like to point out that there has not been a conclusion or discussion section included, therefore the paper appears to be incomplete.\\nAside from this the main contribution of the paper is a study on optimising the step size in gradient methods. They achieve this through the use of alternating direction method of multipliers. Given all the formulations provided, it appears as if this method does not rely on second order information or any probabilistic method.\\nAn extension of the proposed method covers stochastic environments.\\nThe results demonstrate some promising properties, including convergence and improvements on MNIST, SVHN, Cifar-10 and Cifar-100, albeit marginal improvements.\\nAlthough the results appear to be promising the overall structure of the paper and the method presented are based upon established techniques, therefore the technical contribution is rather limited.\\nI have read the rebuttal and answered to some of the concerns of the authors.\"}",
"{\"title\": \"New update rule and experiment results\", \"comment\": \"I am sorry for the late reply. I realized that I forgot to change the inequality constraints into the equality constraints in derivation of the Lagrangian. Thus, the nonnegative constraints on the dual variables should be removed. For this reason, I conducted the experiments again using a new update rule for the dual variables without nonnegative constraints. The experiments results are here:\\n------------------------------------------------------------------------------------------------\\n|\\t | MNIST | SVHN | CIFAR-10 | CIFAR-100 |\\n===========================================================\\n| SSO-SGD | 99.32+=0.05 | 96.45+=0.13 | 94.42+=0.19 | 75.54+=0.14 |\\n------------------------------------------------------------------------------------------------\\n| SSO-Adam | 99.28+=0.05 | 95.75+=0.07 | 92.43+=0.06 | 71.23+=0.18 |\\n------------------------------------------------------------------------------------------------\\nThe results are generally similar to the results in the paper, and I got further improvement on CIFAR-100 dataset. I really appreciate to your comment. If I have a change to revise the paper, I will modify the update rule of the dual variables and the experiment results.\\n\\nThe convergence of SSO with L2 regularization is guaranteed because its objective function consists of two convex functions and one strongly convex function [1]. However, if the regularization term is not strongly convex, the convergence is not guaranteed as you mentioned. To answer this general situation, I further studied the convergence of the multi-block ADMM and found an advanced ADMM called RP-ADMM [2]. It randomly permutes the order of the update rules for primal variables and empirically showed the convergence of ADMM on multi-block objective functions. I will also append your concern and RP-ADMM for SSO in the appendix of the paper.\\n\\n\\nThanks.\\n\\n\\n[1] Lin, T., Ma, S., Zhang, S. Global Convergence of Unmodified 3-block ADMM for a Class of Convex Minimization Problems. J. Sci. Comput 76, 69-88 (2018).\\n[2] http://www.iciam2015.cn/Yinyu%20Ye.html\"}",
"{\"comment\": \"Dear Author:\\n Thank you for your answer.\\n1. The dual variables lambda1 lambda2 correspond to two linear equality constraints eta-s1=0 and epsilon-eta-s2=0, respectively. So in this case, lambda1 and lambda2 can be any real number to my knowledge. If lambda1 and lambda2 correspond to any inequality constraint, this means that lambda1 lambda2 should be nonnegative [1].\\n2. Equation 4 actually has three decision variables eta, s1, and s2, even though s1 and s2 are auxiliary. So it should be multi-block ADMM. By the way, there is no single ADMM to my knowledge. The goal of the ADMM is to split a problem into multiple subproblems. So ADMM has at least two variables[2]. If there is only one variable, that is called the augmented Lagrangian method (ALM)[2].\\nPlease point out my mistakes if necessary. Thanks.\\n[1]. Boyd, Stephen, and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.\\n[2]. Boyd, Stephen, et al. \\\"Distributed optimization and statistical learning via the alternating direction method of multipliers.\\\" Foundations and Trends\\u00ae in Machine learning 3.1 (2011): 1-122.\", \"title\": \"A more clear description of confusion\"}",
"{\"comment\": \"Dear author:\\n Thank you for your interesting work. Step size optimization is an important topic. However, I find it difficult to understand some points in the paper.\\n1. In page 2, why the dual variables lambda1 and lambda2 must be nonnegative? This may explain why there is a max operation in Equations 8 and 9.\\n2. The convergence analysis of the proposed ADMM is confusing. As far as I know, Equation 4 is a multi-block ADMM (i.e., with more than two variables), and the multi-block ADMM is not guaranteed to converge. See the following paper for reference. \\nThe direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent\", \"https\": \"//link.springer.com/article/10.1007/s10107-014-0826-5.\\n Thanks.\", \"title\": \"Interesting approach some points are confusing\"}"
]
} |
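The discussion in the record above revolves around the one-dimensional subproblem that SSO solves: minimize the linearized loss J(theta) - eta * g^T v plus an exact regularizer over the step size eta in [0, epsilon]. The sketch below, assuming an L2 regularizer Omega(theta) = (lam/2)||theta||^2, uses the closed-form minimizer of that convex quadratic clipped to the box in place of the ADMM iterations with slack variables that the paper actually uses; all names and default values here are our own illustration.

```python
import numpy as np

def sso_step_size(theta, g, v, lam=1e-2, eps=0.5):
    """Step-size subproblem behind SSO (sketch): minimize over eta in [0, eps]
    the linearized loss J(theta) - eta * g^T v plus the exact L2 regularizer
    (lam/2) * ||theta - eta * v||^2.  This is a convex quadratic in eta, so
    clipping its unconstrained minimizer to the box solves it exactly; the
    paper instead handles the constraint with ADMM and slack variables."""
    eta = (g @ v + lam * (theta @ v)) / (lam * (v @ v) + 1e-12)
    # As lam -> 0 this degenerates to eta = eps when g^T v > 0 and eta = 0
    # when g^T v < 0, the trivial solution pointed out in the thread above.
    return float(np.clip(eta, 0.0, eps))

# Toy usage on J(theta) = 0.5 * ||theta||^2, whose gradient is g = theta,
# with the SGD direction v = g:
theta = np.array([2.0, -1.0])
g = theta.copy()
eta = sso_step_size(theta, g, v=g, lam=1.0)
print(eta, theta - eta * g)
```

The clipping step also makes the regularizer's role concrete: with a non-trivial lam the returned eta lies strictly inside the box whenever the quadratic's minimizer does, which is exactly the non-trivial regime the authors appeal to in their replies above.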
H1xJhJStPS | Equilibrium Propagation with Continual Weight Updates | [
"Maxence Ernoult",
"Julie Grollier",
"Damien Querlioz",
"Yoshua Bengio",
"Benjamin Scellier"
] | Equilibrium Propagation (EP) is a learning algorithm that bridges Machine Learning and Neuroscience, by computing gradients closely matching those of Backpropagation Through Time (BPTT), but with a learning rule local in space.
Given an input x and associated target y, EP proceeds in two phases: in the first phase neurons evolve freely towards a first steady state; in the second phase output neurons are nudged towards y until they reach a second steady state.
However, in existing implementations of EP, the learning rule is not local in time:
the weight update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer physically available.
This is a major impediment to the biological plausibility of EP and its efficient hardware implementation.
In this work, we propose a version of EP named Continual Equilibrium Propagation (C-EP) where neuron and synapse dynamics occur simultaneously throughout the second phase, so that the weight update becomes local in time. We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT (Theorem 1).
We demonstrate training with C-EP on MNIST and generalize C-EP to neural networks where neurons are connected by asymmetric connections. We show through experiments that the more the network updates follow the gradients of BPTT, the better it performs in terms of training. These results bring EP a step closer to biology while maintaining its intimate link with backpropagation. | [
"Biologically Plausible Neural Networks",
"Equilibrium Propagation"
] | Reject | https://openreview.net/pdf?id=H1xJhJStPS | https://openreview.net/forum?id=H1xJhJStPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"K23TsejCZ",
"ByxzbYN3oS",
"SJlBGB8Nir",
"HJxY6VU4sS",
"SyxII4IEiB",
"H1ewjm8EiH",
"H1e010yl9r",
"BkxSJJoJ9H",
"rygnS_8nYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736373,
1573828858004,
1573311756791,
1573311680698,
1573311566278,
1573311391231,
1571974630314,
1571954397328,
1571739716481
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1937/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1937/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1937/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1937/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1937/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1937/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1937/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1937/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Main content: paper introduces a new variant of equilibrium propagation algorithm that continually updates the weights making it unnecessary to save steady states. T\", \"summary_of_discussion\": \"\", \"reviewer_1\": \"likes the idea but points out many issues with the proofs.\", \"reviewer_2\": \"he really likes the novelty of paper, but review is not detailed, particularly discussing pros/cons.\", \"reviewer_3\": \"likes the ideas but has questions on proofs, and also questions why MNIST is used as the evaluation tasks.\", \"recommendation\": \"interesting idea but writing/proofs could be clarified better. Vote reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Overview of revisions and responses\", \"comment\": \"We thank the reviewers for their valuable comments, which have help us improve our manuscript - see our revised version.\\n\\nBased on their feedback, we have now revised our manuscript with the following amendments:\\n\\n1- To address the request of Reviewer # 3, we have now defined precisely what we meant by \\\"biological plausibility\\\" in our context of study. We have emphasized that the main motivation of our study was not the development of a model of biological learning but to make EP better comply with hardware constraints which in particular require both the locality in space and time of the learning rule. For this purpose, we have amended the abstract, the introduction and the discussion.\", \"2__we_have_proceeded_to_a_change_of_terminology_in_the_whole_manuscript\": \"the quantity $\\\\Delta_{\\\\theta}^{\\\\rm C-EP}$ is no longer called an 'update' but a 'normalized update' to stress that $\\\\Delta_{\\\\theta}^{\\\\rm C-EP}$ is not the effective parameter update (which is $\\\\eta\\\\Delta_{\\\\theta}^{\\\\rm C-EP}$) but the update normalized by the learning rate $\\\\eta$.\\nThis addresses 1-A) of our answer to Reviewer # 1. \\n\\n3- We have clarified the proof of Lemma 2 in Appendix A.3, taking the limit $\\\\eta \\\\to 0 $ with $\\\\eta > 0$. This addresses 1-B) of our answer to Reviewer # 1.\\n\\n4- We have clarified the link between the total parameter update of C-EP and the standard learning rule of EP ($\\\\Delta\\\\theta = \\\\frac{\\\\eta}{\\\\beta}\\\\left(\\\\frac{\\\\partial\\\\Phi}{\\\\partial s}(s^\\\\beta_*) - \\\\frac{\\\\partial\\\\Phi}{\\\\partial s}(s_*) \\\\right)$) after the derivation of Lemma 2 in Appendix A.3. This addresses 1-C) of our answer to Reviewer # 1.\", \"5__we_have_clarified_in_the_introduction_that_our_work_addresses_two_issues_of_ep\": \"first that its learning rule is not local in time, second that it relies on the requirement of a primitive function $\\\\Phi$ for the transition function $F$; the first issue is addressed with the C-EP algorithm, the second with C-VF algorithm. This addresses 2-A) of our answer to Reviewer # 1.\\n\\n6- In section 4, we now make a clear distinction between the training algorithms (C-EP and C-VF) and the models they train. What was previously called the 'C-EP model' has become the 'Vanilla RNN with symmetric weights trained with C-EP'. Likewise, the 'C-VF model' has become the 'Vanilla RNN with asymmetric weights trained with C-VF'. We have also stressed after Eq.(12) (of the revised manuscript), after introducing the Vanilla RNN model with asymmetric weights trained with C-VF that although in this case the dynamics do not derive from a primitive function $\\\\Phi$, Theorem 1 can be generalized to this setting, by referring to the related Appendix where this generalization is derived. This addresses 2-C) of our answer to Reviewer # 1.\\n\\n7- We have explained in details why our vanilla RNN model with symmetric weights trained with C-EP described in Section 4.1 extends to deep architectures with any number of layers with symmetric connections. Also, we have explicitly written the primitive function $\\\\Phi$ for all our models. For this purpose we have added mathematical details in Appendix E describing all the models used in the papers, on a simple example and we now refer to this Appendix in Section 4.1. This addresses 2-B) and 2-D) of our answer to Reviewer # 1.\"}",
"{\"title\": \"Answer to Review #2\", \"comment\": \"We thank the reviewer for his/her comments, and are happy that he/she appreciated our work.\"}",
"{\"title\": \"Biological plausibility of C-EP\", \"comment\": \"We would like to thank the reviewer for his/her comments. Based on this feedback, in the revised version of the paper, we have decided to define explicitly what is meant by biological plausibility in this work, to clarify that continual EP does not aim at being a model of biological learning, and to discuss explicitly the aspects of this algorithm that still differ from biological learning.\\n\\nBrains learn using learning rules that have to be local in space and time. Error backpropagation is particularly non biologically plausible in this regard, as it is fundamentally non local, both in space and time. Our interest is to propose learning rules that feature the two localities, and this is what we define here as being \\u00ab\\u00a0biologically plausible\\u00a0\\u00bb. In this work, we build on EP, which is already local in space, and propose C-EP, which adds locality in time. An important motivation for the development of such minimally biologically-plausible learning rules is that they could be used for the development of extremely energy efficient learning-capable hardware.\\n\\nWe want to be clear that Continual EP does not aim at being a model of biological learning in that it would account for how the brain works or how animals learn. Continual EP does indeed retain considerable differences with actual brain learning. As the Reviewer says, the learning paradigm that is the closest to the way the brain actually learns is Reinforcement Learning. Also, C-EP is evaluated on the MNIST dataset, as the whole current EP literature, which is indeed a conceptual and not a realistic biological task. On the other hand, the use of this task allows a natural bridge with conventional machine learning research. Finally, the equations used in C-EP have no ties with neuroscience experiments.\\n\\nWe propose to make an important overhaul of the introduction and of the discussion of the paper to clarify these points about the nature of our work in the next few days.\"}",
"{\"title\": \"Properties of the transition function F\", \"comment\": \"2 - Concerning the properties of the transition function F:\\n\\nA) First of all, in the revised version of the manuscript we are going to clarify the primary goal of our work, which is to address two issues related to the biological plausibility of EP: the first is the fact that the learning rule of EP is not local in time, the second is the requirement of a primitive function $\\\\Phi$ for the transition function $F$.\", \"c_ep_solves_the_first_problem_but_not_the_second\": \"it still relies on the biologically unrealistic assumption that $F=\\\\frac{\\\\partial \\\\Phi}{\\\\partial s}$. This is precisely this constraint that motivates the second part of our work, where we introduce the C-VF model that gets rid of this assumption.\\n\\nB) In our \\\"C-EP model\\\" with a symmetric weight matrix $W$ (section 4.1), the transition function $F$ (almost) derives from a primitive function $\\\\Phi$.\\nThis property is true for any topology (not just a fully connected recurrent network) as long as existing connections have symmetric values: this includes networks with multiple layers (deep networks), in which case the variable $s$ represents the concatenation of all the layers of neurons, and the weight matrix $W$ is a block sparse concatenation of all the layers of weights. More explicitly, denoting the layers of neurons $s^0$, $s^1$, ..., $s^N$, with $W_{n, n+1}$ connecting the layers $s^n$ and $s^{n+1}$ (in both directions), then $s = (s^0, s^1, \\\\dots, s^N)^\\\\top$ and\\n\\n$W =\\n\\\\begin{bmatrix} \\n0 & W_{01} & 0 & 0 & 0 & 0 \\\\\\\\\\nW_{01}^\\\\top & 0 & W_{12} & 0 & 0 & 0 \\\\\\\\\\n0 & W_{12}^\\\\top & 0 & W_{23} & 0 & 0 \\\\\\\\\\n0 & 0 & W_{23}^\\\\top & 0 & \\\\ddots & 0 \\\\\\\\\\n0 & 0 & 0 & \\\\ddots & 0 & W_{N-1,N} \\\\\\\\\\n0 & 0 & 0 & 0 & W_{N-1,N}^\\\\top & 0\\n\\\\end{bmatrix}$\\n\\nWe propose to add this clarification in Appendix~E.\\n\\nAlternatively, to see why $F$ (almost) derives from a function $\\\\Phi$ in the setting with multiple layers, it is also possible to directly redefine the function $\\\\Phi$ in this specific case: we define $\\\\Phi = \\\\sum_{n} (s^n)^\\\\top \\\\cdot W_{n, n+1}\\\\cdot s^{n+1}$. In the revised version of the manuscript, we are going to amend Appendix~E, in which the models with multiple layers are detailed, by writing explicitly the form of the function $\\\\Phi$ for each of them.\\n\\nC) In the C-VF model of section 4.1, the weight matrix is no longer assumed to be symmetric, thus there is no primitive function $\\\\Phi$, and therefore Theorem 1 does not apply.\\nAlthough our study of the C-VF model is mostly experimental, we also prove a generalisation of the GDD theorem that holds in this more general setting (Theorem 2 in Appendix D.2). 
Fig.5 illustrates this generalisation of the GDD theorem.\\nWe have clarified this in the paragraph after Eq.(11) in the revised manuscript.\\n\\nD) We clarify here a point that we had not explained in our manuscript.\\nThe theory of our paper (section 3) directly assumes a function $\\\\Phi$ and defines the transition function as $F = \\\\frac{\\\\partial \\\\Phi}{\\\\partial s}$.\\nIn the experimental section however (section 4), we proceed the other way around: we first define $F$, then show the existence of a $\\\\Phi$ such that $F \\\\approx \\\\frac{\\\\partial \\\\Phi}{\\\\partial s}$, which $\\\\Phi$ can finally be used to compute the quantities of the form $\\\\frac{\\\\partial \\\\Phi}{\\\\partial \\\\theta}$ required in the learning rule.\\n\\nMore concretely, let us consider the case of the C-EP model of section 4.1.\\nWe first define the dynamics $s_{t+1} = \\\\sigma(W\\\\cdot s_t)$.\\nThis dynamics can be rewritten in the form $s_{t+1} = F(s_t,W)$ with the transition function $F(s,W) = \\\\sigma(W\\\\cdot s)$.\\nIn this case, if we define $\\\\Phi(s,W) = \\\\frac{1}{2}s^\\\\top\\\\cdot W\\\\cdot s$, we can compute $\\\\frac{\\\\partial \\\\Phi}{\\\\partial s} = W\\\\cdot s$,\\nand then notice that $F \\\\approx \\\\frac{\\\\partial \\\\Phi}{\\\\partial s}$ if we ignore $\\\\sigma$.\\nNow that we have the analytical expression of $\\\\Phi$, we can also use it to compute $\\\\frac{\\\\partial \\\\Phi}{\\\\partial W}(s,W) = s^\\\\top \\\\cdot s$.\\nFinally we can compute the forward-time gradient of C-EP, which reads $\\\\Delta_W^{\\\\rm C-EP}(\\\\beta,\\\\eta,t) = \\\\frac{1}{\\\\beta} \\\\left( s_{t+1}^{{\\\\beta,\\\\eta}^\\\\top} \\\\cdot s_{t+1}^{\\\\beta,\\\\eta} - s_t^{{\\\\beta,\\\\eta}^\\\\top} \\\\cdot s_t^{\\\\beta,\\\\eta} \\\\right)$.\"}",
"{\"title\": \"Equivalence between EP and C-EP\", \"comment\": \"We thank the reviewer for his/her comments.\\n\\n1 - Concerning the equivalence between EP and C-EP (Lemma 2, p.~11), which states that $\\\\lim_{\\\\eta \\\\to 0} \\\\Delta_{\\\\theta}^{\\\\rm C-EP}(\\\\eta, \\\\beta, t) = \\\\Delta_{\\\\theta}^{\\\\rm EP}(\\\\beta, t)$. \\n \\nA) The first point that we want to clarify deals with what we call an `update'.\\nIn machine learning in general, one usually distinguishes between the error gradient $\\\\frac{\\\\partial L}{\\\\partial \\\\theta}$ and the update $\\\\Delta\\\\theta = \\\\eta \\\\frac{\\\\partial L}{\\\\partial \\\\theta}$, which is the gradient rescaled by a learning rate $\\\\eta$.\\nIn C-EP in contrast, what we deceivingly call an `update' and denote $\\\\Delta_\\\\theta^{\\\\rm C-EP}(\\\\beta,\\\\eta,t)$ actually corresponds to the gradient, not the update itself.\\nTo get the actual parameter update in C-EP one needs to rescale $\\\\Delta_\\\\theta^{\\\\rm C-EP}(\\\\beta,\\\\eta,t)$ by $\\\\eta$ ;\\nthe actual update is $\\\\eta \\\\; \\\\Delta_\\\\theta^{\\\\rm C-EP}(\\\\beta,\\\\eta,t)$, so that $\\\\theta_{t+1}^{\\\\eta,\\\\beta} = \\\\theta_{t}^{\\\\eta,\\\\beta} + \\\\eta \\\\; \\\\Delta_{\\\\theta}^{\\\\rm C-EP}(\\\\eta, \\\\beta, t)$. \\nFor this reason, when $\\\\eta$ is tiny (or even zero), it is not contradictory that $\\\\theta_{t}^{\\\\eta, \\\\beta}$ does not change while $\\\\Delta_\\\\theta^{\\\\rm C-EP}(\\\\beta,\\\\eta,t)$ is non-zero -- more generally in machine learning it is not incompatible that the update is zero while the gradient is non-zero, if the learning rate is $\\\\eta = 0$.\\nIn the rest of our answer, to better convey the idea that $\\\\Delta_\\\\theta^{\\\\rm C-EP}(\\\\beta,\\\\eta,t)$ corresponds to a gradient (and not an update in the usual sense),\\nwe will refer to it as the `forward-time gradient' of C-EP.\\nThe term `update' will be used to refer to $\\\\eta\\\\Delta_\\\\theta^{\\\\rm C-EP}$.\\nWe also propose to change the terminology in the whole manuscript (not done yet).\\n\\nB) Although Lemma~2 holds in the limit $\\\\eta \\\\to 0$, in practice there is however a trade-off between taking $\\\\eta$ small enough so that the forward-time gradient $\\\\Delta_\\\\theta^{\\\\rm C-EP}(\\\\beta,\\\\eta,t)$ is close enough to $\\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta, t)$, but not too tiny so that the parameter update $\\\\eta \\\\Delta_\\\\theta^{\\\\rm C-EP}(\\\\beta,\\\\eta,t)$ is not too small to ensure the loss is optimized within a reasonable time (see the bottom of p.6 of the submitted manuscript and Appendix~F.2 p.30). Taking $\\\\eta = 0$ in practice is thus excluded.\\nIn the proof of Lemma 2 in Appendix A.3, we take $\\\\eta = 0$ because it is mathematically equivalent to taking the limit $\\\\eta \\\\to 0$ (with $\\\\eta >0$), by continuity. We have clarified the proof in the revised version of the manuscript.\\n\\nC) To understand the equivalence of C-EP and EP, one key thing to have in mind is that if the second phase of EP is run for $K$ steps (i.e. if it takes $K$ steps to get from the first steady state $s_*$ to the second steady state $s_*^\\\\beta$) then the total forward-time gradient (in the sense defined above) of EP is not $\\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,K)$ but $\\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,0) + \\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,1) + \\\\cdots + \\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,K)$. 
To see why this is the case, one has to look at the definition of $\\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,t)$ (see Appendix~A.3, Eq.~(18)).\\nNow, let us denote $\\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,{\\\\rm tot}) = \\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,0) +\\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,1) + \\\\cdots + \\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,K)$ the total gradient of EP, for short.\\nIf we do the suggested procedure, keeping $\\\\eta = 0$ for the first $K-1$ steps and changing $\\\\eta$ to a positive value at time step $K$, then the effective update of C-EP is $\\\\theta^{\\\\beta,\\\\eta}_{K} - \\\\theta^{\\\\beta,\\\\eta}_{K-1}$ (also equal to $\\\\eta \\\\Delta_\\\\theta^{\\\\rm C-EP}(\\\\beta,\\\\eta,K)$ ). By Lemma 2, this C-EP update is close to $\\\\eta \\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,K)$, but not to the total update of EP, which is $\\\\eta \\\\Delta_\\\\theta^{\\\\rm EP}(\\\\beta,{\\\\rm tot})$.\\nConversely, if we keep a constant $\\\\eta$ positive but sufficiently small throughout the second phase, the total parameter update of C-EP at the end of the second phase (after $K$ time steps) is approximately equal to the total parameter update of EP:\\n $\\\\theta^{\\\\beta,\\\\eta}_K - \\\\theta^{\\\\beta,\\\\eta}_0 = \\\\sum_{t=0}^{K-1}(\\\\theta_{t+1}^{\\\\eta, \\\\beta} - \\\\theta_{t}^{\\\\eta, \\\\beta}) = \\\\sum_{t=0}^{K-1} \\\\eta \\\\Delta_{\\\\theta}^{\\\\rm C-EP}(\\\\eta, \\\\beta, t) $\\n $\\\\approx \\\\sum_{t=0}^{K-1} \\\\eta \\\\Delta_{\\\\theta}^{\\\\rm EP}(\\\\beta, t) = \\\\sum_{t=0}^{K-1} \\\\eta \\\\frac{1}{\\\\beta} \\\\left(\\\\frac{\\\\partial \\\\Phi}{\\\\partial \\\\theta}(s_{t+1}^\\\\beta) -\\\\frac{\\\\partial \\\\Phi}{\\\\partial\\\\theta}(s_t)\\\\right) = \\\\eta\\\\frac{1}{\\\\beta}\\\\left(\\\\frac{\\\\partial \\\\Phi}{\\\\partial \\\\theta}(s_*^\\\\beta) -\\\\frac{\\\\partial \\\\Phi}{\\\\partial\\\\theta}(s_*)\\\\right)$\\nTo derive the above equation, we have successively used: a telescoping sum, the definition of $\\\\Delta_{\\\\theta}^{\\\\rm C-EP}(\\\\eta, \\\\beta, t)$, Lemma 2, the definition of $\\\\Delta_{\\\\theta}^{\\\\rm EP}(\\\\beta, t)$, and another telescoping sum. We propose to include this equation after the proof of Lemma 2, Appendix A.3, p.11.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: this paper introduces a new variant of equilibrium propagation algorithm that continually updates the weights making it unnecessary to save steady states. The also mathematically prove the GDD property and show the effectiveness of\\n their algorithm (Continual-EP) on MNIST. They also show C-EP is conceptually closer to biological neurons than EP.\\n\\nThis paper tackles an important problem in bridging the gap between artificial neural networks and biological neurons. It is well-motivated and stands well in the literature as it improves its precedent algorithm (EP). The contributions are clear and well-supported by mathematical proofs. The experiments are accurately designed and results are convincing. I recommend accepting this paper as a plausible contribution to both fields.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I think it is an intriguing paper, but unfortunately left me a bit confused. I have to admit is not a topic I'm really versed in, so it might be that this affected my evaluation of the work. But also, as a paper submitted to ICLR I would expect the paper to be self-contained and be able to provide all the details needed.\\n\\nI do appreciate the authors providing in the appendix the proofs of the other theorems even if they come form other works. \\n\\n\\nThe paper introduces C-EP, an extension of a previously introduced algorithm EP, such that it becomes biologically plausible. In particular EP is local in space but not in time (you need the steady state of the recurrent state after the first stage at the end of the second stage to get your gradients). I think this is fair, and the need for biological plausibility is well motivated in the beginning of the work. \\n\\nMy first issue is with the proof for the equivalence between EP and C-EP. This is done by taking the limit of eta goes to 0. I think I must be missing something. But the proof relies on eta being small enough such that \\\\theta_i = \\\\theta (i.e. theta does not change). Given this state evolves the same way as for EP, because we basically not changing theta. \\nYet the crux of my issue is exactly here. The proof relies on the fact that we don't change theta. So then when you converged on the second phase, isn't theta the same as theta_0? So you haven't actually learned anything!? Basically looking at the delta isn't this just misleading? \\nOk lets assume that on the last step you allow yourself to change eta to be non-zero. (I.e. we are just after the delta in theta, and what to show we can get the same delta in theta as EP which is how the proof is phrased). Then in that difference aren't you looking at s_{t+1} and s_t rather than s_{t+1} and s_0, which is what EP would do? In EP you have s^\\\\beta_* - s_*. This is not what you get if you don't update theta and apply C-EP? \\n\\nI think there might be something I'm missing about the mathematical argument here. \\n\\nAt a higher-level question, we talk about the transition function F as being a gradient vector field, i.e. there exist a phi such that F is d phi/d theta. Why is this assumption biologically plausable ? Parametrizing gradient vector fields in general is far from trivial, and require very specific structure of the neural implementation of F to be true. Several works have looked at parametrizing gradient vector fields (https://arxiv.org/abs/1906.01563, https://arxiv.org/pdf/1608.05343.pdf) and the answer is that without parametrizing it by actually taking the gradient of a function there is not much of a choice. \\nIncidentally, here we exploit that F = sigma (Wx), with W symmetric. This is a paramtrization of a gradient vector field, i.e. of xU, where UU^T =W I think. But if you want to make F deep than it becomes non-trivial to restrict it to gradient vector field. Is the assumption that we never want to move away from vanilla RNNs? And W symmetric is also not biologically plausible. In C-EP you say is not needed to be symmetric, but that implicitly means there exist no phi and everything that follows breaks, no? 
\\n\\nI'm also confused by how one has access to d phi / ds and d phi / d theta. Again I feel like I'm missing information and the formalism is not introduced in a way that it is easy to parse. My understand is that you have an RNN that updates the state s. And the transfer function of this RNN is meant to be d phi / ds, which is trues if the recurrent weight is symmetric. Fine. But then why do we have access to d phi/ dtheta? Who is this function? Is the assumption that d s / dtheta is something we can compute in a biologically plausible way? Is this something that is obvious?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper is concerned with biologically plausible models of learning. It takes equilibrium propagation -- where updates depend on local spatial information, in the sense that the information is available at each neuron -- and modifies the algorithm so updates are also local in time, thus obtaining C-EP. The key insight is that the updates in EP can be written as a telescoping sum over time points, eq (5).\\n\\nMaking equilibrium propagation more biologically plausible is an interesting technical contribution. But, taking a step back, the setup is misguided. It is true that humans can solve classification problems. And various animals can be trained to do so as well. However, it should be obvious that animals learn to solve these problems by reinforcement learning -- they are literally given rewards like sugar water for correct answers. \\n\\nMNIST is an unusual dataset with a stark constrast between foreground and background that is far from biologically plausible. I know it has a long and important history in machine learning, but if you are interested in biologically plausible learning then it is simply the wrong dataset to start with from both an evolutionary and developmental perspective. It\\u2019s not the kind of problem evolution started with, nor is it the kind of problem human babies start with. \\n\\nMaybe C-EP can be repurposed into a component of some much larger, biologically plausible learning system that does a mixture of RL and unsupervised learning. Maybe not. The MNIST results provide no indication.\\n\\nThe authors have done a lot of solid work analysing BPTT, RBT, and C-EP. I suspect they are far more interested in understanding and designing efficient mechanisms for temporal credit assignment than they are in biological learning. That work can and should stand on its own feet.\"}"
]
} |
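The exchange in the record above pins down the C-EP quantities concretely: second-phase dynamics of the form s_{t+1} = sigma(W . s_t) plus a nudging of the output neurons, and a "normalized update" (1/beta)(s_{t+1}^T . s_{t+1} - s_t^T . s_t) applied continually with learning rate eta, so that no first-phase steady state needs to be stored. The NumPy sketch below assembles those pieces on a toy network; the hard-sigmoid activation, the specific nudging term beta * (y - s_out), the explicit symmetrization step, and all sizes are our own simplifications for illustration, not the paper's exact setup.

```python
import numpy as np

def sigma(x):
    return np.clip(x, 0.0, 1.0)  # hard-sigmoid activation (our choice)

def cep_second_phase(W, s, y, out_idx, beta=0.1, eta=0.01, K=20):
    """Second ('nudged') phase of C-EP, sketched: neuron and synapse
    dynamics run simultaneously.  At every time step the weights move by
    eta * (1/beta) * (outer(s_next, s_next) - outer(s, s)), the normalized
    update discussed in the responses above, making the rule local in time."""
    for _ in range(K):
        nudge = np.zeros_like(s)
        nudge[out_idx] = beta * (y - s[out_idx])   # nudge output neurons toward y
        s_next = sigma(W @ s + nudge)
        dW = (np.outer(s_next, s_next) - np.outer(s, s)) / beta
        W = W + eta * dW                           # continual, time-local update
        W = 0.5 * (W + W.T)                        # keep weights symmetric (C-EP)
        s = s_next
    return W, s

# Toy usage: 5 neurons, the last two treated as outputs.
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(5, 5))
W = 0.5 * (A + A.T)                 # symmetric weights, as C-EP assumes
s = sigma(W @ np.full(5, 0.5))      # stand-in for the first-phase steady state
W, s = cep_second_phase(W, s, y=np.array([1.0, 0.0]), out_idx=slice(3, 5))
```

With eta set very small, the accumulated weight change over the K steps telescopes toward the standard EP update eta/beta * (dPhi/dW(s*^beta) - dPhi/dW(s*)), which is exactly the equivalence argued in Lemma 2 of the discussion above.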
BJgyn1BFwS | Global Adversarial Robustness Guarantees for Neural Networks | [
"Luca Laurenti",
"Andrea Patane",
"Matthew Wicker",
"Luca Bortolussi",
"Luca Cardelli",
"Marta Kwiatkowska"
] | We investigate global adversarial robustness guarantees for machine learning models. Specifically, given a trained model we consider the problem of computing the probability that its prediction at any point sampled from the (unknown) input distribution is susceptible to adversarial attacks. Assuming continuity of the model, we prove measurability for a selection of local robustness properties used in the literature. We then show how concentration inequalities can be employed to compute global robustness with estimation error upper-bounded by $\epsilon$, for any $\epsilon > 0$ selected a priori. We utilise the methods to provide statistically sound analysis of the robustness/accuracy trade-off for a variety of neural networks architectures and training methods on MNIST, Fashion-MNIST and CIFAR. We empirically observe that robustness and accuracy tend to be negatively correlated for networks trained via stochastic gradient descent and with iterative pruning techniques, while a positive trend is observed between them in Bayesian settings. | [
"Adversarial Robustness",
"Statistical Guarantees",
"Deep Neural Networks",
"Bayesian Neural Networks"
] | Reject | https://openreview.net/pdf?id=BJgyn1BFwS | https://openreview.net/forum?id=BJgyn1BFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"qR8g32awiF",
"Skgtq0ZqjH",
"SylLPC-5sS",
"S1xJZR-5sH",
"rygk6TWqiH",
"Byeyd6Zqir",
"Skl1SoDAFH",
"SkxcESapFS",
"rJxOXFAoFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736341,
1573686928945,
1573686877858,
1573686774652,
1573686711243,
1573686631072,
1571875639419,
1571833138300,
1571707167593
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1936/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1936/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1936/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1936/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1936/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1936/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1936/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1936/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose a framework for estimating \\\"global robustness\\\" of a neural network, defined as the expected value of \\\"local robustness\\\" (robustness to small perturbations) over the data distribution. The authors prove the the local robustness metric is measurable and that under this condition, derive a statistically efficient estimator. The authors use gradient based attacks to approximate local robustness in practice and report extensive experimental results across several datasets.\\n\\nWhile the paper does make some interesting contributions, the reviewers were concerned about the following issues:\\n1) The measurability result, while technically important, is not surprising and does not add much insight algorithmically or statistically into the problem at hand. Outside of this, the paper does not make any significant technical contributions.\\n2) The paper is poorly organized and does not clearly articulate the main contributions and significance of these relative to prior work.\\n3) The fact that the local robustness metric is approximated via gradient based attacks makes the final results void of any guarantees, since there are no guarantees that gradient based attacks compute the worst case adversarial perturbation. This calls into question the main contribution claim of the paper on computing global robustness guarantees.\\n\\nWhile some of the technical aspects of the reveiwers' concerns were clarified during the discussion phase, this was not sufficient to address the fundamental issues raised above.\\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Responses to Reviewer #1 (Part 3)\", \"comment\": \"Reviewer: I would like a clarification on whether any of these networks were trained to be robust, although it appears that they were all trained normally.\", \"response\": \"Yes, that is what we meant. It has been shown in several previous works, including [Goodfellow et al. Explaining and Harnessing Adversarial Examples.], that standard regularization, which focuses on magnitude reduction of weights, was shown empirically to have no effect on the adversarial robustness of models. This is precisely what we found for iterative weight pruning. So, despite this method regularizing the network size in a different way, we nonetheless observe the same phenomenon.\", \"reviewer\": \"In the last sentence of Section 4.3, I didn\\u2019t understand what you meant about the relationship between weight pruning and network regularization. Do you mean that weight regularization has no effect on robustness, just like iterative weight pruning?\"}",
"{\"title\": \"Responses to Reviewer #1 (Part 2)\", \"comment\": \"Reviewer: The bounds require that the dataset size scales with $eps^-2$, where $eps$ is the error. This is not terrible but also not great; for example, achieving 1\\\\% error requires a dataset size of $10^4$ (realistically, even larger datasets would be required to achieve results with high probability).\", \"response\": \"The bounds of Theorems 1 and 2 provide strict, statistical guarantees on the estimation of any measurable global robustness property.\\nThat is, independently of the local robustness measure used (e.g., that resulting from FGSM attacks, PGD attacks, or from provable guarantees obtained via formal verification), Theorems 1 and 2 provide formal bounds on the global robustness statistical error specific to that particular local notion.\\nFollowing the reviewer comment, we will modify the paper to stress that the guarantees we provide are for the global estimator, and not for the local robustness property. \\n\\tThough we agree with the reviewer that formal verification methods for local robustness, which provide provable guarantees, are ideally suited to the task, and indeed we apply them for the experiments reported in Figure 1 with the MNIST datasets, the computational burden of state-of-the-art verification methods does not allow us to scale to large datasets or to consider analysis on multiple neural network architectures in any reasonable amount of time (e.g., in Figure 2 we evaluated local robustness millions of times, which would be infeasible with formal verification methods). Similarly to [Osbert Bastani, et al. Measuring neural net robustness with constraints. NIPS 2016.], we thus compare models in terms of resistance to specific adversarial attacks. We focused on FGSM for scalability, but any other method can be used. \\n\\tFinally, it is interesting to note that while FGSM (or PGD or any other heuristic attack) does not tell us the exact robustness value for formal local robustness, it does provide us with an upper bound for it. That is, if FGSM finds a small attack that is successful, then the true worst-case attack must be at least that small, if not smaller. Moreover, more sophisticated attacks (e.g. PGD and CW attacks) have been empirically shown to be close to the true worst-case attack [Carlini et. al., Provably Minimally-Distorted Adversarial Examples, 2018]\", \"reviewer\": \"I would suggest that the authors avoid using the word \\u201cguarantees\\u201d if they are estimating empirical local robustness in an approximate (rather than exact) manner. Guarantees implies strict results, but the authors use a weak attack (FGSM) to approxi- mate empirical local robustness. The results from FGSM could be far from optimal; the authors could use a stronger attack (e.g. PGD) in addition to changing the wording, or they could find provable guarantees using alternate methods.\"}",
"{\"title\": \"Responses to Reviewer #1 (Part 1)\", \"comment\": \"Reviewer: First, the notion of global robustness is not well-motivated (why do we want to compute this metric? What does it tell us that local robustness does not?). While I acknowledge that a few prior works exists along these lines, I do not feel that this work provides much new insight into why global robustness is interesting to examine.\", \"response\": \"We would like to stress that the main contribution of this paper lies in the development of a framework for the computation of global adversarial robustness with a-priori statistical guarantees. We first show that these error bounds can be used for neural networks, and then we apply them to investigate the robustness of different neural network architectures and training paradigms. It is in this way that the investigation into iterative magnitude pruning, which had not been done before, is\\nrelated. We are empirically studying the effect of network compression on robustness in a way that puts bounds on our error. Given that this is a comparison of networks where only one variable has changed (the training method), the method proposed in the paper allows us to precisely quantify the expected change in robustness from SGD to IMP. \\n\\tThe decision to consider BNNs was driven by the fact that many of the symptoms of adversarial examples appear to be from overfitting to the training distribution, and in principle BNNs do not suffer from overfitting, which may result in different trends. Further, whereas it is true that BNNs trained with HMC do not scale to large datasets beyond MNIST, other scalable but approximate Bayesian training methods exist (i.e., mean-field variational inference and Monte Carlo dropout). We decided to not include these methods in our analysis because they would introduce a non-trivial approximation error. However, we believe that our analysis may lead to novel insights about the existence of adversarial examples and how these may be inherently related to training with SGD\", \"reviewer\": \"the paper tries to do too many different things, and as a result does not give enough attention to any particular topic. ... The authors try to tackle 3 extra questions beyond global robustness toward the end of the paper, and the last two questions are not properly fleshed out... Section 4.3 explores iterative pruning, but that seems fairly unrelated to the rest of the paper. Finally, Section 4.4 tries to show the opposite trend for Bayesian Neural Networks, but unfortunately the results for such networks do not\\nyet scale beyond MNIST\"}",
"{\"title\": \"Responses to Reviewer #2\", \"comment\": \"Reviewer: The authors\\u2019 insistence on their contribution being proving measurability does not make sense \\u2013 of course everything is measurable!\", \"response\": \"We would like to stress that the main contribution of this paper lies in the development of a framework for the computation of global adversarial robustness with a-priori statistical guarantees. We first show that these bounds can be used for neural networks, and then we apply them to investigate the robustness of different neural network architectures and training paradigms. This allows us to confirm and quantify previously reported results on the trade-off between generalisation accuracy and adversarial robustness of neural networks. We then investigate the relationship between model capacity and model robustness in iterative magnitude pruning settings. We find that weight pruning does not increase robustness despite greatly reducing model capacity. Finally, we evaluate the robustness of Bayesian neural networks and compare it with their deterministic counterpart, which, to the best of our knowledge, had never been evaluated. Our finding that BNNs are more robust wrt gradient based attacks, in our opinion, warrants further exploration into the use of BNN models in safety-critical scenarios.\", \"reviewer\": \"The redeeming aspect of the paper is the experiments, where the authors show that these bounds can actually be (approximately) calculated. However, I feel that merely experimental results with correct but not significant theoretical contributions does not meet the bar for acceptance.\"}",
"{\"title\": \"Responses to Reviewer #3\", \"comment\": \"Reviewer: the measurability property is very much expected \\u2013 no one was doubting it, and the proof is more of a formality than a contribution.\", \"response\": \"While Chernoff bounds are well known, the main contribution of this paper lies in the development of a framework for the computation of global adversarial robustness with a-priori statistical guarantees (provided by the Chernoff's bound), which we use to investigate the robustness of different neural network architectures and training paradigms. This allow us to confirm and quantify previously reported results on the trade-off between generalisation accuracy and adversarial robustness of neural networks, demonstrated through a large-scale study. We then investigate the relationship between model capacity and model robustness in iterative magnitude pruning settings. We find that weight pruning does not increase robustness despite greatly reducing model capacity. Finally, we evaluate the robustness of Bayesian neural networks and compare it with their deterministic counterparts, which, to the best of our knowledge, had never been evaluated. Our finding that BNNs are more robust wrt gradient-based attacks, in our opinion, warrants further exploration into the use of BNN models in safety-critical scenarios.\", \"reviewer\": \"has this reviewer missed any important details? If not, then it\\u2019s only the bounds that are a contribution, but the method is not. We would appreciate more specific description of the main contribution, without it we cannot recommend the acceptance of this paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the adversarial robustness of neural networks by giving theoretical guarantees, providing statistical estimators and running experiments. It is a lot of work and it is reasonably written. The problem is that a fair bit of it is quite basic: for example the measurability property is very much expected -- noone was doubting it, and the proof is more of a formality than a contribution. Similarly with the statistical sampling: the method seems to rely on i.i.d. sampling -- has this reviewer missed any important details? If not, then it's only the bounds that are a contribution, but the method is not. We would appreciate more specific description of the main contribution, without it we cannot recommend the acceptance of this paper.\\n\\nI am very grateful to the authors for their response. I feel now that a main weakness of this paper may be that it puts too many results in one place. I would strongly suggest re-writing it, possibly into separate papers, to make the things pointed out in the response more clear and self-standing.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\n\\nThe paper formally defines local and global adversarial robustness. Following that, the paper investigates how to estimate local and global adversarial robustness using an estimator based on evaluating these quantities on the empirical distribution. Using a Chernoff bound, the papers evaluate probabilistic bounds on the deviation of the estimated quantities from the true quantities. Finally, simulations are provided to evaluate these bounds for examples.\", \"comments\": \"The authors' insistence on their contribution being proving measurability does not make sense -- of course everything is measurable! Furthermore, the formal definitions or local and global robustness are well-known, the bounds in Theorems 1 and 2 are not novel and highly unlikely to be tight. The redeeming aspect of the paper is the experiments, where the authors show that these bounds can actually be (approximately) calculated. However, I feel that merely experimental results with correct but not significant theoretical contributions does not meet the bar for acceptance.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper seeks to analyze the global robustness of neural networks, a concept defined in the paper. The authors show using concentration inequalities that the empirical local robustness approximates the global robustness. The authors investigate various other issues in the robustness literature, including the robustness/accuracy tradeoff, whether iterative pruning increases robustness, and the robustness of Bayesian networks.\\n\\nI would vote for rejecting this paper for two key reasons. First, the notion of global robustness is not well-motivated (why do we want to compute this metric? What does it tell us that local robustness does not?). Second, the paper tries to do too many different things, and as a result does not give enough attention to any particular topic.\\n\\nFirst, I believe it is up to the authors to motivate their study of global robustness further. While I acknowledge that a few prior works exists along these lines, I do not feel that this work provides much new insight into why global robustness is interesting to examine.\\n\\nThe authors go on to prove results showing that an empirical estimator of the local robustness will converge to the global robustness. The bounds require that the dataset size scales with eps^-2, where eps is the error. This is not terrible but also not great; for example, achieving 1% error requires a dataset size of 10^4 (realistically, even larger datasets would be required to achieve results with high probability).\\n\\nNext, I would suggest that the authors avoid using the word \\u201cguarantees\\u201d if they are estimating empirical local robustness in an approximate (rather than exact) manner. Guarantees implies strict results, but the authors use a weak attack (FGSM) to approximate empirical local robustness. The results from FGSM could be far from optimal; the authors could use a stronger attack (e.g. PGD) in addition to changing the wording, or they could find provable guarantees using alternate methods.\\n\\nLastly, the authors try to tackle 3 extra questions beyond global robustness toward the end of the paper, and the last two questions are not properly fleshed out.\\n\\nI like section 4.2, where the authors empirically show that networks that have better hyperparameters (for regular accuracy) tend to be less robust. This is a confirmation of a previously studied phenomena in the literature. Ideally, I would also appreciate it if the authors found the line of best fit to the dataset in addition to the plots provided. I would like a clarification on whether any of these networks were trained to be robust, although it appears that they were all trained normally. I would also like to see plot 2c (for the standard case of robustness of C(x_tilde) = C(x)), except for MNIST and CIFAR10 as well. I feel that the last-layer representation metric the authors analyze (f(x) is close to f(x_tilde)) could be misleading, as robustness on the last layer does not necessarily imply standard adversarial robustness.\\n\\nSection 4.3 explores iterative pruning, but that seems fairly unrelated to the rest of the paper. 
Finally, Section 4.4 tries to show the opposite trend for Bayesian Neural Networks, but unfortunately the results for such networks do not yet scale beyond MNIST.\", \"additional_feedback\": [\"Why did you use R^emp and D^emp as opposed to just R^emp(g) and R^emp(g_bar)?\", \"In Figure 1, what is the dataset size |S|?\", \"In the last sentence of Section 4.3, I didn\\u2019t understand what you meant about the relationship between weight pruning and network regularization. Do you mean that weight regularization has no effect on robustness, just like iterative weight pruning?\"]}"
]
} |
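The record above turns on estimating a global robustness rate from i.i.d. samples with Chernoff-style guarantees, and Review #1 notes the eps^-2 scaling (roughly 10^4 samples for a 1% error budget). As a hedged illustration of that arithmetic only — not the paper's actual estimator; `sample_input` and `is_locally_robust` are placeholder names standing in for the data sampler and whatever attack or verifier decides local robustness:

```python
import math

def chernoff_sample_size(eps: float, delta: float) -> int:
    """Hoeffding/Chernoff bound for a Bernoulli mean:
    P(|empirical - true| > eps) <= 2 exp(-2 n eps^2) <= delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_global_robustness(sample_input, is_locally_robust, eps, delta):
    """Monte Carlo estimate of the fraction of locally robust inputs;
    both callables are assumptions, not the paper's API."""
    n = chernoff_sample_size(eps, delta)
    hits = sum(is_locally_robust(sample_input()) for _ in range(n))
    return hits / n, n

# eps = 0.01, delta = 0.05 gives n = ceil(ln(40) / 0.0002) = 18445,
# consistent with the reviewer's ~10^4 back-of-envelope for 1% error.
```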
HylAoJSKvH | A Stochastic Derivative Free Optimization Method with Momentum | [
"Eduard Gorbunov",
"Adel Bibi",
"Ozan Sener",
"El Houcine Bergou",
"Peter Richtarik"
] | We consider the problem of unconstrained minimization of a smooth objective
function in $\mathbb{R}^d$ in a setting where only function evaluations are possible. We propose and analyze a stochastic zeroth-order method with heavy ball momentum. In particular, we propose SMTP, a momentum version of the stochastic three-point method (STP) of Bergou et al. (2019). We show new complexity results for non-convex, convex and strongly convex functions. We test our method on a collection of continuous control tasks on several MuJoCo (Todorov et al., 2012) environments with varying difficulty and compare against STP, other state-of-the-art derivative-free optimization algorithms, and policy gradient methods. SMTP significantly outperforms STP and all other methods that we considered in our numerical experiments. Our second contribution is SMTP with importance sampling, which we call SMTP_IS. We provide convergence analysis of this method for non-convex, convex and strongly convex objectives. | [
"derivative-free optimization",
"stochastic optimization",
"heavy ball momentum",
"importance sampling"
] | Accept (Poster) | https://openreview.net/pdf?id=HylAoJSKvH | https://openreview.net/forum?id=HylAoJSKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"NxEVwQUODU",
"Sye7wOqniB",
"ryxmy_9nsH",
"B1g-jNqhoS",
"S1lvKE9noS",
"Hye-lZhE5B",
"Hye_vaCb5B",
"rJxwjjnhYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736310,
1573853275251,
1573853146773,
1573852312744,
1573852287364,
1572286697205,
1572101472310,
1571765150638
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1935/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1935/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1935/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1935/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1935/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1935/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1935/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"A new method for derivative free optimization including momentum and importance sampling is proposed.\\n\\nAll reviewers agreed that the paper deserves acceptance.\\n\\nAcceptance is recommended.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to R1\", \"comment\": \"We thank R1 for their positive feedback and acknowledgment to our contributions. All comments have been addressed in blue in the revised version. Follows our response.\\n\\n$\\\\bullet$ \\\"(1) As compared with other derivative free optimization algorithm, such as Bayesian optimization/genetic algorithms/simulated annealing, what is the advantage of the proposed method and also the STP framework?\\\"\\n\\n\\nAs we mentioned in our response to R3 there are many DFO methods and comparing against all of them is beyond the reach of our paper. Advantage of {\\\\tt STP} framework in general is in its simplicity and generality (see the paragraph in the introduction devoted to {\\\\tt STP}).\\n\\n$\\\\bullet$ \\\"(2) The experiments seem weak to me. Why does the paper only compare with STP? Are there any other baselines, such as stochastic two points, BO and GA? Is it possible to conduct evaluation on other applications? For example, some general optimization tasks but not allowing gradient calculation?\\\"\\n\\n\\nIn the experiments section, we compare against several algorithms on the MuJoCo task and not only against STP. For instance, in Table 2, we report comparisons against 2 policy gradient methods (NG-lin) and (TRPO-nn). Moreover, we compare against ARS(V1-t) and ARS(V2-t) which to the best of our knowledge achieve the state-of-art results on the respective environments.\"}",
"{\"title\": \"Response to R3\", \"comment\": \"We thank R3 for their constructive thorough detailed review. Note that all typos and minor comments have been addressed in blue in the revised version. Follows our response.\\n\\n$\\\\bullet$(1) \\\"The analysis is nice in that it shows the methods work, but doesn't demonstrate benefit of their method over other methods\\\"\\n\\nIndeed, we do not confirm theoretically that ${\\\\tt SMTP}$ outperforms ${\\\\tt STP}$. However, we mentioned that for the general case of objectives it is still an open question whether Heavy ball method outperforms Gradient Descent theoretically. Since ${\\\\tt STP}$ can be considered as zeroth-order variant of Gradient Descent and ${\\\\tt SMTP}$ as zeroth-order variant of Heavy ball method, it is natural to have no benefits of ${\\\\tt SMTP}$ over ${\\\\tt STP}$, at least theoretically. But still it was needed to show that ${\\\\tt SMTP}$ is not worse in terms of theoretical convergence rates than ${\\\\tt STP}$ in order to have some guarantees for the new method and verify that it relates to ${\\\\tt STP}$ in the same way as Heavy ball methods relate to Gradient Descent.\\n\\n$\\\\bullet$ (2) \\\"Given that there are no results showing this method has better worst-case rates than other methods, we rely on experiments to see the actual benefit. In this case, more experiments is always better.\\\"\\n\\nAn advantage of the proposed algorithm is that it indeed enjoys a provable convergence rate. None of the competitors, except ${\\\\tt STP}$ and ${\\\\tt STP{\\\\_}IS}$, enjoy any theoretical results for the rate of convergence but they have been well celebrated for their excellent performance on the MuJoCo environments. In this work, the proposed algorithm enjoys both the theoretical rates and practicality that outperforms several competition. The complexity of the experiments is comparable and match previous art (Rajeswaran et al. conduct experiments on 6 environments while Schulman et al. on 7). We conduct experiments on 5 environments but provide convergence guarantees. \\n\\n\\n\\n\\n$\\\\bullet$ \\\"(3) I am quite skeptical of the importance sampling scheme. It's nice to include it, but I don't think it strengthens the paper too much. Empirically, the performance seems to help sometimes but not other times. Finding ... \\\"\\n\\nIt has been observed by Bibi et. al. that importance sampling with STP vastly improves upon uniform sampling. In this work and upon merging both momentum and importance sampling such conclusion is indeed far less obvious as pointed out by R3. We provide such analysis for the sack of completion and leave most of the details to the supplementary material. The preprocessing of retraining the smooth function is very negligible compared (order of milliseconds) to the reward function evaluation (simulator run) that is of order of seconds. \\n\\n\\n$\\\\bullet$\\\"Theorem 3.5 (Thm D.2) requires the $\\\\mu_D^2$ to be less than the condition number, which is weird. The easier the problem is, the tighter your assumptions are. I suspect that this is because you use an inequality somewhere that simplifies things by bounding a term by the condition number. But as ...\\\"\\n\\nActually, it is not so weird assumption. Please, see the Lemma~F.1. It covers 5 examples of distributions $\\\\mathcal{D}$ that fit Asumption~3.1. 
Note that for the first two examples $\\\\mu_{\\\\mathcal{D}}$ is less than $1$ and $\\\\|\\\\cdot\\\\|_{\\\\mathcal{D}} = \\\\|\\\\cdot\\\\|_2 = \\\\|\\\\cdot\\\\|_{\\\\mathcal{D}}^*$, so we always have $\\\\mu_{\\\\mathcal{D}}^2 \\\\le 1 \\\\le \\\\frac{L}{\\\\mu}$ for these cases. For the third case we have $\\\\|\\\\cdot\\\\|_{\\\\mathcal{D}} = \\\\|\\\\cdot\\\\|_1$, $\\\\|\\\\cdot\\\\|_{\\\\mathcal{D}}^* = \\\\|\\\\cdot\\\\|_\\\\infty$, $\\\\mu_{\\\\mathcal{D}} = \\\\frac{1}{d}$ and, due to the classical relation $\\\\|x\\\\|_2 \\\\le \\\\sqrt{d}\\\\|x\\\\|_\\\\infty$, we get that if the function $f$ is $\\\\mu$-strongly convex in the $\\\\ell_\\\\infty$-norm then it is $\\\\hat{\\\\mu}$-strongly convex in the $\\\\ell_2$-norm with $\\\\hat{\\\\mu} \\\\ge \\\\frac{\\\\mu}{d}$. Using this we get $\\\\frac{L}{\\\\mu} \\\\ge \\\\frac{L}{\\\\hat\\\\mu d} \\\\ge \\\\frac{1}{d} \\\\ge \\\\frac{1}{d^2} = \\\\mu_{\\\\mathcal{D}}^2$ since $L \\\\ge \\\\hat\\\\mu$.\\n\\nThank you so much for your comment; it helped us find a small typo in Assumption~3.1, see the revised version.\\n\\n\\n$\\\\bullet$ \\\"sentences like \\\"We achieve the state-of-the-art performance compared to *all* DFO based and policy gradient methods\\\" are inappropriate (*italics* are mine). You mean ...\\\"\\n\\nThe statement has been toned down in the final version.\"}",
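For readers following the heavy-ball thread in these responses: the "virtual iterates" the rebuttal attributes to (Yang et al. 2016) refer to a standard rewriting of Polyak's heavy-ball update. A sketch of that equivalence, in my notation rather than the paper's:

```latex
x^{k+1} = x^k - \gamma \nabla f(x^k) + \beta\,(x^k - x^{k-1}), \qquad
z^k = x^k + \frac{\beta}{1-\beta}\,(x^k - x^{k-1})
\;\Longrightarrow\;
z^{k+1} = z^k - \frac{\gamma}{1-\beta}\,\nabla f(x^k).
```

The virtual sequence $z^k$ thus follows a plain gradient step; a zeroth-order method like SMTP can mimic this by replacing $\nabla f(x^k)$ with information from sampled directions, which is the non-trivial design point the rebuttal describes.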
"{\"title\": \"Response (2/2) to R2\", \"comment\": \"$\\\\bullet$ \\\"I would say that the paper is rather on the light side regarding experiments. Only MuJoCo is used as an experimental setup. It would be nice to also ... \\\"\\n\\n\\nWe believe that the complexity of the experiments either matches or exceeds the experiments reported in the stochastic optimization literature that provide provably convergence algorithms with rates. That is to say, we believe that the real experiments on real data (continuous control MuJoCo experiments) are the major factor here. We believe this speak about the quality of our work as a whole. We will take the reviewers comment into consideration in the final version of the paper.\\n\\n\\n$\\\\bullet$ \\\"What is more, the experimental choices are not entirely clear. What is the \\\"predefined reward ...\\\"\\n\\nThe predefined reward thresholds were previously proposed in the literature of continuous control community as such that MuJoCo agents are considered to have successfully completed the task if they achieve these predefined thresholds for the reward function or higher. See ARS paper for reference.\"}",
"{\"title\": \"Response (1/2) to R2\", \"comment\": \"We thank R2 for their constructive thorough detailed review. Follows our response. All edits are marked in blue in the revised version.\\n\\n\\n$\\\\bullet$ \\\"While interesting and useful, I am not completely convinced whether the added novelty over (Bergou et al, 2019) is significant enough. At the end of the day, the final algorithm is the conglomeration of two existing algorithms, that is STP and momentum. STP is very similar to the ...\\\"\\n\\n\\nNote that we claim nothing about optimality of our approach in the paper. However, we agree that investigating different approach beyond stochastic three points is an interesting direction of future work. By non-triviality of our approach we mean that ${\\\\tt SMTP}$ is not a straightforward ${\\\\tt STP}$-like modification of Polyak's method: instead of classical form of the Heavy ball method we use equivalent form from (Yang et al. 2016) and choose next iterate $x^{k+1}$ not as argminimum of $x^k, x_{+}^{k+1}, x_{-}^{k+1}$, but use virtual iterates instead.\\n\\n\\n\\n$\\\\bullet$ \\\"In the strongly convex case one assumption (knowing the $f(x*)$ ) is replaced with another assumption, that all points lie on a hypersphere ($\\\\|s\\\\|_2=1$). I suppose this would assume a spherical normalization of the input space. While this is not an unrealistic assumption, it does ...\\\"\\n\\n\\n\\nWe do not think that Assumption~3.4 from our paper is unrealistic or restrictive. Indeed, in the case of high-dimensions it can cause additional problems connected with normalization. However, in the high-dimensional case the method itself works slow which is the common ``disease of DFO methods and additional normalization for this case does not change the situation dramatically. According your comment about our analysis in the strongly convex case itself -- yes, due to space limitations we do not introduce some classical fact about $\\\\mu$-strongly convex and $L$-smooth problems. For example, classical relations $\\\\frac{\\\\mu}{2}\\\\|x^0 - x^*\\\\|_{{\\\\mathcal{D}}}^2 \\\\le f(x^0) - f(x^*)$ and $f(x^0) - f(x^*) \\\\le \\\\frac{L}{2}\\\\|x^0 - x^*\\\\|_2^2$ are the answer for your question: in this case $R_0^2$ and $f(x^0) - f(x^*)$ are equivalent up to some constants and it does not play a big role since $f(x^0) - f(x^*)$ appears in (25) under the logarithm.\\n\\n\\n$\\\\bullet$ Defining $\\\\epsilon$\\n\\nWe added a definition for $\\\\epsilon$ the first time it is introduced (Theorem 3.2).\\n\\n$\\\\bullet$ Assumption 3.1\\n\\nWe have restated Assumption 3.1 to address R2's comments.\\n\\n$\\\\bullet$ \\\"Between eq. (11) and (12) there is reference to (35)? What is (35)?\\\"\\n\\nEquation (35) is identical to Eq (11) but was rederived in the supplementary material. We have corrected the reference to Equation (11).\\n\\n\\n\\n$\\\\bullet$ \\\"It is not clear in practice how the importance sampling is performed. In Algorithm 2 the probabilities $p_i$ are defined as function inputs and ...\\\"\\n\\nThat is correct. The probabilities $p_i$ are computed once before the algorithm and never updated and they are a function of $L_i$. Table 1 summarizes the choice of $p_i$. Note that in the supplementary material, we derive the rates for nonconvex (Theorem E.1), convex (Theorems E.2 and E.3) and strongly convex (Theorems E.4 and E.5) problems as a function of arbitrary sampling probabilities $p_i$. 
We propose the importance sampling strategy (proportional to $L_i$) as depicted in Table 1, to show that this strategy enjoys a better worst-case complexity rate than uniform sampling. For the non-convex problems of the MuJoCo experiments, the $L_i$ are not known a priori for the reward function. Thus, we follow Section E of Bibi et al. and approximate the objective function with a smooth parametric family of neural networks where we can estimate the smoothness constants $L_i$.\\n\\n\\n\\n$\\\\bullet$ \\\"A highly relevant field appears to be Bayesian Optimization, where also one cannot compute gradients and must optimize a black-box function. Some relevant recent ... \\\"\\n\\n\\n\\nTo the best of our knowledge, Bayesian optimization is about global optimization of black-box functions where there are no necessary assumptions regarding smoothness. The flavor of our work is slightly different in that we have convergence rates, while we are not aware of any for Bayesian optimization.\\n\\nThe most related methods to ours from the literature are STP and the methods mentioned in the STP paper (deterministic direct search, random gradient free method, direct search based on random directions). We compared STP with them in the STP paper. STP outperformed all compared methods, and since $STP_{\\\\text{momentum}}$ outperforms STP, this demonstrates the superiority of our proposed algorithm compared to all methods of the same class.\"}",
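To make the fixed-probabilities point in this response concrete: a hedged toy sketch of sampling coordinate directions with p_i proportional to L_i, where the L_i values below are placeholders for the estimated coordinate-wise smoothness constants (the surrogate-fitting step from Bibi et al. is elided entirely):

```python
import numpy as np

def coordinate_sampler(L, rng=None):
    """Sample unit coordinate directions e_i with p_i proportional to L_i."""
    if rng is None:
        rng = np.random.default_rng(0)
    L = np.asarray(L, dtype=float)
    p = L / L.sum()                  # fixed importance-sampling distribution
    def sample():
        i = rng.choice(len(L), p=p)
        s = np.zeros(len(L))
        s[i] = 1.0                   # direction e_i
        return i, s
    return sample, p

sample, p = coordinate_sampler([4.0, 1.0, 1.0])  # placeholder L_i values
# p = [2/3, 1/6, 1/6]: rougher coordinates are probed more often.
```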
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the so called problem of derivative-free optimization, which is relevant for cases when the evaluation function is continuous but access to gradients is not possible. The paper improves on top of the stochastic three points method (STP), an existing work (published in arXiv), by proposing adding momentum (SMTP). The intuition behind both STP and SMTP is rather straighforward: you sample a random direction s, then given your current position x you check x+as and x-as. You then move to the best position from (x, x+as, x_as). In a way, this is like computing the numerical derivatives (instead of the gradient) given a random location and its mirror, and then applying gradient descent given the best numerical derivative. However, take this analogy with a large grain of salt, as there are many differences with GD. The proposed algorithm adds momentum and importance sampling. Momentum helps speed up convergence, as the paper shows for non-convex, convex and strongly convex functions. All three cases are individually examined and bounds are derived regarding the speed of convergence. For the non-convex case the speed of convergence is 1/\\\\sqrt{K}, K being the number of iterations. For the convex case it is 1/K. For the strongly convex case the (unrealistic) assumption of knowing the optimal value is removed while maintaining the same speed of convergence. Importance sampling helps computing the derivatives focusing on those coordinate dimensions that are more critical to the objective function f(x), improving the speed of convergence further. The importance sampling is proportional to the coordinate-wise Lipschitz constants, assuming that the objective function is coordinate-wise Lipschitz smooth. The methods are validated on five different cases of MuJoCo. Results seem good when compared to the STP ones. Compared to policy gradient methods, the results seem much better.\", \"strengths\": [\"The paper presents a small but interesting and well-motivated addition to the original algorithm STP. I particularly liked how straightforward the final algorithm is: applying momentum and sampling according to the Lipschitz constants.\", \"At least at a first glance the results look good. Compared to STP in figure 1 there is a clear improvement not only in the final optimum but also in the speed of attaining the said optimum.\", \"I liked a lot the presentation and clarity of writing. While quite mathematically dense, it was easy to follow the big story and understand that underlying points.\"], \"weaknesses\": \"+ While interesting and useful, I am not completely convinced whether the added novelty over (Bergou et al, 2019) is significant enough. At the end of the day, the final algorithm is the conglomeration of two existing algorithms, that is STP and momentum. STP is very similar to the final algorithm, after all it is the basis for it. The authors argue that it is not trivial to select the next points under the momentum term. To this end, they propose to rely on yet another existing approach, that is the virtual iterates analysis from (Yang et al. 2016). However, it is not clear why these points are \\\"optimal\\\", what is so \\\"non-trivial\\\" about selecting them? 
This is basically skimmed over in two lines.\\n\\n+ In the strongly convex case one assumption (knowing the f(x*) ) is replaced with another assumption, that all points lie on a hypersphere (|s|_2=1). I suppose this would assume a spherical normalization of the input space. While this is not an unrealistic assumption, it does place a constraint which could be problematic in the case of high dimensions for s? In that case the high dimensionality would render distances rather unreliable and in turn could hurt convergence? This is also perhaps the reason that only the MuJoCo environments were tested? In general, I would say that the strongly convex case was discussed less clearly and the final result is not exactly clear. In the end, eq (25) does contain f(x*), whereas in the convex case K does not (K \\\\approx 2 R_0^2 L \\u03b3_D/(\\u03b5\\u03bc_D^2)).\\n\\n+ Some statements are unclear.\\n ++ In p. 2 some symbols are not explained, e.g., \\u03b5. While it is quite clear for people versed in the field, in my opinion it is bad practice to leave notation not explained.\\n ++ Assumption 3.1 seems rather trivial? Wouldn't \\u03b3_D by definition be always positive, since it is the expectation of a squared norm (always positive)? Does this need to be an assumption?\\n ++ Between eq. (11) and (12) there is reference to (35)? What is (35)?\\n ++ It is not clear in practice how the importance sampling is performed. In Algorithm 2 the probabilities p_i are defined as function inputs and then never updated. Is that true? If yes, how is p_i decided in the first place? What is the connection to the Lipschitz constants L_i?\\n\\n+ A highly relevant field appears to be Bayesian Optimization, where also one cannot compute gradients and must optimize a black-box function. Some relevant recent works are [1] and [2] for continuous and discrete inputs. It would be interesting to discuss the distinct differences with the Bayesian optimization methods in [1] and [2].\\n\\n+ I would say that the paper is rather on the light side regarding experiments. Only MuJoCo is used as an experimental setup. It would be nice to also report results on synthetic experiments with known functions to better understand the limitations of the algorithm. Synthetic and realistic setups can be found in [1] and [2].\\n\\nWhat is more, the experimental choices are not entirely clear. What is the \\\"predefined reward threshold\\\" and why was that chosen? For instance, the leaderboard for \\\"Swimmer\\\" is in: https://www.endtoend.ai/envs/gym/mujoco/swimmer/. How does the proposed algorithm fare compared to these works? Also, *maybe* it would be interesting to compare even against [1] or [2] (I guess [2] is harder as it is for discrete inputs), assuming that a relatively low number of iterations is performed.\\n\\n[1] BOCK: Bayesian Optimization with Cylindrical Kernels, C. Oh, E. Gavves, M. Welling, ICML 2018\\n[2] BOCS: Bayesian Optimization of Combinatorial Structures, R. Baptista, M. Poloczek, ICML 2018\"}",
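Review #2's one-paragraph description of the method translates almost directly into code. Below is a minimal sketch of the basic STP step it describes (sample a random direction s, compare f at x, x+αs and x−αs, keep the best); the decaying stepsize schedule is one common choice and not necessarily the paper's, and the momentum (SMTP) and importance-sampling machinery are deliberately omitted:

```python
import numpy as np

def stp(f, x0, alpha0=1.0, iters=500, seed=0):
    """Basic stochastic three-point method as sketched in the review."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for k in range(1, iters + 1):
        alpha = alpha0 / np.sqrt(k)          # a common decaying stepsize
        s = rng.standard_normal(x.shape)
        s /= np.linalg.norm(s)               # random unit direction
        for cand in (x + alpha * s, x - alpha * s):
            fc = f(cand)
            if fc < fx:                      # keep the best of three points
                x, fx = cand, fc
    return x, fx

x, fx = stp(lambda v: float(np.sum((v - 1.0) ** 2)), np.zeros(5))
# x drifts toward the minimizer (1,...,1) using only function evaluations
```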
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors extend recent the recent stochastic three-point (STP) method to allow for Polyak-style momentum, as well as momentum with importance sampling. They provide a range of analysis that mostly extends existing STP results to the STP+momentum case. Most of these results are similar in spirit to stochastic gradient or subgradient results, as in the methods converge up to a ball around the solution, with radius depending on step-size, so you can get an epsilon solution by choosing a suitably small stepsize. The analysis covers non-convex cases (bounds are on the norm of the gradient) and the importance sampling case as well.\\n\\nOverall, I think this is a strong paper, and a very interesting topic, and hence I support a \\\"Weak accept\\\". The numerical results look good, as the new method outperforms most of the compared methods, at least for the easier problems (SMTP beats competitor ARS in 3/5 trials; both methods are generally very similar, though SMTP does better on the easy Swimmer problem; the SMTP_IS results are more complicated). The analysis is mostly good (non-trivial), and shows a broad understanding.\\n\\nThat said, I have some concerns. \\n\\n(1) The analysis is nice in that it shows the methods work, but doesn't demonstrate benefit of their method over other methods\\n\\n(2) Given that there are no results showing this method has better worst-case rates than other methods, we rely on experiments to see the actual benefit. In this case, more experiments is always better.\\n\\n(3) I am quite skeptical of the importance sampling scheme. It's nice to include it, but I don't think it strengthens the paper too much. Empirically, the performance seems to help sometimes but not other times. Finding the individual Lipschitz constants seems tricky; this paper re-uses a scheme that iterates for a while, fits a function, and uses that to estimate the constants (it wasn't clear if this pre-processing was counted in the iteration count for experimental results). It's not clear how well that works to get an accurate estimate. Furthermore, to exploit the importance sampling, the directions must be sampled from a pre-determined basis, which seems restrictive. This criticism is not just of the current paper but of other papers that use this approach.\\n\\n\\n-- The manuscript needs more proof reading, as there are mistakes in most paragraphs. There are a lot of problems with missing articles. Phrases like \\\"results for STP are shorts and clear\\\" (\\\"shorts\\\" --> \\\"short\\\"), \\\"which updates rule\\\" [?? which-->with?? ], \\\"hints [at] the update rule\\\", \\\"is far more superior\\\" [-->\\\"is far superior\\\", since you can't be more superior], etc.\\n\\n-- There is a confusion over how to use \\\\cite, \\\\citet and \\\\citep in latex. Given the bibtex citation style, this makes it very hard to read in places\\n\\n-- Literature review seems good and pretty thorough (mentions most key references through 2015, and a good selection of references since then).\\n\\n-- Assumption 3.1 part 2 is stated in a funny way (it says, \\\"there is a constant mu_D and a norm || ||_D such that ...\\\"). 
You are free to choose the norm, and then find the constant (since all norms in finite dimensions are equivalent). That way, you can choose the norm that gives the tightest inequality. I think you are aware of this, and it's just a wording issue.\\n\\n-- Theorem 3.5 (Thm D.2) requires the mu_D^2 to be less than the condition number, which is weird. The easier the problem is, the tighter your assumptions are. I suspect that this is because you use an inequality somewhere that simplifies things by bounding a term by the condition number. But as stated, this is a weak theorem. It is also confusing because you have a mu_D which is not the strong convexity constant, but the actual strong convexity constant (mu) *does* depend on the norm D (cf eq 19; and this must be so, otherwise you can cheat and then the value of mu_D is meaningless). So both mu's are functions of the norm D. However, the Lipschitz constant L is *not* a function of D. So notation is confusing and makes interpreting the results harder.\\n\\n-- sentences like \\\"We achieve the state-of-the-art performance compared to *all* DFO based and policy gradient methods\\\" are in appropriate (*italics* are mine). You mean to say that on the few examples you ran, based on a few DFO and policy gradient methods you tested, that the best of your two methods was better than the competitor methods on 4/5 problems.\\n\\n-- I think a common-sense algorithm to compare to would be gradient descent (or heavy-ball) using finite differences to estimate the gradient. In small dimensions this isn't such a bad idea. I don't actually know what the dimensions of your test problems are (I looked in section 5 but didn't see it mentioned, other than reference to Ant-v1 and Humanoid-v1 being \\\"high dimensional\\\"; I think this is extremely relevant information. In small dimensions, traditional DFO and Bayesian optimization methods are competitive).\\n\\n-- p 26/27, \\\"Causchy-Schwartz\\\" is spelled wrong, and usually this is called \\\"Holder's inequality\\\" when it's not the Euclidean norm.\\n\\n-- Table 3, there is no space between the caption and the main text, so it's confusing\\n\\n-- Eq (76) in appendix, the sum should go to d not n.\\n\\n-- I think s^k may need to be independent of z^k for their tower property thing to work, otherwise it's not clear what's happening with the inequality prior to overset (30) on that last line of pg17. For example, if s^k were z^k measurable then that whole thing in the inner expectation would be a constant. This isn't a problem, it's fairly natural to assume that s^k is independent of z^k, I just didn't see the assumption anywhere.\\n\\n-- Overall, comparing importance sampling results is hard, due to the different norms (this is mentioned in the paper, and there are inequalities between norms, but it's still hard to get a good result that shows importance sampling has better worst-case rates).\\n\\n\\n== AFTER READING REBUTTAL ==\\nI read the authors' response, and I am still slightly positive about the paper, though my major points were not addressed, but mostly deflected, e.g., referring to other papers that claim to show benefits of importance sampling. I think all the reviewers were curious how Bayesian optimization (BO) would perform. We understand numerical experiments are time consuming, but it's disappointing that you're not curious yourself whether your method outperforms BO. 
Your basic deflection seems to be that BO doesn't have provable guarantees, so because you do have guarantees, you don't need to compare with it. Having one of the fastest methods among all methods with provable guarantees, but not necessarily the fastest method in general, sounds like a consolation prize to me.\\n\\nThe revision did not address some of the minor issues I mentioned, such as confusing \\\"cite\\\"/\\\"cited\\\"/\\\"citep\\\" issues in latex (which makes it hard to read). My comment about Holder's inequality (vs Cauchy-Schwarz) applies not just to the $\\\\|\\\\cdot\\\\|_1 \\\\le \\\\sqrt{d}\\\\|\\\\cdot\\\\|_2$ bound, but also to the $\\\\|\\\\cdot\\\\|_2 \\\\le \\\\sqrt{d}\\\\|\\\\cdot\\\\|_\\\\infty$ bound in the next paragraph.\\n\\nBut despite a few items I'm being cranky about, I think it's still a solid paper, and it's still a weak accept.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a stochastic derivative free optimization algorithm. The contribution is two-fold: first, the paper introduces the heavy ball momentum into the STP framework; second, the paper fulfills both the importance sampling and heavy ball momentum in the STP framework. For both methods, the paper provides the convergence guarantees and rates. The experiments on reinforcement learning data-sets, as compared with the original STP, shows improvement.\\n\\nThe idea seems straightforward --- just combining a classical momentum strategy with the an existing derivative free optimization framework. But the author claim that they are the first to exploit this strategy. The analysis part, for strongly convex, convex and nonconvex problems, however, is solid to me. I am not the expert in this direction. Here are a few questions, from the answers of which I want to learn more about the meaning of this work. \\n\\n(1) As compared with other derivative free optimization algorithm, such as Bayesian optimization/genetic algorithms/simulated annealing, what is the advantage of the proposed method and also the STP framework? \\n(2) The experiments seem weak to me. Why does the paper only compare with STP? Are there any other baselines, such as stochastic two points, BO and GA? Is it possible to conduct evaluation on other applications? For example, some general optimization tasks but not allowing gradient calculation?\"}"
]
} |
SygRikHtvS | Coresets for Accelerating Incremental Gradient Methods | [
"Baharan Mirzasoleiman",
"Jeff Bilmes",
"Jure Leskovec"
] | Many machine learning problems reduce to the problem of minimizing an expected risk. Incremental gradient (IG) methods, such as stochastic gradient descent and its variants, have been successfully used to train the largest of machine learning models. IG methods, however, are in general slow to converge and sensitive to stepsize choices. Therefore, much work has focused on speeding them up by reducing the variance of the estimated gradient or choosing better stepsizes. An alternative strategy would be to select a carefully chosen subset of training data, train only on that subset, and hence speed up optimization. However, it remains an open question how to achieve this, both theoretically as well as practically, while not compromising on the quality of the final model. Here we develop CRAIG, a method for selecting a weighted subset (or coreset) of training data in order to speed up IG methods. We prove that by greedily selecting a subset S of training data that minimizes the upper-bound on the estimation error of the full gradient, running IG on this subset will converge to the (near)optimal solution in the same number of epochs as running IG on the full data. But because at each epoch the gradients are computed only on the subset S, we obtain a speedup that is inversely proportional to the size of S. Our subset selection algorithm is fully general and can be applied to most IG methods. We further demonstrate practical effectiveness of our algorithm, CRAIG, through an extensive set of experiments on several applications, including logistic regression and deep neural networks. Experiments show that CRAIG, while achieving practically the same loss, speeds up IG methods by up to 10x for convex and 3x for non-convex (deep learning) problems. | [
"subset",
"ig methods",
"craig",
"problems",
"data",
"experiments",
"coresets"
] | Reject | https://openreview.net/pdf?id=SygRikHtvS | https://openreview.net/forum?id=SygRikHtvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jZBh9cesdt",
"H1xpWpO2iB",
"rkgf0hu2iS",
"rygP1n_hsr",
"H1l_fhnnqH",
"Hkg22VYtKH",
"BkghxNl2OS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736281,
1573846276759,
1573846218267,
1573845983104,
1572813839875,
1571554483856,
1570665460365
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1934/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1934/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1934/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1934/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1934/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1934/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper investigates the practical and theoretical consequences of speeding up training using incremental gradient methods (such as stochastic descent) by calculating the gradients with respect to a specifically chosen sparse subset of data.\\n\\nThe reviewers were quite split on the paper. \\n\\nOn the one hand, there was a general excitement about the direction of the paper. The idea of speeding up gradient descent is of course hugely relevant to the current machine learning landscape. The approach was also considered novel, and the paper well-written. \\n\\nHowever, the reviewers also pointed out multiple shortcomings. The experimental section was deemed to lack clarity and baselines. The results on standard dataset were very different from expected, causing worry about the reliability, although this has partially been addressed in additional experiments. The applicability to deep learning and large dataset, as well as the significance of time saved by using this method, were other worries.\\n\\nUnfortunately, I have to agree with the majority of the reviewers that the idea is fascinating, but that more work is required for acceptance to ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Detailed Responses\", \"comment\": \"We thank the reviewer for acknowledging the novelty and the theoretical strength of our work and for noting that our\\u200b experiments \\u200bare\\u200b \\u200bsolid\\u200b and our setup and analyses are sound.\", \"re\": \"Minor suggestions\\nWe thank the reviewer for pointing out the mislabeled subfigures. We modified them accordingly. We will add more explanation on the objective function F to improve readability.\"}",
"{\"title\": \"Detailed Responses\", \"comment\": \"We thank the reviewer for insightful feedback and for acknowledging the novelty and the algorithmic strength of our work. The reviewer asks great questions, and we provide detailed answers below.\", \"re\": \"Test loss, relative distance and weight distribution\\nWe will add test loss and relative distance plots to the final version. This is a great observation by R1 that the weight distribution of Covtype is more uniform than the other 2 datasets. We will add this discussion to the final version.\"}",
"{\"title\": \"Detailed Responses\", \"comment\": \"We thank the reviewer for acknowledging the technical aspects and the theoretically founded nature of the work. Based on reviewer\\u2019s valuable feedback we conducted a number of additional experiments, which further validate the efficacy of our CRAIG framework, and further strengthen the paper.\", \"re\": \"Upper-bounding gradients in neural networks\\nAs discussed in Section 3.4, this upper-bound is valid given the parameters of the neural network and hence for non-convex loss functions we update the subset at the beginning of every epoch. Our experiments and the experimental results of (Katharopoulos & Fleuret \\u201819) confirm that this upper-bound is indeed useful in practice. For cases where the gradients may change too quickly, we can update the subset more than once during every epoch. As upper-bounds on the normed gradient distances can be obtained by a forward pass, this would still be considerably faster than backpropagation required for training on the full dataset.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes a theoretically founded method to generate subsets of a dataset, together with corresponding sample weights in a way that the average gradient of the subset is at most epsilon far from the average gradient of the full dataset. Given such a subset, the authors provide theoretical guarantees for convergence to an epsilon neighborhood of the optimum for strongly convex functions. The proposed algorithm to create such a subset is a greedy algorithm that relies on parameter independent similarities between samples, namely similarity scores that are true regardless of the current value of the function parameters.\\n\\nAlthough I find the approach interesting, I have three main concerns with the proposed method.\\n1. The experimental setup is lacking significant information, baselines and baseline tuning (see below for more in depth comments).\\n2. The proposed upper bound which has been used for a similar purpose by [1] becomes nonsensical in high dimensions and although for [1] this would mean sampling with a non optimal sampling distribution for CRAIG it means converging very far from the optimum. What are the values of epsilon that you observe in practice?\\n3. I do not see how CRAIG would be applied to deep learning. The argument in section 3.4 is that the variance of the gradient norm is captured by the gradient of the last layer or last few layers, however this is true given the parameters of the neural network. The gradients can change arbitrarily after a very small number of parameter updates as shown by [2].\\n\\nExperimental setup\\n----------------------------\\n\\nFor the case of the convex problems, the learning rate is not tuned independently for each method. Even more importantly the stepsizes of CRAIG are all numbers larger than 1 so the expected learning rate is multiplied by the average step size. This makes it difficult to understand whether the speedup is due to a larger learning rate or due to CRAIG. Similarly for figure 3 the result could be due to a non decreasing step size because of \\\\gamma_j while for CRAIG \\\\gamma_j are ordered in decreasing order.\\n\\nIn addition, there is no experimental analysis of the epsilon bound and the actual difference of the gradients for the subset and the full dataset. There are also no baselines that use a subset to train. A comparison with a baseline that uses 1. random subset or 2. a subset selected via importance sampling from [1] would contribute towards understanding the particular benefits of CRAIG.\", \"regarding_the_neural_network_experiments\": \"1. There is no explicit definition of the similarity function used for the case of neural networks. If we assume based on 3.4 that the algorithm requires an extra forward pass in the beginning of every epoch there should be visible steps in Figure 3 where time passes but the loss doesn't move.\\n2. 2000 seconds and 80% accuracy on MNIST points towards a mistake on the implementation of the training. On a laptop CPU it takes ~15s per epoch and achieves ~95% test accuracy from the first epoch for the neural network described.\\n3. Similarly 80% accuracy on CIFAR10 is sufficiently low for Resnet-56 to be alarming.\\n\\n[1] Zhao, Peilin, and Tong Zhang. 
\\\"Stochastic optimization with importance sampling for regularized loss minimization.\\\" international conference on machine learning. 2015.\\n[2] Defazio, Aaron, and L\\u00e9on Bottou. \\\"On the ineffectiveness of variance reduced optimization for deep learning.\\\" arXiv preprint arXiv:1812.04529 (2018).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a novel extension to SGD/incremental gradient methods called CRAIG. The algorithm selects a subset of datapoints to approximate the training loss at the beginning of each epoch in order to reduce the total amount of time necessary to solve the empirical risk minimization problem. In particular, the algorithm formulates a submodular optimization problem based on the intuition that the gradient of the problem on the selected subset approximates the gradient of the true training loss up to some tolerance. Each datapoint in the subset is a medoid and assigned a weight corresponding to the number of datapoints in the full set that are assigned to that particular datapoint. A greedy algorithm is employed to approximately solve the subproblem. Theory is proven based on based on an incremental subgradient method with errors. Experiments demonstrate significant savings in time for training both logistic regression and small neural networks.\", \"strengths\": \"The proposed idea is novel and intriguing, utilizing tools from combinatorial optimization to select an appropriate subset for approximating the training loss. Based on the experiments provided in the paper, it does appear to yield a significant speedup in training time. It is interesting to observe how the order of the datapoints matter significantly for training, and that CRAIG is also able to naturally define a good ordering of the datapoints for SG training. This is strong algorithmic work.\", \"weaknesses\": \"\", \"some_questions_i_had_about_the_work\": [\"How well does one have to approximate $d_{ij}$ in order for the method to be effective? The authors provide an approach to approximate this for both logistic regression and neural networks. How does one guarantee that one is obtaining the maximum over $x in \\\\mathcal{X}$ for neural networks via backpropagating only on the last layer? Does taking this maximum matter?\", \"How does one choose $\\\\epsilon$? Is this related to how $d_{ij}$\\u2019s are approximated?\", \"If one were to consider an algorithm that samples points from this new distribution over the data given by CRAIG, if one were to include the weight $\\\\gamma_j$ into the algorithm, would the sample gradient be unbiased? What if one were to simply use $\\\\gamma_j$ to weight that particular sample in the new distribution?\", \"In machine learning, the empirical risk (finite-sum) minimization problem is an approximation to the true expected risk minimization problem. What is the effect of CRAIG on the expected risk? Is there any deterioration in generalization performance?\", \"In page 4, what does the $\\\\min_{S \\\\subseteq V}$ refer to? Should the equation be interpreted as with the set $S$ fixed or not?\", \"Theorems 1 and 2 are stated a bit non-rigorously. Are these theorems for fixed $k$? What does it mean for these bounds that $k \\\\rightarrow \\\\infty$?\", \"In Theorems 1 and 2, what is the bound on the steplength in order to obtain the convergence result for $\\\\tau = 0$?\", \"In Theorem 1 for $0 < \\\\tau < 1$, why does one obtain a result where $\\\\|x_k \\u2013 x_*\\\\|^2 \\\\leq 2 \\\\epsilon R / \\\\mu$, why is the distance to the solution bounded by a constant? 
What if one were to initialize $x_0$ to be such that $\\\\|x_0 \\u2013 x_*\\\\|^2 > 2 \\\\epsilon R / \\\\mu$? (Similar for Theorem 2.)\", \"In the experiments, how is the steplength and other hyperparameters tuned? Are multiple trials run?\", \"Is $\\\\epsilon$ used to determine the subset or is it based on a predetermined subset size?\", \"How do the final test losses compare between CRAIG and the original algorithms?\", \"How do the relative distances (rather than the absolute distance) to the solution behave?\", \"How does CRAIG perform over multiple epochs? How does the algorithm transition when the subset is changed (as in neural networks)?\", \"Why does CovType appear more stable with the shuffled version over the other datasets? Is the stability related to the distribution of the weights $\\\\gamma_j$?\", \"Some grammatical errors/typos/formatting issues:\", \"Equation (9) needs more space between the $\\\\forall x, i, j$ and the rest of the equation.\", \"What is $\\\\Delta$ on page 5? Is it supposed to be $F$?\", \"On page 8, And should not be capitalized.\", \"Page 14, prove not proof\", \"Page 14, subtracting not subtracking\", \"Page 16, cycle not cycke\", \"Overall, although I like the ideas in the paper, the paper still needs some significant amount of refining in terms of both writing and theory, as well as some additional experiments to be convincing. If the comments I made above were addressed, I would be open to changing my decision.\"]}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a method for subselecting training data\\nin order to speed up incremental gradient (IG) methods (in terms of computation time). \\nThe idea is to train a model on a representative subset of the data such \\nthat the computation time is significantly decreased without \\nsignificantly degrading performance. Given a dataset D and \\nsubset S selected by the proposed method, it is shown that \\ntraining on subset S achieves nearly the same loss as training on the full \\ndataset D would, while achieving significant computational speedups.\\n\\nThis paper is a clear accept. The approach is novel and has well-developed \\ntheory supporting it. The empirical evaluation of the method shows \\nlarge speedups in training time without degradation in performance \\nfor reasonably large subsets (e.g. 20% of the data). The paper is \\nvery clear, well-written, and was a genuinely fun read.\", \"clarifying_questions\": \"- In results reporting speedups, does the reported training time for CRAIG \\n include the preprocessing time? Or only the time spent running IG on the resulting \\n subset?\\n - How many runs are the experiments averaged over? There don't seem to be \\n error bars, which makes it difficult to assess whether the speedups are \\n statistically significant\\n - I imagine that an approach like this would be desirable when working with very large datasets. Has \\n CRAIG been evaluated in settings with millions of datapoints? Or does it become impractical? I think \\n that the paper stands on its own without such a demonstration, but it would go a long way towards \\n encouraging mainstream adoption of your method.\\n - Figure 3, left: What could be happening at around 40s? It looks like \\n all three of the random permutations have a spike in loss at around the same time, despite being \\n different permutations of the sub-selected data\\n - How were hyperparameters, such as the regularization parameter, step-size etc. chosen? One of the \\n main claims of the paper is that using the subset selected by CRAIG doesn't significantly \\n effect the optimization performance. But if the baselines weren't thoroughly tuned, it could be the case \\n that IG on the CRAIG subset performs similarly to IG on the full training data, but that neither \\n is actually reaching satisfactory performance in a given domain.\\n - Figure 4: isn't 2000s \\\\approx 30min really slow for MNIST? From what I remember, reasonable test accuracy \\n on MNIST with a feed-forward network with a single layer takes only a few minutes? Though admittedly, I could \\n be misremembering this.\", \"somewhat_open_ended_questions\": \"- To what extent are the results hardware dependent? Do you see similar results on \\n different hardware? I'm wondering how much of the speedup could be attributed to \\n something like better memory locality when using the smaller subset selected using CRAIG.\\n - Section 3.4 mentions that the O(|V||S|) complexity can be reduced \\n to O(|V|) using a stochastic greedy algorithm. 
Has the performance \\n when training on a subset selected via the stochastic algorithm \\n been compared to the performance when training on a subset selected by the \\n deterministic version?\", \"i_have_only_minor_suggestions\": \"- The CRAIG Algorithm \\n - When F is introduced, I had trouble conceptualizing what kind of object it was. I think \\n mentioning what spaces it's mapping between could increase readability.\\n - Mirzasoluiman et al. (2015a) looks like it is supposed to be a parenthetical citation\\n - Figure 4: The caption labels appear to be swapped. In the figure, (a) is MNIST, but in the \\n caption, (b) is MNIST\\n - 5.1: There is a vertical space issue between the introduction of section 5 and section 5.1. \\n I suspect this was necessary to make the max page requirements. If space is an issue, my suggestion would \\n be to move Algorithm 1 to the appendix. It's nice to have a concrete algorithm specification, but I personally \\n did not find that it aided my understanding of your paper.\"}"
]
} |
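For readers skimming the CRAIG record above: the reviews describe the output as a set of medoids whose weights gamma_j count the points each medoid represents, selected greedily via a submodular (facility-location-style) objective. A toy sketch of those greedy mechanics, assuming a precomputed pairwise dissimilarity matrix `D` standing in for the paper's gradient-difference upper bounds d_ij — an illustration of the mechanism, not the paper's exact objective:

```python
import numpy as np

def greedy_weighted_subset(D, k):
    """Greedily pick k medoids to shrink sum_i min_{j in S} D[i, j];
    gamma_j = size of the cluster assigned to medoid j."""
    S = [int(np.argmin(D.sum(axis=0)))]      # best single medoid
    best = D[:, S[0]].copy()                 # distance to nearest medoid so far
    for _ in range(k - 1):
        gains = np.maximum(best[:, None] - D, 0.0).sum(axis=0)
        j = int(np.argmax(gains))            # largest marginal reduction
        S.append(j)
        best = np.minimum(best, D[:, j])
    assign = np.asarray(S)[np.argmin(D[:, S], axis=1)]  # nearest medoid per point
    gamma = np.array([(assign == j).sum() for j in S])  # cluster sizes = weights
    return S, gamma

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)    # pairwise distances
S, gamma = greedy_weighted_subset(D, k=10)              # gamma sums to 200
```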
BJgRsyBtPB | A Greedy Approach to Max-Sliced Wasserstein GANs | [
"András Horváth"
] | Generative Adversarial Networks have made data generation possible in various use cases, but in the case of complex, high-dimensional distributions they can be difficult to train because of convergence problems and the appearance of mode collapse.
Sliced Wasserstein GANs, and especially the application of the Max-Sliced Wasserstein distance, made it possible to approximate the Wasserstein distance during training in an efficient and stable way and helped ease the convergence problems of these architectures.
This method transforms sample assignment and distance calculation into sorting the one-dimensional projections of the samples, which results in a sufficient approximation of the high-dimensional Wasserstein distance.
In this paper we will demonstrate that approximating the Wasserstein distance by sorting the samples is not always the optimal approach, and that greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution. | [
"GEnerative Adversarial Networks",
"GANs",
"Wasserstein distances",
"Sliced Wasserstein Distance",
"Max-sliced Wasserstein distance"
] | Reject | https://openreview.net/pdf?id=BJgRsyBtPB | https://openreview.net/forum?id=BJgRsyBtPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"pKT6QAHqbv",
"rygE9UQgqS",
"BylRM78J9S",
"rkxm6Fp2tr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798736251,
1571989131629,
1571934997646,
1571768762786
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1933/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1933/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1933/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a variant of the max-sliced Wasserstein distance, where instead of sorting, a greedy assignment is performed. As no theory is provided, the paper is purely of experimental nature.\\n\\nUnfortunately the work is too preliminary to warrant publication at this time, and would need further experimental or theoretical strengthening to be of general interest to the ICLR community.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes two alternative approaches to max-sliced Wasserstein GANs. They are based on the authors\\u2019 claim that there is a \\u201cflaw\\u201d in the Wasserstein-1 distance between probability distributions on the one-dimensional space. Briefly, the authors\\u2019 argument says that the \\u201cflaw\\u201d is that the optimal transport may not be unique, some of which are better for network learning than others. One proposal, described in Section 2.2, is to find a plausible transport plan in a greedy manner. The other proposal, described in Section2.3, is a hybrid of the greedy approach in Section 2.2 and the original sliced Wasserstein distance.\\n\\nThe working hypothesis in this paper, that the above \\u201cflaw\\u201d is indeed problematic in learning in max-sliced Wasserstein GANs, has not been confirmed in any sense in this paper. Algorithm 1 is meant to explain one of the proposal, the greedy approach, but I found that several undefined symbols are used there, so that it seems hard to understand it. Numerical experiments in Section 3 are not well described. Because of these, I would not be able to recommend acceptance of this paper.\\n\\nAlgorithm 1 seems to heavily rely on Algorithm 1 in Deshpande et al., 2019. This paper does not provide any explanation about why one should sample n data for each i running from 1 to n (line 3), what the \\u201csurrogate loss\\u201d is (line 4), what \\u00a5omega is (line 4), why one should care about the surrogate loss between ith data and ith generated sample (line 4), as well as what D^i_k means (line 11).\\n\\nI do not understand how the Pearson correlation coefficient between the generator (or fake) distribution and the real distribution in Section 3. As far as my understanding, fake samples and real samples are sampled independently, so that correlation coefficient should ideally vanish in any case. Also, the KL divergence is not symmetric, so that whether KL(P_F,P_R) or KL(P_R,P_F) was evaluated has to be explicitly stated. Furthermore, recalling that the Wasserstein distance has originally been introduced to the GAN literature in order to alleviate the problems associated with the KL-based divergence (Jensen-Shannon), I do not understand either why the authors chose to use the KL divergence in their performance comparison.\\n\\nThe \\u201cflaw\\u201d argued in this paper does not apply to the Wasserstein distance in general, but specifically to the Wasserstein-1 distance between one-dimensional distributions. This fact should be stated clearly.\\n\\nPage 2, equation (2): The subscript \\u00a5mathbb{P} should read p.\\nPage 2, two lines below equation (3): w should be italicized.\\nPage 4, line 4: if the(y->ir) probability\\nPage 4, line 24: has no effect (t)on the inference complexity\\nPage 5, Algorithm 1, line 17: g of \\u00a5theta g should be a subscript of \\u00a5theta.\\nPage 6, line 3: which signals th(e->at) the\\nPage 6, line 12: was executed for (500.000->500,000) iterations.\\nPage 6, line 15: Figure number is missing.\\nPage 7, lines 9-10: (10.000->10,000) random projection(s) and used\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper suggests a new way to train max slides Wasserstein GANs. I find that the paper has too few innovation in comparison with the approach introduced in Deshpande et al. (2019) (reference in the paper). The authors themselves described that the difference is \\u2018instead of sorting the samples of the projections we iteratively select the most similar pairs of them for loss calculation\\u2019. I think it is not enough for a publication.\", \"minor_suggestions\": [\"Page 3. \\u2018As it can be seen from the figure, for example\\u2026\\u2019 I think \\u2018for example\\u2019 here is redundant. In the same sentence \\u2018will results\\u2019 -> \\u2018will result\\u2019.\", \"Section 2.2. \\u2018First we select the smallest element\\u2026\\u2019 I would remove \\u2018first\\u2019 because it was used in the previous sentence.\", \"Section 2.2. has no effect ton-> on\", \"Equation (6). There must be a comma before \\u2018otherwise\\u2019. And maybe it would look better to write the equation as a system (with one left bra\\u0441ket).\", \"Section 3.2. There is \\u2018Fig. ??\\u2019\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": [\"The paper proposes a variant of the max-sliced Wasserstein distance, where instead of sorting, a greedy assignment is performed. As no theory is provided, the paper is purely of experimental nature.\", \"Considering the above, the experimental evaluation is way too preliminary:\", \"Looking at the generated images, much better results can be achieved with a Vanilla-GAN + GP regularization, so it is completely unclear to me why the proposed GAN should be used, as it is seems more complicated to implement.\", \"The KL divergence evaluation seems non-standard, and it is not explained why this metric is chosen over the standard ones (FID, Inception Score). However, I think a evaluation with respect to standard metrics is a must for an experimental GAN paper.\", \"I would have liked to see a comparison to using the exact Wasserstein distance, as it also scales roughly like n^3. For example, the recent paper \\\"Wasserstein GAN with Quadratic Transport Cost\\\" computes the exact distance using linear programming, and there it is shown to yield good results w.r.t. FID.\", \"Minor comments I noticed during reading (no impact on my rating):\", \"In Eq. 1, the maximization over D is confusing given that in the following Eq. 2 the primal form of the Wasserstein distance is shown.\", \"The number of possible joint distributions is a continuum, and not a discrete quantity that increases factorially. Anyway if both distributions are discrete, then the problem can be reduced to a discrete optimization problem with factorially many candidate solutions, but this is very misleading as there are polynomial time algorithms.\"]}"
]
} |
HygaikBKvS | Off-Policy Actor-Critic with Shared Experience Replay | [
"Simon Schmitt",
"Matteo Hessel",
"Karen Simonyan"
] | We investigate the combination of actor-critic reinforcement learning algorithms with uniform large-scale experience replay and propose solutions for two challenges: (a) efficient actor-critic learning with experience replay, and (b) stability of very off-policy learning. We employ those insights to accelerate hyper-parameter sweeps in which all participating agents run concurrently and share their experience via a common replay module.
To this end we analyze the bias-variance tradeoffs in V-trace, a form of importance sampling for actor-critic methods. Based on our analysis, we then argue for mixing experience sampled from replay with on-policy experience, and propose a new trust region scheme that scales effectively to data distributions where V-trace becomes unstable.
We provide extensive empirical validation of the proposed solution. We further show the benefits of this setup by demonstrating state-of-the-art data efficiency on Atari among agents trained up until 200M environment frames. | [
"Reinforcement Learning",
"Off-Policy Learning",
"Experience Replay"
] | Reject | https://openreview.net/pdf?id=HygaikBKvS | https://openreview.net/forum?id=HygaikBKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"NuU2F5wTFJ",
"2QWpDiQ0b",
"SyeIzEp_iH",
"SkgcOXT_iH",
"SJxs4m6doS",
"rkgGtMTdiH",
"SylTACkgjH",
"HygTpk4Z9S",
"rJget8QTtH",
"ryxo9N4wKH"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1577054620799,
1576798736220,
1573602318475,
1573602161608,
1573602098912,
1573601914295,
1573023444997,
1572057029305,
1571792504128,
1571402899088
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1932/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1932/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1932/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1932/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1932/Authors"
],
[
"~Michael_Dann1"
],
[
"ICLR.cc/2020/Conference/Paper1932/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1932/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1932/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Re: Paper Decision\", \"comment\": \"We presented two experiments on atari - in both LASER obtains state-of-the-art results:\\n * single agent vs. single agent on (LASER is 15x better than R2D2 at the 400% mark) \\n * population training vs. population training (LASER is 4x better than IMPALA at the 400% mark)\\n\\nOur state-of-the-art claims are *not* based on a single-agent training vs. population training experiment. The comparisons (see above) are indeed like-for-like and fair.\"}",
"{\"decision\": \"Reject\", \"comment\": \"The paper presents an off-policy actor-critic scheme where i) a buffer storing the trajectories from several agents is used (off-policy replay) and mixed with the on-line data from the current agent; ii) a trust-region estimator is used to select trajectories that are sufficiently close to the current policy (e.g. in the sense of a KL divergence).\\n\\nAs noted by the reviews, the results are impressive.\", \"quite_a_few_concerns_still_remain\": [\"After Fig. 1 (revised version), what matters is the shared replay, where the agent actually benefits from the experience of 9 other different agents; this implies that the population based training observes 9x more frames than the no-shared version, and the question whether the comparison is fair is raised;\", \"the trust-region estimator might reduce the data seen by the agent, leading it to overfit the past (Fig. 3, left);\", \"the influence of the $b$ hyper-parameter (the trust threshold) is not discussed. In standard trust region-based optimization methods, the trust region is gradually narrowed, suggesting that parameter $b$ here should evolve along time.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Sample efficiency of shared replay agents\", \"comment\": \"Thank you for the question. We have addressed it in the updated version of the paper. In Figure 1 we now also present a single agent that uses the same hyper-parameter schedule that was published by Espeholt et al. (2018). This agent obtains a score of 431% human normalized median across the 57 atari games, achieving a new state of the art in the single agent regime. The fastest prior agent to reach 400% is presented by Kapturowski et al. (2019) requiring more than 3,000M steps. This constitutes a 15x improvement in data-efficiency like-for-like.\\n\\nComparing our single-agent and population (9 agents) results, we would like to point out that:\\n1) population training achieves higher performance (448% vs 431%) but indeed observes 9x more frames;\\n2) the single agent result used an optimised hyper-parameter schedule from Espeholt et al. (2018), while the population set up reflects the setting where a good hyper-parameter schedule is not known;\\n3) Like-for-like comparing population training with and without shared replay, we observe that sharing the replay leads to more efficient training (370% vs 233% at 50M steps per-agent).\"}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thank you for your review.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"Thanks for your review.\", \"re_1\": \"Proposition 2 emphasizes that the V-trace policy gradient with clipped importance sampling optimizes a wrong objective. In particular the policy gradient implicitly optimizes the target policy for a wrong Q function. We can compute how wrong this Q-function is in expectation. We provide a formula for a state action dependent distortion factor w(s, a) <= 1 in propositions 2 and 3. The factor distorts the Q functions in multiplicative way. When w(s, a)=1 there is no distortion at all.\\n\\nThe question of how biased the resulting policy will be depends on whether the distortion changes the argmax of the Q function. Little distortions that don\\u2019t change the argmax will result in the same local fixpoint of the policy improvement. The policy will continue to select the optimal action and it will not be biased at this state.\\nThe policy will however be biased if the Q function is distorted too much. For example consider a w(s, a) that swaps the argmax for the 2nd largest value, the regret will then be the difference between the maximum and the 2nd largest value. Intuitively speaking the more distorted the Q, the larger will be the regret compared to the optimal policy.\\n\\nMore precisely, the regret of learning a policy that follows a distorted Q is:\\nRegret = Q(s, a_best) - Q(s, a_actual) = max_b Q(s, b) - Q(s, a_actual)\\nwhere \\n * a_best = argmax_a (Q, a) is the optimal action according to the real Q\\n * a_actual = argmax_a(Q(s, a) * w(s, a)), is the optimal action according to the distorted Q\\n\\n\\nIn proposition 3 we recall that mixing online data leads to a linear interpolation between real Q function and the implied Q function. In practice this moves each w(s, a) closer to 1.0. Given sufficient online data the argmax can be preserved. \\n\\nWe have expanded section 2.3 in the paper and added further derivations to the appendix after Proposition 3. \\n\\nIn particular consider the added equation 13 which provides interpretation on how to choose alpha such that the learnt policy will correctly choose the best action. One of the insights is that alpha may be small if there is a large action value gap between a_best and b.\\n\\nThe provided conditions can be computed and checked if an accurate Q function and state distribution is accessible. Using imperfect Q function estimates to adaptively choose such an alpha remains a question for future research. \\n\\nIn this paper we investigate different constant alpha values for their practical performance. We empirically show in Figure 2 that alpha as small as 1/8 results in stable learning performance.\", \"re_2\": \"We have clarified that V is the bootstrap value -- the previously estimated state value function.\", \"re_3\": \"Propositions 4 and 5 show that the trust-region value estimation operator is a sound operator that really obtains an improved estimate in expectation. We consider this as an essential condition and present it here for reference to show the correctness of our method.\", \"re_4\": \"We have added a derivation. 
In related matters we reference Degris (2012) around equation 1.\", \"re_5\": \"We present in Figure 2 that running a hyper-parameter sweep of 9 agents with shared experience replay is better than running a sweep with 9 separate agents.\", \"page_8_states\": \"\\u201cOn Atari sweeps contain 9 agents with different learning rate and entropy cost combinations {3 \\u00b7 10\\u22124 , 6 \\u00b7 10\\u22124 , 1.2 \\u00b7 10\\u22123} \\u00d7 {5 \\u00b7 10\\u22123 , 1 \\u00b7 10\\u22122 , 2 \\u00b7 10\\u22122} (distributed by factors {1/2, 1, 2} around the parameters reported in Espeholt et al. (2018)).\\u201d\\n\\nThe \\u201cb\\u201d parameter in the trust region was investigated by considering the values {1, 2, 4} on DMLab-30. The differences were minor such that we excluded them from the figure to improve readability.\", \"re_6\": \"Thank you very much for pointing this out. We have fixed this in the revision.\"}",
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Thanks for your review.\\n\\nWe have provided pseudocode in the appendix and made the paper more self-contained.\", \"re_1\": \"The random variable z indexes the set of policies for which we have saved sampled episodes in the experience replay: Consider uniform sampling of experiences from replay -- in that case, the random variable z indexes the previous policies mu_z=pi_t that saved data to the replay. Here pi_t is the target policy at training step t. In this case the distribution of z (equal to t) would be uniform as the experience replay is uniform.\\n\\nWe also consider the case where experience is sampled uniformly from both agents id (in a parameter sweep) and training time (episode id).\", \"re_2\": \"We have reworded this term in the updated version. By \\u201cvery off-policy\\u201d in the abstract we meant learning from replay generated by other agent instances. This stands in comparison to classic experience replay where agents learn from data that they have generated themselves and saved into a replay buffer.\", \"re_3\": \"We present an actor-critic algorithm that is robust to off-policy data. We have shown that off-policy data from other agents may have an adverse effect (left green curve in Figure 3) and deteriorate performance significantly. The proposed trust region is able to discard harmful data. This avoids negative interference. However the harmful data still occupies space in the replay and in the training batch (where the loss is zeroed out). This can be a slight disadvantage in certain circumstances if computational resources are limited. Note that the trust region agent trained with population based training (red curve in the right plot) obtains the best results of all considered experiments.\", \"re_4\": \"Thanks for the suggestion. We have added this.\"}",
"{\"title\": \"Sample efficiency of shared replay agents\", \"comment\": \"Hi there,\\n\\nOne aspect of this paper that I was unclear on is how much experience the shared replay agents have access to. Does the sharing of experience between 9 agents mean that they are effectively exposed to 1.8B frames by the 200M frame mark? If so, is it entirely fair to compare against agents like Rainbow that strictly learn from 200M frames? Either way, your results are impressive, but since they\\u2019re likely to become the new benchmark for sample efficiency in Atari (at least in the 200M frame setting) I think it\\u2019s important to have clarity on this.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper investigates off-policy actor critic (AC) learning with experience replay using V-trace. It shows that V-trace policy gradient is not guaranteed to converge to a local optimal solution. To mitigate the bias and variance problem of V-trace and importance sampling, a trust region approach is proposed to adaptively selects only suitable behavior distributions when estimating the state-value of a policy. To this end, a behavior relevance function (KL divergence) is introduced to classify behavior as relevant. The proposed learning method LASER demonstrates the state-of-the-art data efficiency in Atari among agents trained up until 200M frames. In all, this paper is well motivated and technically sound. The draft can be improved by making it more self-contained by providing a sketch of the proof rather than refer everything to the appendix. Also it might be helpful to provide a pseudocode of LASER to help readers better understand the technical details.\", \"other_comments_and_questions\": \"1) When talking about the selection process, z is treated as a random variable. What is its distribution?\\n2) what does \\u201cvery off-policy learning\\u201d mean?\\n3) In figure 3(left), why \\u201cLASER: shared + trust region\\u201d performs worse than \\u201cLASER: not shared\\u201d? \\n4) In proposition 3. Q^w should be explained in the main text.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper aims to improve the efficiency of the actor-critic method. The authors first analyzed the cause of instability in the prior work, from the perspective of bias and variance. Two remedies were then presented: (i) mixing the experience replay with online learning; (ii) proposing a trust region scheme to select the behavior policies. The authors finally tested the proposed method on Atari games, and showed the better results, compared with the state-of-the-art methods.\\n\\nIn my opinion, the empirical results are impressive, and the authors also provided some insights for the motivation. Given the results on Atari games, this paper could be a great contribution on the actor critic methods. The propositions are presented to support relevant claims, while their significance seems a bit limited, and some further clarification is necessary. The authors also need to address a few confusing statements and missing details.\\n\\n1. In Proposition 3, the authors claimed that mixing with on-policy data can reduce the bias. I checked the proof but did not find anything relevant. Also, what is the amount of bias reduced?\\n2. In Equation (1), could you provide a formal definition for \\\"V\\\"? \\n3. The authors claimed at the beginning of Section 4 that the trust region method was proposed to mitigate the bias and variance problem of V-trace. However, I did not see how this is reflected in Propositions 4 and 5. Is this statement only based on empirical results?\\n4. It was mentioned right below Equation (4) that \\\"Observe how this inner expectation ... matches the on-policy return...\\\". Could you provide a formal proof?\\n5. What are the hyperparameters for the 9 agents used in Figure 1? Also, how did you choose \\\"b\\\" in trust region?\\n6. A few notation issues / typo:\\n(1) it's -> its\\n(2) In Equation (5), should \\\"z \\\\in M_{\\\\beta, \\\\pi} (s_t)\\\" be \\\"\\\\mu_z \\\\in M_{\\\\beta, \\\\pi} (s_t)\\\"?\\n(3) At the 2nd line of Page 7, should the content for the indicator function be \\\"\\\\beta (\\\\pi, \\\\mu, s_t) < b\\\"?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors investigate off-policy actor-critic reinforcement learning where they want to make use of shared experience replay. Two approaches were suggested and compared. One was to mix replayed experience with on-policy data and the other was to create trust regions that only selects well-behaved behavioral distributions for state value estimation.\\nAccording to the authors the several experiments provide evidence that their algorithm achieves competitive or even state-of-the-art results in data efficiency. They underpin this with some theoretical analysis.\"}"
]
} |